arXiv: 2305.01645 | Published: 2023-05-02T17:56:16Z | Link: http://arxiv.org/abs/2305.01645v3

# Distill or Annotate? Cost-Efficient Fine-Tuning of Compact Models

Junmo Kang, Wei Xu, Alan Ritter
###### Abstract
Fine-tuning large models is highly effective; however, inference can be expensive and produces carbon emissions. Knowledge distillation has been shown to be a practical solution to reduce inference costs, but the distillation process itself requires significant computational resources. Rather than buying or renting GPUs to fine-tune, then distill a large model, an NLP practitioner might instead choose to allocate the available budget to hire annotators and manually label additional fine-tuning data. In this paper, we investigate how to most efficiently use a fixed budget to build a compact model. Through extensive experiments on six diverse tasks, we show that distilling from T5-XXL (11B) to T5-Small (60M) is almost always a cost-efficient strategy compared to annotating more data to directly train a compact model (T5-Small). We further investigate how the optimal budget allocated towards computation varies across scenarios. We will make our code, datasets, annotation cost estimates, and baseline models available as a benchmark to support further work on cost-efficient training of compact models.
## 1 Introduction
Increasing the size of pre-trained models can consistently improve performance on downstream tasks after fine-tuning, as seen in studies based on BERT Devlin et al. (2019), RoBERTa Liu et al. (2019), BART Lewis et al. (2019), T5 Raffel et al. (2020), and the work on empirical scaling laws Brown et al. (2020); Lester et al. (2021); Hernandez et al. (2021). However, using large models for inference is expensive and contributes to carbon emissions Patterson et al. (2021). To address this, researchers have explored methods to compress large models through techniques such as knowledge distillation Hinton et al. (2015); Sanh et al. (2019); Gou et al. (2021), which is effective in reducing inference costs Magister et al. (2022) and improving the generalization of smaller student models Stanton et al. (2021). Nonetheless, the distillation process itself still requires significant computational, memory, and storage resources Xia et al. (2022).
In addition to compressing models, an alternative approach to improve performance without increasing inference costs is to simply label additional data for fine-tuning. Recent work has shown that a few hundred extra labels can sometimes lead to better performance than billions of additional model parameters Kirstain et al. (2022). This raises the question of how to most efficiently use a fixed budget to train a compact model which supports efficient inference while maximizing performance. One option is to use an available budget to hire annotators to label additional data and directly fine-tune a small model. Alternatively, the budget could be used to purchase or rent GPUs to fine-tune and distill a large teacher model (see Figure 1).
In this paper, we use the theory of consumer choice Becker (1965); Lancaster (1966); Bai et al. (2021) to investigate the question of when distillation is a cost-efficient strategy for model compression. Based on extensive empirical analysis,
Figure 1: An illustration of two practical strategies to build a compact fixed-size model. Given a fixed budget and a small amount of initially annotated data, (i) one can annotate more data to directly fine-tune a small model. (ii) Alternatively, one may leverage a larger model with more computational resources to distill its knowledge into a small model for efficient inference.
we provide recommendations on how to allocate a fixed budget for human annotation and computing resources to train a compact model. Our experiments across six NLP tasks reveal that distillation with unlabeled data is almost always a cost-efficient strategy for improving the performance of compact models when compared to annotation (see Table 2). Furthermore, our analysis shows that the optimal allocation of budget towards distillation increases as more labeled data becomes available (see §4.1 and Figure 2). For smaller budgets, it is Pareto optimal (Abdolrashidi et al., 2021; Treviso et al., 2022) to use smaller amounts of unlabeled data for distillation, while increasing the amount of labeled data, as this leads to a more knowledgeable teacher. As the budget increases, it becomes economical to distill using larger unlabeled datasets, because the teacher model outperforms the student by a significant margin. Finally, we investigate the cost efficiency of data annotation with GPT-3.5 (Ouyang et al., 2022) (Figure 6). We find that, although GPT-3.5 is cheaper than human annotators, fine-tuning T5-XXL and then distilling a small model is more cost-efficient than directly fine-tuning the small model with pseudo-labels from GPT-3.5.
We will make our code, datasets, annotation cost estimates, and baseline models available as a benchmark to support further work on cost-efficient training of compact models.
## 2 Study Design
In this section, we first describe how we formulate the problem for the cost-efficiency study (§2.1). We then compare two strategies (§2.2 & §2.3) for building compact models that incur different proportions of computational and human annotation costs. Finally, we explain how to estimate the annotation cost (§2.4) and computational cost (§2.5) involved in the two strategies.
### Problem Formulation and Assumptions
The main focus of this study is to fairly evaluate the two approaches (§2.2 & §2.3) under a fixed budget. When financial constraints are in place, practitioners may be faced with weighing options of allocating money towards _data_ or _compute_; we empirically investigate their trade-offs to maximize the resulting utility. To enable extensive studies, we simulate the process of labeling data using a variety of existing crowdsourced datasets, and the cloud GPU rentals that charge per hour of use.
We assume the NLP engineer's salary is a fixed cost, so their time spent building models and/or managing a group of annotators to label data is not a factor in determining the total cost. The only costs considered are the direct costs for human data labeling and GPU computation. No task-specific labeled data is initially assumed to be available for free, but we do assume that pre-trained models such as T5 (Raffel et al., 2020), which are publicly available, have zero cost.
### Strategy 1: Building a Compact Model Directly with Annotations (Ann.)
This strategy directly fine-tunes a compact model (e.g., T5-Small (60M)), allocating the entire budget towards human annotation. This is considered the most straightforward approach practitioners would choose to train a compact model.
In particular, given a budget constraint, we prepare data that can be maximally annotated using the budget, and we train T5 (Raffel et al., 2020) on the data under a unified text-to-text framework for all tasks (Table 1), maximizing the likelihood of a target text \(Y\) given an input text \(X\). The format for an input \(X\) and the corresponding target \(Y\) for each task is detailed in Appendix B.
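To make this step concrete, the sketch below shows one way the text-to-text fine-tuning objective could be implemented with the Hugging Face `transformers` library; the checkpoint name, the toy (X, Y) pair, and the hyperparameters are illustrative assumptions, not the authors' exact training setup.

```python
# Minimal sketch of text-to-text fine-tuning (illustrative, not the paper's
# exact configuration). Assumes the Hugging Face `transformers` library.
import torch
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("google/t5-v1_1-small")
model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One (X, Y) pair in the unified text-to-text format; the actual input/target
# formats per task are described in Appendix B of the paper.
inputs = tokenizer("fact verification claim: ...", return_tensors="pt")
labels = tokenizer("SUPPORTS", return_tensors="pt").input_ids

# Passing `labels` makes the model compute the cross-entropy loss that
# maximizes the likelihood of Y given X.
optimizer.zero_grad()
loss = model(input_ids=inputs.input_ids,
             attention_mask=inputs.attention_mask,
             labels=labels).loss
loss.backward()
optimizer.step()
```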
Note that the most dominant cost associated with this strategy is the annotation cost. While the total cost of building this direct model can include the fine-tuning cost (i.e., computational cost), we
| **Dataset** | **Task** | **#Train** | **$/Label** | **Total $** |
|---|---|---|---|---|
| **WLP** (Tabassum et al., 2020) | Named Entity Recognition | 11,966 | $0.260 | $3,111 |
| **Stanceosaurus** (Zheng et al., 2022) | Stance Classification | 12,130 | $0.364 | $4,415 |
| **FEVER** (Thorne et al., 2018) | Fact Verification | 104,966 | $0.129 | $13,544 |
| **MultiPIT\({}_{\text{Id}}\)** (Dou et al., 2022) | Paraphrase Identification | 92,217 | $0.200 | $18,443 |
| **MultiPIT\({}_{\text{Gen}}\)** (Dou et al., 2022) | Paraphrase Generation | 49,673 | $0.371 | $18,443 |
| **Natural Questions** (Kwiatkowski et al., 2019) | Question Answering | 87,372 | $0.129 | $11,271 |

Table 1: Data annotation costs for various NLP datasets/tasks.
found it negligible in most cases and thus omitted it, unless otherwise noted, for the sake of simplicity.1
Footnote 1: Fine-tuning T5-Small (60M) on 5K data, for example, takes less than half an hour, which costs approximately $1, based on the computational cost in §2.5.
### Strategy 2: Distilling from a Larger Model (Dist.)
As an alternative to annotating more data, one could allocate part of the budget towards computation to train a larger (e.g., T5-XXL (11B)) model on a smaller amount of data. The large model can then be distilled to produce a final compact model that also supports efficient inference.
Following recent work (Xia et al., 2022; Zhou et al., 2022), our study mostly focuses on task-specific model compression rather than general distillation (Sanh et al., 2019),2 however we provide analysis of general vs. task-specific distillation in Appendix F. General distillation requires significant computational resources; also task-specific and general distillation can be used together in a complementary fashion (Jiao et al., 2020).
Footnote 2: In general distillation, a pre-trained model is distilled before fine-tuning, such as DistilBERT.
Notably, even for Strategy 2, annotated data is needed to train the large teacher model. Therefore, we assume that a certain number (\(N\)) of examples are initially annotated by spending some part of the budget, and we fine-tune the larger model using this data in the same way as in §2.2. After that, a small model (i.e., student) is trained by distilling the larger model's (i.e., teacher) knowledge (Hinton et al., 2015), in which the teacher's probability distributions over a target sequence given a source input are used as soft labels. We adopt KL divergence loss, which compares two distributions, to make the student's distribution \(P_{S}\) follow the teacher's output distribution \(P_{T}\) with respect to task-specific unlabeled data3:
Footnote 3: For example, source sentences without target paraphrased sentences for a paraphrase generation task. Refer to Appendix D for details of the unlabeled data.
\[D_{KL}(P_{T}||P_{S})=\sum_{v\in V}P_{T}(v)\log\frac{P_{T}(v)}{P_{S}(v)} \tag{1}\]
where \(V\) is vocabulary space. Input and target tokens that are conditioned to produce probabilities are omitted above for brevity.
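A minimal sketch of Eq. (1) as a token-level training loss is shown below; it assumes teacher and student logits over the same vocabulary and is illustrative rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(teacher_logits, student_logits):
    """KL(P_T || P_S) from Eq. (1), summed over the vocabulary and averaged
    over target positions. Shapes: (batch, target_len, vocab_size)."""
    teacher_probs = F.softmax(teacher_logits, dim=-1)          # P_T (soft labels)
    student_log_probs = F.log_softmax(student_logits, dim=-1)  # log P_S
    # F.kl_div(input=log Q, target=P) computes sum_v P(v) * (log P(v) - log Q(v)),
    # i.e., KL(P || Q), so this yields KL(P_T || P_S) per target token.
    kl = F.kl_div(student_log_probs, teacher_probs, reduction="none").sum(dim=-1)
    return kl.mean()

# Teacher distributions are produced without gradients on the unlabeled inputs,
# e.g. (illustrative):
#   with torch.no_grad():
#       teacher_logits = teacher(input_ids=x, labels=decoder_targets).logits
```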
The total cost includes both the initial cost for \(N\) (the number of initially annotated training examples) and the computational cost for fine-tuning a large model and then distilling it into a compact model.
### Cost Estimation for Data Annotation
This study considers six diverse and practical NLP tasks, shown in Table 1. We estimate the annotation cost for each dataset based on mentions in the corresponding literature if available, correspondence with creators of the dataset, or prices of the Data Labeling Service from Google Cloud, following Wang et al. (2021)4. Detailed descriptions of our cost estimates for each dataset are provided in Appendix A.
Footnote 4: [https://cloud.google.com/ai-platform/data-labeling/pricinglabeling_costs](https://cloud.google.com/ai-platform/data-labeling/pricinglabeling_costs)
### Estimation of Computational Cost
This work assumes that computing resources are rented from Google Cloud for model training. We specifically consider NVIDIA A100 GPUs, each equipped with 40GB of VRAM, to fit a large model (e.g., 11B parameters) into them. The price of this, which includes a virtual machine and storage, is set at about $3.75 per 1 GPU hour. For extensive studies, we exploit our own resources, A40 GPUs that have been shown to be approximately 2x slower than A100 through benchmark results5 as well as our preliminary experiment that compares the training time. As a result, we estimate the computational cost as $1.875 per 1 GPU hour. This is a realistic price that practitioners would need to pay, unlike theoretical measures such as FLOPs, which do not reflect the real runtime (Xu and McAuley, 2022) and costs. An example breakdown of cost estimates for building compact models is provided in Appendix (Table 6).
Footnote 5: [https://lambdalabs.com/blog/nvidia-ftx-a40-benchmarks](https://lambdalabs.com/blog/nvidia-ftx-a40-benchmarks)
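The back-of-the-envelope accounting behind the two strategies can be sketched as follows; the prices come from Table 1 and §2.5, while the helper names and the example call are illustrative rather than the paper's exact bookkeeping.

```python
# Rough budget accounting for the two strategies (§2.2-§2.5).
GPU_HOUR_USD = 1.875  # estimated A40 price (about half the $3.75 A100 rate)

def annotation_strategy(budget_usd, cost_per_label):
    """Strategy 1: the entire budget buys labels for direct fine-tuning."""
    return int(budget_usd // cost_per_label)  # number of labels affordable

def distillation_strategy(budget_usd, n_initial, cost_per_label, finetune_gpu_hours):
    """Strategy 2: pay for N initial labels and teacher fine-tuning, then spend
    whatever remains on GPU hours for distillation (negative means N/A)."""
    remaining = (budget_usd
                 - n_initial * cost_per_label
                 - finetune_gpu_hours * GPU_HOUR_USD)
    return remaining / GPU_HOUR_USD

# Example with WLP's $0.26/label (Table 1): a $500 budget buys ~1.9K extra labels.
print(annotation_strategy(500, 0.26))  # -> 1923
```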
## 3 Evaluating Annotation and Distillation under a Fixed Budget
In Table 2, we evaluate the two strategies under varying budgets for six different tasks. We first set \(N\), the number of initial training examples annotated by spending an initial budget ($). Given a fixed budget, we then either _annotate more data_ for the annotation (Ann.) strategy, or use more _GPU hours_ along with more _unlabeled data_ for the distillation (Dist.) strategy.
We consider T5-Small (60M) as a compact model and T5-XXL (11B) as a teacher model for our main study. All models are fine-tuned based
on T5 v1.1 Roberts et al. (2020), which was pre-trained in an unsupervised way only, unlike the original T5 Raffel et al. (2020).
In the case of FEVER and Natural Questions, following Lee et al. (2020) and Roberts et al. (2020) respectively, we consider a closed-book setting where models should rely solely on its parametric knowledge, and report performances on dev sets as test sets are private. To measure performances, we use accuracy for FEVER and MultiPIT\({}_{\text{Id}}\), F1 for WLP, Stanceosaurus, and Natural Questions, and BERT-iBLEU Niu et al. (2021) (i.e., the harmonic mean of self-BLEU and BERTScore Zhang et al. (2020)) for MultiPIT\({}_{\text{Gen}}\). More details about experimental settings are described in Appendix C.
### Annotation vs. Distillation
In Table 2, we observe that, interestingly, the distillation (Dist.) strategy significantly outperforms the annotation (Ann.) strategy across almost all cases for all tasks. While knowledge distillation Hinton et al. (2015) has been proven effective for compression/generalization in previous works Sanh et al. (2019); Kang et al. (2020); Le et al. (2022), our result, which takes into account the realistic costs involved in building models, is quite surprising and highlights a new aspect of distillation: it is economically efficient. In other words, this suggests that exclusive reliance on scaling data by hiring human annotators might not be a good practice in light of cost efficiency.
Note that for Dist., the teacher must first be fine-tuned on the \(N\) labeled examples, which requires a considerable computational cost, so if the fine-tuning cost exceeds the given budget, we denote such cases as N/A. In such scenarios, Ann. is essentially the right choice. We also notice some scenarios where Ann. is a better option with limited budgets. For example, Ann. defeats its counterpart with $100 for WLP (_N=5K_) and MultiPIT\({}_{\text{Gen}}\) (_N=10K_). In these cases, the _#unlabeled data_ used for distillation are highly limited (_7K_ and _10K_, respectively) as fine-tuning costs make up a substantial portion of limited budgets.

Table 2: Performance of the annotation (Ann.) and distillation (Dist.) strategies on each task under varying additional budgets, given \(N\) initially annotated examples.
### Does Distillation Work Better Simply by Making Use of Unlabeled Data?
In Table 2, we observe a substantial performance gap between Ann. and Dist. One notable point is that there is a big difference in the absolute number of data (_#labeled data_ and _#unlabeled data_ ) used for each strategy given a fixed budget. In Table 2, for instance in WLP, given $500, _1923_ more data can be annotated for Ann., whereas _111K_ unlabeled data can be leveraged for Dist. This not only means that annotated data is expensive, but also raises a question: _is the performance gap simply because of the difference in the number of data points?_ To investigate this question by building a fair ground in terms of the size of data, we take a self-distillation (Self-Dist.) approach [22] in which the architecture of a teacher and a student is the same (i.e., T5-Small).
In Table 3, we compare Dist. against Self-Dist. using the same _100K_ unlabeled data. We see that Self-Dist. is worse than the Dist. across all tasks by remarkable margins even though the same number of data is used. In fact, the performance of Self-Dist. is found to be bounded by its teacher (i.e., T5-Small (Ann.)), as also observed in [23]. This analysis suggests that the performance gap between Dist. and Ann. can indeed be attributed to exploiting the large pre-trained language model's capability, not simply making use of more data.
### Comparison under Larger Budgets
Our experiments suggest that distillation (Dist.) is a more economical choice than relying completely on human annotation to train a compact model, at least within the scenarios presented in Table 2. However, this raises a question: _could_ Ann. _reach the performance of_ Dist. _when investing a much larger budget?_ Table 4 shows the results of Dist. with budgets for _100K_ unlabeled data, and Ann. with much larger budgets (or an upper bound using all available _#labeled data_). Interestingly, in some cases (Stanceosaurus & MultiPIT\({}_{\text{Gen}}\)), Dist. turns out to be an astoundingly economical way to train a compact model. Even though all existing annotated data (\(\sim\)_50K_) are used for MultiPIT\({}_{\text{Gen}}\) training (w/ $14,469), it never outperforms Dist. (w/ only $245). For the other tasks, we notice that Ann. can outperform Dist. with much larger budgets (e.g., $12,899 for FEVER). In practice, however, we still find that Ann. can be much more costly (e.g., 10x in the case of FEVER) to obtain similar performance.
## 4 Further Analyses
In this section, we study varied values of each variable: the initial number (\(N\)) of annotated data (§4.1), the compact model size (§4.2), and the teacher model size (§4.3), all of which are fixed in the main experiment (§3.1).
### Pareto Curves
In Figure 2, we explore different combinations of _#labeled data_ (L={0.1K, 0.5K, 1K, 5K, 10K}) and _#unlabeled data_ (U={0, 10K, 100K}). Note that U=0 indicates the annotation (Ann.) strategy in essence. We plot the performances of each combination and approximate the Pareto frontier
Table 4: Performance, along with the corresponding budget, of Dist., of Ann. when it reaches performance similar to Dist., and of the Ann. upper bound obtained by using all existing annotated data. The best performance for each task is in bold.
Table 3: Results of self-distillation and distillation with the same amount of unlabeled data (_100K_). Numbers in [ ] represent the performances of the teacher models that are trained on _5K_ annotated data.
(Abdolrashidi et al., 2021; Treviso et al., 2022) by interpolating the given data points. For all tasks, we observe that the distillation (Dist.) strategy is almost always Pareto optimal.6 In Appendix (Table 11), we also look at the low resource setting in detail.
Footnote 6: One exception is (L=0.1K, U=0) where a budget is so limited that leveraging a large model is barely feasible.
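As a sketch of how such a frontier can be read off, the snippet below keeps only the non-dominated (cost, performance) points among a set of configurations; the paper approximates the frontier by interpolating its measured points, and the numbers in the example are placeholders rather than results from Table 2 or Figure 2.

```python
def pareto_frontier(points):
    """Keep points not dominated by any cheaper-or-equal point with a higher score."""
    pts = sorted(points, key=lambda p: (p[0], -p[1]))  # ascending cost, best score first
    frontier, best = [], float("-inf")
    for cost, score in pts:
        if score > best:          # strictly improves on everything cheaper
            frontier.append((cost, score))
            best = score
    return frontier

# (budget $, metric) pairs -- placeholder values for illustration only.
combos = [(100, 41.0), (200, 52.0), (300, 50.5), (500, 57.0)]
print(pareto_frontier(combos))    # -> [(100, 41.0), (200, 52.0), (500, 57.0)]
```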
Furthermore, we observe that using a smaller amount of unlabeled data (U=10K) is Pareto optimal for smaller budgets, while larger unlabeled data (U=100K ) maximizes utility as the budget increases. This implies that in low-budget settings, the teacher's capacity is limited, allowing the student to catch up quickly. However, once the teacher outperforms the student by a significant margin, it is more economical to allocate a larger part of the budget towards distillation.
In Figure 3, we provide an additional analysis by varying the number of initially annotated data (\(N\)) under fixed budgets to look at the impact of \(N\). Expectedly, we notice that Dist. outperforms Ann. in general except for some cases with low \(N\)
Figure 3: Results according to different number of starting annotated data (\(N\)) under fixed additional budgets.
Figure 2: Pareto curves with various combinations of _#labeled data_ (L={0.1K, 0.5K, 1K, 5K, 10K}) and _#unlabeled data_ (U={0, 10K, 100K}). U=0 denotes the annotation (Ann.) strategy. The Pareto frontier (***) is the set of optimal solutions that practitioners would choose from, and is approximated by interpolating the given data points. The X-axis is on a logarithmic scale.
especially for MultiPIT\({}_{\mathsf{Gen}}\) as also evidenced in Appendix (Table 11). It is worth noting that there is a common trend across all tasks that the Dist. performances drop with high \(N\). This is due to the limited budgets; high \(N\) requires a substantial fine-tuning cost for a large model, hence the budget to be used for distillation is limited. For instance, in the case of Stanceosaurus with budget=$200, if \(N\) is _1K_, _82K_ unlabeled data can be used for distillation, whereas only _35K_ unlabeled data are used when \(N\)=_10K_, resulting in the former outperforming the latter. This offers a lesson that unconditionally pursuing larger \(N\) is not desirable in a fixed budget scenario; it is advisable for practitioners to understand and consider the trade-off between the fine-tuning and distillation costs.
### Varying the Compact Model Size
To consider various inference scenarios, we explore different sizes of a compact model in Figure 4. In general, the performances of all models improve as the budget increases, and Dist. outperforms Ann. given the same cost except for the low budget (\(N\)=\(0.1K\)) setting. Interestingly, we observe that T5-XXL \(\Rightarrow\) T5-Base (Dist.) is better than T5-XXL \(\Rightarrow\) T5-Large (Dist.) in some cases ($1600 for WLP, $671 and $4010 for MultiPIT\({}_{\mathsf{Gen}}\)) although the former is smaller and more efficient. We conjecture that this is because the larger student's greater number of parameters requires more GPUs and thereby more cost. This result disproves the prevailing belief that larger models are always superior, at least in fixed-budget scenarios.
### Varying the Teacher Model Size
We now investigate teacher models with different scales (Figure 5). It turns out that relatively smaller teacher models (T5-Large & T5-XL) cannot be good teachers in low-budget scenarios. For instance, with $521 for MultiPIT\({}_{\mathsf{Gen}}\), T5-Large \(\Rightarrow\) T5-Small (Dist.) and T5-XL \(\Rightarrow\) T5-Small (Dist.) underperform T5-Small (Ann.), whereas T5-XXL \(\Rightarrow\) T5-Small (Dist.) outperforms T5-Small (Ann.). In higher budget settings, it is noticeable that the largest teacher (XXL) is similar to or better than the smaller teachers (Large, XL). Taken together, this analysis suggests that when adopting distillation, the scale of the teacher model matters, and it may be safe to leverage a sufficiently large model as a teacher regardless of the budget scenario.
## 5 GPT-3.5 as an Annotator
Furthermore, we examine the cost efficiency of GPT-3.5 (Ouyang et al., 2022) annotation through an in-context few-shot learning scheme. Wang et al. (2021) have recently demonstrated that GPT-3 (Brown et al., 2020) can be used as a cheaper labeler compared to humans. We attempt to scrutinize its applicability to the tasks considered in this work and ultimately contextualize its results with those of Dist. We make use of the text-davinci-003 model to generate pseudo-labels by prompting with 32 training examples. In this experiment, we assign $200 each for WLP and Stanceosaurus for GPT-3.5 annotation. Note that OpenAI7 charges based on the number of tokens used. The cost per label for WLP
Figure 4: Results with different compact model sizes: Small (60M), Base (220M), Large (770M). For Dist., a teacher is fixed (XXL-11B), and the distillation cost is set to $300. Best viewed in color.
Figure 5: Results with varied scales of the teacher: Large (770M), XL (3B), XXL (11B). The compact model is fixed (Small-60M). The distillation cost is fixed as $200 for WLP and $150 for MultiPIT\({}_{\mathsf{Gen}}\).
is $0.046 and for Stanceosaurus is $0.073, if using GPT-3.5 (details in Appendix E).
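For a rough sense of scale, these per-label prices translate the $200 budget into the following numbers of pseudo-labels (a simple sketch; as noted above, the actual OpenAI billing is per token).

```python
# Approximate number of GPT-3.5 pseudo-labels affordable with $200 per task.
for task, cost_per_label in [("WLP", 0.046), ("Stanceosaurus", 0.073)]:
    n_labels = int(200 // cost_per_label)
    print(f"{task}: about {n_labels} pseudo-labels")   # ~4.3K and ~2.7K labels
```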
In Figure 6, we compare GPT-3.5 annotation (GPT-3.5 Ann.) against the human annotation and distillation strategies. In addition to GPT-3.5 Ann., we combine it with human annotation (Human + GPT-3.5 Ann.) to enhance quality and make a comparison with Dist. The results clearly show that while GPT-3.5 could be better than human annotators as hinted in prior work Wang et al. (2021), it significantly underperforms the distillation (Dist.) strategy given the same budget despite GPT-3.5 having far more parameters (175B) than the teacher (11B). This once again highlights a different view of knowledge distillation: cost efficiency.
## 6 Related Work
The costs associated with building models have been explored or raised as a concern in many prior works.
**Data Annotation.** On one hand, researchers have attempted to tackle the problem of noisy or expensive human annotation. For example, Zhang et al. (2021) studies how to distribute annotation budgets between more examples with a single label and fewer examples with many labels. Chen et al. (2022) investigates a redundant annotation with a majority vote vs. cleaning or relabeling the incorrect annotations. Wang et al. (2021) compares human annotations against GPT-3 Brown et al. (2020) annotations. However, these works only focus on the annotation cost.
**Knowledge Distillation.** On the other hand, other lines of work address computational budgets associated with knowledge distillation. Ye et al. (2022) proposes using a larger and sparser student model than a teacher model to further reduce inference cost. Jooste et al. (2022) compares different distillation schemes for cheap, fast, and environmentally friendly translation models. Ma et al. (2022) explores an efficient interactive distillation with meta-learning. The aforementioned works, however, ignore the data budgets and/or barely consider the realistic computational costs involved in the distillation process. While knowledge distillation has been shown effective for compression or generalization in previous NLP works Sanh et al. (2019); Kang et al. (2020); Le et al. (2022), it remains unclear whether or not it is efficient even when considering the actual cost of distillation, which is often overlooked. As concurrent works, Sun et al. (2023) presents a novel principle-driven self-alignment approach, and Hsieh et al. (2023) introduces a method that involves step-by-step distillation using chain-of-thought Wei et al. (2022) rationales. Although the main focus is completely different from ours (i.e., cost), we believe that these works not only enhance this particular area but also have the potential to support our own findings regarding the cost-efficiency of distillation as the new methods would make the gap with annotation even bigger.
**Data and Compute.** Unlike most existing works that consider exclusively either annotation or computational cost, our study contextualizes these two superficially dissociated types of costs, both known to be expensive Ning et al. (2019); Hong et al. (2020); Hendrycks et al. (2021); Izsak et al. (2021); Obando-Ceron and Castro (2021); Minixhofer et al. (2022) yet unclear in how they compare to each other. Kirstain et al. (2022) compares scaling parameters against adding more labeled examples, but a compact model and a realistic cost ($) are not of interest to it. Our work resembles Bai et al. (2021) in terms of study framework, which explores how to optimally assign pre-training and
Figure 6: Comparisons with GPT-3.5 annotation. Given an initial human annotation \(N\)={\(0.1K\), \(1K\), \(5K\)} with the corresponding costs, $200 is additionally allocated for distillation or GPT-3.5 annotation (i.e., Human + GPT-3.5 Ann.).
annotation costs specifically for domain adaptation settings. Our focus is more on fine-tuning/distilling a compact model rather than pre-training from scratch and on exploring more general scenarios with diverse tasks.
## 7 Conclusion
In this work, we address a dilemma that practitioners often face when building a model: _given a limited budget, how to invest it to train a compact model in an economically efficient manner?_ We provide empirical evidence that (i) only scaling data using human annotators or GPT-3.5 for annotation may not be the most economical solution, and (ii) when adopting the distillation strategy, using a smaller amount of unlabeled data leads to Pareto efficient models with a smaller budget, while it becomes more beneficial to use larger amounts of unlabeled data as the budget increases. Furthermore, (iii) we demonstrate that in budget-constrained settings, a smaller final model could produce both better performance and more efficient inference. Given these findings, future work can explore different approaches to leveraging a large model's capability such as pruning for cost-efficient compact models.
## Limitations
This paper fundamentally considers a scenario in which practitioners rent cloud GPUs. In the case of practitioners hosting GPUs themselves, the two strategies explored in this study would not be directly comparable. However, in practice, when training a large model (w/ 8 A100 GPUs), we conjecture that renting GPUs could be preferred in many cases as scaling compute power is not trivial and can be prohibitively expensive Izsak et al. (2021); Obando-Ceron and Castro (2021); Minixhofer et al. (2022). It is also noteworthy that in the future, computational costs may become cheaper as new hardware advances, the pricing policy by cloud platform services changes, and more optimization techniques are applied. On the other hand, human annotation cost is likely to stay the same at best or become even more expensive. With cost changes in such a direction, the conclusion of our study will still hold, and the gap between the two strategies will only get larger.
As a compression method, our work focuses on knowledge distillation Hinton et al. (2015). However, it is worth noting that distillation can amplify societal bias in a compressed model Hooker et al. (2020); Silva et al. (2021) due to its limited capacity Ahn et al. (2022). Accordingly, practitioners are encouraged to additionally leverage bias mitigation techniques Ahn et al. (2022) when adopting distillation for real-world applications. On top of our finding that the distillation scheme is more cost-efficient than the data annotation approach, other efficient methods such as pruning Xia et al. (2022) may be investigated in future work to decide which is the most efficient solution among methods that leverage a large model. We believe, however, it should be noted that retaining performance after pruning away a very large portion of parameters (e.g., \(\sim\)99.5%: 11B \(\Rightarrow\) 60M) to obtain a compact model would not be trivial, as evidenced in prior work Michel et al. (2019).
## Acknowledgments
We thank Fan Bai and Jonathan Zheng for their assistance in estimating data annotation costs and collecting unlabeled data for WLP and Stanceosaurus, respectively. This material is based upon work supported by the NSF (IIS-2052498) and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via the HIATUS Program contract #2022-22072200004. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of NSF, ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
arXiv: 2307.08795 | Published: 2023-07-17T19:25:55Z | Link: http://arxiv.org/abs/2307.08795v1

# A Comparative Study of the Perceptual Sensitivity of Topological Visualizations to Feature Variations

Tushar M. Athawale, Bryan Triana, Tanmay Kotha, Dave Pugmire, Paul Rosen
###### Abstract
Color maps are a commonly used visualization technique in which data are mapped to optical properties, e.g., color or opacity. Color maps, however, do not explicitly convey structures (e.g., positions and scale of features) within data. Topology-based visualizations reveal and explicitly communicate structures underlying data. Although we have a good understanding of what types of features are captured by topological visualizations, our understanding of people's perception of those features is not. This paper evaluates the sensitivity of topology-based isocontour, Reeb graph, and persistence diagram visualizations compared to a reference color map visualization for synthetically generated scalar fields on 2-manifold triangular meshes embedded in 3D. In particular, we built and ran a human-subject study that evaluated the perception of data features characterized by Gaussian signals and measured how effectively each visualization technique portrays variations of data features arising from the position and amplitude variation of a mixture of Gaussians. For positional feature variations, the results showed that only the Reeb graph visualization had high sensitivity. For amplitude feature variations, persistence diagrams and color maps demonstrated the highest sensitivity, whereas isocontours showed only weak sensitivity. These results take an important step toward understanding which topology-based tools are best for various data and task scenarios and their effectiveness in conveying topological variations as compared to conventional color mapping.
Perception & cognition, computational topology-based techniques, comparison and similarity.
## 1 Introduction
The scale and complexity of scientific datasets have reached a level that makes directly communicating the details of the data through visualization exceedingly difficult [37]. Even if such data can be rendered, the complexity of the output far exceeds what humans can directly interpret [82, 87]. One approach to resolving this issue is to use topological data analysis (TDA) to summarize the data [79, 90]. TDA shows promise in this area because it provides a suite of tools that can summarize _n_-dimensional scalar field data as features or hierarchies in ways that are intuitive to humans [68, 89]. TDA-based visualizations have been fundamental to understanding feature variations in multiple real-world applications, including fluid dynamics [31, 60] and combustion [8]. Although what data features TDA can summarize is quite well understood (e.g., the relationship of critical points of a scalar function), _what TDA-based visualizations communicate to a user is not_. In this paper, we describe our study of the sensitivity of TDA-based visualization methods, namely, isocontours, Reeb graphs, and persistence diagrams, to a reference color mapping visualization in communicating feature variations within a scalar field.
In practice, color mapping (see Fig. 2a) is often used to directly render the scalar field, but such an approach leaves the function topology to be inferred by the viewer. On the other hand, TDA-based visualizations show abstractions of the scalar field topology. The isocontour visualization (see Fig. 2b) acts as a topographic map of the function by using a series of level sets that directly show changes in the topology of the function. Reeb graphs (see Fig. 2c) act as a skeletal summary of the isocontour visualization by tracking the evolution, i.e., creation, destruction, merging, and splitting, of the contour level sets and specifically
highlight critical points in the function. Finally, persistence diagrams (see Fig. 2d) abstract the topology of a Reeb graph by pairing critical points and presenting them in a scatterplot-like display. Each visualization has advantages and disadvantages in terms of intuitiveness, visual clutter, and the types of insights they purport to show.
These visualizations are frequently employed for comparing datasets in scientific applications. For example, Nieru et al. [61] compared inverse solutions for potentials on the heart surface through isocontours to gain insight into positional uncertainty of the source of arrhythmia (see Fig. 1a). Makram and Kamel [53] analyzed Reeb graphs of Morse functions mapped to the human skull to extract and compare patient-specific cephalometric landmarks (see Fig. 1b). Finally, Vidal et al. [88] compared persistence diagrams of time-varying datasets (see Fig. 1c) and developed a novel technique to compute the barycenter of persistence diagrams that is visually closer to individual time steps.
This paper provides a first-of-its-kind empirical evaluation of the efficacy of isocontours, Reeb graphs, and persistence diagrams against a reference of color mapping in performing a visual comparison task for scalar fields. Specifically, our evaluation looks at scalar fields defined on a 2-manifold triangular mesh with no boundary, embedded in 3D, that can be rotated and zoomed (similar to Fig. 1a and Fig. 1b). Further, the scalar fields are modeled as a mixture of a small number of Gaussian signals (a commonly used data model in scientific analysis, e.g., see [89, 91]), where _amplitude_ and _position_ refer to the peak and mean of a Gaussian, respectively (see Sect. 5.1 for more details). Since the amplitude and position of Gaussian signals are building blocks of our data model, by considering those two types of variations, we can gain a better sense of the saliency of information within these visualizations. To do this, we created and conducted a human-subject study involving 102 non-expert participants who performed a comparison task on these visualizations. Our high-level findings are:
* Our study confirmed some of our expectations for these visualizations. For instance, Reeb graphs were reasonably sensitive to positional variations, whereas persistence diagrams and color maps showed high sensitivity to amplitude variations.
* Some counterintuitive results also surprised us. For instance, color maps showed sensitivity to amplitude variations but not positional variations, and isocontours showed no sensitivity to positional variations and only weak sensitivity to amplitude variations.
* The results of this study provide an important step toward understanding the communication of topological features in these commonly utilized visualization techniques. Furthermore, our results provide important insights into the limitations of these visualizations and guidance on how techniques and systems can be improved to provide more reliable insights into data.
## 2 Prior Work
We briefly discuss related research, including the evaluation of visualizations, topology-based data visualization, and sensitivity analysis.
### _Evaluation of Visualization_
Although there are many approaches to visualizing complex scientific data, measuring how good a particular visualization is has been a nontrivial challenge for researchers [42]. Visualization quality is determined by the ability of users to effectively and efficiently extract knowledge from complex, large-scale data. Various visualization components (e.g., hardware requirements, software implementation costs, interactivity, accuracy of visualizations, and perception and cognition) have been evaluated to understand the overall impact of visualizations on decision-making [33, 58, 69, 86]. One promising approach to evaluating the effectiveness of visualizations in decision-making is conducting human-subject studies. Human-subject studies enable researchers to perform quantitative assessment of visualization parameters (e.g., errors and time associated with decisions, comparison with alternative visualizations, and the sensitivity of decisions to a person's domain and visualization expertise), which can potentially help reinforce the scientific foundation of visualization [43, 62].
Multiple studies have assessed the effectiveness of various scientific data visualizations. Color mapping is one of the fundamental methods used in data visualization because scalar data can be encoded into intuitive visual attributes, such as hue and opacity, to convey data features. The choice of a color map, however, strongly influences human perception [55, 92]. Liu and Heer [50] quantified the perceptual performance of users for different color maps by presenting tasks that required color comparisons. A similar study was performed to understand the effectiveness of 2D [45] and 3D [34] vector field visualizations by evaluating the quality of decisions in identifying flow features (e.g., critical points and direction of particle advection). An interesting approach to measuring cognitive load experienced when perceiving information from visualizations is through the analysis of EEG patterns of subjects [1].
### _Topology-Based Data Visualization_
As a key tool in scientific visualization, topological data analysis (TDA) [38, 79, 90] enables the understanding of abstract structures present in data. Visualization applications in numerous scientific domains (e.g., molecular dynamics [59], combustion science [9], fluid dynamics [47], and radio astronomy [72]) have demonstrated the strengths of TDA in unraveling complexities of the underlying data. A few of the fundamental topological descriptors for univariate data include level sets [51], critical points [57], Reeb graphs [26], contour trees [14], and persistence diagrams [20] (see Sect. 3 for technical details on these descriptors). Other topological descriptors include Morse-Smale complexes [28], which are widely used topological abstractions that segment the domain into cells with uniform gradient behavior. Reeb spaces [27], Jacobi sets [80], and fiber surfaces/feature level sets [13, 41] extend the concepts of Reeb graphs, critical points, and level sets, respectively, to multivariate data. Furthermore, there have been significant research developments in leveraging topological features for effective feature tracking of scientific data [7, 10, 19, 30, 66, 75, 83].
Previous works have also investigated novel metrics to quantitatively compare scalar fields via topological descriptors. For example, the distance between persistence diagrams can be measured as the cost of matching features between diagrams, e.g., using bottleneck [21] or Wasserstein [26] distance. Further, kernel-based methods [16, 70] have been developed to quantify the distance between persistence diagrams that are more suitable for machine learning tasks. An array of metrics have been devised, including interleaving distance [56], edit distance [76], stable distance [4], and Wasserstein distance [65], that quantitatively measure the distance between merge trees, the building blocks of contour trees and Reeb graphs. Bauer et al. [2] proposed functional distortion metric, and Lan et al. [46] leveraged the idea of intrinsic interleaving distances between merge trees [35] to compare Reeb graphs. The recent survey paper by Yan et al. [90] provides an overview of the state of the art in quantitative measures for comparing topological descriptors of scalar fields.
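For concreteness, the sketch below computes a q-Wasserstein distance between two persistence diagrams via the standard reduction to an assignment problem with diagonal augmentation; it is an illustrative implementation under an assumed L-infinity ground metric (libraries such as GUDHI or persim provide tested versions of these distances).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein(diag_a, diag_b, q=2):
    """q-Wasserstein distance between diagrams given as arrays of (birth, death)."""
    A, B = np.asarray(diag_a, dtype=float), np.asarray(diag_b, dtype=float)
    n, m = len(A), len(B)
    # L-infinity distance of each point to its diagonal projection is (death - birth) / 2.
    diag_a_cost = np.abs(A[:, 1] - A[:, 0]) / 2
    diag_b_cost = np.abs(B[:, 1] - B[:, 0]) / 2
    cost = np.zeros((n + m, n + m))
    # point-to-point costs (L-infinity ground metric)
    cost[:n, :m] = np.max(np.abs(A[:, None, :] - B[None, :, :]), axis=-1)
    # point-to-diagonal costs; matching two diagonal copies is free (zeros)
    cost[:n, m:] = np.repeat(diag_a_cost[:, None], n, axis=1)
    cost[n:, :m] = np.repeat(diag_b_cost[None, :], m, axis=0)
    rows, cols = linear_sum_assignment(cost ** q)
    return float((cost[rows, cols] ** q).sum() ** (1.0 / q))
```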
### _Sensitivity Analysis_
Since we evaluate the sensitivity of visualizations to amplitude and positional variations of scalar fields, we discuss how sensitivity analysis has been shown to help improve data representations and scientific analyses. Saltelli et al. [73] discussed in detail a broad spectrum of statistical techniques for understanding the sensitivity of functions to
Fig. 2: Demonstration of the sensitivity of the visualizations to variations of features compared with a baseline (middle). The positional variations (top) are reflected via the change in the position of hot spots in the color map and the orientation of features in isocontours and Reeb graphs, but they do not affect the persistence diagram. Amplitude variations (bottom) are reflected in the brightness in the color map, the distribution of lines in isocontours, and the movement of dots in the persistence diagram, but they do not prominently impact Reeb graphs. Our experimental stimuli (see supplement) contain multiple features and noise.
their input parameters. Cacuci et al. [11, 12] provided the theoretical foundation for sensitivity and analyzed sensitivity in various applications, including fluid dynamics and atmospheric science. In the context of visualization, Brechiesen et al. [6] analyzed the sensitivity of fiber tracking of diffusion tensor imaging to input parameters. Chan et al. [17, 18] proposed techniques to quantify the local sensitivity of high-dimensional data and encode such sensitivity into 2D scatterplots for improved data analyses. Finally, Liu et al. [49] encoded the sensitivity of direct volume rendering into the transfer function space to help users efficiently and effectively design transfer functions.
## 3 Background on TDA
Here, we briefly present technical definitions of level sets, Reeb graphs, and persistence diagrams, which we utilized in our study.
### _Contour Level Sets and Sublevel Sets_
Level sets are a fundamental tool for analyzing scientific data. Let \(f:\mathcal{M}\rightarrow\mathbb{R}\) be a scalar function defined on an \(m\)-dimensional manifold, \(\mathcal{M}\). A level set, \(\mathcal{C}\), of the function, \(f\), for the isolevel, \(f_{c}\), corresponds to a pre-image of function value, \(f_{c}\). Mathematically, \(C(f_{c})\equiv\{P:P\in\mathcal{M}\wedge f(P)=f_{c}\}\), where \(P\) denotes domain positions. In a related notion, the sublevel set, \(\mathcal{L}\), of function, \(f\), for isovalue \(f_{c}\in\mathbb{R}\) is \(\mathcal{L}(f_{c})\equiv\{P\in\mathcal{M}\,|\,f(P)\leq f_{c}\}\). The visualization of level sets, called _isocontouring_, is discussed in Sect. 4.2.
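Both notions have direct algorithmic counterparts for a piecewise-linear field on a triangle mesh; the sketch below is illustrative and assumes vertex values `f` indexed by vertex id, triangles given as vertex-index triples, and general position (no vertex lying exactly on the isolevel).

```python
import numpy as np

def sublevel_set_vertices(f, f_c):
    """Indices of vertices in L(f_c) = {P : f(P) <= f_c}."""
    return np.flatnonzero(np.asarray(f) <= f_c)

def level_set_segments(f, triangles, f_c):
    """For each triangle crossed by the isolevel f_c, interpolate the two crossing
    points along its edges (the triangle-mesh analogue of marching cubes)."""
    segments = []
    for tri in triangles:
        crossings = []
        for a, b in [(tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])]:
            fa, fb = f[a], f[b]
            if (fa - f_c) * (fb - f_c) < 0:        # edge straddles the isolevel
                t = (f_c - fa) / (fb - fa)         # linear interpolation weight
                crossings.append((a, b, t))        # crossing point on edge (a, b)
        if len(crossings) == 2:
            segments.append(tuple(crossings))
    return segments
```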
### _The Reeb Graph_
The Reeb graph is a structural abstraction that provides insight into the topological skeleton of scalar field data. Formally, for an \(m\)-dimensional manifold, \(\mathcal{M}\) (which has a Morse function, \(f\), mapped over the surface, \(f:\mathcal{M}\rightarrow\mathbb{R}\)), the Reeb graph tracks the connected components of the level sets as \(f\) is swept from \(-\infty\rightarrow+\infty\) [26]. The Reeb graph of a Morse function, \(f\), defined on a simply-connected manifold, \(\mathcal{M}\), is loop-free and is referred to as a contour tree [15]. Fig. 3 (left panel) summarizes how connected components of level sets evolve as the iso-level, \(f_{z}\), is swept. In Fig. 3a, each connected component of the level set for iso-level, \(f_{z}\), is denoted by \(C_{zi}\) with \(i\in\{1,\dots,N\}\), where \(N\) denotes the number of connected components. Each connected component, \(C_{zi}\), can be collapsed to a single point, \(G_{zi}\), as illustrated in Fig. 3b. The collapsed points, \(G_{zi}\), are connected to derive the Reeb graph in Fig. 3c.
As the evolution of the connected components, \(C_{zi}\), is tracked, special events change the number, \(N\), of connected components. These events are associated with critical points in the topology of \(f\) and are positions on the manifold where \(\nabla f=0\) (see Fig. 3 (center panel)). In the case of a local minimum, a new edge begins (see Fig. 3d), while an edge ends for a local maximum (see Fig. 3e). In the case of saddle points, two or more edges can merge into one (see Fig. 3f), or one edge can split into multiple branches (see Fig. 3g) [36]. Therefore, the structure of the Reeb graph is determined by the critical points of the scalar field. We calculate the Reeb graph using Recon [25].
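Since the Reeb graph's structure is determined by critical points, a common first step in practice is to classify the vertices of a piecewise-linear field. The sketch below does this by counting connected components of each vertex's lower and upper link; it is an illustrative sketch (assuming a closed 2-manifold triangle mesh and distinct vertex values), not the Recon algorithm used in the paper.

```python
from collections import defaultdict

def classify_vertices(f, triangles):
    """Label each vertex as minimum, maximum, saddle, or regular."""
    link_edges = defaultdict(list)                 # vertex -> opposite edges of incident triangles
    for a, b, c in triangles:
        link_edges[a].append((b, c))
        link_edges[b].append((a, c))
        link_edges[c].append((a, b))

    def components(vertex, keep):
        """Number of connected components of the link restricted to `keep`."""
        parent = {v: v for v in keep}
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u, w in link_edges[vertex]:
            if u in keep and w in keep:
                parent[find(u)] = find(w)
        return len({find(v) for v in keep})

    labels = {}
    for v in link_edges:
        neighbors = {u for edge in link_edges[v] for u in edge}
        lower = {u for u in neighbors if f[u] < f[v]}
        upper = {u for u in neighbors if f[u] > f[v]}
        if not lower:
            labels[v] = "minimum"
        elif not upper:
            labels[v] = "maximum"
        elif components(v, lower) > 1 or components(v, upper) > 1:
            labels[v] = "saddle"
        else:
            labels[v] = "regular"
    return labels
```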
### _Persistence Diagrams_
The structure of the Reeb graph can be summarized with a multiset of points, known as a _persistence diagram_. Persistence diagrams highlight the more prominent features in a Reeb graph and are a stable representation of the function [20, 26].
Points of the persistence diagram are formed by pairing critical points in one of three configurations, saddle-minimum, saddle-maximum, and saddle-saddle pairs. Without a loss of generality, we describe saddle-minimum pairing, which depends on the sublevel sets of the function and corresponds to the elder's rule [26]. As \(f_{z}\) increases continuously from \(-\infty\rightarrow+\infty\), let \(B\) be a saddle point of the function with function value \(f_{z}=f(B)\). Let \(A\) be the last unpaired minimum added to the sublevel set component of B, \(\mathcal{L}(f_{z}=f(B))\). \(A\) and \(B\) are grouped together as a _birth-death pair_ feature, where \(A\) marks the birth point of the feature, and \(B\) marks the death point of the feature.
Of the three types of birth-death pairs that exist in Reeb graphs, saddle-minimum and saddle-maximum pairs are identified using an approach called _branch decomposition_[63], whereas saddle-saddle pairs, formed by tunnels in the manifold, are identified using an extension of branch decomposition [84]. A measure, known as _persistence_, \(d=|f(B)-f(A)|\), is applied to the pair, as shown in Fig. 3h.
Persistence is used to classify the importance of a feature and can be used to differentiate topological signal from noise. One method of denoising a Reeb graph is to prune noise via _persistence simplification_ [29], as illustrated in Fig. 3i. For a given feature to be considered noise, its persistence, \(d\), must be less than a certain threshold, \(\epsilon\). The features are removed from the output graph by deleting the associated nodes and reconnecting the graph [24]. Selecting an optimal threshold is difficult to automate because being too aggressive can lead to the removal of important features, while being too relaxed may leave the data noisy. Therefore, users often rely on manual selection and tuning of \(\epsilon\) to gain insights from Reeb graphs.
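A minimal sketch of the pruning criterion is given below: given a list of birth-death pairs, only those whose persistence exceeds the threshold \(\epsilon\) are kept. A full implementation would also reconnect the simplified Reeb graph [24]; the sketch only filters the pairs.

```python
def simplify_pairs(pairs, eps):
    """Keep only birth-death pairs whose persistence |death - birth| exceeds eps."""
    return [(birth, death) for birth, death in pairs if abs(death - birth) > eps]

pairs = [(0.00, 0.95), (0.10, 0.14), (0.40, 0.43), (0.20, 0.85)]
print(simplify_pairs(pairs, eps=0.05))  # the two low-persistence pairs are dropped
```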
## 4 Visualizations Evaluated
We compare a reference visualization type--color maps--to three TDA-based visualizations--isocontours, Reeb graphs, and persistence diagrams--that reveal the topology of a function applied to a manifold.
### _Reference Visualization: Color Maps_
Color maps are a commonly used visualization and are seen as an intuitive way to interpret data (see Fig. 4a) [55, 92]. Given a function, \(f\), applied to the surface of a manifold, \(\mathcal{M}\), with a global minimum of \(f_{min}\) and a global maximum of \(f_{max}\), a color scale, \(K\), is a set of colors in which \(f_{min}\) and \(f_{max}\) are mapped to \(K_{min}\) and \(K_{max}\), respectively. Practically, color maps are implemented with a discrete set of colors, and in general, a given input, \(f_{i}\), is linearly interpolated to convey the continuity of the function. Multi-hue color maps, e.g., viridis, are instead formed by tracing curves through perceptually-uniform color models, e.g., CIELAB [48]
Fig. 3: Reeb graph construction and critical point types are described. Left Panel: (a) The isolevels, \(f_{\varsigma}=(f_{\varsigma 1},f_{\varsigma 2},f_{\varsigma 3})\), of function, \(f\), derive level sets, \(C_{\varsigma}=(C_{\varsigma 1},C_{\varsigma 2},C_{\varsigma 3})\). (b) The number of connected components of a level set for each function value depends on the critical points of the function; two of the isolevels have a level set with one connected component, whereas the third has two. The center point for each connected component, \(C_{\varsigma i}\), is labeled as \(G_{\varsigma i}\). (c) By tracking the points, \(G_{\varsigma i}\), from \(-\infty\rightarrow+\infty\), the Reeb graph structure emerges, which summarizes the topology succinctly. Center Panel: The four types of critical points, including (d) a local minimum in the function, which has one outgoing edge; (e) a local maximum, which has one incoming edge; (f) the merge saddle, which has two incoming edges that merge into a single outgoing edge; and (g) the split saddle, which has one edge split into two. Right Panel: (h) For two paired critical points, \(A\) and \(B\), the persistence between them is \(d=|f(B)-f(A)|\), where \(f\) is the function applied to the manifold. (i) To reduce visual clutter, the Reeb graph can be pruned by removing any birth-death pair with \(d<\epsilon\).
and CAM02-UCS [52]. Fig. 4a shows example visualizations with \(f_{min}\) mapped to a purple hue and \(f_{max}\) mapped to a yellow hue.
**Design:** One challenge for color map visualizations is selecting the best color map to use. For example, the size of the physical marks used affects what the user perceives [77]. Further, perceived color differences are not necessarily uniform--colors may be mathematically equidistant, while perception is biased (e.g., pure green may appear brighter than pure blue) [50, 77]. In a recent study, Liu et al. [50] compared various color maps and concluded that the _viridis_ map was the most effective at presenting data in a way that enabled users to ascertain features correctly. Cooper et al. [22] presented a list of color maps that maintain better perceptual ordering than the rainbow color map, among which viridis was a candidate. The luminance component of viridis, which carries magnitude information in human vision, increases monotonically (though not strictly proportionally) with the data value. Moreover, for data with low spatial frequency, changes in the saturation and hue of viridis are more effective than a grayscale color map [71]. Since we model prominent data features as a mixture of Gaussian distributions with low spatial frequency (see Sect. 6.6), the viridis color map is a reasonable choice for a reference implementation.
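The mapping itself is simple: the sketch below (illustrative only; it uses matplotlib's built-in viridis rather than the Three.js rendering used in the study) normalizes the sampled values to \([0,1]\) and looks up the corresponding viridis colors. It assumes \(f_{max}>f_{min}\).

```python
import numpy as np
import matplotlib.cm as cm

def viridis_colors(f_vals):
    """Map scalar values linearly to viridis RGB (f_min -> purple, f_max -> yellow)."""
    f_vals = np.asarray(f_vals, dtype=float)
    t = (f_vals - f_vals.min()) / (f_vals.max() - f_vals.min())  # normalize to [0, 1]
    return cm.viridis(t)[:, :3]                                  # drop the alpha channel

print(viridis_colors([0.2, 0.5, 1.3]))
```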
**Interpretation:** When observing a color map visualization, the user should look for cold spots (purple hue) and hot spots (yellow hue) to understand the positions of local minima and maxima (critical points) of the scalar field. Fig. 4a shows cool and hot spots on the rabbit model.
**Interaction:** Interactive rotation and zooming help overcome occlusion and enable investigating the data in finer detail.
### _Topology-Based Visualization: Isocontours_
The visualization of level sets for different isovalues (see Sect. 3.1), called isocontouring, provides insight into the structural evolution of a function as well as the distribution of isolevels across the manifold on which the data are sampled (see Fig. 4b).
**Design:** Let \(f_{min}\) and \(f_{max}\) denote the global minimum and global maximum values of a function sampled on a manifold, \(\mathcal{M}\). The interval \([f_{min},f_{max}]\) is partitioned into \(L\) equally spaced isolevels. Then, a series of level sets for the isolevels is extracted using the marching triangles algorithm [39], and they are overlaid on top of the model.
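A simplified sketch of this procedure is shown below: it computes \(L\) equally spaced isolevels and extracts the level-set line segments of each triangle by linear interpolation along crossing edges. It is illustrative only; it ignores degenerate cases (a vertex lying exactly on an isolevel) that a robust marching triangles implementation [39] handles, and placing the \(L\) levels strictly inside \([f_{min},f_{max}]\) is our own assumption.

```python
import numpy as np

def isolevels(f_vals, L):
    """L equally spaced isolevels strictly inside [f_min, f_max]."""
    f_min, f_max = float(np.min(f_vals)), float(np.max(f_vals))
    return [f_min + (i + 1) * (f_max - f_min) / (L + 1) for i in range(L)]

def marching_triangles(vertices, triangles, f_vals, iso):
    """Line segments of the level set f = iso on a triangle mesh."""
    V, f = np.asarray(vertices, float), np.asarray(f_vals, float)
    segments = []
    for tri in triangles:
        points = []
        for a, b in [(tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])]:
            fa, fb = f[a], f[b]
            if (fa - iso) * (fb - iso) < 0:          # edge crosses the isolevel
                t = (iso - fa) / (fb - fa)           # linear interpolation weight
                points.append(V[a] + t * (V[b] - V[a]))
        if len(points) == 2:
            segments.append((points[0], points[1]))
    return segments
```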
**Interpretation:** When observing function topology with isocontours, the most important features are the visual patterns that arise as the contours wrap around the model. Formation or merging of contours will occur around the critical points, so identifying those features may create a better picture of the function's topology. The topological evolution of isocontours is illustrated with red-dotted boxes in Fig. 4b.
**Interaction:** We define the level of detail for isocontours as the number of isolevels, i.e., \(L\), where \(L\) is a user-selectable parameter. Larger values of \(L\) result in higher level of detail, and smaller values of \(L\) result in lower level of detail (see the skull dataset example in Fig. 4b). The parameter \(L\) can be modified to change the level of detail, thereby enhancing the perception of the structure of the function. The view can also be rotated and zoomed to overcome occlusion and enhance details.
### _Topology-Based Visualization: Reeb Graphs_
Reeb graphs (see Fig. 4c) are used to visualize the topological structure as a skeleton (see Sect. 3.2).
**Design:** Our visualization embeds the Reeb graph onto the model, similar to how it is handled in the Topology Toolkit [54, 81]. The nodes of the Reeb graph, which represent the critical points of the function, are overlaid on the model. The edges represent the flow of the function (contours) between critical points. Therefore, a naive edge connection scheme would not suffice because it would not represent the flow of data correctly. We utilized the centroids of isocontours to draw the trajectory of the arcs throughout the model.
**Interpretation:** Since the Reeb graph is embedded in the model, its features can be directly correlated with the function on the model. The primary features to look for in a Reeb graph are the positions of critical points, which are represented as blue dots on the model, and the arc trajectories between blue dots, which are represented as red curves.
**Interaction:** We define the level of detail in the context of the Reeb graph as the number of arcs of the graph. For high-density noise, the Reeb graph visualization might look cluttered owing to the high level of detail arising from noise, as shown in the bottom right image of the skull in Fig. 4c. Visual clutter can be interactively mitigated by adjusting the level of detail using persistence-guided pruning [29], which removes low persistence critical point pairs from the visualization, as shown in the bottom left image of the skull in Fig. 4c. Notably, persistence-based noise reduction may remove longer edges before short ones, which may be counter-intuitive. Occlusion and clutter can be further mitigated by rotating or zooming the view.
### _Topology-Based Visualization: Persistence Diagrams_
Persistence diagrams (see Fig. 4d) are scatterplots that can be used to illustrate the birth-death pairs of topological data for contour trees, Reeb graphs, and other TDA-based techniques. A birth-death pair corresponds to the pair of minimum/maximum and saddle critical points that result in the creation and disappearance of a connected component, respectively, as described in Sect. 3.3. Persistence diagrams provide a stable view of the function from a topological perspective, as they show the relationship of the critical points in the Reeb graph.
**Design:** To generate the persistence diagrams, Reeb graphs are generated for the dataset, and the birth-death pairs are then identified, as described in Sect. 3.3. The pairs are then visualized in the persistence diagram. The horizontal axis of a persistence diagram is labeled as _birth_ and denotes the appearance of a feature. The vertical axis is labeled as _death_ and denotes the disappearance of a feature. The blue dots represent saddle-minimum or saddle-maximum pairs. The red dots represent saddle-saddle pairs, which are formed by holes/tunnels in the model. The diagonal line indicates where \(birth=death\), and the
Fig. 4: Example visualizations of the rabbit and skull models. (a) The color map visualization uses viridis [50], which has a purple hue at the global minimum and a yellow hue at the global maximum. (b) Important isocontour features are highlighted on the rabbit, while the skull is visualized at low and high levels of detail. (c) The Reeb graph shows blue spheres at critical point positions and red arcs that indicate the evolution of contours between critical points. The skull is rendered with and without pruning, guided by the persistence of critical points. (d) In the persistence diagram, the blue dots denote the saddle-minimum or saddle-maximum pairs, and red dots denote the saddle-saddle pairs. The distance from the diagonal is a quantified measure of persistence. Persistence diagrams can be simplified by removing low-persistence features, which are closer to the diagonal.
vertical distance from the diagonal is a quantified measure of persistence. Points near the diagonal have a low persistence (i.e., they are regarded as noise), whereas those farther away are more important. High- and low-persistence features for the rabbit are highlighted in Fig. 4(d).
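For illustration, a minimal matplotlib sketch of such a diagram is given below (the study rendered persistence diagrams with D3.js); the optional `eps` band marking low-persistence points near the diagonal is our own addition.

```python
import matplotlib.pyplot as plt

def plot_persistence_diagram(min_max_pairs, saddle_saddle_pairs, eps=None):
    """Scatter plot of birth-death pairs; the dashed diagonal marks birth = death."""
    fig, ax = plt.subplots()
    if min_max_pairs:
        ax.scatter(*zip(*min_max_pairs), c="blue", label="saddle-min / saddle-max")
    if saddle_saddle_pairs:
        ax.scatter(*zip(*saddle_saddle_pairs), c="red", label="saddle-saddle")
    all_pairs = min_max_pairs + saddle_saddle_pairs
    lo = min(b for b, _ in all_pairs)
    hi = max(d for _, d in all_pairs)
    ax.plot([lo, hi], [lo, hi], "k--", linewidth=1)
    if eps is not None:                               # band of low-persistence points
        ax.fill_between([lo, hi], [lo, hi], [lo + eps, hi + eps], alpha=0.2)
    ax.set_xlabel("birth"); ax.set_ylabel("death"); ax.legend()
    return fig

fig = plot_persistence_diagram([(0.0, 0.9), (0.3, 0.4)], [(0.5, 0.7)], eps=0.05)
```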
**Interpretation:** When evaluating data with the persistence diagram, one must note the points far away from the diagonal and their color/type, as those points denote the more persistent features. Secondarily, one may consider the distribution and patterns of points along the diagonal (e.g., feature clustering for the rabbit in Fig. 4(d)).
**Interaction:** We define the level of detail in the context of persistence diagrams as the number of points representing critical point pairs. We provide the functionality to interactively set a persistence threshold, which removes low-persistence points to reveal different levels of detail, as depicted at the bottom of Fig. 4(d).
## 5 Sensitivity Analysis of Topological Visualizations
We describe the method used for measuring the sensitivity of visualizations to variations in data using a crowdsourced experiment.
### Data Model
For our study, we model prominent data features as a mixture of 3D Gaussian distributions mapped onto a 2-manifold. Specifically, a given dataset has two components. The first is a 2-manifold triangle mesh, \(\mathcal{M}\), embedded in 3D space. The second component is a scalar field, \(f\), which is a mixture of Gaussian distributions and noise defined on the vertices of the mesh. Gaussian mixtures are commonly utilized in scientific literature to model data (e.g., in Vidal et al. [88] and Yan et al. [91]), and scalar fields on 2-manifolds are observed in many real-world applications (see Fig. 1).
#### 5.1.1 Scalar Field Generation
Scalar values were generated per vertex using a combination of several data features and noise. The value at each vertex was calculated as
\[f(x,y,z)=\mathcal{N}(x,y,z)+\sum_{i=1}^{NOF}\mathcal{G}_{i}(x,y,z),\]
where \(\mathcal{N}\) was the noise function, \(NOF\) (number of features) was the number of salient data features, and \(\mathcal{G}_{i}\) was the data feature function.
**Data Features:** To simplify the definition of data features, isotropic 3D Gaussian functions were used:
\[\mathcal{G}_{i}(x,y,z)=a_{i}\cdot\exp\left(-\left(\frac{(x-x_{i})^{2}+(y-y_{i} )^{2}+(z-z_{i})^{2}}{2\sigma^{2}}\right)\right),\]
where \(a_{i}\) was the amplitude of the feature, which was 1 by default; \((x_{i},y_{i},z_{i})\) was the source position, which was set using a random vertex on the model; and \(\sigma\) was the standard deviation, which was fixed to 1.
**Noise:** Noise was added using the following function:
\[\mathcal{N}(x,y,z)=S_{N}\cdot Perlin(x,y,z),\]
where \(S_{N}\) is the amplitude of the noise, and \(Perlin\) is the noise function. The level of noise was specified using an input signal-to-noise ratio (SNR). Because the default feature amplitude was 1, \(S_{N}=1/SNR\). \(Perlin\) was a standard Perlin noise function [64], the frequency of which was set during our data calibration (see Sect. 6.5).
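The sketch below illustrates this construction on an array of vertex positions. It is not the generation script used in the study: in particular, the Perlin noise term is replaced by a simple sum of random low-frequency sinusoids as a stand-in, and the parameter names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_feature(vertices, source, amplitude=1.0, sigma=1.0):
    """Isotropic 3D Gaussian feature centered at a source position."""
    d2 = np.sum((vertices - source) ** 2, axis=1)
    return amplitude * np.exp(-d2 / (2.0 * sigma ** 2))

def smooth_noise(vertices, snr, n_waves=8):
    """Stand-in for Perlin noise: random low-frequency sinusoids scaled by 1/SNR."""
    freqs = rng.normal(size=(n_waves, 3))
    phases = rng.uniform(0, 2 * np.pi, size=n_waves)
    waves = np.sin(vertices @ freqs.T + phases)         # shape (n_vertices, n_waves)
    return (1.0 / snr) * waves.mean(axis=1)

def scalar_field(vertices, n_features, snr):
    """Per-vertex scalar values: noise plus a sum of Gaussian features."""
    vertices = np.asarray(vertices, dtype=float)
    f = smooth_noise(vertices, snr)
    for _ in range(n_features):
        source = vertices[rng.integers(len(vertices))]   # random source vertex
        f += gaussian_feature(vertices, source)
    return f
```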
### Sensitivity for 1D Gaussian distributions
Without a loss of generality, we describe visualization sensitivity in the context of 1D Gaussian distributions. At the end of the section, we discuss how our observations extend to the 3D Gaussian distributions described in the previous section.
#### 5.2.1 Position and Amplitude Variation
We are interested in evaluating the sensitivity of the topological visualizations described in Sect. 4.2-Sect. 4.4 as compared to the reference color mapping (Sect. 4.1) to changes in the input function. In particular, we evaluate the sensitivity of these visualization techniques to two variation types. First, we are interested in changes in the _position_ of topologically important features of the data. For example, in Fig. 5(a), the feature of interest is moved from \(A\) to \(B\). Second, we are interested in sensitivity to changes in the scale or _amplitude_ of topologically important features. As shown in Fig. 5(b), the maximum value of the peak decreases from \(A\) to \(B\).
#### 5.2.2 Measuring Sensitivity
To test the sensitivity of the visualizations, we consider a scenario in which a participant must pick which of two experimental visualizations is the most similar to a baseline visualization. Let \(D_{B}\) denote the baseline dataset corresponding to the original position and amplitude parameters \(B\). Let \(D_{A_{0}}\) and \(D_{A_{1}}\) denote the two datasets corresponding to parameters \(A_{0}\) and \(A_{1}\), respectively, representing either variations of amplitude or position for a single feature. Let \(V_{A_{0}}\), \(V_{A_{1}}\), and \(V_{B}\) be visualizations generated using the _same_ visualization technique, either color maps, isocontours, Reeb graphs, or persistence diagrams. We present a participant with these visualizations, and the participant has to decide which of \(V_{A_{0}}\) and \(V_{A_{1}}\) is closer to \(V_{B}\).
Consider the 1D examples in Fig. 6. For positional variation (see Fig. 6(a)), given a baseline visualization for the position parameter \(B\), the participant must select whichever of the visualizations for parameters \(A_{0}\) or \(A_{1}\) they believe is more similar to the baseline visualization. Similarly, for the amplitude variation (see Fig. 6(b)), the participant must select whichever of the visualizations for parameters \(A_{0}\) or \(A_{1}\) is more similar to the baseline visualization. For amplitude variation, either \(A_{0}\) or \(A_{1}\) is always larger than \(B\), and the other is smaller than \(B\) to ensure the participants are not directly comparing \(A_{0}\) and \(A_{1}\).
To evaluate the sensitivity of a visualization to variations in features, we measure how often participants can correctly select whichever of \(V_{A_{0}}\) and \(V_{A_{1}}\) is closer to \(V_{B}\), when considering the following distance measure between the dataset parameters:
\[A^{\prime}=\left|d(A_{0},B)-d(A_{1},B)\right|,\]
where \(d\) indicates the distance between the positional or amplitude parameters. For positional variations, \(d(A,B)\) is the geodesic distance between the location of the feature being moved in \(A\) and \(B\). For amplitude variations, the absolute difference between the amplitude of the feature being modified is used (i.e., \(d(A,B)=|A-B|\)).
Intuitively, \(A^{\prime}\) measures how different in distance or amplitude the two variations are from the baseline. A larger \(A^{\prime}\) value implies that one stimulus is much more similar to the baseline and would therefore be more likely to be selected by a participant. Consider the example shown in Fig. 6. For both positional and amplitude variations, it is not easy to decide which visualization is perceptually more similar to the baseline visualization when \(d(A_{0},B)\approx d(A_{1},B)\) (see Fig. 6(top)). In contrast, when \(d(A_{1},B)\) is significantly smaller than \(d(A_{0},B)\) (see Fig. 6(bottom)), it is relatively easy to visually decide that the 1D plot for parameter \(A_{1}\) is closer to the baseline visualization.
Generally speaking, the sensitivity of a quantity of interest \(Y\) with respect to a parameter \(X\) is defined as the first derivative \(\partial Y/\partial X\) and can be estimated through linear regression for the observed data [73]. In perceptual psychology, however, the Weber-Fechner Law [32] notes that the minimal detectable increase in a stimulus is proportional to the stimulus magnitude (Weber's law), and the perceived intensity is proportional to the logarithm of the actual intensity (Fechner's law). Thus, the sensitivity of visualizations is captured by the change in the accuracy of user decisions with respect to the change in parameter \(A^{\prime}\) using logistic regression (see the analysis description in Sect. 6.7).
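For illustration, the following sketch fits such a logistic model of response correctness against \(A^{\prime}\) and reports the slope and its \(p\)-value. The use of statsmodels is our own assumption, since the study does not specify its regression implementation, and the toy data are made up.

```python
import numpy as np
import statsmodels.api as sm

def fit_sensitivity(a_prime, correct):
    """Logistic regression of correctness (0/1) on A'; a positive, significant
    slope indicates sensitivity to increases in A'."""
    X = sm.add_constant(np.asarray(a_prime, dtype=float))
    model = sm.Logit(np.asarray(correct, dtype=float), X).fit(disp=0)
    return model.params[1], model.pvalues[1]

# Toy data in which accuracy improves as A' grows.
rng = np.random.default_rng(0)
a_prime = [0.15, 0.30, 0.45, 0.60] * 50
correct = [rng.random() < 0.45 + 0.6 * a for a in a_prime]
print(fit_sensitivity(a_prime, correct))
```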
### Generating Positional and Amplitude Variations in 3D
To test the sensitivity, three scalar functions are generated for each experimental trial by first selecting the parameters, as described in Sect. 6.6. Initially, a baseline dataset DB is generated. The two additional test datasets are generated by first selecting one feature at random to vary by position or amplitude based on the test being performed. Then, for positional variations, the associated feature Gaussian is moved
Figure 5: 1D examples of the feature variations studied, including (a) position variation where the peak at \(A\) moves right to \(B\), and (b) amplitude variation where the amplitude of the peak \(A\) is reduced in \(B\).
to \(A_{0}\) or \(A_{1}\) by selecting (at random) a vertex located at a target geodesic distance away from the baseline location. We compute the geodesic distance on the mesh manifold by using Dijkstra's algorithm on the vertices of the mesh, where edge weights are the Euclidean distance between vertices. For amplitude variations, the selected feature had its amplitude modified to the target amplitude of \(A_{0}\) or \(A_{1}\). All other features remained unchanged.
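A minimal sketch of this step is shown below: geodesic distances are approximated with Dijkstra's algorithm on the mesh edge graph via SciPy, and a displaced vertex is chosen near the target distance. Unlike the study, which samples such a vertex at random, this sketch simply picks the closest match.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def vertex_geodesic_distances(vertices, triangles, source):
    """Approximate geodesic distances from `source`: Dijkstra on the mesh edges,
    with Euclidean edge lengths as weights."""
    V = np.asarray(vertices, dtype=float)
    edges = set()
    for a, b, c in triangles:
        edges.update({tuple(sorted(e)) for e in [(a, b), (b, c), (a, c)]})
    rows, cols = zip(*edges)
    weights = np.linalg.norm(V[list(rows)] - V[list(cols)], axis=1)
    graph = coo_matrix((weights, (rows, cols)), shape=(len(V), len(V)))
    return dijkstra(graph, directed=False, indices=source)

def pick_vertex_at_distance(vertices, triangles, source, target_dist):
    """Vertex whose geodesic distance from `source` is closest to the target."""
    d = vertex_geodesic_distances(vertices, triangles, source)
    return int(np.argmin(np.abs(d - target_dist)))
```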
## 6 Experiment
### Hypotheses
To understand the sensitivity of the four visualizations, we consider the topological aspects of the function that each visualization shows. Fig. 2 illustrates the sensitivity of each visualization to positional and amplitude variations, assisting in the formulation of our hypotheses.
**Color Maps:** For color mapping, we have chosen the viridis color map (see Sect. 4.1 for more details). Thus, the brightest spot in the color map should also indicate the location of the peak of the feature. Therefore, we hypothesize that _color maps will be sensitive to variations in both the amplitude and the position of features._
**Isocontours:** Similar to color maps, the concentric rings of isocontours show where function extrema occur. However, the rings themselves provide no direct indication of the function value. Therefore, we hypothesize that _isocontours will be sensitive to variations in the position of features but not sensitive to the amplitude of features_.
**Reeb Graphs:** Reeb graphs show the interconnectedness of critical points in the function. However, they do not provide any indication of the value of those critical points. Therefore, we hypothesize that _Reeb graphs will be sensitive to variations in the position of features but not sensitive to the amplitude of features._
**Persistence Diagrams:** Persistence diagrams show only the birth and death of critical features, which correspond to the amplitude of a birth-death pair. Positional variations are visible indirectly only when pairs switch. Therefore, we hypothesize that _persistence diagrams will be sensitive to variations in the amplitude of features but not sensitive to their position._
### Experimental Interface
The experiment consisted of a web page with four demographic questions, a tutorial describing all of the visualization types and interactive capabilities (e.g., rotation and zooming), four practice questions, 24 experimental trials, and a post-experiment questionnaire. The tutorial and practice questions serve to familiarize participants, especially those with no expertise or limited prior experience in visualization, with the four visualization types. Specifically, we present guidelines for participants summarizing what specific patterns to look for when decoding the visualizations (similar to the interpretation descriptions in Sect. 4). Please refer to pages 4-10 of the supplementary material illustrating the tutorial and practice questions for a single user. The responses to our post-experiment questionnaire (see pp. 39-47 of the supplementary material and Sect. 7.4) capture the thought process of participants and the difficulties they encountered when interpreting the four visualization types and making decisions.
The main experimental interface (see Fig. 7) consists of the three visualizations, always of the same type, including a baseline in the center and experimental dataset choices to the left and right (see Fig. 7(a)). An orbiting camera with zoom was used so participants could view the models from every angle, which enabled the participants to observe features that would otherwise be hidden and to investigate details. The navigation was coordinated to ensure that changes to orientation or zoom were applied to all three visualizations. Visualizations that allow level-of-detail adjustments had a slider for this purpose (see Fig. 7(b)). We asked the participants which of the two experimental visualization choices was more similar to the baseline (see Fig. 7(c)). The participants had to make one of three choices: (1) left, (2) right, or (3) unsure if they were unable to answer with any confidence. The correct answer (i.e., the more similar visualization) was randomly shuffled between the left and right sides. For each trial, the participant was allowed up to 60 seconds, after which the experiment automatically advanced.
### Model Selection
Models were selected to balance the desire for realistically complex geometry while also limiting cognitive strain caused by interpreting the model. We, therefore, selected familiar 3D shapes, namely, biological models from humans and animals (see examples in the supplemental material). To avoid categorical bias, we selected four models from each of the categories: human busts, human anatomical models, human extremities, land animals, sea animals, and birds. The goldenRetriever, lion, rabbit, horse, skull, tooth, turtle, shark, fish, owl, parrot, and bird come from the Princeton Shape Benchmark repository [74]. The bimba, bust, windfish, handFist, and handPointPrep are from the AIM@SHAPE repository [23]. The heart and kidney are from the University of Michigan's BlueLink AnatomyTOOL [85]. The duck, frederic, lincoln, foot1, and foot2 were acquired from Free3D [67]. The models were prepared and cleaned using Blender 3D's [3] mesh retopology and malformed mesh detection tools. All models were then normalized to fit in a unit box at the origin.
### Variables
The independent variables of our experiment include the 3D model; visualization types (color maps, isocontours, Reeb graphs, or persistence diagrams); variation type (position or amplitude); values of \(A_{0},A_{1}\), and \(A^{\prime}\); SNR; and the NOF. However, we specifically focus on evaluating visualization types, variation types, and \(A^{\prime}\) values. The dependent variables are the _accuracy_ and _sensitivity_ of selection, time taken, mouse interactions (rotate and zoom), and level-of-detail interactions.
Figure 6: The illustration of 1D visualization sensitivity to (a) positional and (b) amplitude variations. For variation types where \(d(A_{0},B)\approx d(A_{1},B)\) (top), it is difficult to visually determine whether the 1D plot for \(A_{0}\) or \(A_{1}\) is closer to the 1D plot for \(B\). In contrast, when \(d(A_{0},B)\) is considerably greater than \(d(A_{1},B)\) (bottom), it is easier to determine that the plot for parameter \(A_{1}\) is closer to the baseline than the plot for \(A_{0}\).
Figure 7: Example of the experimental interface on Reeb graphs. (a) The three visualizations include a baseline in the center and experimental choices to the left and right. The navigation was coordinated such that changes in orientation or zoom were applied to all three visualizations. (b) Visualizations that allow level-of-detail adjustments had a slider for this purpose. (c) The participant had to make one of three choices.
### Parameter Calibration
Determining parameter ranges was an important problem because the data complexity had to be tuned to ensure that the task was neither too difficult nor too easy for all visualization types. In particular, we were focused on identifying reasonable values of \(A^{\prime}\), SNR, and NOF. Our parameters went through a four-stage calibration process. (1) Initially, parameters were set using the research team's observations of typical values. (2) Next, we ran an internal study using the research team and lab members to check the difficulty of tasks and adjust accordingly. (3) Then, we ran a small-scale (40-person) preliminary study on Amazon Mechanical Turk. (4) A final internal study was repeated using the research team and lab members. At each stage, parameters were adjusted to calibrate difficulty and experiment length and to improve the experimental interface.
### Data Generation
A Python script was used to generate the model and function configuration for all participants in the experiment. The within-subject design consisted of 24 trials. Each participant was presented with trials that had the following characteristics:
* 1 trial per 3D model
* 6 trials per visualization type: {color map, isocontour, Reeb graph, persistence diagram}
* 12 trials per variation type: {position, amplitude}
* 6 trials per \(A^{\prime}\) value: {\(0.15,0.30,0.45,0.60\)}
* 8 trials per SNR value: {\(80,90,100\)}
* 3 trials per NOF value in the range \([2,9]\)
Thus, we ensured that each user was presented with the datasets generated using a balanced distribution of parameters, including visualization type, function type, \(A^{\prime}\), SNR, and NOF. Additionally, \(A_{0}\) was randomly selected such that \(|A_{0}|\in[0.1,0.9]\), and \(A_{1}\) was selected such that \(|A_{1}|\in[0.1,0.9]\) while also satisfying the \(A^{\prime}\) requirement. For amplitude variation, \(A_{0}\) is added to \(B\), and \(A_{1}\) is subtracted from \(B\), or vice versa, in a random manner to guarantee that one stimulus would have a larger amplitude, and one stimulus would have a smaller amplitude relative to the baseline.
### Data Collection and Analysis Methodology
Datasets were pre-generated using a Python script and the parameters described in Sect. 6.6. The model, function, Reeb graph, and persistence data were stored in JSON format and ZIP compressed to improve the transfer rate. The experiment was run on a custom-built Node.js web server. Three.js [7] was used for color map, isocontour, and Reeb graph rendering. D3.js [5] was used for persistence diagram rendering. The visualizations were custom implementations. The answers from each participant were recorded in a JSON document with entries containing participant ID, visualization type, parameters for the stimuli, the participant's selection, interaction information (i.e., click, scroll, and slider move counts), and time taken.
We utilized three statistical tools in the analysis. First, a binomial test, which determines if an observed distribution deviates from an expected distribution, was used to determine whether the accuracy of the visualizations was statistically significant relative to a null hypothesis of 50% (i.e., guessing would achieve a score of approximately 50%). Next, to determine if the differences in accuracy for each method were significant, we utilized a \(\chi^{2}\) contingency test, which determines if the observed distributions in one or more categories deviate from an expected distribution, with a null hypothesis that all methods had identical accuracy. Finally, binary logistic regression is commonly used as a statistical model for hypothesis testing of a continuous independent variable and a binary dependent variable [40]. We used a logit function (i.e., binary logistic regression) to evaluate whether the accuracy of visualization methods was sensitive to increases in \(A^{\prime}\), with a null hypothesis that it was not. For all tests, we consider significance to be \(p<.05\), but we report exact \(p\) values to 3 digits for completeness.
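For illustration, the sketch below runs the two count-based tests with SciPy on placeholder counts (the logistic regression step was sketched earlier); the numbers are made up and are not results from the study. Older SciPy versions expose the binomial test as `binom_test` rather than `binomtest`.

```python
import numpy as np
from scipy import stats

# Binomial test: does observed accuracy deviate from 50% guessing?
n_correct, n_trials = 360, 600
binom_p = stats.binomtest(n_correct, n_trials, p=0.5).pvalue

# Chi-squared contingency test: do the four visualization types differ in accuracy?
#                  correct  incorrect   (one row per visualization type)
counts = np.array([[360, 240],
                   [330, 270],
                   [300, 300],
                   [345, 255]])
chi2, chi2_p, dof, _ = stats.chi2_contingency(counts)

print(f"binomial p = {binom_p:.3f}; chi2({dof}) = {chi2:.2f}, p = {chi2_p:.3f}")
```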
## 7 Results
We conducted the institutional review board-approved study using participants from Amazon Mechanical Turk. The experiment generally took less than 15 minutes, and each participant was compensated $2 USD. There were 120 participants filtered by region (US and Canada) and HIT Approval Rate (\(>95\%\)), of which 18 provided problematic data (i.e., they failed to engage with the study because they did not answer all 24 questions, frequently timed out during the experiment, or had a median response time of less than 5 seconds). Hence, we analyzed 2,448 trials from the remaining 102 participants.
**Participant Demographics:** 57% of the participants were male, 42% female, and 1% nonbinary. All participants were 18 years old or older (see Fig. 8(a)). 92% of the participants reported having casual, minimal, or no visualization experience (see Fig. 8(b)).
In Sect. 7.1, we present the overall accuracy of participants for position and amplitude variations in data features. Next, Sect. 7.2 reports the sensitivity of participants' decisions to feature variations. Finally, we report observations of the timing and interactions and collected feedback from participants in Sect. 7.3 and Sect. 7.4, respectively.
### Overall Task Accuracy
We begin by evaluating the accuracy of the different visualizations.
**Positional Variation:** The results for positional variation can be seen in Table 1. To test whether the overall accuracy was statistically significant, a binomial test was conducted on each visualization, with a null hypothesis of 50% accuracy (i.e., guessing). All visualizations _except_ Reeb graphs showed significance. To test for the significance of the differences in the overall accuracy between methods, we performed a \(\chi^{2}\) contingency test with the null hypothesis that all methods perform identically. The result, \(\chi^{2}(3,N=1191)=8.09,p=.044\), shows the differences are significant. Overall, color maps performed the best of all visualizations, persistence diagrams and isocontours were almost tied in terms of accuracy, and Reeb graphs were the least accurate. However, it is important to note that our concern is with the sensitivity as \(A^{\prime}\) increases, not the overall accuracy.
**Amplitude Variation:** The results for amplitude variation can be seen in Table 2. The same binomial test was conducted to verify that accuracy was statistically significant. Again, all methods _except_ Reeb graphs showed significance. A \(\chi^{2}\) contingency test was also used to compare the accuracy of the methods. The result, \(\chi^{2}(3,N=1187)=7.58,p=.056\), shows the differences are just outside the range of significance. Nevertheless, persistence diagrams offered the best accuracy, followed by color maps, isocontours, and finally, Reeb graphs. However, we again note that our primary focus is sensitivity to changes in \(A^{\prime}\), not overall accuracy.
**Statistics of Unsure Selections:** Tables 1 and 2 show a small number of unsure selections, and participants answered the questions more than 95% of the time for all visualizations. The results did not show any significant differences between visualization types.
### _Sensitivity to Variation_
Here, we evaluate the sensitivity of different visualizations to changes in features (i.e., \(A^{\prime}\)). Please refer to Sect. 5.2.2 describing visualization sensitivity with respect to parameter \(A^{\prime}\). Ideally, as \(A^{\prime}\) increases, the accuracy of selections should also increase for a visualization.
**Positional Variation:** Fig. 9(top) shows the bar charts and the logistic regression of positional variation accuracy for different \(A^{\prime}\) values for each visualization type (see Table 1 for exact accuracies). For color maps and isocontours, the nearly flat line and large \(p\) values indicate that they are not sensitive to increases in \(A^{\prime}\). Persistence diagrams show a downward trend that, although not statistically significant, suggests that higher \(A^{\prime}\) values were detrimental to the accuracy. Finally, the Reeb graph is the only method that shows a statistically significant upward trajectory. In other words, for positional variations, Reeb graphs are the only method sensitive to changes in \(A^{\prime}\).
**Amplitude Variation:** Fig. 9(bottom) shows the bar charts and logistic regression for different \(A^{\prime}\) values of each visualization for changes in amplitude (see Table 2 for exact accuracies). Color maps show a statistically significant upward trend. Isocontours also have an upward trend; however, it is slightly outside of statistical significance. Reeb graphs show a flat line and high \(p\)-value, which indicates that they are not sensitive to changes in \(A^{\prime}\). Lastly, persistence diagrams react the most to increases in \(A^{\prime}\), as they show the steepest statistically significant upward trend.
### _Time and Interaction_
We next evaluate whether using any of the visualizations resulted in longer time or more interactions from participants.
**Time Taken:** The time it took for participants to answer each question was similar for each visualization, as shown in Fig. 9(a). We also found no clear trend between the time it took a participant to answer a question and the accuracy of their answer.
**Interactions:** We also evaluated accuracy when participants used some form of interaction (i.e., mouse click, mouse scroll, slider movement). We found no significant relationship between the use of interactions and the accuracy of the answers, as shown in Fig. 9(b).
### _Qualitative Feedback_
At the end of the experiment, participants had the opportunity to provide several forms of qualitative feedback. The responses provided by individual participants are documented in pp. 39-47 of the supplement.
**Perceived Difficulty of Visualization Methods:** Participants had the option to indicate which visualizations they thought were easiest and most challenging to use for the comparison task (see Fig. 9(c)). The overwhelming majority considered color maps to be the easiest. The results for the hardest to use were mixed, but they did indicate that Reeb graphs and persistence diagrams tended to be harder to understand, even though they showed the strongest sensitivity to positional and amplitude variations, respectively.
**Participant Feedback:** Participants were also asked to provide additional information on what approaches they used to answer the questions. The typical response for color maps was that they tried to find the variation that shared the largest number of cold and
hot areas compared to the baseline. For isocontours, they tried to infer similarities by looking at the amount of space between contour lines. Moreover, some participants also mentioned that the level-of-detail slider helped reduce the contours' density when it was hard to identify similarities. For Reeb graphs, many participants indicated that they had difficulties understanding the graph, and for some, it was helpful to reduce the level of detail of the graph. Lastly, for persistence diagrams, participants mentioned that they tried to identify common clusters of points away from the diagonal.
## 8 Discussion & Conclusions
### Visualization Accuracy and Sensitivity
**Color Maps:** Color maps performed well in our experiment. Although they showed the highest accuracy for positional variation, they also did _not_ show a statistically significant sensitivity to positional variation, thereby causing us to reject this hypothesis. For amplitude variation, color maps performed second best overall and showed a strong effect in terms of sensitivity, thereby allowing us to confirm that hypothesis. _Overall, we can confirm color map sensitivity to amplitude variations but cannot confirm its sensitivity to positional variation._
**Isocontours:** The accuracy and sensitivity results for isocontours were overall in the least agreement with our expectations. In terms of accuracy, isocontours were in the middle of the pack for both variation types. Furthermore, isocontours showed no sensitivity to positional variation, thereby causing us to reject that hypothesis. For amplitude variation, we found a weak sensitivity that was not statistically significant. Technically, this confirms our hypothesis (that isocontours have no sensitivity to amplitude changes), but we consider this result ambiguous. _Overall, isocontours showed no sensitivity to positional variation and ambiguous results on amplitude variation._
**Reeb Graph:** Reeb graphs showed overall lower task accuracy compared to the other visualization types, which was expected behavior considering the discrete, high-frequency, and sparse nature of Reeb graphs. Although the overall accuracy for Reeb graphs was low, the Reeb graph was the only visualization type to show sensitivity to positional variations, thereby confirming this hypothesis. Interestingly, Reeb graphs performed poorly at lower \(A^{\prime}\) values (i.e., below 50%, which is worse than guessing), possibly due to contradictory information generated by changes in the noise, which played an outsize role in the visualization. In addition, Reeb graphs showed no statistically significant effect in the sensitivity to variations in amplitude, thereby confirming this hypothesis as well. _Overall, as predicted, Reeb graphs showed sensitivity to position variations but not amplitude variations._
**Persistence Diagrams:** The results for persistence diagrams showed that, as hypothesized, they were not sensitive to positional variations. Interestingly, they showed a non-statistically significant negative trend as \(A^{\prime}\) increased. We speculate that this is due to artifacts caused by birth-death pairing switches that occur when critical points move apart. For amplitude variation, persistence diagrams showed both the highest accuracy and sensitivity to variations in \(A^{\prime}\), thereby confirming this hypothesis. _Overall, as predicted, persistence diagrams were sensitive to amplitude variations but were not sensitive to positional variations._
### Implications
There are several important implications for our findings.
**Rejected Hypotheses:** The several hypotheses that were rejected might be as important as those that were confirmed. Our hypotheses come from multiple decades worth of combined experience in scientific visualization. These rejected hypotheses potentially signify important misconceptions about the effectiveness of certain visualization types that further studies may illuminate.
**Color Maps and Isocontours:** One surprising result to us was the relative strength of color maps and the weakness of isocontours. Through our analysis of accuracy and sensitivity, color maps excelled in several aspects, whereas isocontours stood out in none.
**Reeb Graphs and Persistence Diagrams:** Reeb graphs and persistence diagrams have demonstrated very precise utility within the context of our study of positional variation for Reeb graphs and amplitude variation for persistence diagrams. This quality can be seen as both an asset and a liability because it means each will highlight variations of their supported type, whereas other variations may be lost.
**No Visualization to Rule Them All:** One of the most important implications of our study is that no single visualization stood out clearly with both position and amplitude variations. On one hand, this justifies using multiview visualization of the data to identify individual position and amplitude variations in data, e.g., combining color map and Reeb graph for high sensitivity to position and amplitude. On the other hand, _it means that no visualization will easily identify features with both position and amplitude variations present_.
### Ecological Validity and Future Work
To contextualize our study, we consider multiple perspectives on the ecological validity of the work.
**Task Relevance:** The chosen task of comparing multiple variations of scalar fields is a frequent data analysis task (see Fig. 1). However, it does not cover the complete suite of analysis tasks one would perform with a scalar field. Further investigation is needed to more holistically evaluate the effectiveness of these visualization types.
**Participant Pool:** Notably, our participant pool comes from the general population instead of experts in scalar field visualization. Unfortunately, the number of participants needed made identifying enough experts difficult. The lack of expertise in the participant pool may have played a role in some of the results (e.g., lower accuracy for some methods). We note that expert participant pools are not without their weaknesses either. For example, expert decisions may be influenced by familiarity bias, which puts them at risk of falling into the Dunning-Kruger effect [44]. Nevertheless, we separately evaluated the seven participants who claimed to be regular or extensive visualization users, despite the pool not being large enough to generate any statistical significance. The results (see supplement) were not noteworthy. We also informally evaluated the research team's performance during testing and similarly found the results did not contradict the overall findings.
**Perception vs. Data Analysis:** Due to the participant pool, our experiment was explicitly designed to focus on a low-level, mostly perceptual task, to make it accessible to nonexpert participants. Higher-level tasks require understanding the context of the data, the meaningfulness of topological descriptors, and the relationship of the visualization to the topological descriptors. These are important (but difficult) factors to measure and teach to a nonexpert participant pool.
**Types of Features Generated and Evaluated:** For the sake of practicality, we used Gaussian features and Perlin noise. We did not consider other feature types (e.g., anisotropic or non-Gaussian functions) or noise (e.g., salt-and-pepper noise). In addition, the functions were limited to two simple variation types: position and amplitude. We also did not consider the mixing of multiple types of variation (i.e., both position and amplitude variation together) or other types of variations (e.g., anisotropic shape, periodic functions, etc.). Any of these variations would likely alter the results and deserve further study. Lastly, our experimental results and analysis consider topology-based visualizations of data sampled on a 2-manifold. In the future, we plan to extend our evaluation to variations of multiple features, high-dimensional scalar fields, additional visualization types (e.g., planar Reeb graphs or Morse complexes), and vector as well as tensor fields.
**Role of Design and Data Size and Complexity:** We are cautious in extrapolating the results of our study for a given technique (e.g., color maps having uniformly higher accuracy). Our results compare specific design variations of each visualization type, and alternatives may influence their performance. For instance, a planar Reeb graph may reduce clutter but lose spatial context. Furthermore, each visualization type may respond differently in terms of data type (e.g., volumetric), size, and complexity (e.g., number of features and noise). Evaluation of these datasets presents multiple non-trivial challenges in terms of identifying a participant pool, the increasing complexity of the visualization, and the need for context/domain knowledge to perform tasks.
**Next Research Steps:** The approach presented in this paper lays the foundation to study the perceptual sensitivity of topological and other scientific visualizations for more complex data sets, different visualization types, design variations within the visualizations, and other analytical tasks. Particularly, the sensitivity trends presented in our evaluation serve as an important guide to the types of features that each visualization technique is better or worse at communicating. We hope the community will use the framework of study here to further investigate these questions.
## Acknowledgments
This work was supported in part by a grant from the National Science Foundation (III-2316496) and by the U.S. Department of Energy (DOE) RAPIDS-2 SciDAC project under contract number DE-AC05-00OR22725.
|
2304.00841 | Hypergraph Animals | Here we introduce simple structures for the analysis of complex hypergraphs,
hypergraph animals. These structures are designed to describe the local node
neighbourhoods of nodes in hypergraphs. We establish their relationships to
lattice animals and network motifs, and we develop their combinatorial
properties for sparse and uncorrelated hypergraphs. We make use of the tight
link of hypergraph animals to partition numbers, which opens up a vast
mathematical framework for the analysis of hypergraph animals. We then study
their abundances in random hypergraphs. Two transferable insights result from
this analysis: (i) it establishes the importance of high-cardinality edges in
ensembles of random hypergraphs that are inspired by the classical
Erd\"os-Reny\'i random graphs; and (ii) there is a close connection between
degree and hyperedge cardinality in random hypergraphs that shapes animal
abundances and spectra profoundly. Both findings imply that hypergraph animals
can have the potential to affect information flow and processing in complex
systems. Our analysis also suggests that we need to spend more effort on
investigating and developing suitable conditional ensembles of random
hypergraphs that can capture real-world structures and their complex dependency
structures. | Michael P. H. Stumpf | 2023-04-03T09:37:27Z | http://arxiv.org/abs/2304.00841v3 | # Hypergraph Animals
###### Abstract
Here we introduce simple structures for the analysis of complex hypergraphs, hypergraph animals. These structures are designed to describe the local node neighbourhoods of nodes in hypergraphs. We establish their relationships to lattice animals and network motifs and develop their combinatorial properties for sparse and uncorrelated hypergraphs. Here we can make use of the tight link of hypergraph animals to partition numbers, which opens up a vast mathematical framework for the analysis of hypergraph animals. We then study their abundances in random hypergraphs. Two transferable insights result from this analysis: (i) it establishes the importance of high-cardinality edges in ensembles of random hypergraphs that are inspired by the classical Erdos-Renyi random graphs; and (ii) there is a close connection between degree and hyperedge cardinality in random hypergraphs that shapes the animal abundances and spectra profoundly. Together these two findings imply that we need to spend more effort on investigating and developing suitable conditional ensembles of random hypergraphs that can capture real-world structures and their complex dependency structures.
## 1 Introduction
Complex systems are more than the sum of their parts [1]. While they cannot be fully comprehended by dissecting them into their constituent components [2], we can gain valuable, if partial, insights into their global dynamics by characterising key local features. The search and classification of such local features has been an important cornerstone of complex systems science. Here I consider one candidate feature for complex hypergraphs [3], which I will refer to as "hypergraph animals". Below I will outline their relationships to lattice animals and network motifs; I will then discuss some of their combinatorial properties before deriving and analysing their distributions in a class of random hypergraphs.
Lattice animals [4] refer to connected groups of sites in lattices or hypercubes (see Fig. 1A). The number of different configurations for a fixed number of lattice sites, as well as the volume and circumference of lattice animals, have been topics of longstanding interest in statistical physics, combinatorics, experimental mathematics, and also recreational mathematics [5]. The study of lattice animals and their growth has been important in percolation theory to map out percolation transitions on different lattices.
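As a concrete illustration of counting such configurations, the short sketch below (not from the paper) enumerates fixed lattice animals on the square lattice by repeatedly growing each animal by one site and de-duplicating up to translation.

```python
def fixed_lattice_animals(n_max):
    """Count fixed (translation-distinct) lattice animals on the square lattice."""
    def normalize(cells):
        min_x = min(x for x, _ in cells)
        min_y = min(y for _, y in cells)
        return frozenset((x - min_x, y - min_y) for x, y in cells)

    current = {frozenset({(0, 0)})}
    counts = [1]
    for _ in range(n_max - 1):
        grown = set()
        for animal in current:
            for x, y in animal:
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    cell = (x + dx, y + dy)
                    if cell not in animal:
                        grown.add(normalize(animal | {cell}))
        current = grown
        counts.append(len(current))
    return counts

print(fixed_lattice_animals(5))  # -> [1, 2, 6, 19, 63]
```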
2302.02300 | Run-Off Election: Improved Provable Defense against Data Poisoning
Attacks | In data poisoning attacks, an adversary tries to change a model's prediction
by adding, modifying, or removing samples in the training data. Recently,
ensemble-based approaches for obtaining provable defenses against data
poisoning have been proposed where predictions are done by taking a majority
vote across multiple base models. In this work, we show that merely considering
the majority vote in ensemble defenses is wasteful as it does not effectively
utilize available information in the logits layers of the base models. Instead,
we propose Run-Off Election (ROE), a novel aggregation method based on a
two-round election across the base models: In the first round, models vote for
their preferred class and then a second, Run-Off election is held between the
top two classes in the first round. Based on this approach, we propose DPA+ROE
and FA+ROE defense methods based on Deep Partition Aggregation (DPA) and Finite
Aggregation (FA) approaches from prior work. We evaluate our methods on MNIST,
CIFAR-10, and GTSRB and obtain improvements in certified accuracy by up to
3%-4%. Also, by applying ROE on a boosted version of DPA, we gain improvements
around 12%-27% comparing to the current state-of-the-art, establishing a new
state-of-the-art in (pointwise) certified robustness against data poisoning. In
many cases, our approach outperforms the state-of-the-art, even when using 32
times less computational power. | Keivan Rezaei, Kiarash Banihashem, Atoosa Chegini, Soheil Feizi | 2023-02-05T04:48:30Z | http://arxiv.org/abs/2302.02300v3 | # Run-Off Election: Improved Provable Defense against Data Poisoning Attacks
###### Abstract
In data poisoning attacks, an adversary tries to change a model's prediction by adding, modifying, or removing samples in the training data. Recently, _ensemble-based_ approaches for obtaining _provable_ defenses against data poisoning have been proposed where predictions are done by taking a majority vote across multiple base models. In this work, we show that merely considering the majority vote in ensemble defenses is wasteful as it does not effectively utilize available information in the logits layers of the base models. Instead, we propose _Run-Off Election (ROE)_, a novel aggregation method based on a two-round election across the base models: In the first round, models vote for their preferred class and then a second, _Run-Off_ election is held between the top two classes in the first round. Based on this approach, we propose DPA+ROE and FA+ROE defense methods based on Deep Partition Aggregation (DPA) and Finite Aggregation (FA) approaches from prior work. We show how to obtain robustness for these methods using ideas inspired by dynamic programming and duality. We evaluate our methods on MNIST, CIFAR-10, and GTSRB and obtain improvements in certified accuracy by up to \(4.73\%\), \(3.63\%\), and \(3.54\%\), respectively, establishing **a new state-of-the-art** in (pointwise) certified robustness against data poisoning. In many cases, our approach outperforms the state-of-the-art, even when using 32 times less computational power.
## 1 Introduction
In recent years, Deep Neural Networks (DNNs) have achieved great success in many research areas, such as computer vision (He et al., 2016) and natural language processing (Chen, 2015) and have become the standard method of choice in many applications. Despite this success, these methods are vulnerable to _poisoning attacks_ where the adversary manipulates the training data in order to change the classifications of specific inputs at the test time (Chen et al., 2017; Shafahi et al., 2018; Gao et al., 2021). Since large datasets are obtained using methods such as crawling the web, this issue has become increasingly important as deep models are adopted in safety-critical applications.
While empirical defense methods have been proposed to combat this problem using approaches such as data augmentation and data sanitization (Hodge and Austin, 2004; Paudice et al., 2018; Gong et al., 2020; Borginia et al., 2021; Ni et al., 2021), the literature around poisoning has followed something of a "cat and mouse" game as in the broader literature on adversarial robustness, where defense methods are quickly broken using adaptive and stronger attack techniques (Carlini et al., 2019). To combat this, several works have focused on obtaining _certifiable defenses_ that are _provably robust_ against the adversary, regardless of the attack method. These works provide a _certificate_ for each sample that is a guaranteed lower bound on the amount of distortion on the training set required to change the model's prediction.
The most scalable provable defenses against data poisoning have considered the use of _ensemble methods_ that are composed of multiple base classifiers (Levine and Feizi, 2020; Chen et al., 2020; Jia et al., 2021; Wang et al., 2022; Chen et al., 2022). At the test time, the prediction of these models is aggregated by taking a majority vote across them. Depending on the exact method, the certificates may be deterministic or stochastic. For instance, the Deep Partition Aggregation (DPA) method of (Levine and Feizi, 2020) trains multiple models on disjoint subsets of the training data. Since each poisoned sample can affect at most one model, this leads to a deterministic certificate based on the gap between the predicted and the runner-up class. This can be wasteful, however, as the models predicting other than the top two classes are ignored. While the choice of the partitioning scheme used for training the models has been extensively considered in the literature, for both deterministic (Levine and Feizi, 2020; Wang et al., 2022) and stochastic (Chen et al., 2020; Jia et al., 2021) partitioning schemes, all of these approaches share this problem, as they take a majority vote at test time.
In this work, we propose a novel aggregation method called Run-Off Election (ROE) that greatly improves on existing approaches using a _two-round election_ among the models. In the first round, models vote for their preferred class, and we obtain the top two classes in terms of the number of votes. We then hold a second, _Run-Off_ election round where all base models vote for one of these two classes. These votes are obtained using the _logits layer_ information of the models, where each model votes for the class with the higher score in this layer. By using all of the base models for prediction, we effectively increase the gap between the predicted class and the runner-up, leading to an improved certificate. An illustrative example of our approach is shown in Figure 1. Our method is general and, in principle, can be applied to any kind of deterministic or stochastic ensemble method. In this paper, we focus on deterministic methods and develop DPA+ROE and FA+ROE defenses based on the Deep Partition Aggregation (DPA) (Levine and Feizi, 2020) and Finite Aggregation (FA) (Wang et al., 2022) respectively and calculate the prediction certificates.
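To make the aggregation step concrete, here is a minimal NumPy sketch of ROE at test time. It is illustrative only: breaking ties by the smaller class index is our own assumption, and the certificate computation is not shown.

```python
import numpy as np

def run_off_election(logits):
    """Two-round Run-Off Election over an ensemble.
    `logits` has shape (n_models, n_classes)."""
    n_models, n_classes = logits.shape
    # Round 1: each base model votes for its top class.
    first_votes = np.bincount(np.argmax(logits, axis=1), minlength=n_classes)
    top_two = np.argsort(-first_votes, kind="stable")[:2]   # stable sort breaks ties by index
    c1, c2 = int(top_two[0]), int(top_two[1])
    # Round 2: every model votes for whichever finalist has the higher logit score.
    second_votes = {c1: 0, c2: 0}
    for m in range(n_models):
        preferred = c1 if logits[m, c1] >= logits[m, c2] else c2
        second_votes[preferred] += 1
    if second_votes[c1] == second_votes[c2]:
        return min(c1, c2)                                   # assumed tie-breaking rule
    return max(second_votes, key=second_votes.get)

# Example with 7 base models and 3 classes.
rng = np.random.default_rng(0)
print(run_off_election(rng.normal(size=(7, 3))))
```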
**Technical challenges of calculating certificates.** Compared to the majority vote method used in prior work, calculating certificates in ROE is more challenging since characterizing the adversary's optimal actions is more complex. Letting \(c^{\text{pred}}\) and \(c^{\text{sec}}\) denote the predicted class and the runner-up class, the adversary can change the model's prediction in many ways, as we briefly discuss below.
1. It can get the \(c^{\text{sec}}\) elected in the second round by poisoning models to change their votes from the predicted class to the runner-up class.
2. It can get \(c^{\text{pred}}\) eliminated in the first round by ensuring that at least two classes receive more votes than \(c^{\text{pred}}\) in the first round.
3. For some \(c\notin\{c^{\text{pred}},c^{\text{sec}}\}\), it can ensure that \(c\) receives more votes than \(c^{\text{sec}}\) in the first round and receives more votes than \(c^{\text{pred}}\) in the second round. This leads to the counter-intuitive situation where the adversary _decreases_ the votes of the runner-up class \(c^{\text{sec}}\).
In order to obtain a unified argument for all of these cases, we introduce the concepts of a _1v1 certificate_ and _2v1 certificate_. Intuitively, a 1v1 certificate bounds the number of poisoned samples required for one class to "beat" another class, i.e., receive more votes. This is similar to the prediction certificate of majority vote, with the distinction that we consider any two arbitrary classes. A 2v1 certificate extends this idea and bounds the number of poisoned samples required for _two_ classes to beat another class simultaneously.
We will show that as long as the 1v1 and 2v1 certificates can be calculated efficiently, we can use them to calculate a prediction certificate in ROE. Taking a reduction approach is beneficial as it ensures that our method works for any choice of ensembles, as long as 1v1 and 2v1 certificates can
Figure 1: An example illustrating our proposed method (**Run-off Election**). The training dataset is partitioned into 7 parts (\(d_{1},d_{2},...,d_{7}\)) and \(7\) separate classifiers are trained on each part (\(f_{1},f_{2},...,f_{7}\)). At test time, after giving the input to the classifiers, we obtain the logits-layer information of each of them. For example, the class dog has the highest logits-layer score for classifier \(f_{2}\). In our method (see Section 3), we hold a two-round election. In the first round, each model votes for its top class, and we find the top-two classes with the most votes. In the second round, each model votes for one of these two classes based on the logits-layer information, e.g., \(f_{6}\) votes for cat as it prefers it to dog. Existing methods output the most repeated class and can be fooled using a single poisoned sample, e.g., by changing the prediction of \(f_{3}\) to dog. As we prove in Section 3.3 (Theorem 3.4), our method is more robust and the adversary needs at least two poisoned samples to change the model’s prediction in this example. This is due to the fact that the **gap between the number of votes of the top two classes effectively increases in the second round.**
be calculated. Focusing on DPA+ROE and FA+ROE, we show that the 1v1 certificates can be calculated using similar methods as the prediction certificates for the majority vote. Calculating 2v1 certificates is more complex, however, as the adversary needs to "spread" its effort between two classes, and it is not straightforward how this should be done. For DPA+ROE, we use a **dynamic programming** based approach to recursively solve this problem. The argument, however, does not extend to FA+ROE since the underlying partitioning scheme is more complex and the adversary is more constrained in its actions. For FA+ROE, we deal with this challenge using a **duality-based approach** that considers the **convex combination of the adversary's constraint set**.
By reasoning on the adversary's behavior as above, we obtain two separate certificates to ensure that the predicted class is unchanged in each of the two rounds. Since the adversary can change our prediction in either of the rounds, we take the minimum of two numbers as our final certificate. We further refer to Section 3.3 for more details on the calculation of the certificate in DPA+ROE and FA+ROE.
**Empirical results.** We evaluate our model in the context of _deterministic_ robustness certificates and observe substantial improvements over the existing state-of-the-art (FA). **FA+ROE can improve certified accuracy by up to \(4.73\%\), \(3.63\%\), and \(3.54\%\) respectively on MNIST, CIFAR-10, and GTSRB datasets.** Furthermore, in some cases, **DPA+ROE also outperforms FA** while it significantly uses less computational resources than FA. Note that FA improves over DPA by increasing the number of classifiers, which comes at the cost of a significant increase in training time. Indeed, in some cases on all MNIST, CIFAR-10, and GTSRB datasets, DPA+ROE has improvements over FA while it exploits around **32 times less training cost**.
**Contributions.** In summary, our contributions include:
1. We propose _Run-Off election_, a novel aggregation method for ensemble-based defenses against data poisoning. Our approach is general, provable, and can be applied in combination with different partitioning schemes of the datasets. Using the partitioning schemes in DPA and FA, we propose the DPA+ROE and FA+ROE defense methods.
2. We introduce the notion of 1v1 and 2v1 certificates and show how they can be used to calculate _provable certificates for robustness_ for any ensemble method via a reduction. Focusing on DPA+ROE and FA+ROE, we obtain these certificates using careful reasoning on the adversary's optimal action. For each round, we bound the minimum number of poisoned samples the adversary needs. In the first round, we propose a dynamic programming-based approach for characterizing the adversary's action in DPA+ROE and a duality-based approach for bounding the adversary's effect in FA+ROE. In the second round, we carefully bound the minimum number of samples required for electing other classes.
3. We empirically evaluate our method on existing benchmarks. Compared to prior work, we observe considerably improved results, in some cases, even when using significantly fewer computational resources, thus our method establishes the new **state-of-the-art** in certified accuracy against **general data poisoning attacks**.
### Related work
Certified robustness has been widely studied in the literature and prior works have considered various notions of robustness, such as label-flipping (Rosenfeld et al., 2020) and distributional robustness (Lai et al., 2016; Diakonikolas et al., 2016, 2019). Recent works have also studied the poisoning problem theoretically using a PAC learning model (Blum et al., 2021; Gao et al., 2021; Balcan et al., 2022; Hanneke et al., 2022). In this work, we focus on (pointwise) certified robustness against data poisoning and assume a general poisoning model where the adversary may insert, delete, or modify any images.
Most closely related to our work are the DPA (Levine and Feizi, 2020) and the FA (Wang et al., 2022) methods that use an ensemble of classifiers to obtain _deterministic_ robustness certificates. A similar line of work (Chen et al., 2020; Jia et al., 2021) considers _stochastic_ robustness certificates. As mentioned, we improve on these works, establishing a new state-of-the-art for (pointwise) certified robustness. Following prior work, we use _certified fraction_, i.e., the fraction of (test) data points that are certifiably correct, to measure robustness. A similar but slightly different notion is studied by (Chen et al., 2022), who certify the _test accuracy_ without certifying any specific data point.
Our work is closely related to the smoothing technique of (Cohen et al., 2019), which has been extensively studied both in terms of its applications (Raghunathan et al., 2018; Singla and Feizi, 2019, 2020; Chiang et al., 2020) and known limitations (Yang et al., 2020; Kumar et al., 2020; Blum et al., 2020). The original DPA method is inspired by derandomized smoothing (Levine and Feizi, 2020). Smoothing can also be directly applied to data poisoning attacks (Weber et al., 2020), though this requires strong assumptions on the adversary.
## 2 Preliminaries
**Notation.** For a positive integer \(n\), we use \([n]:=\{1,\ldots,n\}\) to denote the set of integers at most \(n\). Given two arbitrary sets \(A\) and \(B\), we use \(A\backslash B\) to denote the set
of all elements that are in \(A\) but not in \(B\) and use \(A\times B\) to denote the _Cartesian product_, i.e., \(A\times B:=\{(a,b):a\in A,b\in B\}\). We use \(d_{\text{syn}}(A,B)\) to denote the size of the _symmetric difference_ of \(A\) and \(B\), i.e.,
\[d_{\text{syn}}(A,B):=|(A\backslash B)\cup(B\backslash A)|.\]
This can be thought of as a measure of distance between \(A\) and \(B\) and equals the number of insertions and deletions required for transforming \(A\) to \(B\).
We use \(\mathcal{X}\) to denote the set of all possible unlabeled samples. This is typically the set of all images, though our approach holds for any general input. Similarly, we use \(\mathcal{C}\) to be the set of possible labels. We define a _training set_\(D\) as any arbitrary collection of labeled samples and let \(\mathcal{D}:=\mathcal{X}\times\mathcal{C}\).
We define a _classification algorithm_ as any mapping \(f:\mathcal{D}\times\mathcal{X}\rightarrow\mathcal{C}\), where \(f(D,x)\) denotes the prediction of the _classifier_ trained on the set \(D\) and tested on the sample \(x\). We use the notation \(f_{D}(x):=f(D,x)\) for convenience. We assume that the classifier \(f_{D}\) works by first scoring each class and choosing the class with the maximum score. For neural networks, this corresponds to the logits layer of the model. We use \(f_{D}^{\text{logits}}(x,c)\in\mathbb{R}\) to denote the underlying score of class \(c\) for the test sample \(x\) and assume that \(f_{D}^{\text{logits}}(x,c)\neq f_{D}^{\text{logits}}(x,c^{\prime})\) for all \(c\neq c^{\prime}\).
**Threat model.** We consider a _general poisoning_ model where the adversary can poison the training process by adding, removing, or modifying the training set. Given an upper bound on the adversary's budget, i.e., the maximum amount of alteration it can make in the training set, we aim to certify the prediction of the test samples.
Given a classification algorithm \(f\), a dataset \(D\) and a test sample \(x\), we define _a prediction certificate_ as any provable lower bound on the number of samples the adversary requires to change the prediction of \(f\). Formally, \(\mathtt{Cert}\) is a prediction certificate if
\[f_{D}(x)=f_{D^{\prime}}(x)\text{ if }d_{\text{sym}}(D,D^{\prime})<\mathtt{ Cert}.\]
## 3 Proposed method: Run-Off election
In this section, we present our defense approach. We start by discussing _Run-Off election_, an aggregation method that takes as input a test sample \(x\) and ensemble of \(k\) models \(\{f_{i}\}_{i=1}^{k}\), and uses these models to make a prediction. The method makes no assumptions about the ensemble and works for an arbitrary choice of the models \(f_{i}\). In order to obtain certificates, however, we will need to specify the choice of \(f_{i}\). In Section 3.2, we consider two choices, DPA+ROE and FA+ROE. In Section 3.3, we show how to obtain certificates for these methods.
### Run-Off Election
As mentioned in the introduction, our method can be seen as a _two-round election_, where each model corresponds to a voter, and each class corresponds to a candidate. Given a test sample \(x\), and an ensemble of \(k\) models \(\{f_{i}\}_{i=1}^{k}\), our election consists of the following two rounds.
* **Round 1.** We first obtain the top two classes as measured by the number of models "voting" for each class. Formally, setting \[N_{c}^{\text{R1}}:=\sum_{i}\mathds{1}\left[f_{i}(x)=c\right],\] (1) we calculate the top two classes \(c_{1}^{R1}:=\operatorname*{arg\,max}_{c}N_{c}^{\text{R1}}\) and \(c_{2}^{R1}:=\operatorname*{arg\,max}_{c\neq c_{1}^{R1}}N_{c}^{\text{R1}}\).
* **Round 2.** We collect the votes of each model in an election between \(c_{1}^{R1}\) and \(c_{2}^{R1}\). Formally, for \((c,c^{\prime})\in\{(c_{1}^{R1},c_{2}^{R1}),(c_{2}^{R1},c_{1}^{R1})\}\), we set \[N_{c}^{\text{R2}}:=\sum_{i=1}^{k}\mathds{1}\left[f_{i}^{\text{logits}}(x,c)>f_{i}^{\text{logits}}(x,c^{\prime})\right],\] and output \(\text{ROE}(D,x):=\operatorname*{arg\,max}_{c\in\{c_{1},c_{2}\}}N_{c}^{\text{R2}}\).
We assume that \(\operatorname*{arg\,max}\) breaks ties by favoring the class with the smaller index. The formal pseudocode of ROE is provided in Algorithm 1.
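To make the two-round procedure concrete, the following is a minimal sketch of ROE aggregation at test time, assuming the logits of all \(k\) base models for a test sample are available as a \(k\times C\) array (the function name and the NumPy-based setting are our own illustration, not the method's reference implementation).

```python
import numpy as np

def run_off_election(logits):
    """Two-round Run-Off Election over an ensemble.

    logits: array of shape (k, C); logits[i, c] is f_i^logits(x, c).
    Returns the predicted class index.
    """
    k, num_classes = logits.shape
    votes = np.argmax(logits, axis=1)                 # Round 1: each model votes for its top class
    counts = np.bincount(votes, minlength=num_classes)
    c1 = int(np.argmax(counts))                       # ties broken towards the smaller class index
    counts_wo_c1 = counts.copy()
    counts_wo_c1[c1] = -1
    c2 = int(np.argmax(counts_wo_c1))
    # Round 2: every model votes between c1 and c2 using its logits-layer scores
    n_c1 = int(np.sum(logits[:, c1] > logits[:, c2]))
    n_c2 = k - n_c1
    if n_c1 == n_c2:                                  # tie broken towards the smaller index
        return min(c1, c2)
    return c1 if n_c1 > n_c2 else c2
```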
### Choice of ensemble
We now present two possible choices of the ensemble \(\{f_{i}\}_{i=1}^{k}\). We begin by considering a disjoint partitioning scheme based on DPA and then consider a more sophisticated overlapping partitioning scheme based on FA. We denote the methods by DPA+ROE and FA+ROE, respectively.
**DPA+ROE.** In this method, training data is divided into several partitions and a separate base classifier \(f_{i}\) is trained on each of these partitions. Formally, given a hash function \(h:\mathcal{X}\rightarrow[k]\), the training set \(D\) is divided into \(k\) partitions \(\{D_{i}\}_{i=1}^{k}\), where \(D_{i}:=\{x\in D:h(x)=i\}\), and the classifiers \(\{f_{i}\}_{i=1}^{k}\) are obtained by training a base classifier on these partitions, i.e., \(f_{i}:=f_{D_{i}}\). For instance, when classifying images, \(f_{i}\) can be a standard ResNet model trained on \(D_{i}\).
**FA+ROE.** In this method, we use two hash functions \(h_{\text{spl}}:\mathcal{D}\rightarrow[kd]\) and \(h_{\text{spr}}:[kd]\rightarrow[kd]^{d}\). We first _split_ the datasets into \([kd]\) "buckets" using \(h_{\text{spl}}\). We then create \(kd\) partitions by _spreading_ these buckets, sending each bucket to all of the partitions specified by \(h_{\text{spr}}\). Formally, for \(i\in[kd]\), we define \(D_{i}\) as \(D_{i}:=\{x\in D:i\in h_{\text{spr}}(h_{\text{spl}}(x))\}\). We then train a separate classifier \(f_{i}:=f_{D_{i}}\) for each \(D_{i}\). A pseudocode of training FA is shown in Algorithm 3.
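As a rough illustration of the two partitioning schemes, one could build the partitions as follows; the hash and the spreading rule below are simple stand-ins for \(h\) and \(h_{\text{spr}}\), not the exact functions used by DPA or FA.

```python
import hashlib

def bucket_of(sample_id, n_buckets):
    """Deterministic hash of a sample identifier (e.g. file name or index)."""
    digest = hashlib.sha256(str(sample_id).encode()).hexdigest()
    return int(digest, 16) % n_buckets

def dpa_partitions(dataset, k):
    """DPA: each sample lands in exactly one of k disjoint partitions."""
    parts = [[] for _ in range(k)]
    for sample in dataset:
        parts[bucket_of(sample, k)].append(sample)
    return parts

def fa_partitions(dataset, k, d):
    """FA: kd buckets, each spread to d of the kd (overlapping) partitions."""
    buckets = [[] for _ in range(k * d)]
    for sample in dataset:
        buckets[bucket_of(sample, k * d)].append(sample)
    parts = [[] for _ in range(k * d)]
    for b, content in enumerate(buckets):
        for j in range(d):                      # illustrative h_spr: d evenly spaced partitions
            parts[(b + j * k) % (k * d)].extend(content)
    return parts
```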
### Calculating certificate
Since our aggregation method is more involved than taking a simple majority vote, the adversary can affect the decision-making process in more ways. Calculating the prediction certificate thus requires a more careful argument compared to prior work. In order to present a unified argument, we introduce the concept of _1v1_ and _2v1_ certificates. A 1v1 certificate bounds the number of poisoned samples required for one class to beat another class while a 2v1 certificate extends this idea and bounds the number of poisoned samples required for _two_ classes to beat another class.
We will show that as long as the 1v1 and 2v1 certificates can be calculated efficiently, we can use them to calculate a prediction certificate (Theorem 3.4). The reduction ensures that our approach is general and works for any choice of ensemble, as long as 1v1 and 2v1 certificates can be calculated. We then provide an implementation of how to calculate those certificates for DPA+ROE (Lemmas 3.5 and 3.6) and FA+ROE (Lemmas 3.7 and 3.8).
We begin by defining the notion of the _gap_ between two classes.
**Definition 3.1**.: Given an ensemble \(\{f_{i}\}_{i=1}^{k}\), a sample \(x\in\mathcal{X}\), and classes \(c,c^{\prime}\), we define the _gap_ between \(c,c^{\prime}\) as
\[\text{gap}(\{f_{i}\}_{i=1}^{k},x,c,c^{\prime}):=N_{c}-N_{c^{\prime}}+\mathds{1 }\left[c^{\prime}>c\right],\]
where \(N_{c}:=\sum_{i}\mathds{1}\left[f_{i}(x)=c\right]\). For \(\{f_{i}\}_{i=1}^{k}\) obtained using the training set \(D\), we use \(\text{gap}(D,x,c,c^{\prime})\) to denote \(\text{gap}(\{f_{i}\}_{i=1}^{k},x,c,c^{\prime})\).
We will omit the dependence of gap on \(\{f_{i}\}_{i=1}^{k}\) and \(x\) when it is clear from context. We say that \(c\)_beats_\(c^{\prime}\) if \(\text{gap}(c,c^{\prime})>0\). If the adversary wants the class \(c^{\prime}\) to beat \(c\), it needs to poison the training set \(D\) until \(\text{gap}(c,c^{\prime})\) becomes non-positive. We can therefore use this notion to reason on the optimal behaviour of the adversary.
We define a 1v1 certificate as follows.
**Definition 3.2** (1v1 certificate).: Given models \(\{f_{i}\}_{i=1}^{k}\), a test sample \(x\), and two classes \(c,c^{\prime}\in\mathcal{C}\) we say \(\text{Certv1}\in\mathbb{N}\) is a _1v1 certificate_ for \(c\) vs \(c^{\prime}\), if for all \(D^{\prime}\) such that \(d_{\text{sym}}(D,D^{\prime})<\text{Certv1}\), we have \(\text{gap}(D^{\prime},x,c,c^{\prime})>0\).
We note that if \(\text{gap}(D,x,c,c^{\prime})\leq 0\), then \(\text{Certv1}\) can only be zero.
We similarly define a 2v1 certificate as follows.
**Definition 3.3** (2v1 certificate).: Given models \(\{f_{i}\}_{i=1}^{k}\), a test sample \(x\), and three classes \(c,c_{1},c_{2}\in\mathcal{C}\) we say \(\text{Certv2}\in\mathbb{N}\) is a _2v1 certificate_ for \(c\) vs \(c_{1},c_{2}\) if for all \(D^{\prime}\) such that \(d_{\text{sym}}(D,D^{\prime})<\text{Certv2}(\{f_{i}\},x,c,c_{1},c_{2})\), we have \(\text{gap}(D^{\prime},x,c,c_{1})>0\) and \(\text{gap}(D^{\prime},x,c,c_{2})>0\).
Assuming these certificates can be calculated efficiently, we can obtain a prediction certificate, as we outline below. Let \(c^{\text{pred}}\) denote the predicted and \(c^{\text{sec}}\) the runner-up class. The adversary can change the model's prediction in one of the following two ways.
1. It can eliminate \(c^{\text{pred}}\) in Round 1. This means it needs to choose two classes \(c_{1},c_{2}\in C\backslash\{c^{\text{pred}}\}\) and ensure that \(c_{1}\) and \(c_{2}\) both have more votes than \(c^{\text{pred}}\) in Round 1. By definition, this requires at least \(\text{Certv2}(c^{\text{pred}},c_{1},c_{2})\) poisoned samples. Since the adversary can choose \(c_{1},c_{2}\), we can lower bound the number of poisoned samples it needs with \[\text{Cert}^{\mathds{R}1}:=\min_{c_{1},c_{2}\in C\backslash\{c^{\text{pred}}\}} \text{Certv2}(c^{\text{pred}},c_{1},c_{2}).\]
2. It can eliminate \(c^{\text{pred}}\) in Round 2. Letting \(c\) denote the class that is ultimately chosen, this requires that \(c\) makes it to Round 2 and beats \(c^{\text{pred}}\) in Round 2. For \(c\) to make it to Round 2, the adversary needs to ensure that it beats either \(c^{\text{pred}}\) or \(c^{\text{sec}}\) in Round 1. Given the previous case, we can assume that \(c^{\text{pred}}\) makes it to Round 2, which means \(c\) needs to beat \(c^{\text{sec}}\). The number of poisoned samples required for this is at least \[\text{Cert}^{\mathds{R}2}_{c,1}:=\text{Certv1}(\{f_{i}\}_{i=1}^{k},c^{\text{sec }},c).\] Note that this also includes the special case \(c=c^{\text{sec}}\) as \(\text{Certv1}(c^{\text{sec}},c^{\text{sec}})=0\).
As for \(c\) beating \(c^{\text{pred}}\) in Round 2, let \(g_{i}^{c}:\mathcal{X}\rightarrow\{c,c^{\text{pred}}\}\) denote the binary \(c^{\text{pred}}\) vs \(c\) classifier obtained from \(f_{i}\). Formally, we set \(g_{i}^{c}(x)\) to \(c\) if \(f_{i}^{\text{logits}}(x,c)>f_{i}^{\text{logits}}(x,c^{\text{pred}})\) and set it to \(c^{\text{pred}}\) otherwise. We can lower bound the number of poisoned samples the adversary requires with
\[\text{Cert}^{\mathds{R}2}_{c,2}:=\text{Certv1}(\{g_{i}^{c}\}_{i=1}^{k},c^{ \text{pred}},c).\]
Overall, since the adversary can choose the class \(c\), we obtain the bound
\[\text{Cert}^{\mathds{R}2}:=\min_{c\neq c^{\text{pred}}}\max\{\text{Cert}^{ \mathds{R}2}_{c,1},\text{Cert}^{\mathds{R}2}_{c,2}\}.\]
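Before stating the theorem, the way these quantities combine can be summarized in a short sketch; the three certificate callables are assumed to be provided by the specific ensemble and are placeholders rather than a fixed API of the method.

```python
def roe_prediction_certificate(classes, c_pred, c_sec, certv2, certv1, certv1_round2):
    """Combine 1v1 / 2v1 certificates into a ROE prediction certificate.

    certv2(c, c1, c2):        poisons needed for c1 and c2 to both beat c in Round 1.
    certv1(c, c_prime):       poisons needed for c_prime to beat c in the Round 1 voting.
    certv1_round2(c_pred, c): the same bound computed over the binary c_pred-vs-c voters g_i^c.
    """
    others = [c for c in classes if c != c_pred]
    # Case 1: c_pred is eliminated in Round 1 (two other classes outvote it).
    cert_r1 = min(certv2(c_pred, c1, c2) for c1 in others for c2 in others if c1 != c2)
    # Case 2: some class c reaches Round 2 (beats c_sec) and then beats c_pred in Round 2.
    cert_r2 = min(max(certv1(c_sec, c), certv1_round2(c_pred, c)) for c in others)
    return min(cert_r1, cert_r2)
```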
Given the above analysis, we obtain the following theorem, a formal proof of which is provided in Appendix C.2.
**Theorem 3.4** (ROE prediction certificate).: _Let \(c^{\text{pred}}\) denote the prediction of Algorithm 1 after training on a dataset \(D\). For any training set \(D^{\prime}\), if_
\[d_{\text{sym}}(D,D^{\prime})<\min\left\{\text{Cert}^{\mathds{R}1},\text{Cert}^{ \mathds{R}2}\right\},\]
_Algorithm 1 still predicts \(c^{\text{pred}}\) when trained on the dataset \(D^{\prime}\)._
We now show how we can obtain 1v1 and 2v1 certificates for DPA+ROE and FA+ROE.
**Certificate for DPA+ROE.** We start with calculating \(\texttt{Certv1}(c,c^{\prime})\). Since each poisoned sample can affect at most one model, the optimal action for the adversary is to "flip" a vote from \(c\) to \(c^{\prime}\). The adversary, therefore, requires at least half of the gap between the votes of \(c\) and \(c^{\prime}\). Formally, we use the following lemma, the proof of which is provided in Appendix C.3.1.
**Lemma 3.5** (DPA+ROE 1v1 certificate).: _Define \(\texttt{Certv1}\) as_
\[\texttt{Certv1}(c,c^{\prime}):=\left\lceil\frac{\max\left(0,\texttt{gap}(c,c^ {\prime})\right)}{2}\right\rceil.\]
_Then \(\texttt{Certv1}\) is a 1v1 certificate for DPA+ROE._
Calculating \(\texttt{Certv2}(c,c_{1},c_{2})\) is more complex as the adversary's optimal action choice is less clear. Intuitively, while the adversary should always change votes from \(c\) to either \(c_{1}\) or \(c_{2}\), it is not clear which class it should choose. To solve this issue, we use dynamic programming. Defining \(\texttt{gap}_{i}:=\max\{0,\texttt{gap}(c,c_{i})\}\) for \(i\in\{1,2\}\), we calculate \(\texttt{Certv2}\) as a function of \(\texttt{gap}_{1},\texttt{gap}_{2}\). As long as \(\texttt{gap}_{1},\texttt{gap}_{2}>2\), an optimal adversary should first choose a poison to reduce one of the gaps and then continue the poisoning process. This leads to a recursive formulation which we solve efficiently using dynamic programming. Formally, we fill a matrix dp of size \([k]^{2}\) where if \(\min(i,j)\geq 2\) we set
\[\text{dp}[i,j]=1+\min\{\text{dp}[i-1,j-2],\;\text{dp}[i-2,j-1]\},\]
and if \(\min(i,j)<2\), we set \(\text{dp}[i,j]:=\left\lceil\frac{\max(i,j)}{2}\right\rceil\). We obtain the following lemma, the proof of which is in Appendix C.3.2.
**Lemma 3.6** (DPA+ROE 2v1 certificate).: _Define_
\[\texttt{Certv2}(c,c_{1},c_{2}):=\text{dp}[\texttt{gap}_{1},\texttt{gap}_{2}],\]
_where \(\text{gap}_{i}:=\max\{0,\texttt{gap}(c,c_{i})\}\). Then \(\texttt{Certv2}\) is a 2v1 certificate for DPA+ROE._
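A minimal sketch of the DPA+ROE certificates described above, assuming the gaps have already been computed from the models' Round 1 votes:

```python
from functools import lru_cache
from math import ceil

def dpa_certv1(gap):
    """Lemma 3.5: each poisoned sample flips at most one vote, shrinking the gap by 2."""
    return ceil(max(0, gap) / 2)

def dpa_certv2(gap1, gap2):
    """Lemma 3.6: dynamic program over the two (non-negative) gaps."""
    @lru_cache(maxsize=None)
    def dp(i, j):
        if min(i, j) < 2:                       # only the larger remaining gap matters
            return ceil(max(i, j) / 2)
        # One poison flips a vote from c to c_1 or to c_2:
        # the chosen gap drops by 2, the other one by 1.
        return 1 + min(dp(i - 1, j - 2), dp(i - 2, j - 1))
    return dp(max(0, gap1), max(0, gap2))
```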
**Certificate for FA+ROE.** We start with the 1v1 certificate. Consider a poisoned sample of the adversary and assume it falls in some bucket \(i\). By definition of buckets, this can only affect the models \(f_{j}\) satisfying \(j\in h_{\texttt{spr}}(i)\). If model \(j\) votes for \(c\), this can reduce the gap by at most two, and if the model votes for some class \(\tilde{c}\notin\{c,c^{\prime}\}\), it can reduce the gap by 1. This allows us to bound the effect of poisoning each bucket on the gap. As we will see, the effect of poisoning multiple buckets is, at most, the sum of the effects of each bucket. Formally, we obtain the following lemma, the proof of which is in Appendix C.4.1.
**Lemma 3.7** (FA+ROE 1v1 certificate).: _Given two classes \(c,c^{\prime}\), define the poisoning power of each bucket \(b\in[kd]\) as_
\[\texttt{pw}_{b}:=\sum_{i\in h_{\texttt{spr}}(b)}2\mathds{1}\left[f_{i}(x)=c \right]+\mathds{1}\left[f_{i}(x)\notin\{c,c^{\prime}\}\right].\]
_Let \(\texttt{Certv1}(c,c^{\prime})\) be the smallest number such that the sum of the \(\texttt{Certv1}\) largest values in \((\texttt{pw}_{b})_{b\in[kd]}\) is at least \(\text{gap}(c,c^{\prime})\). Then \(\texttt{Certv1}\) is a 1v1 certificate._
Formal pseudocode for obtaining \(\texttt{Certv1}\) is provided in Algorithm 4.
In order to calculate \(\texttt{Certv2}(c,c_{1},c_{2})\), we first observe that the adversary needs at least \(\max_{i}(\texttt{Certv1}(c,c_{i}))\) poisoned samples since both \(c_{1}\) and \(c_{2}\) need to beat \(c\). This is not necessarily enough, however, as making \(c_{1}\) and \(c_{2}\) beat \(c\)_simultaneously_ may be more difficult than making each \(c_{i}\) beat \(c\) individually. In order to obtain a stronger bound, we use an approach inspired by duality and consider the conical combination of the constraints. Defining \(\text{gap}_{i}:=\max\{0,\texttt{gap}(c,c_{i})\}\), we observe that if \(\text{gap}_{1}\) and \(\text{gap}_{2}\) both become non-positive, then so does every combination \(\lambda\,\texttt{gap}_{1}+\lambda^{\prime}\,\texttt{gap}_{2}\) for \(\lambda,\lambda^{\prime}\geq 0\). As a special case, this includes \(\texttt{gap}^{+}:=\texttt{gap}_{1}+\text{gap}_{2}\). We can bound the number of poisoned samples for making \(\texttt{gap}^{+}\) non-positive using a similar argument as the 1v1 certificate. Each bucket \(b\) can only affect models \(j\) such that \(j\in h_{\texttt{spr}}(b)\). If \(j\) votes for \(c_{1}\) or \(c_{2}\), the gap cannot be reduced. If \(j\) votes for \(c\), the gap can be reduced by at most \(3\) and if \(j\) votes for some \(\tilde{c}\notin\{c,c_{1},c_{2}\}\), the gap can be reduced by at most \(1\). We define the _total poisoning power_ of each bucket as
\[\texttt{pw}_{b}^{+}:=\sum_{i\in h_{\texttt{spr}}(b)}3\mathds{1}\left[f_{i}(x)=c\right]+\mathds{1}\left[f_{i}(x)\notin\{c,c_{1},c_{2}\}\right],\]
where we hide the dependence on \(c,c_{1},c_{2}\) for brevity. We obtain the following lemma, the proof of which is in Appendix C.4.2.
**Lemma 3.8** (FA+ROE 2v1 certificate).: _For any \(c,c_{1},c_{2}\in\mathcal{C}\), let \(\texttt{Certv2}^{+}\) denote the smallest number such that the sum of the \(\texttt{Certv2}^{+}\) largest values in \((\texttt{pw}_{b}^{+})_{b\in[kd]}\) is at least \(\texttt{gap}^{+}\). For \(i\in\{1,2\}\), define \(\texttt{Certv2}^{(i)}:=\texttt{Certv1}(c,c_{i})\). Finally, define \(\texttt{Certv2}\) as_
\[\texttt{Certv2}:=\max\{\texttt{Certv2}^{(1)},\texttt{Certv2}^{(2)},\texttt{ Certv2}^{+}\}.\]
_Then \(\texttt{Certv2}\) is a 2v1 certificate for FA+ROE._
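The FA+ROE certificates reduce to sorting the per-bucket poisoning powers; the following is a sketch under the assumption that the models' Round 1 votes and the bucket-to-model spreading are given explicitly (the helper names are ours).

```python
import numpy as np

def smallest_prefix_reaching(powers, target):
    """Smallest number of buckets whose largest powers sum to at least target."""
    if target <= 0:
        return 0
    csum = np.cumsum(np.sort(np.asarray(powers))[::-1])
    idx = int(np.searchsorted(csum, target))
    return idx + 1 if idx < len(csum) else len(csum)    # conservative fallback if unreachable

def fa_certv1(votes, spread, c, c_prime, gap):
    """Lemma 3.7: votes[i] = f_i(x); spread[b] lists the models trained on bucket b."""
    powers = [sum(2 * (votes[i] == c) + (votes[i] not in (c, c_prime)) for i in models)
              for models in spread]
    return smallest_prefix_reaching(powers, gap)

def fa_certv2(votes, spread, c, c1, c2, gap1, gap2):
    """Lemma 3.8: maximum of the two 1v1 bounds and the summed-gap bound."""
    powers_plus = [sum(3 * (votes[i] == c) + (votes[i] not in (c, c1, c2)) for i in models)
                   for models in spread]
    gap_plus = max(0, gap1) + max(0, gap2)
    return max(fa_certv1(votes, spread, c, c1, gap1),
               fa_certv1(votes, spread, c, c2, gap2),
               smallest_prefix_reaching(powers_plus, gap_plus))
```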
## 4 Evaluation
In this section, we empirically analyze our method and demonstrate that it reaches state-of-the-art results in certified robustness. In some cases, this comes with considerably less computation than the baseline.
### Experimental setting
We consider the same setup as prior work (Levine and Feizi, 2020; Wang et al., 2022b) and evaluate our method on MNIST (LeCun et al., 1998), CIFAR-10 (Krizhevsky et al., 2009) and GTSRB (Stallkamp et al., 2012) datasets. We
similarly use the Network-In-Network (Lin et al., 2013) architecture, trained with the set of hyperparameters from (Gidaris et al., 2018). (Wang et al., 2022a) observe that the accuracy of ensemble methods can be further improved by having better base classifiers, i.e., base classifiers that have better classification accuracy. They improve over the original DPA by training base classifiers on augmented versions of the datasets. As we want a fair comparison to the FA baseline, we train the classifiers of both DPA and FA as in (Wang et al., 2022b).
As in prior work, we consider _certified fraction_ (CF) as our performance metric. Given a budget \(B\) for the adversary, the certified fraction of the dataset denotes the fraction of test samples that are _provably_ classified correctly as long as the dataset is not altered by more than \(B\) points.
As baselines, we use the Deep Partition Aggregation (DPA) method of (Levine and Feizi, 2020) and the Finite Aggregation (FA) method of (Wang et al., 2022b). As discussed in (Wang et al., 2022b), FA is effectively a generalization of DPA that uses overlapping partitions. Compared to DPA, FA takes an additional parameter \(d\) and uses \(d\) times as many base models. When using \(d=1\), FA coincides with DPA. While larger values of \(d\) increase the robustness of the model, this comes at the cost of increased computation; the training cost for FA is \(d\) times the training cost for DPA.
### Results
Our results are shown in Tables 1, 2, 3 and Figs 2, 3. As seen in Table 1, when we apply our aggregation method to FA, it can remarkably improve the certified accuracy of the original Finite Aggregation (we compare these two methods with the same \(d\)). Improvements can be up to \(3\%\) or \(4\%\). This implies that FA+ROE is the new **state-of-the-art** in provable defense. Furthermore, based on the results in Table 2, DPA+ROE significantly improves the original DPA baseline and similarly, improvements can be up to \(3\%\) or \(4\%\).
Perhaps more impressively, as seen in Table 3, DPA+ROE also competes with, and for larger values of \(B\) outperforms, FA while requiring significantly less training cost, as its training cost is equivalent to that of DPA. For example, on the CIFAR-10 dataset when \(k=250\), using a single NVIDIA GeForce RTX 2080 Ti GPU, the total training time of the classifiers used in DPA+ROE is around \(3\) hours, while it takes around \(47.3\) hours to train the classifiers needed in FA with \(d=16\). Roughly speaking, training FA with parameter \(d\) takes \(d\) times longer than training DPA or DPA+ROE.
Although DPA+ROE uses less training time, it obtains a higher certified accuracy for larger values of \(B\), e.g., on the standard CIFAR-10 dataset when \(k=50\), it obtains a higher certified fraction than FA with \(d=32\) when \(B\geq 15\)**even
\begin{table}
\begin{tabular}{|c||c||c||c|c|c|c|c|} \hline dataset & \(k\) & method & \(d\) & \multicolumn{4}{c|}{certified fraction} \\ \hline \hline \multirow{4}{*}{MNIST} & & & \(B\leq 100\) & \(B\leq 200\) & \(B\leq 300\) & \(B\leq 400\) & \(B\leq 500\) \\ \cline{3-8} & & FA & 16 & 92.75\% & 87.89\% & 78.91\% & 62.42\% & 31.97\% \\ \cline{3-8} & 1200 & FA+ROE & 16 & 92.80\%(+0.05\%) & 88.09\%(+0.2\%) & 80.26\%(+1.35\%) & 65.31\%(+2.89\%) & 36.76\%(+4.79\%) \\ \cline{3-8} & & FA & 32 & 92.97\% & 88.49\% & 80.17\% & 64.34\% & 31.09\% \\ \cline{3-8} & & FA+ROE & 32 & 92.99\%(+0.02\%) & 88.76\%(+0.27\%) & 81.49\%(+1.32\%) & 66.72\%(+2.38\%) & 35.78\%(+4.69\%) \\ \hline \hline \multirow{4}{*}{CIFAR-10} & & & & \(B\leq 5\) & \(B\leq 10\) & \(B\leq 15\) & \(B\leq 18\) & \(B\leq 20\) \\ \cline{3-8} & & FA & 16 & 60.55\% & 48.85\% & 34.61\% & 25.46\% & 19.90\% \\ \cline{3-8} & 50 & FA+ROE & 16 & 61.60\%(+1.05\%) & 51.18\%(+2.33\%) & 37.12\%(+2.51\%) & 28.49\%(+3.03\%) & 22.08\%(+2.18\%) \\ \cline{3-8} & & FA & 32 & 61.31\% & 50.31\% & 36.03\% & 26.55\% & 19.93\% \\ \cline{3-8} & & FA+ROE & 32 & 62.52\%(+1.21\%) & 52.55\%(+2.24\%) & 38.83\%(+2.8\%) & 29.05\%(+2.5\%) & 21.97\%(+2.04\%) \\ \cline{3-8} & & & & \(B\leq 10\) & \(B\leq 20\) & \(B\leq 40\) & \(B\leq 50\) & \(B\leq 60\) \\ \cline{3-8} & & FA & 8 & 45.38\% & 36.05\% & 20.08\% & 14.39\% & 9.70\% \\ \cline{3-8} & 250 & FA+ROE & 8 & 46.37\%(+0.99\%) & 38.37\%(+2.32\%) & 23.57\%(+3.49\%) & 17.84\%(+3.45\%) & 13.05\%(+3.35\%) \\ \cline{3-8} & & FA & 16 & 46.52\% & 37.56\% & 21.99\% & 15.79\% & 11.09\% \\ \cline{3-8} & & FA+ROE & 16 & 47.75\%(+1.23\%) & 40.04\%(+2.48\%) & 25.23\%(+3.24\%) & 19.42\%(+3.63\%) & 13.96\%(+2.87\%) \\ \hline \hline \multirow{4}{*}{GTSRB} & & & \(B<5\) & \(B\leq 10\) & \(B\leq 15\) & \(B\leq 20\) & \(B\leq 22\) \\ \cline{3-8} & & FA & 16 & 82.71\% & 74.66\% & 63.77\% & 47.52\% & 35.54\% \\ \cline{3-8} & 50 & FA+ROE & 16 & 82.59\%(-0.12\%) & 75.55\%(+0.89\%) & 65.47\%(+1.7\%) & 50.33\%(+2.81\%) & 38.89\%(+3.35\%) \\ \cline{3-8} & & FA & 32 & 83.52\% & 76.26\% & 66.32\% & 49.68\% & 38.31\% \\ \cline{3-8} & & FA+ROE & 32 & 83.61\%(+0.1\%) & 77.07\%(+0.81\%) & 67.83\%(+1.51\%) & 51.81\%(+2.14\%) & 41.61\%(+3.29\%) \\ \cline{3-8} & & & & \(B\leq 5\) & \(B\leq 15\) & \(B\leq 20\) & \(B\leq 25\) & \(B\leq 30\) \\ \cline{3-8} & & FA & 16 & 48.19\% & 33.95\% & 25.96\% & 18.92\% & 13.82\% \\ \cline{3-8} & 100 & FA+ROE & 16 & 48.00\%(+0.19\%) & 35.76\%(+1.81\%) & 28.92\%(+2.95\%) & 22.30\%(+3.38\%) & 16.32\%(+2.49\%) \\ \cline{3-8} & & FA & 32 & 48.39\% & 34.96\% & 27.05\% & 19.83\% & 14.47\% \\ \cline{3-8} & & FA+ROE & 32 & 48.15\%(-0.25\%) & 36.81\%(+1.84\%) & 29.85\%(+2.8\%) & 23.37\%(+3.54\%) & 17.41\%(+2.95\%) \\ \hline \end{tabular}
\end{table}
Table 1: Certified fraction of FA+ROE, and original FA with various values of hyperparameter \(d\) with respect to different attack sizes \(B\). Improvements of FA+ROE over the original FA (with same \(d\)) are highlighted in blue if they are positive and red otherwise.
though it uses 32 times less computation in training.
**Effect of the budget \(B\).** Our results show that ROE methods are especially useful for larger values of the adversary's budget \(B\). Intuitively, ROE is utilizing base classifiers that were discarded by DPA and FA. As such, for a fixed budget \(B\), _the ratio_ of the poisoned samples to the utilized models is considerably smaller for our method, which allows us to obtain improved results. We note that this is in strong contrast to FA, where for larger values of \(B\), the accuracy gains compared to DPA diminish and eventually cease to exist. Indeed, as seen in Figure 2, FA can actually be _worse_ than DPA for large budgets, while our method remains strongly favorable, as we achieve 5% higher certified fraction on the standard CIFAR-10 dataset.
While our aggregation method performs well when the adversary's budget is high, we see a slightly lower certified fraction when \(B\) is relatively small. In these cases, the certified fraction is close to clean accuracy, i.e., the accuracy of the model when the training data is not poisoned. Indeed, as seen in Figures 2 and 3, ROE methods have slightly lower clean accuracy because they involve all models in prediction, even models that are not accurate for a given test sample. On the other hand, when the model's prediction is correct, involving more models makes the adversary's task harder, i.e., it needs more poisoned samples.
## 5 Conclusion
In this paper, we introduced Run-Off Election (ROE), a new aggregation method for ensemble-based defenses against data poisoning. We proposed a novel two-stage election across the base models of the ensemble that utilizes all of the models in order to increase the prediction gap between the top and runner-up classes. We showed how to obtain prediction certificates for our method in a unified manner by defining the notions of 1v1 and 2v1 certificates. Based on these ideas, we proposed two new defense methods, DPA+ROE and FA+ROE, and calculated their certificates using techniques inspired by dynamic programming and duality. We evaluated our methods on standard benchmarks based on prior work and observed improved poisoning certificates while simultaneously reducing the training cost. In fact, our method established a new state-of-the-art in provable defense against general data poisoning in several datasets.
An interesting direction for future work is to extend our methodology to other ensemble methods including the ones producing stochastic certificates. As discussed in Section 3, in principle, ROE can be applied on top of any ensemble method, though it is not immediately clear how one can obtain prediction certificates for such hybrid models. Given the unified approach we proposed in Section 3.3, calculating a prediction certificate reduces to calculating 1v1 and 2v1 certificates. While obtaining these certificates for any
Figure 2: **First row**: The curves of certified fraction of different methods on different datasets. **Second row**: The improvements of certified fraction over DPA. Plots in the first column refer to CIFAR-10 (\(k=250\)), plots in the second column refer to GTSRB (\(k=50\)), and plots in the last column correspond to MNIST (\(k=1200\)). Note that the training cost of different methods scales up with \(d\), i.e., **training of FA or FA+ROE with parameter \(d\)** takes roughly \(d\) **times more than that of DPA or DPA+ROE**. When the adversary’s budget is large, **DPA+ROE outperforms FA** while using significantly less training cost.
method will likely involve new technical challenges, we hope that the techniques we have utilized in deriving certificates for DPA+ROE and FA+ROE can also be used in other settings.
## Acknowledgements
This project was supported in part by NSF CAREER AWARD 1942230, a grant from NIST 60NANB20D134, HR001119S0026 (GARD), ONR YIP award N00014-22-1-2271, Army Grant No. W911NF2120076 and the NSF award CCF2212458.
|
2309.00665 | Fused Classification For Differential Face Morphing Detection | Face morphing, a sophisticated presentation attack technique, poses
significant security risks to face recognition systems. Traditional methods
struggle to detect morphing attacks, which involve blending multiple face
images to create a synthetic image that can match different individuals. In
this paper, we focus on the differential detection of face morphing and propose
an extended approach based on fused classification method for no-reference
scenario. We introduce a public face morphing detection benchmark for the
differential scenario and utilize a specific data mining technique to enhance
the performance of our approach. Experimental results demonstrate the
effectiveness of our method in detecting morphing attacks. | Iurii Medvedev, Joana Pimenta, Nuno Gonçalves | 2023-09-01T16:14:29Z | http://arxiv.org/abs/2309.00665v1 | # Fused Classification For Differential Face Morphing Detection
###### Abstract
Face morphing, a sophisticated presentation attack technique, poses significant security risks to face recognition systems. Traditional methods struggle to detect morphing attacks, which involve blending multiple face images to create a synthetic image that can match different individuals. In this paper, we focus on the differential detection of face morphing and propose an extended approach based on fused classification method for no-reference scenario. We introduce a public face morphing detection benchmark for the differential scenario and utilize a specific data mining technique to enhance the performance of our approach. Experimental results demonstrate the effectiveness of our method in detecting morphing attacks.
## 1 Introduction
The development of deep learning techniques in recent years has led to significant progress in the field of face recognition, but sophisticated presentation attack techniques, such as face morphing, continue to pose security risks that require new protection solutions. Face morphing involves merging/blending two or more digital face images to create a synthetic image that can share the biometric properties of the original images and match different individuals. Such generated images can be difficult to detect using traditional human or computer-based face recognition methods.
The risks associated with face morphing are not hypothetical; they have been demonstrated through real-world incidents. One notable example occurred in 2018 when a German activist exploited face morphing techniques to issue an authentic German passport using a morphed face image of Federica Mogherini (at that time High Representative of the Union for Foreign Affairs and Security Policy) blended with their own photo [28]. This incident highlighted the potential for face morphing attacks to deceive identity verification systems and emphasized the need for effective detection methods. Additionally, face morphs have occasionally been detected during border control procedures, raising concerns about the circulation of morphed documents. Some recent investigations [44] acknowledged the presence of morphing cases, indicating the tangible risks and uncertainties surrounding the prevalence of such documents. These real-world examples underscore the urgency of robust morphing detection approaches to mitigate the security risks associated with face morphing attacks. That is why face morphing and its detection methods have gained interest in both industry [43] and academia [25].
Morphing detection methods in facial biometric systems can be categorized into two pipelines based on the processing scenario. The _no-reference_ morphing attack detection algorithm is designed to detect morphing in a single image, with a focus on mitigating the risks of accepting manipulated images during the _enrollment_ process, where successful acceptance of forged images can lead to the issuance of an authentic document that could deceive the face recognition system.
On the other hand, the _differential_ morphing attack detection algorithms involve acquiring live data from an authentication system to provide reference information for detecting morphing attacks. This usually occurs during _automatic border control_ and such approaches aim to identify discrepancies between the presented face and the stored biometric data, enabling the system to detect potential morphing attempts in real-time and prevent unauthorized access with malicious ID documents (documents with accepted face morphs).
In this paper, we focus on differential face morphing detection and propose a novel deep learning method that incorporates sophisticated face recognition tasks and employs a fused classification scheme for morph classification. We follow the no-reference MorDeephy [21] approach and adapt its methodology and data to the differential case.
Additionally, we extend the benchmark utilities proposed in [21] with a public face morphing detection benchmark for the differential scenario.
## 2 Related Work
### Face Recognition
Contemporary face recognition methods heavily rely on the utilization of deep learning techniques, particularly convolutional neural networks (CNNs), which have proven to be highly effective in extracting discriminative features from unconstrained facial images. [33]. These networks possess the ability to learn complex patterns and structures, making them well-suited for tackling the challenges associated with facial pattern recognition tasks.
Various deep learning strategies are employed for face recognition, all aimed at extracting low-dimensional facial representations - deep face features with high discriminatory capabilities.
For example, metric learning techniques focus on explicit optimizing the face representation by contrasting pairs of matched and non-matched samples with similarity metric [38]. Achieving reliable convergence with these methods necessitates extensive datasets and advanced sample mining techniques.
Classification-based methods have received major attention and they are are better represented in recent academic research. These methods focus on learning face representation implicitly through a closed-set identity classification task [41]. Deep networks in these approaches encapsulate face representation in the last hidden layer and typically employ various softmax-based loss functions [41].
To achieve better discriminative properties of deep facial features, various techniques are used. For instance, explicit compacting of intra-class features towards their center [46] or several types of marginal restrictions, which address inter-class discrepancy [11, 40]. Many recent works focused on investigating sample-specific learning strategies, which are driven by various characteristics, such as sample quality [22] or hardness of classification [15]. Some works use properties of the embedding as a proxy for image quality (such as the norm of the features) [19, 23], or rely on artificial assignment by known data augmentation [39]. These approaches try to control the feature distribution in the discriminative feature domain.
### Face Morphing
Modern face recognition systems can very accurately match images of individuals; however, they are still vulnerable to various malicious presentation attacks. Face morphing allows designing such an attack and drastically increases the probability that a face recognition network returns matched embeddings for unmatched biometric samples, especially when the thresholds of face recognition systems are not set to support critically low false match rates.
Basic landmark-based face morphs were first investigated by Ferrara _et al._[12]. Face morphing was performed directly in the image spatial domain by face landmark alignment, image warping, and blending. Various morphing algorithms mentioned in the literature follow this strategy [3].
The field of face morphing has witnessed significant advancements with recent breakthroughs in Deep Learning techniques, leading to the development of several innovative tools and methodologies. Generative Adversarial Networks (GANs) have emerged as a prominent and widely utilized approach in various generative tasks, including face morphing. The MorGAN [7] approach pioneered this tool for face morphing generation. The StyleGAN [17] approach introduced a latent domain representation to control various aspects of the generated image, which enables generating high-quality face morphs without blending artifacts. The MIPGAN [47] method optimized StyleGAN specifically for face morphing, preserving the identity of the generated morphed face image. The diffusion autoencoders for face morphing were proposed by MorDIFF [6] to generate smooth and high-fidelity face morphing attacks.
### Face Morphing Detection
Initially, the problem of face morphing detection focused on the no-reference scenario, where validation decisions were based on single image presentations. However, considering practical concerns, it became valuable to explore a differential approach that simulates the process of document verification by border control officers.
No-reference face morphing detection algorithms initially relied on analyzing local image characteristics like Binarized Statistical Image Features (BSIF) [29] or sensor noise (Photo Response Non-Uniformity) [9], texture features [32], local features in the frequency and spatial image domains [24], or a fusion of various features [35]. Deep learning methods for the no-reference case typically involve binary classification of pretrained face recognition features [30], which can be combined with local texture characteristics [45]. Additional pixel-wise supervision [8] or an attention mechanism [1] can be applied. The MorDeephy method [21] generalized single-image morphing detection to unseen attacks by additional feature regularisation with a face recognition task.
In contrast to the no-reference case, differential face morphing detection is closely correlated with face recognition, as the discriminability of deep face representations usually helps combat attacks in this scenario.
For instance, many differential approaches rely on classification of pretrained deep features for face recognition [36].
Borghi et al. [4] conducted differential morphing detection through the fine-tuning of pretrained networks within a sophisticated framework involving identity verification and artifacts detection modules.
Qin et al. [27] proposed a method for detecting and locating face morphing attacks for both (single-image and differential) scenaria. The authors utilise feature-wise supervision, which provides better characterization of morphing patterns and localization of morphed areas.
Ferrara et al. [13] presented an alternative approach to the differential scenario, which involves reverting morphing by retouching the testing face image using a trusted live capture. This technique aims to unveil the true identity of the legitimate document owner.
In this work we propose to extend the no-reference MorDeephy [21] approach to the differential morphing detection scenario by adapting the methodology and data mining techniques to the differential pipeline.
## 3 Methodology
From the original work [21] we inherit the S-MAD methodology, which requires several modifications for the differential case. Recall that face morphing detection here is driven by the behavior of deep face features, which is achieved by regularizing the morphing detection with a face recognition task. The definition of the task is motivated by the ambiguity of classifying face morphs (since they belong to two or more identities). This leads to a setup with two separate CNN-based deep networks that treat bona fide samples similarly but handle morphed samples differently. These networks do not share weights and are not trained in a contrastive manner, where positive and negative pairs are matched. Both networks learn high-level features through classification tasks, with each network assigning different identity labels to face morphs. The _First Network_ labels them based on the original identity from the first source image, while the _Second Network_ labels them according to the second original label.
In comparison to the S-MAD scenario, where the same image is sent to both networks, the D-MAD case implies processing a pair of images. That is why the fused classification strategy is adapted to D-MAD in the following way (see Fig. 1).
We keep the assumption of associating each image with two identity labels \(y1\) and \(y2\), which are defined based on the image origin. For the Bona Fide samples those labels are the same (copied from the original face image label), while for the Morphs those labels are different and are taken from the source face images. The sampling process for the _First Network_ is not changed. For instance, the image \(\dot{I}\) with labels \(\dot{y}1_{\dot{I}}\) and \(\dot{y}2_{\dot{I}}\) is sampled for the input. For the input of the _Second Network_ the image \(\ddot{I}\) (complementary to \(\dot{I}\)) with labels \(\ddot{y}1_{\ddot{I}}\) and \(\ddot{y}2_{\ddot{I}}\) is selected with the condition that \(\ddot{y}1_{\ddot{I}}=\dot{y}1_{\dot{I}}\). The loss function components follow the rules below. The identity classification for the _First Network_ is made by \(\dot{y}1_{\dot{I}}\), and by \(\ddot{y}2_{\ddot{I}}\) for the _Second Network_. The ground truth cross label for the morphing binary classification is made by matching \(\dot{y}2_{\dot{I}}\) and \(\ddot{y}2_{\ddot{I}}\).
It is important to note that such a formulation allows both images \(\dot{I}\) and \(\ddot{I}\) to be Morphs. However, to match the D-MAD scenario, where the _Live Enrollment_ image is genuine and trusted, we constrain the selection of \(\ddot{I}\) to Bona Fide samples.
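A minimal sketch of this pair sampling and labeling follows; the container names, the 50/50 morph probability, and the omission of the disjoint identity-subset bookkeeping described in Section 4 are our simplifications.

```python
import random

def sample_pair(bona_fide_by_id, morphs):
    """Draw one (I_dot, I_ddot) training pair with its labels, following the rules above.

    bona_fide_by_id: dict identity -> list of bona fide images of that identity.
    morphs: list of tuples (image, y1, y2), with y1, y2 the two source identities.
    """
    if random.random() < 0.5:                               # I_dot is a Morph
        img_dot, y1_dot, y2_dot = random.choice(morphs)
    else:                                                   # I_dot is Bona Fide: y1 == y2
        y1_dot = random.choice(list(bona_fide_by_id))
        img_dot, y2_dot = random.choice(bona_fide_by_id[y1_dot]), y1_dot
    # I_ddot: a trusted Bona Fide image sharing the first label (live enrollment constraint)
    img_ddot = random.choice(bona_fide_by_id[y1_dot])
    y1_ddot = y2_ddot = y1_dot
    t = int(y2_dot != y2_ddot)                              # abs(sgn(.)) reduces to a mismatch flag
    return (img_dot, y1_dot, y2_dot), (img_ddot, y1_ddot, y2_ddot), t
```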
Due to the above modifications the formulation of the identity classification softmax-based loss components is transformed as follows:
\[L_{1}=-\frac{1}{N}\sum_{i}^{N}\log\frac{e^{\dot{W}_{\dot{y}1_{i}}^{T}\dot{f}_{i}+\dot{b}_{\dot{y}1_{i}}}}{\sum_{j}^{C}e^{\dot{W}_{j}^{T}\dot{f}_{i}+\dot{b}_{j}}} \tag{1}\] \[L_{2}=-\frac{1}{N}\sum_{i}^{N}\log\frac{e^{\ddot{W}_{\ddot{y}2_{i}}^{T}\ddot{f}_{i}+\ddot{b}_{\ddot{y}2_{i}}}}{\sum_{j}^{C}e^{\ddot{W}_{j}^{T}\ddot{f}_{i}+\ddot{b}_{j}}}, \tag{2}\]
where \(\left\{\dot{f}_{i},\ddot{f}_{i}\right\}\) denote the deep features of the \(i\)-th sample pair, \(\left\{\dot{W},\ddot{W}\right\}\) and \(\left\{\dot{b},\ddot{b}\right\}\) are the weights and biases of the last fully connected layer (respectively for the \(\left\{\mathit{First},\mathit{Second}\right\}\) networks). \(N\) is the number of samples in a batch and \(C\) is the total number of classes.
For the morph binary classification component only the definition of the cross label is changed:
\[L_{3}=-\frac{1}{N}\sum_{i}^{N}t\log\frac{1}{1+e^{-D}}+(1-t)\log\left(1-\frac{ 1}{1+e^{-D}}\right), \tag{3}\]
where \(D=\dot{f}_{i}\cdot\ddot{f}_{i}\) is the dot product of the high-level features extracted by the _First_ and _Second_ backbones, and \(t=\mathrm{abs}(\mathrm{sgn}(\dot{y}2_{i}-\ddot{y}2_{i}))\) is the cross-label of the \(i\)-th sample pair.
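For illustration, a per-pair computation of Eqs. (1)-(3) is sketched below (NumPy, without gradients; the argument names are ours).

```python
import numpy as np

def fused_losses(f1, f2, W1, b1, W2, b2, y1, y2, t):
    """Losses of Eqs. (1)-(3) for a single sample pair.

    f1, f2: deep features from the First / Second network.
    (W1, b1), (W2, b2): last fully connected layers (C identity classes) of the two networks.
    y1: label used by the First network (y1 of I_dot); y2: label used by the Second network (y2 of I_ddot).
    t: cross-label, 1 if the y2 labels of the pair differ, else 0.
    """
    def softmax_ce(logits, label):
        logits = logits - logits.max()                       # numerical stability
        return -(logits[label] - np.log(np.exp(logits).sum()))

    L1 = softmax_ce(W1.T @ f1 + b1, y1)                      # Eq. (1)
    L2 = softmax_ce(W2.T @ f2 + b2, y2)                      # Eq. (2)
    D = float(f1 @ f2)                                       # dot product of the high-level features
    p = 1.0 / (1.0 + np.exp(-D))
    L3 = -(t * np.log(p) + (1 - t) * np.log(1 - p))          # Eq. (3)
    return L1, L2, L3
```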
The above strategy implies pushing morph samples towards their original classes differently by the _First_ and _Second_ networks. This allows increasing the distance between the morph samples in the feature domain.
In this work we also consider a modification of the above approach. We propose to allocate separate classes for the morph samples. The formulation of such a case means redefining the classification labels _for the morph samples_ and doubling the number of identity classes \(C\): \(\dot{y}1_{i}^{*}=\dot{y}1_{i}+C\); \(\ddot{y}2_{i}^{*}=\ddot{y}2_{i}+C\).
Such class allocation is made differently by the _First_ and _Second_ networks and does not impact how these networks differentiate morphs. However, since it also pushes the morph samples away from their original classes, it can help to increase the discriminative power of deep face features. Further in the work we make experiments with both cases and mark the original strategy with the label _V1_ and the modified strategy with _V2_.
## 4 Data Mining
With the adapted methodology, training can proceed on the same data as S-MAD; only the sampling of this data is different. Recall that the VGGFace2 dataset [5] is used as a source of original bona fide images. We repeat the quality-based filtering of this dataset and generate the respective morphs.
### Morph dataset
In this work we utilize two automatic methods for generating morphs. First, we use a customized landmark-based morphing approach with blending coefficient \(0.5\). Second, we generate GAN-based morphs using the StyleGAN [17] method. To synthesize such morphs for two original images, they are first projected to a latent domain and then their deep representations are interpolated. The resulting morph is generated from this interpolated latent embedding.
To ensure effective learning in the fused classification task (see Fig. 1), it is crucial to have unambiguous class labeling in our training dataset of morphs, which are generated from face images from different classes. To address this, we follow the original S-MAD approach [21] and employ a strategy where the dataset is split into two disjoint parts attributed to the _First_ and _Second_ networks. When generating face morphs, we randomly pair images from these identity subsets and label the morphed images accordingly for classification by the respective networks. This approach acts as a regularization technique and enhances the performance of morphing detection. By separating the dataset into two disjoint identity sets, we ensure consistent classification of morphed combinations by the networks.
### Selfmorphing
Fully automatic landmark morphing methods often introduce visible artifacts to the generated images, which can bias the learning process towards these artifacts. However, real fraudulent morphs are retouched to remove such perceptual artifacts. To address this, we utilize _selfmorphs_, generated by applying face morphing to images of the same identity. We follow the original S-MAD approach [21] and use selfmorphs as bona fide samples to focus on the behavior of deep face features rather than detecting artifacts. We assume that the deep discriminative face features remain intact after selfmorphing.
In our work _selfmorphs_ are generated for both landmark-based and GAN-based morphing approaches.
Figure 1: Schematic of the proposed D-MAD method. For simplicity of visualization, the batch contains a single image pair.
### Result dataset
Our resulting dataset consists of \(\sim\)500k original images from VGGFace2, \(\sim\)250k their landmark-based selfmorphs, \(\sim\)250k their GAN-based selfmorphs, \(\sim\)500k landmark-based morphs and \(\sim\)500k GAN-based morphs. The overall dataset is balanced in the numbers of bona fides and morphs.
## 5 Benchmarking
One commonly used metric for evaluating single image morphing detection is the relationship between the Bona fide Presentation Classification Error Rate (BPCER) and the Attack Presentation Classification Error Rate (APCER) as specified by ISO/IEC 30107-3 [16]. This relationship can be visualized using a Detection Error Trade-off (DET) curve.
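For reference, the operating-point metric used below can be computed from raw detector scores as in the following sketch (assuming higher scores indicate bona fide presentations; the function name is ours).

```python
import numpy as np

def apcer_at_bpcer(bona_fide_scores, morph_scores, target_bpcer=0.01):
    """APCER at a fixed BPCER for a score-based morphing detector."""
    # Threshold such that (approximately) target_bpcer of bona fides fall below it
    threshold = np.quantile(bona_fide_scores, target_bpcer)
    # Attacks are wrongly accepted as bona fide when their score reaches the threshold
    return float(np.mean(np.asarray(morph_scores) >= threshold))
```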
For this work we adopt the public Face Morphing Detection benchmark utilities [21]1 for the differential case. We develop the functionality for generating verification protocols in the differential pipeline and generate several protocols based on the public data. Bona fide pairs in all the protocols are combined from the frontal faces of the following public datasets: FRLL Set [10], FEI [2], Aberdeen and Utrecht [37] (\(\sim\)500 pairs in total). Morph pairs are combined by pairing morphs from the FRLL-Morphs dataset [34] with bona fides from the FRLL Set [10]. We propose several protocols for different types of morphs (protocol names correspond to the FRLL-Morphs subset names): _protocol-asml_ (\(\sim\) 4.5k morph pairs); _protocol-facemorpher_ (\(\sim\) 2.5k morph pairs); _protocol-webmorph_ (\(\sim\) 2.5k morph pairs); _protocol-stylegan_ (\(\sim\) 2.5k morph pairs).
Footnote 1: [https://github.com/iurii-m/MorDeephy](https://github.com/iurii-m/MorDeephy)
Several benchmarks (with restricted data and protocols) are available for evaluating the performance of morphing detection or morphing-resistant algorithms: the NIST FRVT MORPH [26] and FVC-onGoing MAD [31]. They accept both no-reference and differential morphing algorithms; however, they are proprietary and managed by a specific entity, leading to submission restrictions and limited accessibility. In this work we will use the public results of NIST FRVT MORPH to compare with our approach.
## 6 Experiments
### Differential Benchmarking
We compared the fused classification strategy against the binary classification baseline and evaluated both in our custom benchmarks. The baseline is implemented on the same setup (see Fig. 1), where the identity
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{3}{*}{Method} & \multicolumn{8}{c|}{\(APCER@BPCER=\delta\)} \\ \cline{2-9} & \multicolumn{2}{c|}{protocol-asml} & \multicolumn{2}{c|}{protocol-facemorpher} & \multicolumn{2}{c|}{protocol-webmorph} & \multicolumn{2}{c|}{protocol-stylegan} \\ \cline{2-9} & \(\delta=\) & \(\delta=\) & \(\delta=\) & \(\delta=\) & \(\delta=\) & \(\delta=\) & \(\delta=\) \\ & \(0.1\) & \(0.01\) & \(0.1\) & \(0.01\) & \(0.1\) & \(0.01\) & \(0.1\) & \(0.01\) \\ \hline BC & 0.315 & 0.729 & 0.245 & 0.649 & 0.391 & 0.701 & 0.913 & 0.997 \\ \hline FCV1 & 0.063 & 0.351 & 0.066 & 0.514 & 0.135 & 0.529 & 0.556 & 0.959 \\ \hline FCV2 & 0.039 & 0.275 & 0.061 & 0.315 & 0.102 & 0.4595 & 0.501 & 0.957 \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of Fused Classification (FC) with the Binary Classification (BC) by APCER@BPCER = (0.1, 0.01) in several protocols.
Figure 2: DET curves for the D-MAD algorithms by NIST FRVT MORPH benchmark. a) protocol Visa-Border; b) protocol Manual; c) protocol MIPGAN-II; d) protocol Print + Scanned.
classification components are disabled and the training is driven by a single loss component in Eq. 3.
In all the cases we use the EfficientNetB3 [42] backbone network with input image size 300\(\times\)300. It is trained with SGD optimizer for 5 epochs with momentum 0.9 and linearly decreasing learning rate from 0.01 to 0.0001. The batch size is 28.
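A minimal sketch of this training configuration, assuming PyTorch and torchvision are available; the detector heads, data loading, and the loss of Eq. 3 are specific to our pipeline and are only indicated as comments here.

```python
import torch
import torchvision

# EfficientNet-B3 backbone for 300x300 inputs, as described above.
backbone = torchvision.models.efficientnet_b3(weights=None)

optimizer = torch.optim.SGD(backbone.parameters(), lr=0.01, momentum=0.9)
epochs, steps_per_epoch = 5, 1000  # steps_per_epoch is a placeholder value
# Linearly decay the learning rate from 0.01 down to 0.0001 over the whole run.
scheduler = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=1.0, end_factor=0.01, total_iters=epochs * steps_per_epoch
)

# Training loop sketch (batch size 28; forward/backward pass with the fused
# classification loss of Eq. 3 omitted):
# for epoch in range(epochs):
#     for images, labels in loader:
#         loss = criterion(model(images), labels)
#         optimizer.zero_grad(); loss.backward()
#         optimizer.step(); scheduler.step()
```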
Our results (see Fig. 2 and Table 1) demonstrate the superiority of our approach over the baseline. Fused classification allows the detection performance to generalize to unseen data and scenarios. We also conclude that the V2 strategy (where morphs are disentangled from their original classes) is superior to V1 and achieves better MAD performance.
### NIST FRVT MORPH
We evaluate the performance of our top-performing model (Fused Classification V3) by comparing it with several state-of-the-art (SOTA) D-MAD approaches, which have public results on the FRVT NIST MORPH Benchmark [26]. We perform comparison in several protocols: _Visa-Border_ (25727 Morphs); _Manual_ (323 Morphs); _MIPGAN-II_ (2464 Morphs); _Print + Scanned_ (3604 Morphs). All protocols in the comparison utilize a substantial collection of \(\sim\)1M bona fide images. The performance evaluation is conducted using the metrics \(APCER@BPCER=(0.1,0.01)\).
Our performance results (see Table 2, Fig. 3) are comparable to the leaders in several benchmarks. Moreover, our method does not show bias toward a particular morph generation strategy and has the most stable performance across all protocols in comparison to other approaches.
We also present the algorithm Ours(FR) (see Table 2), in which the morphing detection signal of our fused classification detector is multiplied with the similarity score given by a face recognition model. This algorithm indeed demonstrates superior results in all the benchmarks where it was tested. This indicates that currently differential face morph
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{3}{*}{Method} & \multicolumn{8}{c|}{\(APCER@BPCER=\delta\)} \\ \cline{2-9} & \multicolumn{2}{c|}{Visa-Border} & \multicolumn{2}{c|}{Manual} & \multicolumn{2}{c|}{MIPGAN-II} & \multicolumn{2}{c|}{Print+Scan} \\ \cline{2-9} & \(\delta=\) & \(\delta=\) & \(\delta=\) & \(\delta=\) & \(\delta=\) & \(\delta=\) & \(\delta=\) & \(\delta=\) \\ & \(0.1\) & \(0.01\) & \(0.1\) & \(0.01\) & \(0.1\) & \(0.01\) & \(0.1\) & \(0.01\) \\ \hline Scherhag et al. [36] & 0.013 & 0.212 & 0.055 & 0.357 & 0.004 & 0.134 & 0.012 & 0.176 \\ \hline Kashiani et al. [18] & 0.447 & 0.901 & 0.873 & 0.989 & 0.182 & 0.481 & 0.842 & 0.996 \\ \hline Lorenz et al. [20] & 0.432 & 1.000 & 0.634 & 1.000 & 0.168 & 1.000 & 0.732 & 1.000 \\ \hline Ferrara et al. [14] & 0.966 & 0.999 & 0.689 & 0.969 & 0.004 & 0.751 & 0.070 & 0.280 \\ \hline Ours & 0.232 & 0.555 & 0.531 & 0.872 & 0.359 & 0.859 & 0.680 & 0.926 \\ \hline Ours(FR) & 0.087 & 0.453 & - & - & - & - & 0.125 & 0.568 \\ \hline \end{tabular}
\end{table}
Table 2: Comparison with differential image morphing detection methods by APCER@BPCER = (0.1, 0.01) from the NIST FRVT MORPH benchmark.
Figure 3: DET curves for the D-MAD algorithms by NIST FRVT MORPH benchmark. a) protocol Visa-Border; b) protocol Manual; c) protocol MIPGAN-II; d) protocol Print + Scanned.
ing benchmarks share many similarities with 1-1 face verification protocols and can be approached with only a strong face recognition model at hand. Such an approach is reasonable from the practical security perspective of detecting impostors, but it is less sound from the academic perspective, since in the differential case the face morphing detector is prompted to flag identity-mismatched pairs that may not contain face morphs at all.
Although this behavior does not pose significant risks, it should be taken into account when developing algorithms for differential face morphing detection or benchmarks for their evaluation.
## 7 Conclusion
In this paper, we focus on differential face morphing detection and propose a novel deep learning method that incorporates sophisticated face recognition tasks and employs a fused classification scheme for morph classification. We provide public benchmark utilities for differential face morphing detection. Additionally, we raise several questions about the differences between the academic and security-application perspectives on differential face morphing detection.
|
2308.08110 | View Consistent Purification for Accurate Cross-View Localization | This paper proposes a fine-grained self-localization method for outdoor
robotics that utilizes a flexible number of onboard cameras and readily
accessible satellite images. The proposed method addresses limitations in
existing cross-view localization methods that struggle to handle noise sources
such as moving objects and seasonal variations. It is the first sparse
visual-only method that enhances perception in dynamic environments by
detecting view-consistent key points and their corresponding deep features from
ground and satellite views, while removing off-the-ground objects and
establishing homography transformation between the two views. Moreover, the
proposed method incorporates a spatial embedding approach that leverages camera
intrinsic and extrinsic information to reduce the ambiguity of purely visual
matching, leading to improved feature matching and overall pose estimation
accuracy. The method exhibits strong generalization and is robust to
environmental changes, requiring only geo-poses as ground truth. Extensive
experiments on the KITTI and Ford Multi-AV Seasonal datasets demonstrate that
our proposed method outperforms existing state-of-the-art methods, achieving
median spatial accuracy errors below $0.5$ meters along the lateral and
longitudinal directions, and a median orientation accuracy error below 2
degrees. | Shan Wang, Yanhao Zhang, Akhil Perincherry, Ankit Vora, Hongdong Li | 2023-08-16T02:51:52Z | http://arxiv.org/abs/2308.08110v1 | # View Consistent Purification for Accurate Cross-View Localization
###### Abstract
This paper proposes a fine-grained self-localization method for outdoor robotics that utilizes a flexible number of onboard cameras and readily accessible satellite images. The proposed method addresses limitations in existing cross-view localization methods that struggle to handle noise sources such as moving objects and seasonal variations. It is the first sparse visual-only method that enhances perception in dynamic environments by detecting view-consistent key points and their corresponding deep features from ground and satellite views, while removing off-the-ground objects and establishing homography transformation between the two views. Moreover, the proposed method incorporates a spatial embedding approach that leverages camera intrinsic and extrinsic information to reduce the ambiguity of purely visual matching, leading to improved feature matching and overall pose estimation accuracy. The method exhibits strong generalization and is robust to environmental changes, requiring only geo-poses as ground truth. Extensive experiments on the KITTI and Ford Multi-AV Seasonal datasets demonstrate that our proposed method outperforms existing state-of-the-art methods, achieving median spatial accuracy errors below \(0.5\) meters along the lateral and longitudinal directions, and a median orientation accuracy error below \(2^{\circ}\)1.
Footnote 1: Our project page is [https://shanwang-shan.github.io/PureACL-website/](https://shanwang-shan.github.io/PureACL-website/)
## 1 Introduction
Accurate self-localization is a fundamental problem in mobile robotics, particularly in the context of autonomous driving. While Global Positioning System (GPS) is a widely adopted solution, its accuracy hardly meets the stringent requirements of autonomous driving [20]. Real-Time Kinematic (RTK) positioning systems provide an alternative by correcting GPS errors, but their implementation is hindered by the need for signal reference stations [13], rendering them an expensive solution. On the other hand, odometry [18, 4, 37, 32] or simultaneous localization and mapping (SLAM) [17, 11, 25, 32] methods can generate accurate short-term trajectories, however, they experience drift accumulation over time that can only be alleviated through loop closures if the agent's trajectories overlap. Lastly, other self-localization techniques [35, 15, 31, 21] that rely on a pre-constructed 3D High Definition (HD) maps face limitations in terms of the extensive time and resources required for map acquisition and maintenance.
Using off-the-shelf satellite images as ready-to-use maps to achieve cross-view localization brings an alternative and promising way for low-cost localization. However, due to the significant disparity between overhead views captured by satellites and views seen by robots, cross-view localization is more challenging than traditional methods. To address this, it is crucial to purify view-consistent features that can support the localization process. Furthermore, satellite views can be captured at different times, leading to variations in seasonal and temporal conditions. The cross-view consistent purification can also minimize the impact of moving and seasonal objects.
Most previous cross-view localization methods [24, 10, 14, 29, 23, 38] approach the task as an image retrieval problem, leading to coarse localization accuracy that is inferior to commercial GPS which can achieve an error of up to 4.9 meters in open sky conditions [30]. In contrast, our method utilizes a coarse pose that is easily obtainable from the Autonomous Vehicles system, to estimate the fine-grained 3-DoF (lateral, longitudinal, yaw) pose of the robot. This is accomplished through visual cross-view matching, utilizing ground-view images captured by onboard cameras and a
Figure 1: (a) Query ground view (onboard camera) images (front, rear, left, and right). (b) reference satellite image. The initial and ground truth poses, and FoV of cameras are shown in (red) and (green), respectively.
spatially-consistent satellite map. Additionally, our method supports multiple camera inputs, which extend the field of view of the query robot. The setting is illustrated in Fig. 1.
Our fine-grained visual localization method utilizes sparse (keypoint) feature matching, a departure from prior methods that rely on dense feature matching. To reduce the inherent ambiguity in purely visual matching, the method incorporates a camera intrinsic and extrinsic aware spatial embedding. Homography transformation is used to establish correspondences between the two views. An on-ground confidence map is employed to ensure the validity of the transformation and eliminate off-the-ground objects. Additionally, a view consistency confidence map is utilized to mitigate the impact of moving objects and view-point variation. The localization process begins with the extraction of spatially aware deep features and the generation of view-consistent, on-ground confidence maps for both views. View-consistent key points are then detected from the ground view confidence map and matched with their corresponding points in the satellite view. The optimal pose is determined through an iterative search using a differentiable Levenberg-Marquardt (LM) algorithm.
Using Google Maps [8] as the satellite view, we evaluate our method on two datasets: the Ford Multi-AV Seasonal (FMAVS) [1] and the KITTI Datasets [7]. The results demonstrate the superiority of our proposed method, achieving mean localization error of less than \(\{0.14m,3.57^{\circ}\}\) on KITTI with one front-facing onboard camera, and less than \(\{0.88m,0.74^{\circ}\}\) on FMAVS with four surrounding onboard cameras.
We summarize our contributions as below:
* the first sparse visual-only cross-view localization method that estimates accurate pose with low spatial and angular errors.
* a view-consistent on-ground key point detector that reduces the impact of dynamic objects and viewpoint variations, as well as removes off-the-ground objects.
* a spatial embedding that fully utilizes camera intrinsic and extrinsic information to improve the extraction of spatially aware visual features.
* a multi-camera fusion approach that significantly improves localization accuracy.
## 2 Related work
**Depth Aware Accurate Cross-view Localization**. The task of accurate cross-view localization has gained attention in recent years. Researchers have mainly focused on developing solutions for Radar and LiDAR cross-view localization as depth information helps in aligning the ground and satellite perspectives. RSL-Net [26] estimates the robot pose by registering Radar scans on a satellite image. This method was later extended to a self-supervised learning framework in [28]. Another work [27] matches the top-down representation of a LiDAR scan with 2D points detected from satellite images. These methods have limitations and are only effective in environments with strong prior structure knowledge, failing in general, non-urban environments. [2] performs localization on bird's eye view (BEV) LiDAR intensity maps using deep feature matching between LiDAR scan and the intensity map. [34] extends this method by incorporating compressed binary maps. Hybrid sensor solutions have also been explored, such as in [16] where an aerial robot achieves global localization through the use of egocentric 3D semantically labelled LiDAR, IMU, and visual information. CSLA [6] and SIBCL [33] extract visual features from ground and satellite images and use LiDAR points to establish correspondence between the two views. CSLA [6] aims to estimate 2-DoF translation, while SIBCL [33] aims to estimate 3-DoF pose, including an additional orientation. All these methods critically rely on depth information to build the correspondence across the two views. In contrast, our method is a visual-only solution that aims to achieve comparable localization accuracy using cheaper commodity sensors.
**Visual Accurate Cross-view Localization**. Most visual-only cross-view localization methods rely on homography transformations of the ground plane, as they lack reliable depth information. [36] aims to estimate 2-DoF translation using similarity matching and produces a dense spatial distribution to address localization ambiguities. HighlyAccurate [22] projects satellite features into the ground view and optimizes the robot pose through dense feature matching. One of its drawbacks is the limited ability to effectively eliminate outliers, such as noise caused by off-the-ground objects (which violates the assumption of homography transformation of the ground plane) and dynamic objects. As a result, their overall performance is limited. In contrast, our method constructs geometric correspondences across sparse view-consistent on-ground keypoints, ensuring that the pose estimation is based on accurate correspondences leading to improved precision.
## 3 Our Method
Our work aims to achieve fine-grained cross-view localization by accurately estimating the 3-DoF pose, denoted by \(\mathbf{P}_{pred}=\{\phi_{pred},\varphi_{pred},\theta_{pred}\}\), where \(\phi\) and \(\varphi\) represent lateral and longitudinal translations, respectively, and \(\theta\) is the yaw angle. We are given a coarse initial pose \(\mathbf{P}_{init}=\{\phi_{init},\varphi_{init},\theta_{init}\}\), a reference satellite view image \(I^{s}\), and a set of ground-view images \(I^{g}=\{I^{i}\}_{i=1}^{N}\) captured by onboard cameras, where \(N\) is the total number of onboard cameras 2. An overview of the proposed PureACL is shown in Fig. 2. It builds upon three innovative modules: 1) Spatially Aware Feature and Confidence Extractor (SAFCE) (Sec. 3.2), 2) View-consistent On-ground Keypoint Detector (VOKD) (Sec. 3.3), and 3) Multi-camera Fusion (Sec. 3.4). Additionally, our approach utilizes two branches of objective functions inherited from the SIBCL method [33]: the Pose-Aware Branch (PAB) and the Recursive Pose Refine Branch (RPRB). In the following sections, we provide a detailed explanation of each module.
Footnote 2: Our method supports varying onboard camera quantities. In the experiments, we employed \(N=4\) for FMAVS and \(N=1\) for Kitti-CVL.
### Preliminary
For completeness, we provide a brief description of the inherited PAB and RPRB. The PAB utilizes a triplet loss [19] that encourages accurate pose (ground truth) and penalizes incorrect (initial) poses by differentiating the residual between the ground truth and initial pose. Specifically, we compute the loss as follows:
\[L_{triplet}=\log\Big(1+e^{\alpha\big(1-\frac{\sum_{p}w[p]_{\mathbf{P}_{init}}\,\rho(\|r[p]_{\mathbf{P}_{init}}\|^{2})}{\sum_{p}w[p]_{\mathbf{P}_{gt}}\,\rho(\|r[p]_{\mathbf{P}_{gt}}\|^{2})}\big)}\Big), \tag{1}\]
where \(\alpha\) is a hyper-parameter set to 10 based on experimental results, \(\sum_{p}\) represents the sum of all key points, and \(\rho\) is a robust cost function as defined in [9].
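A compact sketch of this loss (our own illustration, assuming NumPy; `r_*` and `w_*` are hypothetical arrays of per-keypoint residuals and weights evaluated at the initial and ground-truth poses, and the robust cost \(\rho\) is taken as the identity for brevity):

```python
import numpy as np

def pose_aware_triplet_loss(r_init, w_init, r_gt, w_gt, alpha=10.0):
    """log(1 + exp(alpha * (1 - S_init / S_gt))), with S = sum_p w[p] * rho(||r[p]||^2).

    r_init, r_gt: (P, C) residual vectors per keypoint; w_init, w_gt: (P,) weights.
    """
    s_init = np.sum(w_init * np.sum(r_init ** 2, axis=-1))
    s_gt = np.sum(w_gt * np.sum(r_gt ** 2, axis=-1))
    # logaddexp(0, x) = log(1 + e^x), numerically stable for large arguments.
    return np.logaddexp(0.0, alpha * (1.0 - s_init / s_gt))
```

The loss approaches zero when the residual at the initial (incorrect) pose is much larger than at the ground-truth pose, which is exactly the behavior the feature extractor is trained to produce.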
The RPRB, on the other hand, aims to refine the initial pose iteratively using the LM algorithm to approach the ground truth pose. It starts with the coarsest level and uses features from each level successively, with each subsequent level initialized with the output of the previous level. Specifically, we update the pose as follows:
\[\delta_{t+1}=\delta_{t}-(\mathbf{H}+\lambda\text{ diag}(\mathbf{H}))^{-1}\mathbf{J}^{\top}\mathbf{W}\Upsilon, \tag{2}\]
where \(\delta\) represents an individual element in the 3-DoF pose. \(t\in\{1,\cdots,M\times L\}\) represents the current iteration, and \(M\) and \(L\) represent the iteration count per level and the total number of levels, respectively. The matrices \(\Upsilon\) and \(\mathbf{W}\) are formed by stacking the residuals \(r[p]_{\mathbf{P}}\) and weights \(w[p]_{\mathbf{P}}\), while \(\lambda\) is the damping factor [21]. The Jacobian and Hessian matrices are defined as follows:
\[\mathbf{J}=\frac{\partial r[p]_{\mathbf{P}}}{\partial\delta}=\frac{\partial F ^{s}[p]}{\partial[p_{2D}^{s}]_{\mathbf{P}}}\frac{\partial[p_{2D}^{s}]_{ \mathbf{P}}}{\partial\delta}\text{ and }\mathbf{H}=\mathbf{J}^{\top}\mathbf{W} \mathbf{J}, \tag{3}\]
where \([p_{2D}^{s}]_{\mathbf{P}}\) is the 2D projection of keypoints \(p\) onto the satellite image using the pose \(\mathbf{P}\), as shown in the right section of Fig. 6. Finally, we supervise the optimized pose by computing the re-projection error as follows:
\[L_{reproject}(\mathbf{P}_{pred})=\sum\|[p_{2D}^{s}]_{\mathbf{P}_{pred}}-[p_{2D }^{s}]_{\mathbf{P}_{gt}}\|^{2}_{2}. \tag{4}\]
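For illustration, a single damped update of Eq. (2) can be sketched as follows (NumPy, non-differentiable, with hypothetical residual, Jacobian, and weight arrays; the actual implementation is embedded in the network and differentiable):

```python
import numpy as np

def lm_step(pose, residuals, J, weights, lam=0.1):
    """One damped Levenberg-Marquardt update of the 3-DoF pose (Eq. 2).

    pose:      (3,) current [lateral, longitudinal, yaw]
    residuals: (P, C) per-keypoint feature residuals r[p]
    J:         (P, C, 3) Jacobian of the residuals w.r.t. the pose (Eq. 3)
    weights:   (P,) per-keypoint weights w[p]
    """
    P, C, _ = J.shape
    Jf = J.reshape(P * C, 3)
    W = np.repeat(weights, C)                  # weight every residual channel
    H = Jf.T @ (W[:, None] * Jf)               # H = J^T W J
    g = Jf.T @ (W * residuals.reshape(-1))     # J^T W r
    delta = np.linalg.solve(H + lam * np.diag(np.diag(H)), g)
    return pose - delta
```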
### Spatially Aware Feature/Confidence Extractor
Our approach improves the spatial embedding concept proposed in [14] by leveraging the camera's intrinsic and extrinsic parameters to obtain highly accurate spatial information. The spatial embedding \(E^{g/s}\in\mathbb{R}^{h\times w\times 3}\) has \(3\) channels: heading, distance, and height information. The explanation of these channels is shown in Fig. 3. To incorporate additional spatial embedding information between
Figure 2: Overview of PureACL. SAFCE is used to produce feature maps (\(F\)), view-consistent confidence maps (\(V\)), and on-ground confidence maps (\(O\)) separately for satellite and ground-view images. The VOKD fuses the confidence maps and identifies the top-k confident features from the ground-view images and their corresponding features on the satellite feature maps. Sub-pixel interpolation is used to lookup point features (\(F[p]\) from \(F\)) and their weights (\(w[p]\) from \(V\otimes O\)). The residual between the two views (\(r[p]_{\mathbf{P}}=F^{s}[p]_{\mathbf{P}}-F^{g}[p]\)) and the point weights (\(w[p]_{\mathbf{P}}=w^{s}[p]_{\mathbf{P}}\times w^{g}[p]\)) are fed to the RPRB for subsequent pose optimization. The olive outline indicates that the \(O^{s}\) disables gradient backpropagation while red, green, blue and magenta outlines and points represent the front, left, right and rear views, respectively.
the ground and satellite images, we transform the pixels in the onboard camera and satellite images into a common set of query world coordinates ( e.g., the GPS coordinates of the robot). In this coordinate system, the x-axis corresponds to the direction of motion, the y-axis points to the right, and the z-axis points downward. To perform this transformation, we use an inverse projection formula, which is shown in Eq. 5:
\[p_{3D}^{j2g}=\mathbf{R}_{j2g}\mathbf{K}_{j}^{-1}(p_{2D}^{j}\oplus 1), \tag{5}\]
where \(\mathbf{K}_{j}\) is the intrinsic matrix of camera \(j\), which can be either an onboard camera or a satellite camera \(j\in\{i_{1}^{N},s\}\), and \(\oplus 1\) concatenates 1 to generate the homogeneous coordinate. The rotation from camera \(j\) to the ground coordinate, \(\mathbf{R}_{j2g}\), is obtained from the extrinsic information provided in the datasets for onboard cameras and from the initial coarse pose for the satellite camera. For onboard camera images, the 3D coordinate \(p_{3D}^{i2g}\) is a homogeneous coordinate with an unknown scale, while for satellite images, \(p_{3D}^{s2g}\) represents a world coordinate with an unknown down axis. This is because satellite images are approximated as parallel projections, and the equation for the calculating \(p_{2D}^{s}\) is given by:
\[p_{2D}^{s}=\begin{pmatrix}1/\gamma&0&c_{u}\\ 0&1/\gamma&c_{v}\end{pmatrix}p_{3D}^{s}, \tag{6}\]
where \((c_{u},c_{v})\) represents the center of the satellite image, and \(\gamma\) represents the meter-per-pixel ratio calculated using:
\[\gamma=\tilde{r}_{\text{earth}}\times\frac{\cos(\tilde{L}\times\frac{\pi}{180})}{2^{\tilde{z}}\times\tilde{s}}, \tag{7}\]
where \(\tilde{r}_{\text{earth}}=156543.03392\) is the equatorial ground-resolution constant of the Web Mercator projection (meters per pixel at zoom level 0), \(\tilde{L}\) is the latitude, and \(\tilde{z}=18\) and \(\tilde{s}=2\) are the zoom level and the scale of Google Maps [8], respectively.
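A small sketch of this scale computation and of the parallel projection in Eq. (6), assuming NumPy (function names are ours; the point coordinates are assumed to already be expressed in the satellite view frame):

```python
import numpy as np

def meters_per_pixel(lat_deg, zoom=18, scale=2):
    """Ground resolution of the satellite image at a given latitude (Eq. 7)."""
    return 156543.03392 * np.cos(np.radians(lat_deg)) / (2 ** zoom * scale)

def project_to_satellite(p3d_sat, gamma, center_uv):
    """Parallel projection of 3D points onto satellite pixels (Eq. 6).

    p3d_sat: (P, 3) points in the satellite view frame; the down axis is ignored.
    """
    return p3d_sat[:, :2] / gamma + np.asarray(center_uv)

print(meters_per_pixel(49.0))  # ~0.196 m/px, close to the ~0.2 m/px used for KITTI-CVL
```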
The heading information is embedded using the cosine value, which is symmetric to both positive and negative orientation noise. This enables distinction between 360-degree views, calculated using the x-axis (\(p_{3D}^{j2g}[0]\)) and y-axis (\(p_{3D}^{j2g}[1]\)) through trigonometric functions, as shown below:
\[E^{j}[0]=p_{3D}^{j2g}[0]/\sqrt{p_{3D}^{j2g}[0]^{2}+p_{3D}^{j2g}[1]^{2}}. \tag{8}\]
The normalized distance embedding of ground images is obtained by assuming all pixels lie on the ground plane:
\[E^{j}[1]=\sqrt{p_{3D}^{j2g}[0]^{2}+p_{3D}^{j2g}[1]^{2}}/\mathcal{D}, \tag{9}\]
where \(\mathcal{D}\) is the maximum visible distance, set to 200 meters according to the satellite maps size and
\[p_{3D}^{i2g}=\frac{h_{i}}{p_{3D}^{i2g}[2]}\times p_{3D}^{i2g}+\mathbf{t}_{i2g} \text{ and }p_{3D}^{s2g}=p_{3D}^{s2g}+\mathbf{t}_{s2g}, \tag{10}\]
where \(h_{i}\) is the onboard camera height relative to the ground plane. For ground view images, the height embedding \(E[2]\) is equal to the value along the down axis, represented as \(p_{3D}^{i2g}[2]\). In the case of satellite images, we set the height embedding to the minimal value to indicate a top-down perspective. Fig. 4 demonstrates that our approach effectively directs greater attention towards the features located in front of the robot by leveraging spatial embedding when using only the front onboard camera.
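The three embedding channels of Eqs. (8)-(10) for a ground camera can be sketched as below (NumPy; ray directions are assumed to already be rotated into the query ground frame via Eq. (5), the ground-plane scaling of Eq. (10) is applied before evaluating Eqs. (8) and (9), and masking of rays above the horizon is omitted):

```python
import numpy as np

def ground_spatial_embedding(rays_g, cam_height, t_cam, max_dist=200.0):
    """Heading / distance / height channels (Eqs. 8-10) for one onboard camera.

    rays_g:     (H, W, 3) pixel rays R_i2g @ K_i^{-1} @ (u, v, 1) in ground coords (z down)
    cam_height: camera height h_i above the ground plane
    t_cam:      (3,) camera translation t_i2g in the query ground frame
    """
    # Scale each ray so that it intersects the ground plane, then shift (Eq. 10).
    p3d = cam_height / rays_g[..., 2:3] * rays_g + t_cam
    x, y, z = p3d[..., 0], p3d[..., 1], p3d[..., 2]
    planar = np.sqrt(x ** 2 + y ** 2)
    heading = x / planar               # Eq. 8: cosine of the heading angle
    distance = planar / max_dist       # Eq. 9: normalized on-ground distance
    height = z                         # down-axis value used as the height channel
    return np.stack([heading, distance, height], axis=-1)
```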
The SAFCE employs a U-Net structure (\(\mathcal{F}_{\nu}\)) to extract the satellite and ground-view feature maps, represented as \(F^{j}=\mathcal{F}_{\nu}(I^{j}\oplus E^{j})\), where \(j\in\{i_{1}^{N},s\}\), and \(\oplus\) denotes channel concatenation. The maps are then processed by a convolutional layer followed by a reverse sigmoid active function (\(\mathcal{C}_{\psi}\)) to produce view-consistent confidence maps (\(V^{j}\)) and on-ground confidence maps (\(O^{j}\)) represented as \(V^{j},O^{j}=\mathcal{C}_{\psi}(F^{j})\). Each map has multiple resolutions, for example, \(F=\{F_{l}\in\mathbb{R}^{h_{l}\times w_{l}\times c}\}_{l=1}^{L}\) (\(\mathbb{R}^{h_{l}\times w_{l}}\) for \(V\) and \(O\)), where \(L=3\) is adopted in our setting. The maps are ordered from coarsest to finest level as \(l=\{1,2,3\}\). The feature and confidence extraction from each image is performed in parallel using a shared-weight model, allowing for a flexible number of onboard cameras (N).
The view-consistent confidence map \(V\) represents the confidence that an object appears in both the satellite and ground-view images. \(V\) is used as a multiplying factor for the point weights supervised by the PAB and RPRB, and during training it is penalized for points with high residuals (indicating features that differ between the two views). Considering the temporal gap between the two views, \(V\) effectively filters out objects that are temporally
Figure 3: The Spatial Embedding. Heading is embedded using the cosine of its angle. Distance is embedded as the normalized on-ground distance from the robot, with the assumption that the pixel is lying on the ground. Height is normalized on the down axis of the ground view coordinates. For satellite images, the height is specifically set to a minimal value to indicate a top-down view.
or seasonally inconsistent, e.g. vehicles, pedestrians, and leaves. Additionally, it highlights view consistent reference objects, including road marks, lanes, building edges, and tree roots. An example is shown in Fig. 4 (row 2). More visualizations are shown in the supplementary.
The on-ground confidence map \(O\) is designed to validate the homography transformation between the ground and satellite views. Since \(O\) acts as a multiplying factor for the point weights, off-ground points that cause incorrect geo-correspondence between the ground and satellite views, and hence high residuals, have their on-ground confidence penalized to reduce the overall loss. Given that an incorrect height assumption can lead to erroneous projections on the satellite map, penalizing the satellite on-ground confidence map is not meaningful; therefore, we apply backpropagation only to the ground-view on-ground confidence map. An example of the learned confidence maps is shown in Fig. 4 (row 3).
### View-consistent On-ground Keypoint Detector
Fig. 5 illustrates the details of the proposed VOKD. The view-consistent and on-ground confidence maps of different resolutions are fused to generate the final confidence:
\[C^{i}=\sum_{l=1}^{L}\Xi(\mathcal{N}(V_{l}^{i}\otimes O_{l}^{i}),(h_{L},w_{L})), \tag{11}\]
where \(h_{L}\) and \(w_{L}\) represents the resolution of the fine level confidence map, \(\Xi\) is an interpolation function, and \(\mathcal{N}\) is a min-max normalisation, and \(\otimes\) represents element-wise multiplication. The bottom row of Fig. 4 demonstrates the efficacy of the fused confidence map in filtering out off-the-ground objects and emphasizing temporal stability and view consistency in cues such as road markings and curbs for the subsequent pose estimation. More visual examples can be found in the supplementary.
In order to achieve on-ground keypoint detection, our focus is limited to the area below the focal point, which corresponds to the on-ground area and is our primary interest. From this area, we select the top-K points with the highest confidence score from the fused confidence map. To avoid overcrowding of keypoints, we partition the fused confidence map into smaller patches of size \(8\times 8\) and enforce a limit of one detected keypoint per patch. This approach ensures that the selected keypoints are well-distributed across the on-ground area, thereby improving the accuracy of subsequent pose estimation. The left part of Fig. 6 displays the detected view-consistent on-ground 2D keypoints. These 2D keypoint coordinates \(p_{2D}^{i}\) are used to calculate their corresponding 3D ground world coordinates \(p_{3D}^{i2g}\) through the equations Eq. (5) and Eq. (10). The right part of Fig. 6 shows the projection of these 3D coordinates onto the satellite image (\(p_{2D}^{s}=\mathbf{K}_{s}(\mathbf{R}_{g2s}p_{3D}^{i2g}+\mathbf{t}_{g2s})\)).
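The patch-wise selection described above can be sketched as follows (PyTorch, illustrative names; the fused confidence map of Eq. (11) is assumed to be given, and the image height and width are assumed divisible by the patch size):

```python
import torch
import torch.nn.functional as F

def detect_keypoints(fused_conf, horizon_row, patch=8, top_k=256):
    """Select well-spread, high-confidence on-ground keypoints from a fused confidence map.

    fused_conf:  (H, W) fused confidence C^i of one ground-view image
    horizon_row: rows above this index are discarded (off-ground area)
    """
    conf = fused_conf.clone()
    conf[:horizon_row] = 0.0                                  # keep only the on-ground area
    H, W = conf.shape
    # Keep at most one candidate per patch: only per-patch maxima survive.
    pooled = F.max_pool2d(conf[None, None], kernel_size=patch)
    upsampled = F.interpolate(pooled, size=(H, W), mode="nearest")[0, 0]
    conf = torch.where(conf == upsampled, conf, torch.zeros_like(conf))
    # Top-K over the surviving candidates.
    scores, idx = conf.flatten().topk(top_k)
    rows = torch.div(idx, W, rounding_mode="floor")
    cols = idx % W
    return torch.stack([cols, rows], dim=-1), scores          # (K, 2) pixel coords (u, v)
```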
Figure 4: Illustration of confidence maps. The view-consistent confidence map (\(2^{nd}\) row) \(V\) assigns high confidence to objects that appear consistently in both ground-view and satellite images, such as road marks, curbs, and building roofs. Conversely, the confidence map assigns low confidence to temporally inconsistent objects, such as vehicles. The on-ground confidence map (\(3^{rd}\) row) \(O\) highlights only on-ground cues, such as road marks and curbs. It is noteworthy that the area behind the robot is assigned a high score due to a lack of supervision, but it does not affect localization accuracy. This is because the influence of the area is suppressed by the view-consistent confidence (\(w[p]\) from \(V\times O\)). The fused confidence map (\(4^{th}\) row) \(C\) highlights objects that are both view-consistent and on-ground.
Figure 5: The pipeline of VOKD. It begins with confidence map fusion, in which all level confidence maps from the view-consistent and on-ground maps are combined to create a single map. Next, in the 2D keypoint detection step, the top part of the image is ignored to concentrate on the ground plane. Moreover, a max pooling technique is employed to avoid overly crowded keypoint detection. Finally, based on the assumption that all detected points are on the ground, their 3D ground query coordinates are calculated.
### Multi-camera Fusion
Our method is flexible and can handle multiple cameras as input, without any restrictions on the field of view. In case there is a potential overlap between the views captured by adjacent cameras, keypoints detected in one camera may be visible in another camera as well. In such cases, we select the point feature with the highest weight:
\[w^{g}[p]=\max_{i}^{N}(V^{i}\otimes O^{i})[p_{2D}^{i}], \tag{12}\]
\[F^{g}[p]=F^{i}[p_{2D}^{i}],\;\;i=\arg\max_{i}^{N}(V^{i}\otimes O^{i})[p_{2D}^{ i}]. \tag{13}\]
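A sketch of this highest-confidence selection (NumPy, illustrative array layout; the weights and features are assumed to have been sampled at each keypoint's 2D projection in every camera, with zero weight for cameras in which the point is not visible):

```python
import numpy as np

def fuse_multi_camera(weights, features):
    """Pick the highest-confidence camera observation per keypoint (Eqs. 12-13).

    weights:  (N, P)    w^i[p] = (V^i * O^i) sampled at the point's projection in camera i
    features: (N, P, C) F^i sampled at the same projections
    """
    best_cam = np.argmax(weights, axis=0)          # (P,) winning camera index per point
    cols = np.arange(weights.shape[1])
    return weights[best_cam, cols], features[best_cam, cols]  # Eq. 12 and Eq. 13
```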
## 4 Datasets
To evaluate the effectiveness of the proposed method, we followed the existing methods [22, 33] and conducted experiments on two widely used autonomous driving datasets: the FMAVS dataset [1] and KITTI dataset [7]. We adopted the augmentation method proposed by [33], which involved incorporating spatially-consistent satellite images obtained from Google Maps [8] using the GPS tags provided in the datasets. The satellite images had a resolution of \(1,280\times 1,280\) pixels and a scale of 0.22m per pixel for FordAV-CVL, and 0.2m per pixel for KITTI-CVL.
In the FMAVS dataset, we utilized query images from four cameras (front left, rear right, side left, and side right) to capture the surrounding environment, providing an almost 360-degree field of view with minimal overlap. Since the KITTI dataset provides only front-facing stereo camera images, we used the images from the left camera of the stereo pair as query images. The FMAVS includes multiple vehicle traversals over a consistent route. To evaluate our proposed method, we split the three traversals of the 'Log4' trajectory into training, validation, and test sets, following the split strategy described in [33]. The KITTI dataset [7] comprises various trajectories taken at different times. To assess our model's generalization ability, we selected test sets from different trajectories based on [22].
## 5 Experiments
**Metrics**. Our objective is to estimate the 3-DoF pose, which includes lateral, longitudinal, and yaw information. We measure the accuracy of our proposed method by reporting the median and mean errors in lateral and longitudinal translations (in meters) and yaw rotation (in degrees). In addition to these metrics, we also follow the evaluation criteria outlined in [33] and report the average localization recall 3 at distances of 0.25m, 0.5m, 1m, and 2m, as well as at yaw rotation angles of \(1^{\circ}\), \(2^{\circ}\), and \(4^{\circ}\).
Footnote 3: The percentage of the prediction pose that is within a certain range.
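The recall numbers reported in the tables below can be reproduced from per-sample errors with a few lines (our own sketch, NumPy, with placeholder error arrays):

```python
import numpy as np

def localization_recall(errors, thresholds):
    """Fraction of samples whose absolute error falls within each threshold."""
    errors = np.abs(np.asarray(errors))
    return {t: float(np.mean(errors <= t)) for t in thresholds}

rng = np.random.default_rng(0)
lateral_errors = rng.normal(0.0, 0.6, 1000)   # placeholder per-sample errors (meters)
yaw_errors = rng.normal(0.0, 1.5, 1000)       # placeholder per-sample errors (degrees)
print(localization_recall(lateral_errors, [0.25, 0.5, 1.0, 2.0]))
print(localization_recall(yaw_errors, [1.0, 2.0, 4.0]))
```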
**Implementation Details**. In our experiments, we use an input size of \(432\times 816\) for the ground-view images in the Multi-AV Seasonal Dataset, and \(384\times 1248\) for the KITTI Dataset. RTK GPS 4 is used as the ground truth pose. We add some noise to the RTK GPS poses to generate the initial pose. Unless otherwise stated, the initial pose is randomly sampled with a yaw angle error of \(\pm 15^{\circ}\) and lateral, longitudinal shifts of \(\pm 5\) meters, as the accuracy of GPS is within \(4.9\) meters in open sky conditions [30]. We detect \(256\) ground keypoints from each input ground-view image. We set the batch size to \(b=3\) for training on an NVIDIA RTX 3090 GPU, and use the Adam optimizer [12] with a learning rate of \(10^{-4}\). The feature extractor weights are initialized with the pre-trained weights from [33], which are trained on the KITTI-CVL dataset. The weights of the confidence generator are initially randomly initialized to values near \(0\). Through the application of the inverse sigmoid activation function, these weights are tuned to initialize the confidence values in proximity to \(50\%\).
Footnote 4: RTK GPS achieves an accuracy of 2 cm or better [5].
**Inference Speed**. The SAFCE processes four query ground-view images and one satellite image in approximately \(200\)ms. The detection time for all ground keypoints is about \(3.5\)ms. The optimization process, which runs for \(20\) iterations at each of the three levels, takes a total of approximately \(200\)ms.
**Quantitative Results**. We compare our method with recent state-of-the-art (SOTA) visual-only methods, CVML [36] and HighlyAccurate [22], as well as the LiDAR-visual hybrid method SIBCL [33]. We present the evaluation results on the KITTI-CVL and FordAV-CVL datasets in Tab. 1 and Tab. 2. To ensure a fair comparison, we trained HighlyAccurate [22] and SIBCL [33] under the same image resolution and initial pose noise range. Since CVML [36] is unable to accurately estimate fine-grained orientations, we only evaluated its performance in terms of location estimation and trained their model with the ground truth orientation.
Tab. 1 presents an evaluation of our method's ability to generalize to previously unseen routes in the KITTI-CVL dataset using a front camera. For translation accuracy, our method exhibits superior performance compared to SOTA
Figure 6: (left) On-ground keypoints on ground-view images and detected keypoints (magenta). (right) Projection of on-ground keypoints on the satellite image. Projection by initial pose is shown in (red), projection by predicted pose is shown in (blue), and projection on ground truth pose is shown in (green).
methods, with a significant reduction in the translation error. Specifically, our method achieves a reduction of \(86\%\) and \(94\%\) in mean lateral and longitudinal localization error, respectively. While our orientation estimate is slightly less accurate than that of the LiDAR-based method, it maintains performance comparable to the SOTA visual-only method [22] in terms of rotation error. These results demonstrate the ability of our method to generalize to a wide range of scenes.
The performance of our method on cross-season generalization is presented in 'Log4'5 of Tab. 2. The test set in this case includes data from different time and seasons compared to the training set, which allows us to evaluate the performance of our method under varying lighting and seasonal conditions. Furthermore, in 'Log4\(\rightarrow\)5' of Tab. 2, we analyze our method's generalization capability on an unseen route. In both cases, our method outperforms existing SOTA methods by significant margins. Specifically, we achieve a reduction of \(52\%\) and \(43\%\) in mean localization lateral error, \(62\%\) and \(52\%\) in mean localization longitudinal error, and \(67\%\) and \(17\%\) in mean orientation error in terms of seen and unseen routes, respectively. These results once again demonstrate the strong performance and robust generalization capabilities of our proposed method.
Footnote 5: The trajectory of ‘Log4’ was selected for method evaluation in SIBCL [33] due to its relatively good satellite view alignment. Additionally, we evaluated other logs and the evaluation results can be found in the supplementary material.
**Performance with Varying Numbers of Camera Inputs**. We investigate the impact of multiple onboard cameras on the FordAV-CVL dataset and evaluate our method using different camera setups. These setups include the front camera (Front) in the 1-camera setting, two side cameras (2Sides), the front and rear cameras (2FR) in the 2-camera setting, and all front, rear, and two side cameras (4Cams) in the 4-camera setting. Our findings indicate that even with the use of a single front camera ('Ours (Front)' in Tab. 2), our method outperforms the SOTA methods. Additional camera inputs lead to further improvements in performance, particularly with regards to orientation estimation, which can be attributed to the fact that a larger field of view (FoV) provides more information to accurately estimate orientation. Furthermore, our study reveals that the front and rear cameras provide more information for localization, whereas the left and right cameras contribute more to the lateral estimation. This could be attributed to the limited visibility of noticeable localization features such as road marks in the side cameras or the sensitivity of the side cameras to the roll angle. It is noteworthy that our method, despite utilizing four onboard cameras, consumes less memory (4499 MB) than HighlyAccurate [22], which requires 6445 MB due to its use of sparse purification.
**Performance under Different Initial Poses**. The proposed method utilizes the LM algorithm and is subject to a convergence range 6 constraint. If the provided initial pose falls outside of this range, the method may fail to converge. To evaluate the method's robustness under a more stringent
\begin{table}
\begin{tabular}{l||c c c c c|c c c c c c c c c c c c c} \hline \hline & \multicolumn{14}{c|}{**Lateral**} & \multicolumn{14}{c|}{**Longitudinal**} & \multicolumn{14}{c}{**Caw**} \\ & mean\(\downarrow\) & median\(\downarrow\) & 0.25m\(\uparrow\) & 0.5m\(\uparrow\) & 1m\(\uparrow\) & 2m\(\uparrow\) & mean\(\downarrow\) & median\(\downarrow\) & 0.25m\(\uparrow\) & 0.5m\(\uparrow\) & 1m\(\uparrow\) & 2m\(\uparrow\) & mean\(\downarrow\) & median\(\downarrow\) & 1\({}^{\circ}\) & 2\({}^{\circ}\) & 4\({}^{\circ}\) \\ \hline \hline \(\star\) SIBCL[33] & 1.02 & 0.54 & 25.59 & 46.26 & 72.63 & 89.78 & 1.69 & 0.64 & 21.91 & 41.22 & 64.47 & 80.37 & **1.91** & **0.85** & **56.05** & **79.70** & **90.89** \\ CVML[36] & 3.38 & 2.40 & 6.11 & 12.24 & 23.78 & 44.14 & 3.54 & 2.46 & 5.97 & 11.68 & 23.73 & 43.36 & - & - & - & - & - \\ HighlyAcc[22] & 1.24 & 0.83 & 16.51 & 32.05 & 57.65 & 83.11 & 2.44 & 2.01 & 7.14 & 14.11 & 27.41 & 49.94 & 3.23 & 1.82 & 29.83 & 53.41 & 76.51 \\ Ours & **0.14** & **0.12** & **84.58** & **95.49** & **99.80** & **100.00** & **0.10** & **0.09** & **98.55** & **100.00** & **100.00** & **100.00** & 3.57 & 1.78 & 31.18 & 54.13 & 76.00 \\ \hline \hline \end{tabular}
* \(\star\): indicates LiDAR-visual hybrid methods \(\uparrow\): larger is better. \(\downarrow\): lower is better. Our method significantly improves translation accuracy while maintaining orientation accuracy compared to SOTA visual method [22].
\end{table}
Table 1: Comparison on the KITTI-CVL dataset
\begin{table}
\begin{tabular}{l c||c c c c|c c c c|c c c c c c} \hline \hline & \multicolumn{14}{c|}{**Lateral**} & \multicolumn{14}{c|}{**Longitudinal**} & \multicolumn{14}{c}{**Caw**} \\ & mean\(\downarrow\) & median\(\downarrow\) & 0.25m\(\uparrow\) & 0.5m\(\uparrow\) & 1m\(\uparrow\) & 2m\(\uparrow\) & mean\(\downarrow\) & median\(\downarrow\) & 0.25m\(\uparrow\) & 0.5m\(\uparrow\) & 1m\(\uparrow\) & 2m\(\uparrow\) & mean\(\downarrow\) & median\(\downarrow\) & 1\({}^{\circ}\) & 2\({}^{\circ}\) & 4\({}^{\circ}\) \\ \hline \hline \(\star\) SIBCL[33] & 1.29 & 0.55 & 24.83 & 45.90 & 74.06 & 89.14 & 2.31 & 0.78 & 18.72 & 34.11 & 58.26 & 75.44 & 2.23 & 0.57 & 66.76 & 81.78 & 90.50 \\ CVML[36] & 2.78 & 2.22 & 5.91 & 11.78 & 23.27 & 45.06 & 3.24 & 2.66 & 6.07 & 11.45 & 21.22 & 38.82 & - & - & - & - & - \\ HighlyAcc[22] & 1.21 & 0.84 & 16.56 & 31.31 & 57.64 & 85.45 & 2.47 & 1.82 & 7.11 & 13.87 & 28.53 & 53.64 & 2.94 & 1.83 & 30.74 & 53.08 & 78.40 \\ Ours (Front) & 0.94 & 0.54 & 26.11 & 46.73 & 73.48 & 89.69 & 1.56 & 0.80 & 17.95 & 34.44 & 56.30 & 75.41 & 2.77 & 1.18 & 44.47 & 66.61 & 83.26 \\ Ours (2FR) & 0.60 & 0.51 & 24.66 & 48.78 & 82.40 & 98.20 & 0.99 & 0.65 & 22.45 & 41.31 & 64.49 & 86.69 & 1.14 & 0.77 & 60.78 & 85.28 & 96.28 \\ Ours (2Sides) & 0.78 & 0.55 & 24.75 & 46.36 & 75.01 & 94.91 & 1.58 & 0.92 & 14.68 & 28.77 & 52.68 & 73.52 & 3.56 & 2.14 & 24.94 & 47.39 & 71.50 \\ Ours (4Cams) & **0.58** & **0.46** & **26.45** & **53.60** & **85.00** & **98.81** & **0.88** & **0.49** & **25.77** & **50.31** & **75.44** & **91.53** & **0.74** & **0.50** & **77.61** & **94.94** & **98.57** \\ \hline \hline \(\star\) SIBCL[33] & 1.99 & 1.38 & 10.49 & 21.57 & 39.05 & 64.98 & 6.27 & 3.23 & 13.77 & 22.23 & 31.11 & 42.62 & 3.32 & 1.78 & 31.78 & 54.91 & 78.90 \\ CVML[36] & 3.10 & 2.31 & 5.25 & 10.88 & 20.58 & 43.25 & 3.32 & 2.63 & 5.89 & 10.45 & 21.86 & 39.87 & - & - & - & - & - & - \\ HighlyAcc[22] & 1.69 & 1.61 & 9.45 & 18.03 & 31.72 & 66.06 & 2.99 & 2.32 & 4.63 & 9.69 & 19.89 & 39.28 & 3.35 & 2.44 & 22.43 & 42.32 & 75.19 \\ Ours (4Cams) & **0.96** & **0.68** & **20.03** & **37.83** & **65.09** & **87.53** & **1.43** & **0.82** & **17.45** & **33.91** & **56.73** & **76.96** & **2.76** & **1.38** & **39
scenario, we conducted experiments using a comprehensive set of initial poses. The results, shown in Fig. 7, indicate that our approach achieves a satisfactory level of accuracy even when the initial pose is subjected to yaw angle errors of up to \(\pm 60^{\circ}\) and lateral and longitudinal shifts of up to \(\pm 15\)m. The longitudinal estimation is found to be more sensitive to the initial pose compared to the lateral estimation. Moreover, in KITTI-CVL datasets that rely solely on a front onboard camera, a larger difference between the mean and median values suggests more cases falling outside the convergence range. Therefore, the use of multiple camera inputs, such as in the FordAV-CVL dataset with four cameras, can significantly expand both the translation and orientation coverage ranges, with the orientation coverage range being notably more improved.
## 6 Ablation Study
**Two Confidence Maps**. The proposed method adopts two types of confidence maps ("2c w/o SE"), i.e., view-consistent and on-ground maps. An alternative approach was to use a single confidence map ("1c w/o SE"), which combined both on-ground and view-consistent confidences, and disabled gradient backpropagation from the satellite view. A comparison of using different types of confidence maps is reported in Tab. 3. We can see that using two confidence maps with distinct gradient backpropagation mechanisms leads to better performance compared to the alternative approach.
**Spatial Embedding**. We study the impact of Spatial Embedding by comparing the performance of our algorithm with ("Full") and without Spatial Embedding ("2c w/o SE"), as shown in Tab. 3. The results demonstrate that incorporating Spatial Embedding significantly improves the performance of the PureACL algorithm.
**View-consistent On-ground Keypoint Detector**. We compare our keypoint detection design with the SOTA SuperPoint [3]. In this comparison, we use SuperPoint to detect keypoints and combine it with the two confidence maps to reduce the weights of points located on dynamic objects or above the ground plane. The results are presented in Tab. 3 as "SuperPoint". Our view-consistent on-ground point detector ("Full") outperforms "SuperPoint" as it detects a sufficient number of on-ground keypoints, which is more beneficial for cross-view localization.
**Multi-camera Fusion Method**. We compare two fusion methods for keypoints captured by multiple onboard cameras: selecting the highest-confidence 2D projection ("Full"), which is used in our proposed method, and computing the mean of features and confidence scores across all visible onboard camera images ("Mean fusion"). The results in Tab. 3 show that highest-confidence fusion outperforms Mean fusion due to more reliable selection.
## 7 Conclusion
This paper presents PureACL, a novel cross-view localization approach for accurate 3-DoF pose estimation that supports flexible multi-camera inputs. Our approach utilizes a view-consistent on-ground keypoint detector to handle dynamic objects and viewpoint variations while removing off-the-ground objects to satisfy the homography transformation assumption. Additionally, PureACL incorporates a spatial embedding that maximizes the use of camera intrinsic and extrinsic information to reduce visual matching ambiguity. PureACL is the first sparse visual-only approach and the first visual-only cross-view method capable of achieving a mean translation error of less than one meter. In future work we plan to integrate PureACL into SLAM systems to reduce loop-closure dependence. Ultimately, PureACL has the potential to lead to robust, reliable, accurate, and low-cost localization systems.
\begin{table}
\begin{tabular}{l|c c|c c|c c} \hline \hline & \multicolumn{3}{c|}{**Lateral**} & \multicolumn{3}{c|}{**Longitudinal**} & \multicolumn{3}{c}{**Yaw**} \\ \cline{3-8} \multicolumn{1}{c|}{FordAV-CVL} & mean\(\downarrow\) & median\(\downarrow\) & mean\(\downarrow\) & median\(\downarrow\) & mean\(\downarrow\) & median\(\downarrow\) \\ \hline \hline
**1c** & **w/o SE** & 0.63 & 0.49 & 1.29 & 0.68 & 2.08 & 1.28 \\
**2c** & **w/o SE** & 0.63 & 0.48 & 1.17 & 0.63 & 0.90 & 0.57 \\
**Full** & **0.58** & **0.46** & **0.88** & **0.49** & **0.74** & **0.50** \\
**SuperPoint[3]** & 0.65 & 0.53 & 0.95 & 0.52 & 1.05 & 0.60 \\
**Mean fusion** & 0.61 & 0.47 & 0.90 & 0.50 & 0.90 & 0.55 \\ \hline \hline \end{tabular}
* Our **Full** solution incorporates 2 confidence maps (2c) along with Spatial Embedding (w/ SE).
\end{table}
Table 3: Ablation study on FordAV-CVL dataset
Figure 7: (left) Method performance as the initial pose translation varies, with orientation noise fixed within a \(\pm 15^{\circ}\) range. (right) Method performance as the initial pose orientation varies, with translation noise fixed within a \(\pm 5\)m range. The vertical axis shows translation error in meters and orientation error in degrees. Additional metric results can be found in the supplementary.
## 8 Acknowledgements
The research is funded in part by an ARC Discovery Grant (grant ID: DP220100800) to HL.
|
2310.12302 | Conditions for the existence of positive operator valued measures | Sufficient and necessary conditions are presented for the existence of
$(N,M)$-positive operator valued measures ($(N,M)$-POVMs) valid for
arbitrary-dimensional quantum systems. A sufficient condition for the existence
of $(N,M)$-POVMs is presented. It yields a simple relation determining an upper
bound on the continuous parameter of an arbitrary $(N,M)$-POVM, below which all
its POVM elements are guaranteed to be positive semidefinite. Necessary
conditions are derived for the existence of optimal $(N,M)$-POVMs. One of these
necessary conditions exhibits a close connection between the existence of
optimal informationally complete $(N,M)$-POVMs and the existence of
isospectral, traceless, orthonormal, hermitian operator bases in cases, in
which the parameter $M$ exceeds the dimension of the quantum system under
consideration. Another necessary condition is derived for optimal
$(N,M)$-POVMs, whose parameter $M$ is less than the dimension of the quantum
system. It is shown that in these latter cases all POVM elements necessarily
are projection operators of equal rank. This significantly constrains the
possible parameters for constructing optimal $(N,M)$-POVMs. For the special
case of $M=2$ a necessary and sufficient condition for the existence of optimal
$(N,2)$-POVMs is presented. | Maximilian Schumacher, Gernot Alber | 2023-10-18T20:09:47Z | http://arxiv.org/abs/2310.12302v1 | # Conditions for the existence of positive operator valued measures
###### Abstract
Sufficient and necessary conditions are presented for the existence of \((N,M)\)-positive operator valued measures (\((N,M)\)-POVMs) valid for arbitrary-dimensional quantum systems. A sufficient condition for the existence of \((N,M)\)-POVMs is presented. It yields a simple relation determining an upper bound on the continuous parameter of an arbitrary \((N,M)\)-POVM, below which all its POVM elements are guaranteed to be positive semidefinite. Necessary conditions are derived for the existence of optimal \((N,M)\)-POVMs. One of these necessary conditions exhibits a close connection between the existence of optimal informationally complete \((N,M)\)-POVMs and the existence of isospectral, traceless, orthonormal, hermitian operator bases in cases, in which the parameter \(M\) exceeds the dimension of the quantum system under consideration. Another necessary condition is derived for optimal \((N,M)\)-POVMs, whose parameter \(M\) is less than the dimension of the quantum system. It is shown that in these latter cases all POVM elements necessarily are projection operators of equal rank. This significantly constrains the possible parameters for constructing optimal \((N,M)\)-POVMs. For the special case of \(M=2\) a necessary and sufficient condition for the existence of optimal \((N,2)\)-POVMs is presented.
Quantum Information Science, Quantum Correlations in Quantum Information Science, Quantum Entanglement Detection pacs: 03.67.-a,03.65.Ud,03.67.Bg,03.67.Mn
## I Introduction
The development of efficient quantum measurement techniques is important for central tasks of quantum information processing (Bergou _et al._, 2021; Holevo, 2001), such as quantum state reconstruction or the detection of characteristic quantum correlations. In this context \((N,M)\)-POVMs (Siudzinska, 2022) have been introduced recently as interesting one-parameter-continuous families of positive operator valued measures (POVMs). They describe numerous important quantum measurements in a unified way. These measurements include projective measurements with complete sets of mutually unbiased bases (MUBs) (Wootters and Fields, 1989), mutually unbiased measurements (Kalev and Gour, 2014), symmetric informationally complete POVMs (SIC-POVMs) (Rastegin, 2014; Renes _et al._, 2004) and their generalizations (GSIC-POVMs) (Gour and Kalev, 2014). For purposes of quantum information processing informationally complete \((N,M)\)-POVMs are particularly interesting, because they enable a complete reconstruction of quantum states.
Recent investigations exploring characteristic features of \((N,M)\)-POVMs have concentrated on possible applications in quantum information processing and on basic theoretical questions concerning their existence. On the application side the potential of \((N,M)\)-POVMs for the local detection of provable bipartite quantum entanglement (Schumacher and Alber, 2023a) and quantum steering (Schumacher and Alber, 2023b) has been investigated. As far as basic theoretical questions are concerned, it has been demonstrated that \((N,M)\)-POVMs can always be constructed for sufficiently small values of their continuous parameter (Siudzinska, 2022). However, for larger or even maximal values of their continuous parameter, i.e. for optimal \((N,M)\)-POVMs, it generally causes major theoretical problems to guarantee the positive semidefiniteness of all POVM elements involved. Despite considerable research efforts concentrating on the subclasses of SIC-POVM and MUBs, for example, open questions concerning their existence in arbitrary dimensions still remain (Fuchs _et al._, 2017; Horodecki _et al._, 2022). Thus, questions concerning the existence and construction of \((N,M)\)-POVMs for large or even maximal values of their continuous parameter are still widely open.
In order to obtain a detailed theoretical understanding of \((N,M)\)-POVMs and of their characteristic features there is a need to develop sufficient and necessary conditions, which guarantee their existence. It is a main intention of this paper to address this issue and to explore general features of the existence and construction of \((N,M)\)-POVMs with the help of orthonormal hermitian operator bases. As a first main result we develop a sufficient condition for the existence of \((N,M)\)-POVMs. This sufficient condition yields a simple upper bound on the continuous parameter, within which this existence is guaranteed. Thus, this result complements the already known property that \((N,M)\)-POVMs can always be constructed for sufficiently small values of their continuous parameters (Siudzinska, 2022). As a second main result we present necessary conditions for the existence of optimal \((N,M)\)-POVMs. One of these necessary conditions exhibits close connections between the existence of optimal informationally complete \((N,M)\)-POVMs and the existence of isospectral, traceless, orthonormal, hermitian operator bases in cases, in which the parameter \(M\) exceeds the dimension of the quantum system to be measured. This result is based on recent work showing that informationally complete \((N,M)\)-POVMs are necessarily related to orthonormal hermitian operator bases by highly degenerate linear maps with two non-zero eigenvalues and a non-trivial kernel (Schumacher and Alber, 2023a). Thus, it generalizes an already known property of GSICs (Gour and Kalev, 2014), i.e. optimal \((1,M)\)-POVMs, to arbitrary optimal informationally complete \((N,M)\)-POVMs. Another necessary condition shows that optimal \((N,M)\)-POVMs, whose parameter \(M\) is less than the dimension of the quantum system under consideration, can necessarily only exist if all their POVM elements are projection operators of equal rank. This significantly constrains the possible parameters, for which optimal \((N,M)\)-POVMs can be constructed. Furthermore, in the special cases with \(M=2\) we present a necessary and sufficient condition for the existence of optimal \((N,2)\)-POVMS. This criterion establishes a connection to the existence of \(N\) isospectral, traceless, orthonormal, hermitian operators in even-dimensional Hilbert spaces with a prescribed spectrum.
This paper is organized as follows. In Sec.II basic features of \((N,M)\)-POVMs are summarized. In Sec.II.A their defining properties are recapitulated (Siudzinska, 2022). In Sec.II.B recent results (Schumacher and Alber, 2023a) on their necessary relation to orthonormal hermitian operator bases are summarized. Furthermore, as a motivation for the subsequent discussion typical problems are discussed originating from the positive semidefiniteness of all its POVM elements. As one of our main results, in Sec.III a sufficient condition is presented, under which for any \(d\)-dimensional quantum system \((N,M)\)-POVMs can be constructed. As a second main result in Sec.IV two necessary conditions for the existence of optimal \((N,M)\)-POVMs are presented. Firstly, it is shown that for \(M\geq d\) the existence of \((d^{2}-1)\) isospectral, traceless, orthonormal, hermitian operators is necessary for the existence of an optimal informationally complete \((N,M)\)-POVM. Thereby the form of their common spectrum is completely determined by the defining properties of the \((N,M)\)-POVM. Secondly, it is shown that in cases with \(2<M<d\) optimal \((N,M)\)-POVMs can necessarily only exist for equal-rank projection operators. In Sec.V a necessary and sufficient condition for the
existence of optimal \((N,2)\)-POVMs with \(N\leq d^{2}-1\) is derived. It is shown that they can only exist in even-dimensional Hilbert spaces. Explicit constructions are presented for dimensions \(d=2^{k}\) with \(k\in\mathbb{N}\).
## II Informationally complete positive operator valued measures
In this section elementary features of the recently developed \((N,M)\)-POVMs (Siudzinska, 2022) are summarized. In the first subsection their definition and resulting elementary properties are recapitulated. In the second subsection recently discussed basic relations between informationally complete \((N,M)\)-POVMs and orthonormal hermitian operator bases are summarized (Schumacher and Alber, 2023a). Furthermore, some typical problems are discussed which complicate the construction of \((N,M)\)-POVMs by basis expansions in terms of orthonormal hermitian operator bases.
### Basic properties
Let us consider a quantum system characterized by a \(d\)-dimensional Hilbert space. Ignoring the properties of a quantum state immediately after a measurement, the most general quantum measurement on this quantum system is described by a POVM (Bergou _et al._, 2021; Holevo, 2001). An \(M\)-element POVM is a set of \(M\) positive semidefinite operators, say \(\Pi=\{\Pi_{a}\geq 0\mid a=1,\cdots,M\}\), which fulfill the completeness relation
\[\sum_{a=1}^{M}\Pi_{a}\ =\ \mathds{1}_{d} \tag{1}\]
with the unit operator \(\mathds{1}_{d}\) of the quantum system's \(d\)-dimensional Hilbert space. Thereby, the indices \(a\in\{1,\cdots,M\}\) label the \(M\) different possible real-valued measurement results, say \(\mathcal{M}_{a}\). According to Born's rule the probability of measuring the result \(\mathcal{M}_{a}\) is given by \(p_{a}=\mathrm{Tr}\{\varrho\Pi_{a}\}\), if the quantum system has been prepared in the quantum state \(\varrho\geq 0\) immediately before the measurement. If the positive semidefinite operators \(\Pi_{a}\) are linearly independent for \(a\in\{1,\cdots,M\}\) and \(M=d^{2}\), a POVM is called informationally complete. In such a case an arbitrary quantum state \(\varrho\) can be reconstructed from all the \(d^{2}\) measurement results. In the special case of orthogonal projection operators, i.e. \(\Pi_{a}\Pi_{a^{\prime}}=\delta_{aa^{\prime}}\Pi_{a}\) for \(a,a^{\prime}\in\{1,\cdots,M\}\), a POVM describes a von Neumann measurement.
Recently \((N,M)\)-POVMs (Siudzinska, 2022) have been introduced as a unified way for describing numerous important quantum measurements, such as projective measurements with MUBs (Wootters and Fields, 1989), MUMs (Kalev and Gour, 2014), SIC-POVMs (Rastegin, 2014; Renes _et al._, 2004) and their generalizations (GSICs) (Gour and Kalev, 2014). A \((N,M)\)-POVM \(\Pi\) is a one-continuous-parameter family of \(N\) different \(M\)-element POVMs, i.e. \(\Pi=\{\Pi_{i(\alpha,a)}\mid\alpha\in\{1,\cdots,N\},a\in\{1,\cdots,M\}\}\), defined by the following relations
\[\mathrm{Tr}\{\Pi_{i(\alpha,a)}\ \} = \frac{d}{M}, \tag{2}\] \[\mathrm{Tr}\{\Pi_{i(\alpha,a)}\ \Pi_{i(\alpha,a^{\prime})}\} = x\ \delta_{a,a^{\prime}}+(1-\delta_{a,a^{\prime}})\frac{d-Mx}{M(M-1)},\] (3) \[\mathrm{Tr}\{\Pi_{i(\alpha,a)}\ \Pi_{j(\beta,b)}\} = \frac{d}{M^{2}} \tag{4}\]
for all \(\beta\neq\alpha\in\{1,\cdots,N\}\) and \(a,a^{\prime},b\in\{1,\cdots,M\}\). Thereby, for the sake of convenience we have introduced the coordinate function \(i:(\alpha,a)\to i(\alpha,a)\). It maps the \(NM\) pairs of the form \((\alpha,a)\), each of which identifies a particular POVM element uniquely, bijectively onto the natural numbers \(i,j\in\{1,\cdots,NM\}\). For given values of \((d,N,M)\) the possible values of the continuous parameter \(x\) are constrained by the relation (Siudzinska, 2022)
\[\frac{d}{M^{2}}<x\leq\min\left(\frac{d^{2}}{M^{2}},\frac{d}{M}\right). \tag{5}\]
A \((N,M)\)-POVM with the maximal possible value of \(x\) is called optimal. Furthermore, a \((N,M)\)-POVM \(\Pi\) is informationally complete if it contains \(d^{2}\) linearly independent positive semidefinite operators. As each of the \(N\) different \(M\)-element POVMs involved fulfills the completeness relation (1), this is equivalent to the requirement
\[(M-1)N+1\ =\ d^{2}. \tag{6}\]
For arbitrary dimensions four possible solutions of this relation are \((N,M)\in\{(1,d^{2}),\ (d+1,d),\ (d^{2}-1,2),\ (d-1,d+2)\}\). The solution \((N,M)=(1,d^{2})\) describes a one-parameter family of GSIC-POVMs (Gour and Kalev, 2014) parameterized by the parameter \(x\). SIC-POVMs are special cases of GSIC-POVMs with \(x=1/d^{2}\). The solution \((N,M)=(d+1,d)\) describes MUMs (Kalev and Gour, 2014). In the special case of \(x=d^{2}/M^{2}=d/M=1\) these MUMs describe projective measurements of unit rank with maximal sets of \(d+1\) MUBs.
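As a concrete illustration of the counting constraint (6) and of the admissible \(x\)-range (5), the short script below (an added sketch, not part of the original analysis; it only evaluates the published relations) enumerates, for a given dimension \(d\), all integer pairs \((N,M)\) solving \((M-1)N+1=d^{2}\) together with the corresponding interval for \(x\).

```python
# Enumerate all (N, M) pairs solving (M-1)*N + 1 = d^2 for a given dimension d,
# and print the admissible range d/M^2 < x <= min(d^2/M^2, d/M) of eq. (5).
# Illustrative sketch only.

def nm_families(d):
    target = d * d - 1                      # N*(M-1) must equal d^2 - 1
    pairs = []
    for m_minus_1 in range(1, target + 1):
        if target % m_minus_1 == 0:
            M = m_minus_1 + 1
            N = target // m_minus_1
            x_min = d / M**2                # strict lower bound of eq. (5)
            x_max = min(d**2 / M**2, d / M) # upper bound; attained by optimal POVMs
            pairs.append((N, M, x_min, x_max))
    return pairs

if __name__ == "__main__":
    d = 3
    for N, M, x_min, x_max in nm_families(d):
        print(f"d={d}: (N,M)=({N},{M}),  {x_min:.4f} < x <= {x_max:.4f}")
    # For d=3 this reproduces, among others, (1,9) (GSIC family),
    # (4,3) (MUM family), (2,5) and (8,2).
```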
### Orthonormal hermitian operator bases and informationally complete \((N,M)\)-POVMs
In this subsection we first recapitulate the general relations between informationally complete \((N,M)\)-POVMs and orthonormal hermitian operator bases, which necessarily have to be fulfilled irrespective of the positive semidefiniteness of the POVM elements involved (Schumacher and Alber, 2023a). They govern the construction of \((N,M)\)-POVMs by basis expansions in terms of orthonormal hermitian operator bases. Subsequently, concentrating on the particular example of a MUM in dimension \(d=3\), it is exemplified that such a MUM can always be constructed for sufficiently small values of the continuous parameter \(x\). This example demonstrates why constructing \((N,M)\)-POVMs for parameters \(x\) close to their lower bounds is always possible, while constructing optimal \((N,M)\)-POVMs is rather difficult. These problems motivate the development of simple sufficient and necessary conditions for the existence of \((N,M)\)-POVMs, which is pursued in the subsequent sections.
In a Hilbert space \({\cal H}_{d}\) of a \(d\)-dimensional quantum system an informationally complete \((N,M)\)-POVM can always be expanded in a basis of \(d^{2}\) linearly independent linear operators, say \(G=(G_{1},\cdots,G_{d^{2}})^{T}\). These operators can be chosen as orthonormal hermitian operators with respect to the Hilbert-Schmidt (HS) scalar product \(\langle G_{\mu}|G_{\nu}\rangle_{HS}:={\rm Tr}\{G_{\mu}^{\dagger}G_{\nu}\}\) with \(G_{\mu}^{\dagger}=G_{\mu}\). They form a basis of the Hilbert space \({\cal H}_{d^{2}}=({\rm Span}(G),\langle\cdot|\cdot\rangle_{HS})\) of linear operators in \({\cal H}_{d}\) over the field of real numbers. This latter Hilbert space is a Euclidean vector space. Furthermore, such an orthonormal hermitian operator basis \(G\) can always be chosen so that
\[G_{1}\ =\ \mathds{1}_{d}/\sqrt{d},\ \ \ {\rm Tr}\{G_{\mu}\}=0 \tag{7}\]
for \(\mu\in\{2,\cdots,d^{2}\}\). The resulting basis expansion of an arbitrary informationally complete \((N,M)\)-POVM in such an orthonormal hermitian basis has the general form
\[\Pi=G^{T}S. \tag{8}\]
Thereby, \(S\) denotes the linear operator mapping \({\cal H}_{d^{2}}\) into the Hilbert space \({\cal H}_{NM}\) of hermitian operators, which contains all the elements of the \((N,M)\)-POVM. Recently, it has been shown that for informationally complete \((N,M)\)-POVMs the structure of this linear map \(S\) and of its corresponding real-valued \(d^{2}\times NM\) matrix \(S_{\mu,i}\) is significantly constrained by the defining relations (3) and (4) (Schumacher and Alber, 2023a). Ignoring the positive semidefiniteness constraints of the POVM elements, it has been shown that the most general form of the linear operator \(S:{\cal H}_{d^{2}}\rightarrow{\cal H}_{NM}\) is given by a \(d^{2}\times NM\) matrix of the form
\[S_{\mu,i(\alpha,a)}\ =\ \sqrt{\Lambda_{\mu}}X_{\mu,i(\alpha,a)}^{T} \tag{9}\]
with the diagonal \(d^{2}\times d^{2}\) matrix \(\Lambda\) and the \(NM\times d^{2}\) matrix \(X_{i,\mu}\). The diagonal matrix \(\Lambda\) has only two different non-vanishing entries, which are the non-zero eigenvalues of \(S\). They are given by
\[\Lambda_{1}\ =\ \frac{dN}{M},\ \ \Lambda_{\mu}=\Gamma=\frac{xM^{2}-d}{M(M-1)} \tag{10}\]
for \(\mu\in\{2,\cdots,d^{2}\}\). Thus, the eigenvalue \(\Gamma\) is \((d^{2}-1)\)-fold degenerate, and the eigenvalue \(\Lambda_{1}\) is non-degenerate. The real-valued \(NM\times d^{2}\) matrix \(X_{i,\mu}\) consists of \(d^{2}\) orthonormal \(NM\)-dimensional arrays (its columns), i.e.
\[\sum_{i=1}^{NM}X_{i,\mu}X_{i,\nu}\ =\ \delta_{\mu\nu} \tag{11}\]
for \(\mu,\nu\in\{1,\cdots,d^{2}\}\). They fulfill the relations
\[X_{i,1}\ =\ \frac{1}{\sqrt{NM}},\ \ \sum_{a=1}^{M}X_{i(\alpha,a),\mu}=0 \tag{12}\]
for \(\mu\in\{2,\cdots,d^{2}\}\). As a consequence of the defining constraints (2) and (3) of \((N,M)\)-POVMs these orthonormal \(NM\)-dimensional arrays also fulfill the relation
\[\sum_{\mu=2}^{d^{2}}(X_{i,\mu})^{2}\ =\ \frac{M-1}{M}. \tag{13}\]
It is apparent from (9) that all basis operators \(G_{\mu}\) with \(\mu\in\{2,\cdots,d^{2}\}\) are mapped conformally onto a \((d^{2}-1)\)-dimensional subspace of \({\cal H}_{NM}\) by stretching the norms of all its elements by the factor \(\sqrt{\Gamma}\). Only the basis operator \(G_{1}\) is stretched by a different factor, namely \(\sqrt{\Lambda_{1}}\). Therefore, ignoring the positive semidefiniteness constraints the defining properties of \((N,M)\)-POVMs (3) and (4) imply the basis expansion
\[\Pi_{i(\alpha,a)}\ =\ \frac{\openone_{d}}{M}+\sqrt{\Gamma}\sum_{\mu=2}^{d^{2}}X _{i(\alpha,a),\mu}G_{\mu} \tag{14}\]
for each element of an informationally complete \((N,M)\)-POVM with \(i\in\{1,\cdots,NM\}\). In view of (7) the orthonormal hermitian operators \(G_{\mu}\) for \(\mu\in\{2,\cdots,d^{2}\}\) are only determined up to an orthogonal transformation of the orthogonal group \(O(d^{2}-1)\). Furthermore, there is an additional freedom in choosing the arrays \(X_{i(\alpha,a),\mu}\) within the constraints imposed by relation (12). From the basis expansion (14) it is apparent that the possibilities for constructing positive semidefinite POVM elements may be severely constrained if the freedom in the choice of the orthonormal hermitian operator basis is not fully exploited.
According to relation (6) an informationally complete \((N,M)\)-POVM consists of \(d^{2}-1=N(M-1)\) linearly independent POVM elements. A strategy for its construction is to partition the orthonormal traceless hermitian operator basis \(\{G_{\mu}\}\) with \(\mu\in\{2,\cdots,d^{2}\}\) into \(N\) basis tuples \(B_{\alpha}\), each of which corresponds to a particular value of \(\alpha\in\{1,\cdots,N\}\). This partitioning of the basis elements ensures that condition (4) is fulfilled. Accordingly, the basis expansion (14) is restricted to an ansatz of the form
\[\Pi_{i(\alpha,a)}\ =\ \frac{\openone_{d}}{M}+\sqrt{\Gamma}\sum_{G_{\mu}\in B _{\alpha}}X_{i(\alpha,a),\mu}G_{\mu}. \tag{15}\]
Using this ansatz the allowed transformations are restricted to the orthogonal group \(O(M-1)\) for each value of \(\alpha\), thus also restricting the achievable positive semidefinite operators for a given basis \(\{G_{\mu}\}\). Therefore, the possible values of the continuous parameters \(x\) of such a construction depend on the chosen basis \(\{G_{\mu}\}\) and its partitioning. In order to demonstrate this, let us consider the construction of a MUM for \(d=3\) as a special example of an informationally complete \((4,3)\)-POVM with \(1/3<x\leq 1\). According to (15) the construction of this MUM can be interpreted geometrically. For this purpose let us identify the operator \(\openone_{d}/3\) with the origin of an 8-dimensional Euclidean space spanned by the hermitian operators \(G_{\mu}\) for \(\mu\in\{2,\ldots,9\}\). Accordingly, we have to construct \(N=4\) equilateral triangles with this origin as their centroids to fulfill the characteristic completeness relation of POVMs (12) for each \(\alpha\). Furthermore, condition (5) implies that
\[\Gamma\sum_{G_{\mu}\in B_{\alpha}}(X_{i(\alpha,a),\mu})^{2}\leq r_{>}^{2}:=2/3. \tag{16}\]
A \((N,M)\)-POVM is optimal if equality holds in (16).
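To make the partition-based ansatz (15) and the optimality criterion (16) concrete, the following sketch (an illustration added here, not taken from the original text) builds the three candidate elements for the partition \(B_{1}=\{g_{1},g_{8}\}\) in dimension \(d=3\), using rows \(X_{i(1,a),\mu}\) forming an equilateral triangle, and checks the design relations (2)-(3) as well as the smallest eigenvalue as a function of \(x\). The orientation angle \(\varphi\) of the triangle is an arbitrary choice made only for this demonstration.

```python
import numpy as np

# Gell-Mann matrices g1 and g8 (orthonormal w.r.t. the Hilbert-Schmidt product),
# i.e. the partition B_1 used in Fig. 1a; here d = 3 and M = 3.
g1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex) / np.sqrt(2)
g8 = np.diag([1, 1, -2]).astype(complex) / np.sqrt(6)

d, M = 3, 3
phi = 0.0                                   # triangle orientation (arbitrary choice)

def candidate_elements(x, phi=0.0):
    """Three operators of the ansatz (15) for one partition and parameter x."""
    Gamma = (x * M**2 - d) / (M * (M - 1))  # eq. (10)
    r = np.sqrt((M - 1) / M)                # length of each row X, cf. eq. (13)
    ops = []
    for a in range(M):
        theta = phi + 2 * np.pi * a / M     # equilateral triangle => eq. (12) holds
        X = r * np.array([np.cos(theta), np.sin(theta)])
        ops.append(np.eye(d) / M + np.sqrt(Gamma) * (X[0] * g1 + X[1] * g8))
    return ops

for x in [0.40, 5 / 9, 0.70]:
    ops = candidate_elements(x, phi)
    traces = [np.trace(P).real for P in ops]
    overlap = np.trace(ops[0] @ ops[1]).real        # target: (d - M*x)/(M*(M-1))
    min_eig = min(np.linalg.eigvalsh(P).min() for P in ops)
    print(f"x={x:.3f}: traces={np.round(traces, 3)}, "
          f"Tr(P0 P1)={overlap:.3f} (target {(d - M * x) / (M * (M - 1)):.3f}), "
          f"min eigenvalue={min_eig:.3f}")
# For this partition and phi = 0 all three elements stay positive semidefinite
# up to x = 5/9, in agreement with the maximal MUM value quoted below.
```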
In Fig.1 the constraints imposed by the positive semidefiniteness of all POVM elements of this MUM are visualized graphically for the different partitions \(B_{\alpha}\). Thereby, the Gell-Mann matrices as defined in appendix A have been used as an orthonormal basis of traceless, hermitian operators. Accordingly, the four partitions have been chosen as \(B_{1}=\{g_{1},g_{8}\},B_{2}=\{g_{3},g_{4}\},B_{3}=\{g_{2},g_{5}\},B_{4}=\{g_{6},g_{7}\}\). For arbitrary partitionings the corresponding positive semidefinite regions have already been discussed recently (Kimura, 2003). In Fig.1a the two-dimensional Euclidean subspace spanned by the unit vectors of partition \(B_{1}\) is depicted. All points inside the blue triangle correspond to the convex set of positive semidefinite matrices according to (15). The vertices of any equilateral triangle within this blue triangle with the origin as its centroid constitute a triple of possible POVM elements with \(a\in\{1,\cdots,3\}\) and \(\alpha=1\). The points inside the yellow circle correspond to all hermitian operators which fulfill the necessary constraint (5). From inequality (16) it follows that an optimal POVM corresponds to an equilateral triangle whose vertices lie on the boundary of the yellow area, so that they have maximal distance to the origin. The blue triangle itself constitutes a single optimal POVM, which can be constructed with the help of the partitioning \(B_{1}\) of the Gell-Mann basis. The green circle is the maximal circle around the origin which can be constructed inside the triangle of positive semidefinite elements. Its radius is given by \(r_{<}=1/\sqrt{6}\). The blue region of Fig.1b shows the convex set of positive semidefinite hermitian matrices which can be constructed in the two-dimensional Euclidean subspace spanned by the unit vectors of partition \(B_{2}\). The vertices of any equilateral triangle constructed within this blue region with the origin as its centroid constitute a triple of possible POVM elements with \(a\in\{1,\cdots,3\}\) and \(\alpha=2\). Again, the points inside the yellow circle correspond to all hermitian matrices which fulfill the necessary constraint (5). The two points of the blue area intersecting with the yellow area's boundary cannot be used for constructing an optimal POVM for \(\alpha=2\). The green circle is the maximal circle around the origin, again with radius \(r_{<}=1/\sqrt{6}\), which can be constructed inside the blue region of positive semidefinite elements. The blue region of Fig.1c shows the convex set of positive semidefinite hermitian matrices which can be constructed in the two-dimensional Euclidean subspace spanned by the unit vectors of partitions \(B_{3}\) or \(B_{4}\). The vertices of any equilateral triangle constructed within this blue region with the origin as its centroid constitute a triple of possible POVM elements with \(a\in\{1,\cdots,3\}\) and \(\alpha=3\) or \(\alpha=4\). Again the points inside the yellow circle correspond to all hermitian matrices fulfilling the necessary constraint (5). A MUM is given by four equilateral triangles of identical size inside the positive semidefinite area of each partition, each with the origin as its centroid. The red triangles in Figs.1a-b and the two triangles in Fig.1c show four equilateral triangles of maximal size which can be constructed inside the blue regions of Figs.1a-c. These four triangles constitute an informationally complete MUM in dimension \(d=3\) with the maximal possible value of \(x=5/9\). The directions of some of these triangles with respect to the chosen partitioning of the Gell-Mann basis are not determined uniquely. In Fig.1a, for example, the red triangle can be rotated around its centroid continuously as long as it stays within the blue region of positive semidefiniteness. In particular, this implies that not all rotation angles are possible. However, in Fig.1c rotations of the two triangles around the origin are possible for arbitrary angles. But in Fig.1b the shape of the blue region implies that the position of the red triangle is fixed uniquely. From Figs.1a-c it is apparent that the vertices of the equilateral triangles representing the maximal informationally complete MUM are located outside of the green circle of radius \(r_{<}=1/\sqrt{6}\). In contrast, all POVM elements whose equilateral triangles are constructed inside this green circle can be rotated arbitrarily around the origin without affecting the positive semidefiniteness of the corresponding POVM elements.
In general it is a complicated task to determine criteria under which all POVM elements of a \((N,M)\)-POVM are positive semidefinite. This example demonstrates that, although \((N,M)\)-POVMs can be constructed for all values of \(x\) in sufficiently small regions around the minimal possible value \(x=d/M^{2}\), complications increase with increasing values of \(x\). Typically, the most complicated situations arise in the construction of optimal \((N,M)\)-POVMs. In view of these difficulties it is of interest to develop conditions for the existence of optimal \((N,M)\)-POVMs. Motivated by this need, such conditions are developed in the following sections.
Figure 1: Regions of positive semidefiniteness corresponding to four different partitions \(B_{\alpha}\), \(\alpha\in\{1,\cdots,4\}\) of the traceless, hermitian operators for an informationally complete \((4,3)\)-POVM (MUM) in dimension \(d=3\) with \(\alpha=1\) (a), \(\alpha=2\) (b), \(\alpha=\{3,4\}\) and \((i,j)\in\{(2,5),(6,7)\}\) (c): Geometrically each equilateral triangle with \((0,0)\) as its centroid represents a set of three operators fulfilling (15). The restrictions imposed by positive semidefiniteness are visualized by the blue regions. The yellow regions represent the constraints (5). The green region is the circle of maximal radius \(r_{<}=1/\sqrt{6}\) with center \((0,0)\) located within the intersection of all blue regions of all four partitions. Within this circle equilateral triangles with centroid \((0,0)\) can be rotated by arbitrary angles. The two red equilateral triangles of Fig.1a-b and the two equilateral triangles of Fig.1c represent a possible maximal MUM with \(x=5/9\). Their vertices lie outside of the green circles.
## III A sufficient condition for the construction of \((N,M)\)-POVMs
In this section a sufficient condition is derived, under which for a \(d\)-dimensional quantum system \((N,M)\)-POVMs can always be constructed. This sufficient condition yields a simple upper bound on the continuous \(x\)-parameter (cf. inequality (21)), within which this can be achieved.
In general it is a complicated task to determine criteria, under which all POVM elements of a \((N,M)\)-POVM of the form (14) are positive semidefinite. However, a general sufficient condition for positivity can be derived by using general properties of positive semidefinite linear operators (Bengtsson and Zyczkowski, 2006; Kimura and Kossakowski, 2005). For this purpose let us consider an arbitrary POVM element of a \((N,M)\)-POVM in a \(d\)-dimensional Hilbert space. Its spectral representation is given by
\[\Pi_{i(\alpha,a)}\ =\ \sum_{\sigma=1}^{d}\lambda_{\sigma}P_{\sigma} \tag{17}\]
with its non-negative eigenvalues \(\lambda_{\sigma}\) and with the associated one-dimensional orthogonal projection operators \(P_{\sigma}\) fulfilling the completeness and orthogonality relations \(\sum_{\sigma=1}^{d}P_{\sigma}=\openone_{d}\) and \(P_{\sigma}P_{\sigma^{\prime}}=\delta_{\sigma,\sigma^{\prime}}P_{\sigma}\). The constraint (2) yields the relation
\[{\rm Tr}\{\Pi_{i(\alpha,a)}\}\ =\ \frac{d}{M}=\sum_{\sigma=1}^{d}\lambda_{ \sigma}. \tag{18}\]
Therefore, for given projection operators \(P_{\sigma}\) the set of all positive semidefinite POVM elements of this \((N,M)\)-POVM constitute a \((d-1)\)-dimensional simplex \(\Delta_{d-1}\) in the \(d\)-dimensional Hilbert space (compare with Fig.2). The centroid of this \((d-1)\)-simplex is given by
\[C_{d-1}\ =\ \frac{{\rm Tr}\{\Pi_{i(\alpha,a)}\}}{d}\sum_{\sigma=1}^{d}P_{ \sigma}=\frac{1}{M}\openone_{d}. \tag{19}\]
The boundary of this \((d-1)\)-simplex is a \((d-2)\)-simplex, i.e. \(\Delta_{d-2}:=\partial\Delta_{d-1}\). The boundary of \(\Delta_{d-1}\) consists of all possible elements of the form (17) with at least one of the \(d\) eigenvalues vanishing. The centroid \(C_{d-1}\) has equal distances \(r_{in}\) to the centroids of all the \(d\) parts of \(\Delta_{d-2}\). This distance \(r_{in}\) defines the radius of the largest possible circle with center \(C_{d-1}\), which lies within \(\Delta_{d-1}\) and touches \(\Delta_{d-2}\) in one of its \(d\) centroids \(C_{d-2}\). It is determined by the relation
\[r_{in}^{2}\ =\ {\rm Tr}\{(C_{d-1}-C_{d-2})^{2}\}=({\rm Tr}\{\Pi_{i( \alpha,a)}\})^{2}\left[(d-1)\left(\frac{1}{d}-\frac{1}{d-1}\right)^{2}+\frac{ 1}{d^{2}}\right]=\frac{({\rm Tr}\{\Pi_{i(\alpha,a)}\})^{2}}{d(d-1)}=\frac{d}{ M^{2}(d-1)}. \tag{20}\]
Using (2) and (3) we arrive at the inequality
\[0<{\rm Tr}\{(\Pi_{i(\alpha,a)}-\openone_{d}/M)^{2}\}=x-\frac{d}{M^{2}}\leq r _{in}^{2}=\frac{d}{M^{2}(d-1)}. \tag{21}\]
According to (19) and (20) the centroid \(C_{d-1}\) as well as the radius \(r_{in}\) are independent of the choice of the orthonormal projection operators \(P_{\sigma}\) so that (21) applies to all POVM elements. Therefore, it can be concluded that fulfillment of inequality (21) is sufficient for the existence of a \((N,M)\)-POVM. It guarantees the positive semidefiniteness of all its POVM elements. Note that in the special cases considered in Figs.1a-c, i.e. \(M=3,d=3\), the distance \(r_{in}\) reduces to the value \(r_{in}=1/\sqrt{6}=r_{<}\), which is a basis and partition independent value.
One can also define a circle with center \(C_{d-1}\) and radius \(r_{out}\), within which all possible \((N,M)\)-POVMs are located according to the constraint (5). It is defined by
\[r_{out}^{2}\ =\ \min\left(\frac{d(M-1)}{M^{2}},\frac{d(d-1)}{M^{2}}\right) \tag{22}\]
so that (5) reduces to the relation \(0<x-d/M^{2}\leq r_{out}^{2}\). A \((N,M)\)-POVM with \(x-d/M^{2}=r_{out}^{2}\) is called optimal. The ratio between the range of \(x\)-values around its minimal value of \(d/M^{2}\), for which a \((N,M)\)-POVM can always be constructed, i.e. \(r_{in}^{2}\), and the corresponding maximal possible range, i.e. \(r_{out}^{2}\), is given by
\[R(d)\ =\ \frac{r_{in}^{2}}{r_{out}^{2}}=\left\{\begin{array}{ll}\frac{1}{(d -1)^{2}}&M\geq d\\ \frac{1}{(M-1)(d-1)}&2\leq M<d\end{array}\right.. \tag{23}\]
In Fig.3 the dependence of this ratio \(R(d)\) is depicted for cases with \(M\geq d\) (blue points) and for \(2=M<d\) (orange points). It is apparent that this ratio converges to zero rapidly with increasing dimension \(d\) of the quantum system's Hilbert space. Correspondingly, the size of the interval of \(x\)-values, for which \((N,M)\)-POVMs can be constructed with arbitrary choices of traceless, orthonormal, hermitian operator bases, rapidly tends to zero. For cases with \(2<M<d\) the values of \(R(d)\) are located inside the region between the two series of dotted points of Fig.3. In the exceptional case of a qubit, i.e. \(d=2\), the inner and outer radii are identical, i.e. \(r_{in}^{2}=r_{out}^{2}=2/M^{2}\), and for \(M=4\) the set of positive semidefinite operators forms the Bloch sphere.

Figure 2: Visualization of the simplices \(\Delta_{d-1}\) and \(\Delta_{d-2}=\partial\Delta_{d-1}\) and of their corresponding centroids in the elementary case \(d=3\): The orthogonal axes \(x,y,z\) are defined by the one-dimensional orthogonal projection operators \(\{P_{\sigma}\mid\sigma\in\{1,2,3\}\}\) and \(p=\mathrm{Tr}\{\Pi_{i(\alpha,a)}\}=d/M\). The simplex \(\Delta_{2}\) is a triangle (red area). Its centroid \(C_{2}\) is represented by the green point. The 1-simplex \(\Delta_{1}=\partial\Delta_{2}\) is the boundary of the red triangle. The dark blue circle centered around \(C_{2}\) with radius \(r_{in}\) is the maximal circle, which can be constructed inside \(\Delta_{2}\). It touches \(\Delta_{1}\) in its three centroids. The light blue circle centered around \(C_{2}\) with radius \(r_{out}\) represents the constraint (5).

Figure 3: Dimensional dependence of \(R(d)\) according to (23) for \(M\geq d\) (blue points) and for \(2=M<d\) (orange points): Cases with \(2<M<d\) are located between these two series of points and also rapidly converge to zero with increasing dimensions \(d\).
Fulfillment of the sufficient condition (21) allows the construction of \((N,M)\)-POVMs for arbitrary choices of the \((d^{2}-1)\) traceless elements of the hermitian operator basis of \({\cal H}_{d^{2}}\) according to the ansatz (14). However, from the explicit expression of \(r_{in}^{2}\) (cf. (20) and Fig.3) it is also apparent that the range of \(x\)-values, for which this sufficient condition can be fulfilled, decreases rapidly with increasing values of \(M\). Thus, in general it is an intricate problem to construct \((N,M)\)-POVMs if the sufficient condition (21) is not applicable. In particular, in these cases the choice of the traceless hermitian operator basis elements entering (14) can be crucial for the construction. Typically the most complicated situations arise for the construction of optimal \((N,M)\)-POVMs. Motivated by these problems, in the subsequent section we explore necessary conditions for the construction of optimal \((N,M)\)-POVMs.
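For orientation, the following small script (an added illustration, not part of the original text) evaluates the inner radius (20), the outer radius (22) and the ratio \(R(d)\) of (23) for a few parameter choices, reproducing the rapid decay visible in Fig.3.

```python
# Evaluate r_in^2 (eq. 20), r_out^2 (eq. 22) and their ratio R(d) (eq. 23)
# for a few (d, M) pairs.  Illustrative sketch only.

def r_in_sq(d, M):
    return d / (M**2 * (d - 1))

def r_out_sq(d, M):
    return min(d * (M - 1) / M**2, d * (d - 1) / M**2)

def ratio(d, M):
    return r_in_sq(d, M) / r_out_sq(d, M)

for d, M in [(3, 3), (3, 9), (4, 2), (5, 5), (10, 10)]:
    closed_form = 1 / (d - 1)**2 if M >= d else 1 / ((M - 1) * (d - 1))
    print(f"d={d:2d}, M={M:2d}: r_in^2={r_in_sq(d, M):.4f}, "
          f"r_out^2={r_out_sq(d, M):.4f}, R={ratio(d, M):.4f} "
          f"(closed form {closed_form:.4f})")
# For d = M = 3 one recovers r_in^2 = 1/6, i.e. the radius r_< of Fig. 1.
```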
## IV Necessary conditions for the existence of optimal \((N,M)\)-POVMs
In this section two necessary conditions for the existence of optimal \((N,M)\)-POVMs of a \(d\)-dimensional quantum system are presented. As a first main result it is shown that for \(M\geq d\) the existence of \((d^{2}-1)\) isospectral, traceless, orthonormal, hermitian operators is necessary for the existence of an optimal informationally complete \((N,M)\)-POVM. The form of their common spectrum is completely determined by the defining properties of the optimal informationally complete \((N,M)\)-POVM. As a second main result it is demonstrated that in cases with \(2<M<d\) the existence of an optimal \((N,M)\)-POVM is only possible, if all its POVM elements are projection operators of equal rank.
### Optimal informationally complete \((N,M)\)-POVMs for \(M\geq d\)
Let us consider an optimal informationally complete \((N,M)\)-POVM \(\Pi\) of a \(d\)-dimensional quantum system with \(M\geq d\). According to the defining relations (2) and (3), each element \(\Pi_{i(\alpha,a)}\) of this \((N,M)\)-POVM fulfills the relations
\[{\rm Tr}\{\Pi_{i(\alpha,a)}\}\ =\ \sum_{\sigma=1}^{d}\lambda_{\sigma}=\frac{d} {M},\ \ \ {\rm Tr}\{(\Pi_{i(\alpha,a)})^{2}\}=\sum_{\sigma=1}^{d}\lambda_{\sigma}^{2}= \frac{d^{2}}{M^{2}} \tag{24}\]
with non-negative eigenvalues \(\lambda_{\sigma}\) for \(\sigma\in\{1,\cdots,d\}\). Both relations constrain the possible values of these eigenvalues because they imply \(\sum_{1\leq\sigma<\sigma^{\prime}\leq d}\lambda_{\sigma}\lambda_{\sigma^{\prime}}=0\). Therefore, the positive semidefiniteness of all eigenvalues \(\lambda_{\sigma}\) for \(\sigma\in\{1,\cdots,d\}\) implies that there is only one non-zero eigenvalue of magnitude \(d/M\). Correspondingly, an arbitrary POVM element of the optimal informationally complete \((N,M)\)-POVM \(\Pi\) has to be of the general form
\[\Pi_{i(\alpha,a)}\ =\ \frac{d}{M}|i(\alpha,a)\rangle\langle i(\alpha,a)|. \tag{25}\]
The defining relations (2), (3) and (4) constrain the scalar products of the generally non-orthogonal but normalized eigenstates \(|i(\alpha,a)\rangle\) by the relations
\[|\ \langle i(\alpha,a)|i(\alpha,a^{\prime})\rangle\ |\ =\ \sqrt{\frac{M/d-1}{M-1}},\ \ \ |\ \langle i(\alpha,a)|i(\beta,b)\rangle\ |\ =\ \sqrt{\frac{1}{d}} \tag{26}\]
for \(\alpha\neq\beta\in\{1,\cdots,N\}\), \(a\neq a^{\prime}\in\{1,\cdots,M\}\) and \(a,a^{\prime},b\in\{1,\cdots,M\}\). For each \(\alpha\in\{1,\cdots,N\}\) and \(a\in\{1,\cdots,M-1\}\) one can construct the traceless, hermitian operators
\[G_{i(\alpha,a)}\ =\ \frac{\sqrt{M-1}}{(\sqrt{M}+1)\sqrt{d^{2}-d}}\left(\mathds{1}_{d}+A_{i(\alpha,a)}\right)\ \ {\rm with}\ \ A_{i(\alpha,a)}=\sqrt{M}\Pi_{i(\alpha,M)}-\sqrt{M}(\sqrt{M}+1)\Pi_{i(\alpha,a)}. \tag{27}\]
It is straightforward to demonstrate that these \((M-1)N=d^{2}-1\) (cf. (6)) hermitian operators \(G_{i(\alpha,a)}\) are an orthonormal basis in the Hilbert space \({\cal H}_{d^{2}-1}\) of traceless, hermitian operators of the \(d\)-dimensional quantum system. According to (25) and (27) each hermitian operator \(A_{i(\alpha,a)}\) has a maximal rank of two. Therefore, the characteristic polynomials determining the eigenvalues \(\Lambda\) of \(A_{i(\alpha,a)}\) have the general form
\[\Lambda^{d}+c_{d-1}\Lambda^{d-1}+c_{d-2}\Lambda^{d-2}\ =\ 0. \tag{28}\]
Using the Cayley-Hamilton theorem (Frobenius, 1878) it is easily found that
\[c_{d-1}\ =\ -\,{\rm Tr}\{A_{i(\alpha,a)}\}=d,\ \ c_{d-2}\ =\ \frac{1}{2}\left(({\rm Tr}\{A_{i( \alpha,a)}\})^{2}-{\rm Tr}\{A_{i(\alpha,a)}^{2}\}\right)=\frac{d-d^{2}}{\sqrt{ M}-1}. \tag{29}\]
Consequently, all traceless, orthonormal, hermitian operators \(G_{i(\alpha,a)}\) have the same spectrum \({\rm Sp}(G_{i(\alpha,a)})\) determined by the solutions of (28), i.e.
\[{\rm Sp}(G_{i(\alpha,a)})\ =\ \left\{\left(\frac{\sqrt{M-1}}{(\sqrt{M}+1)\sqrt{d^ {2}-d}}(1+\Lambda_{+})\right)^{(1)},\left(\frac{\sqrt{M-1}}{(\sqrt{M}+1)\sqrt {d^{2}-d}}(1+\Lambda_{-})\right)^{(1)},\left(\frac{\sqrt{M-1}}{(\sqrt{M}+1) \sqrt{d^{2}-d}}\right)^{(d-2)}\right\} \tag{30}\]
with
\[\Lambda_{\pm}\ =\ \frac{1}{2}\left(-d\pm\sqrt{d^{2}+4\frac{d^{2}-d}{\sqrt{M}-1 }}\right). \tag{31}\]
The numbers in brackets in the exponents of (30) indicate the multiplicities of the corresponding eigenvalues.
Therefore, it can be concluded that for \(M\geq d\) the existence of an optimal informationally complete \((N,M)\)-POVM \(\Pi\) implies the existence of a set of \((d^{2}-1)\) isospectral, traceless, orthonormal hermitian operators \(G_{i(\alpha,a)}\) defined by (27), whose common spectrum is given by (30). Stated differently, the existence of a set of \((d^{2}-1)\) isospectral, traceless, orthonormal, hermitian operators \(G_{i(\alpha,a)}\), whose common spectrum is given by (30), is necessary for the existence of an optimal informationally complete \((N,M)\)-POVM. This result generalizes an already known property of GSICs (Gour and Kalev, 2014), i.e. optimal informationally complete \((1,M)\)-POVMs, to all optimal informationally complete \((N,M)\)-POVMs with \(M\geq d\).
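As a quick consistency check of the common spectrum (30)-(31) (an added numerical sketch, not part of the original derivation), one can verify that the listed eigenvalues sum to zero and have unit sum of squares, as required for traceless, Hilbert-Schmidt-normalized operators.

```python
import numpy as np

def common_spectrum(d, M):
    """Eigenvalues of G_{i(alpha,a)} according to eqs. (30)-(31), assuming M >= d."""
    c = np.sqrt(M - 1) / ((np.sqrt(M) + 1) * np.sqrt(d**2 - d))
    disc = np.sqrt(d**2 + 4 * (d**2 - d) / (np.sqrt(M) - 1))
    lam_plus, lam_minus = (-d + disc) / 2, (-d - disc) / 2
    # c*(1+Lambda_+) and c*(1+Lambda_-) once each, c with multiplicity d-2
    return [c * (1 + lam_plus), c * (1 + lam_minus)] + [c] * (d - 2)

for d, M in [(2, 4), (3, 3), (3, 9), (4, 6)]:
    spec = common_spectrum(d, M)
    print(f"d={d}, M={M}: sum = {sum(spec):+.2e}, "
          f"sum of squares = {sum(s * s for s in spec):.6f}")
    # Tracelessness requires the sum to vanish; Hilbert-Schmidt normalization
    # requires the sum of squares to equal one.
```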
### Optimal \((N,M)\)-POVMs for \(2<M<d\)
In this subsection we prove a necessary condition for the existence of an optimal \((N,M)\)-POVM for cases with \(2<M<d\). It is shown that in these cases it is necessary that all POVM elements are projection operators of rank \(d/M\). This significantly constrains the possible parameters \((d,N,M)\), for which optimal \((N,M)\)-POVMs can be constructed. Although the discussion of this necessary condition also applies to \(M=2\), these special cases will be discussed separately in Sec.V.
Let us consider an optimal \((N,M)\)-POVM of a \(d\)-dimensional quantum system with \(2<M<d\). According to (5) it is characterized by the parameter \(x=d/M\). Furthermore, the defining conditions (2) and (3) imply that each element \(\Pi_{i(\alpha,a)}\) (cf. (17)) of this optimal \((N,M)\)-POVM \(\Pi\) fulfills the relations
\[{\rm Tr}\{\Pi_{i(\alpha,a)}\}\ =\ \sum_{\sigma=1}^{d}\lambda_{\sigma}=\frac{d} {M},\ \ \ {\rm Tr}\{(\Pi_{i(\alpha,a)})^{2}\}\ =\ \sum_{\sigma=1}^{d}\lambda_{\sigma}^{2}=\frac{d}{M} \tag{32}\]
with non-negative eigenvalues \(\lambda_{\sigma}\) for \(\sigma\in\{1,\cdots,d\}\). Consequently, the only possible eigenvalues are given by \(\lambda_{\sigma}\in\{0,1\}\). Therefore, an optimal \((N,M)\)-POVM can exist only in dimensions \(d\), for which \(d/M\in\mathbb{N}\) is a natural number. Furthermore, all POVM elements are projection operators of rank \(d/M\), i.e. \(\Pi_{i(\alpha,a)}^{2}=\Pi_{i(\alpha,a)}\). Stated differently, the existence of POVM elements of rank \(d/M\in\mathbb{N}\), which are projection operators, is necessary for the existence of an optimal \((N,M)\)-POVM for \(2<M<d\). According to this necessary condition the smallest dimension \(d\), for example, in which an optimal informationally complete \((N,M)\)-POVM can possibly be constructed, is given by \(d=8\). It is a \((21,4)\)-POVM with \(x=2\) and all its POVM elements are of rank two.
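The arithmetic part of this necessary condition can be scanned with a few lines of code (an added illustration; it only checks the counting constraints \((M-1)N+1=d^{2}\), \(2<M<d\) and \(d/M\in\mathbb{N}\), not the actual existence of the projection operators).

```python
# Scan the parameters (d, N, M) with 2 < M < d that are compatible with the
# necessary conditions for an optimal informationally complete (N,M)-POVM:
# (M-1)*N + 1 = d^2  and  d/M a natural number (the rank of each element).
# Passing this scan does not guarantee existence; it only rules out parameters.

candidates = []
for d in range(3, 17):
    for M in range(3, d):                       # 2 < M < d
        if d % M == 0 and (d * d - 1) % (M - 1) == 0:
            N = (d * d - 1) // (M - 1)
            candidates.append((d, N, M, d // M))

for d, N, M, rank in candidates:
    print(f"d={d:2d}: (N,M)=({N},{M}), rank of each element = {rank}")
# The smallest admissible dimension is d = 8 with (N,M) = (21,4) and rank-2 elements.
```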
## V Optimal \((N,2)\)-POVMs
In this section optimal \((N,2)\)-POVMs of \(d\)-dimensional quantum systems are investigated. For them additional properties can be derived, which transcend the necessary condition discussed in Sec.IV.B. These additional properties are strong enough for deriving a necessary and sufficient condition for the existence of optimal \((N,2)\)-POVMs with \(N\leq d^{2}-1\), including informationally complete ones for \(N=d^{2}-1\).
Let us consider an arbitrary POVM element of an optimal \((N,2)\)-POVM as given by (17) with \(x=d/2\) and positive semidefinite eigenvalues of the form \(\lambda_{\sigma}=1/2+\eta_{\sigma}\) for \(\sigma\in\{1,\cdots,d\}\). According to the arguments of Sec.IV.B \(\lambda_{\sigma}\in\{0,1\}\) so that \(\mid\eta_{\sigma}\mid\ =\ 1/2\). In addition, we have \(d/2\in\mathbb{N}\) so that the dimension \(d\) has to be even. Therefore, relation (32) can only be fulfilled, if the spectrum of the traceless, normalized and hermitian operators
\[K_{i(\alpha,a)}\ =\ \frac{\Pi_{i(\alpha,a)}-\mathds{1}_{d}/2}{\sqrt{d/4}} \tag{33}\]
is given by
\[\mathrm{Sp}\left(K_{i(\alpha,a)}\right)\ =\ \left\{+\frac{1}{\sqrt{d}}^{(d/2)},- \frac{1}{\sqrt{d}}^{(d/2)}\right\} \tag{34}\]
for each \(i(\alpha,a)\in\{1,\cdots,2N\}\). The numbers in brackets in the exponents of (34) indicate the multiplicities of the corresponding eigenvalues. In view of relation (4) the operators \(K_{i(\alpha,a)}\) and \(K_{i(\beta,b)}\) are also orthogonal for \(\alpha\neq\beta\in\{1,\cdots,N\}\) with \(N\leq d^{2}-1\) and \(a,b\in\{1,2\}\). Thereby, we have taken into account that for a \(d\)-dimensional quantum system the number of traceless, orthogonal, hermitian operators cannot exceed \(d^{2}-1\). But these operators \(K_{i(\alpha,a)}\) and \(K_{i(\beta,b)}\) are not orthogonal for \(\alpha=\beta\) and \(a\neq b\) and fulfill the relation
\[K_{i(\alpha,2)}\ =\ -K_{i(\alpha,1)}. \tag{35}\]
Therefore, for \(N\leq d^{2}-1\) the existence of an optimal \((N,2)\)-POVM implies the existence of \(N\) isospectral, traceless, orthonormal, hermitian operators \(\{K_{i(\alpha,1)}\ |\ \alpha\in\{1,\cdots,N\}\}\), whose common spectrum is given by (34). However, in view of (33) and (35) this conclusion can also be turned around. Thus, for \(N\leq d^{2}-1\) the existence of an optimal \((N,2)\)-POVM is sufficient and necessary for the existence of \(N\) isospectral, traceless, orthonormal, hermitian operators \(\{K_{i(\alpha,1)}\ |\ \alpha\in\{1,\cdots,N\}\}\), whose common spectrum is given by (34). Thereby, the case \(N=d^{2}-1\) covers optimal informationally complete \((N,2)\)-POVMs. It should be mentioned that this existence criterion for optimal \((N,2)\)-POVMs with \(N\leq d^{2}-1\) generalizes a recent weaker result (Siudzinska, 2022), which was based on the weaker assumption \(|\ \eta_{\sigma}\ |\leq 1/2\).
In the special cases of even dimensions of the form \(d=2^{k}\) with \(k\in\mathbb{N}\), \(N\leq d^{2}-1\) isospectral, traceless, orthonormal, hermitian operators can easily be constructed with the help of the Clifford algebra generated by tensor products of Pauli operators. Accordingly, these operators are given by
\[\frac{1}{\sqrt{2^{k}}}\sigma_{i_{1}}\otimes\sigma_{i_{2}}\otimes\cdots\otimes\sigma_{i_{k}} \tag{36}\]
with \((i_{1},\cdots,i_{k})\neq(0,\cdots,0)\) and with the Pauli operators
\[\sigma_{0}\ =\ \left(\begin{array}{cc}1&0\\ 0&1\end{array}\right),\ \ \sigma_{1}=\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right),\ \ \sigma_{2}=\left(\begin{array}{cc}0&-i\\ i&0\end{array}\right),\ \ \sigma_{3}=\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right). \tag{37}\]
Therefore, for \(N\leq d^{2}-1\) optimal \((N,2)\)-POVMs in dimension \(d=2^{k}\) can easily be constructed. Their \(2N\) elements are given by
\[\Pi_{i((i_{1},\cdots,i_{k}),1)}\ =\ \frac{\openone_{d}}{2}+\frac{1}{2}\sigma_{i_{1}}\otimes\sigma_{i_{2}}\otimes\cdots\otimes\sigma_{i_{k}},\ \ \Pi_{i((i_{1},\cdots,i_{k}),2)}\ =\ \frac{\openone_{d}}{2}-\frac{1}{2}\sigma_{i_{1}}\otimes\sigma_{i_{2}}\otimes\cdots\otimes\sigma_{i_{k}} \tag{38}\]
with \((i_{1},\cdots,i_{k})\neq(0,\cdots,0)\). Nevertheless, this explicit construction still leaves open the question to which extent such optimal \((N,2)\)-POVMs also exist in other even dimensions.
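The construction (38) can be verified directly; the following sketch (added here for illustration) builds the \(2N\) elements for \(d=4\) (\(k=2\), \(N=15\)) and checks the defining relations (2)-(4) with \(x=d/2\) as well as the completeness relation of each two-element POVM.

```python
import itertools
import numpy as np

# Pauli matrices, eq. (37)
s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [s0, s1, s2, s3]

k = 2
d = 2**k
x = d / 2                                   # optimal value for M = 2

def tensor(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# All non-trivial Pauli strings, i.e. (i_1,...,i_k) != (0,...,0); there are N = d^2 - 1.
strings = [tensor([paulis[i] for i in idx])
           for idx in itertools.product(range(4), repeat=k)
           if idx != (0,) * k]
N, M = len(strings), 2

# POVM elements of eq. (38): one two-element POVM per Pauli string
Pi = [[np.eye(d) / 2 + sgn * S / 2 for sgn in (+1, -1)] for S in strings]

def hs(A, B):
    return np.trace(A.conj().T @ B).real

ok_trace = all(abs(np.trace(P).real - d / M) < 1e-12 for row in Pi for P in row)
ok_complete = all(np.allclose(row[0] + row[1], np.eye(d)) for row in Pi)
ok_diag = all(abs(hs(row[a], row[a]) - x) < 1e-12 for row in Pi for a in (0, 1))
ok_offdiag = all(abs(hs(row[0], row[1])) < 1e-12 for row in Pi)   # (d-Mx)/(M(M-1)) = 0
ok_between = all(abs(hs(Pi[i][a], Pi[j][b]) - d / M**2) < 1e-12
                 for i in range(N) for j in range(N) if i != j
                 for a in (0, 1) for b in (0, 1))

print(f"d={d}, N={N}, M={M}, x={x}")
print("trace condition (2):   ", ok_trace)
print("completeness (1):      ", ok_complete)
print("same-POVM overlaps (3):", ok_diag and ok_offdiag)
print("cross overlaps (4):    ", ok_between)
```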
## VI Summary and conclusions
Motivated by the recent interest in basic theoretical properties of \((N,M)\)-POVMs, we have explored general features of their existence and construction with the help of orthonormal, hermitian operator bases for arbitrary \(d\)-dimensional quantum systems. A sufficient condition has been derived for the existence of an arbitrary \((N,M)\)-POVM. It generalizes the already known property, that \((N,M)\)-POVMs can always be constructed for sufficiently small values of their continuous \(x\)-parameter (Siudzinska, 2022). In particular, it yields an explicit expression for an upper bound on this continuous \(x\)-parameter, below which all POVM elements are guaranteed to be positive semidefinite. Also necessary conditions for the existence of optimal \((N,M)\)-POVMs have been presented. One of these necessary conditions exhibits a close connection between the existence of optimal informationally complete \((N,M)\)-POVMs and the existence of isospectral, traceless, orthonormal, hermitian operator bases in cases with \(M\geq d\). Thereby, we have built on recent results, which have established that the relation between \((N,M)\)-POVMs and orthonormal hermitian operator bases is necessarily governed by highly degenerate linear maps (Schumacher and Alber, 2023a). This necessary condition generalizes a property, recently found for the special case of GSICs (Gour and Kalev, 2014), to arbitrary optimal informationally complete \((N,M)\)-POVMs. This connection motivates further research on the construction of such
isospectral, traceless, orthonormal hermitian operator bases in order to shed new light on the construction of optimal informationally complete \((N,M)\)-POVMs. Another necessary condition has been derived for optimal \((N,M)\)-POVMs for cases with \(M<d\). In particular, it has been shown that in these cases all POVM elements necessarily have to be projection operators of equal rank. This significantly constrains the possible parameters for constructing optimal \((N,M)\)-POVMs. This necessary condition has been derived solely by using properties, which have to be fulfilled necessarily by all POVM elements. Therefore, it motivates further research on conditions, which also take into account additional relations between the different POVM elements of an optimal \((N,M)\)-POVM. For the special cases with \(M=2\) a necessary and sufficient condition has been derived for the existence of optimal \((N,2)\)-POVMs with \(N\leq d^{2}-1\). Thereby, a relation to the existence of a set of \(N\) isospectral, traceless, orthonormal, hermitian operators has been established. Such operators with the required spectrum can only exist in even dimensions. For dimensions \(d=2^{k},\ k\in\mathbb{N}\) these operators can easily be constructed with the help of the Clifford algebra generated by the \(k\)-fold tensor products of the Pauli operators.
The recently introduced \((N,M)\)-POVMs (Siudzinska, 2022) are potentially interesting for numerous tasks of quantum information processing, such as the exploration of provable entanglement in quantum communication or quantum state tomography. The presented sufficient and necessary conditions shed new light not only on currently open questions concerning their existence and construction but also on their application for practical purposes. The presented sufficient condition, for example, establishes an explicit upper bound on the continuous \(x\)-parameter below which their existence is guaranteed. Combining this result with the recent observation (Schumacher and Alber, 2023a) that typical bipartite entanglement can be detected locally in an optimal way by local \((N,M)\)-POVMs fulfilling this sufficient condition suggests interesting applications of \((N,M)\)-POVMs for the detection of provable entanglement in quantum communication protocols. In view of these promising aspects, also for applications, we are confident that \((N,M)\)-POVMs will play an interesting and practically useful role in future work exploring the intricacies of quantum correlations.
###### Acknowledgements.
G.A. is grateful to his friend and regular collaborator A.R.P.Rau for numerous inspiring discussions on symmetries in quantum physics and beyond. It is a pleasure to dedicate this work to him. This research is supported by the Deutsche Forschungsgemeinschaft (DFG) - SFB 1119 - 236615297.
## Appendix A A Gell-Mann basis for \(d=3\)
The Gell-Mann basis, which has been used in obtaining Fig.1a-c, is defined by the matrices
\[g_{1} = \frac{1}{\sqrt{2}}\left(\begin{array}{ccc}0&1&0\\ 1&0&0\\ 0&0&0\end{array}\right),\ \ g_{2}=\frac{1}{\sqrt{2}}\left(\begin{array}{ccc}0&-i&0 \\ i&0&0\\ 0&0&0\end{array}\right),\ \ g_{3}=\frac{1}{\sqrt{2}}\left(\begin{array}{ccc}1&0&0\\ 0&-1&0\\ 0&0&0\end{array}\right),\ \ g_{4}=\frac{1}{\sqrt{2}}\left(\begin{array}{ccc}0&0&1\\ 0&0&0\\ 1&0&0\end{array}\right),\] \[g_{5} = \frac{1}{\sqrt{2}}\left(\begin{array}{ccc}0&0&-i\\ 0&0&0\\ i&0&0\end{array}\right),\ \ g_{6}=\frac{1}{\sqrt{2}}\left(\begin{array}{ccc}0&0&0 \\ 0&0&1\\ 0&1&0\end{array}\right),\ \ g_{7}=\frac{1}{\sqrt{2}}\left(\begin{array}{ccc}0&0&0 \\ 0&0&-i\\ 0&i&0\end{array}\right),\ \ g_{8}=\frac{1}{\sqrt{6}}\left(\begin{array}{ccc}1&0&0 \\ 0&1&0\\ 0&0&-2\end{array}\right). \tag{10}\]
These \((d^{2}-1)=8\) matrices have vanishing traces and are orthogonal with respect to the Hilbert-Schmidt scalar product. Together with the properly normalized unit matrix they form an orthonormal basis of the Hilbert space \(\mathcal{H}_{d^{2}}\) for \(d=3\).
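For completeness, the following short check (added for illustration, not part of the original appendix) confirms numerically that the eight matrices listed above are traceless and orthonormal with respect to the Hilbert-Schmidt scalar product.

```python
import numpy as np

s = 1 / np.sqrt(2)
g = [
    s * np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex),
    s * np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], dtype=complex),
    s * np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex),
    s * np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex),
    s * np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]], dtype=complex),
    s * np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex),
    s * np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]], dtype=complex),
    np.diag([1, 1, -2]).astype(complex) / np.sqrt(6),
]

traceless = all(abs(np.trace(m)) < 1e-12 for m in g)
gram = np.array([[np.trace(a.conj().T @ b).real for b in g] for a in g])
orthonormal = np.allclose(gram, np.eye(8))
print("traceless:", traceless, " orthonormal:", orthonormal)
```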
|
2303.05746 | Singular weak solutions near boundaries in a half space away from localized force for the Stokes and Navier-Stokes equations | We prove that there exists a weak solution of the Stokes system with a non-zero external force and no-slip boundary conditions in a half space of dimensions three and higher so that its normal derivatives are unbounded near boundary. A localized and divergence free singular force causes, via non-local effect, singular behaviors of normal derivatives for the solution near boundary, although such boundary is away from the support of the external force. The constructed one is a weak solution that has finite energy globally, and it can be comparable to the one in \cite{Seregin-Sverak10} as a form of a shear flow that is of only locally finite energy. Similar construction is performed for the Navier-Stokes equations as well. | Tongkeun Chang, Kyungkeun Kang | 2023-03-10T07:09:52Z | http://arxiv.org/abs/2303.05746v1 | Singular weak solutions near boundaries in a half space away from localized force for the Stokes and Navier-Stokes equations
###### Abstract.
We prove that there exists a weak solution of the Stokes system with a non-zero external force and no-slip boundary conditions in a half space of dimensions three and higher such that its normal derivatives are unbounded near the boundary. A localized, divergence-free singular force causes, via a non-local effect, singular behavior of the normal derivatives of the solution near the boundary, although this part of the boundary is away from the support of the external force. The constructed solution is a weak solution with globally finite energy, and it can be compared to the one in [10], which is a shear flow of only locally finite energy. A similar construction is performed for the Navier-Stokes equations as well.
2020 _Mathematics Subject Classification._ primary 35Q30, secondary 35B65.
_Keywords and phrases: Stokes equations, Navier-Stokes equations, local regularity near boundary_
## 1. Introduction
In this paper, we consider the following non-stationary Stokes system with non-zero external force, zero initial data and no-slip boundary condition in the half space \(\mathbb{R}^{n}_{+}\), \(n\geq 3\):
\[\left\{\begin{array}{l}w_{t}-\Delta w+\nabla\Pi=f,\quad\operatorname{div}w= 0\qquad\mathbb{R}^{n}_{+}\times(0,1),\\ w\big{|}_{x_{n}=0}=0,\qquad w\big{|}_{t=0}=0.\end{array}\right. \tag{1.1}\]
Here we assume that \(f\) is compactly supported in \(\overline{\mathbb{R}^{n}_{+}}\times(0,1)\). Our concern is the local analysis of the solution of the Stokes system (1.1) near the boundary, in particular in the region near the boundary away from the support of \(f\). A specific form of the localized external force \(f\) in (1.1) is described in **Assumption 1**. One can imagine a similar situation for the heat equation in a half space
\[\left\{\begin{array}{l}u_{t}-\Delta u=f,\qquad\mathbb{R}^{n}_{+}\times(0,1),\\ u\big{|}_{x_{n}=0}=0,\quad u\big{|}_{t=0}=0.\end{array}\right.\]
We suppose \(f\) is compactly supported, e.g., in \(\overline{B^{+}_{1}}\times(0,1)\), where \(B^{+}_{r}=\{x\in\mathbb{R}^{n}:|x|<r,x_{n}>0\}\). Even in the case that \(f\) is singular in \(B^{+}_{1}\times(0,1)\), it is known due to classical regularity theory that \(u\) becomes regular, in particular, near the boundary, away from the support of \(f\), namely
\[\left\|\partial_{t}^{m}\partial_{x}^{l}u\right\|_{L^{\infty}(B^{+}_{x^{\prime },r}\times(t-r^{2},t))}\leq c\left\|u\right\|_{L^{2}(B^{+}_{x^{\prime},2r} \times(t-4r^{2},t))},\qquad m,l\geq 0, \tag{1.2}\]
where \(x^{\prime}\in\left\{y^{\prime}\in\mathbb{R}^{n-1}:|y^{\prime}|>2\right\}\) and \((t-4r^{2},t)\subset(0,1)\). It is, however, unclear, due to the nonlocal effect, whether or not such an estimate is available for the Stokes system (1.1).
One can compare this to the Stokes system with nonzero boundary data, instead of a nonzero force, in the half space \(\mathbb{R}^{n}_{+}\), that is
\[\left\{\begin{array}{l}w_{t}-\Delta w+\nabla\Pi=0,\quad\operatorname{div}w=0,\qquad\mathbb{R}^{n}_{+}\times(0,1),\\ w|_{x_{n}=0}=\varphi(x^{\prime},t),\quad\left.w\right|_{t=0}=0.\end{array}\right. \tag{1.3}\]
In this case, for localized boundary data, it has been shown that the estimate (1.2) is, in general, not true for the Stokes system (1.3), and furthermore, solutions with the same singular behaviors have also been constructed for the Navier-Stokes equations (see [3, 4, 6, 7]). In particular, it was shown that the singular solutions constructed in [7] are indeed global energy solutions, i.e. \(w\in L^{\infty}((0,1);L^{2}(\mathbb{R}^{n}_{+}))\cap L^{2}((0,1);\dot{H}^{1}(\mathbb{R}^{n}_{+}))\), \(n\geq 3\), for the Stokes system and for the Navier-Stokes equations as well. Therefore, we can say that, unlike the heat equation, the non-local effect of the Stokes system with singular non-zero boundary data may cause a violation of the local smoothing effect of solutions near the boundary. However, most of these examples have been constructed via a nonzero flux at the boundary, and it is not clear whether or not singular behaviors of solutions with finite global energy can be developed in the case of the no-slip boundary condition on the whole boundary. Nevertheless, Seregin and Sverak found in [10] a shear flow whose normal derivatives are unbounded near the boundary of the half-space. More precisely, in [10], they constructed the following form of shear flow:
\[w(x,t)=(u(x_{3},t),0,0),\quad\Pi(x,t)=-g(t)x_{1},\quad\mathbb{R}^{3}_{+}\times (-4,0)\]
with homogeneous initial and boundary conditions and \(g(t)=|t|^{-1+\alpha}\), \(\alpha\in(0,\frac{1}{2})\). Then the solution is explicitly given as
\[w(x_{3},t)=\frac{2}{\sqrt{\pi}}\int_{-4}^{t}g(t-\tau-4)d\tau\int_{0}^{\frac{x_ {3}}{\sqrt{4(\tau+4)}}}e^{-\xi^{2}}d\xi \tag{1.4}\]
and one can see that \(w\) is bounded but \(\partial_{x_{3}}w\geq Cx_{3}^{-1+2\alpha}\) in the region near \(x_{3}=0\) with \(x_{3}^{2}>-4t\). We remark that the solution is not of finite energy in the half space and it is not even decaying, as \(x_{3}\) tends to infinity.
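As a rough numerical illustration of this blow-up (a sketch added here, not taken from [10]), one can evaluate \(\partial_{x_{3}}w\) from (1.4) at the end time \(t=0\); after the substitution \(r=\tau+4\) the derivative reduces to a single integral, and the computed values follow the predicted rate \(x_{3}^{-1+2\alpha}\) as \(x_{3}\to 0\). The parameter value \(\alpha=1/4\) is an arbitrary choice made only for this demonstration.

```python
import numpy as np
from scipy.integrate import quad

alpha = 0.25   # any alpha in (0, 1/2); chosen only for this illustration

def dx3_w(x3):
    """d/dx3 of the shear-flow profile (1.4) at t = 0, after substituting r = tau + 4."""
    def integrand(r):
        arg = x3**2 / (4.0 * r)
        if arg > 700.0:          # guard: r**(alpha-3/2) grows while the exponential vanishes
            return 0.0
        return r**(alpha - 1.5) * np.exp(-arg)
    val, _ = quad(integrand, 0.0, 4.0, limit=200, points=[x3**2])
    return val / np.sqrt(np.pi)

print(" x3      dx3 w      dx3 w * x3^(1-2*alpha)")
for x3 in [0.4, 0.2, 0.1, 0.05, 0.025]:
    v = dx3_w(x3)
    print(f"{x3:5.3f}  {v:9.3f}   {v * x3**(1 - 2 * alpha):8.4f}")
# The last column approaches a constant, illustrating the growth rate x3^(-1+2*alpha).
```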
The main objective of this paper is to construct solutions of (1.1) that are of finite energy in the half space, i.e. global weak solutions, but nevertheless exhibit singular behavior near the boundary, namely unbounded normal derivatives, whose main blow-up features are similar to those of the solution (1.4) specified in [10].
Firstly, we specify the external force \(f\), which is divergence free and has a certain type of singular behavior in the normal variable \(x_{n}\) and the time variable \(t\). For convenience, we denote \(x=(x^{\prime},x_{n})\in\mathbb{R}^{n}\) with \(x^{\prime}\in\mathbb{R}^{n-1}\).
**Assumption 1**.: _Let \(n\geq 3\) and \(0<\alpha,\,\beta<1\). Suppose that \(g:\mathbb{R}^{n}_{+}\to\mathbb{R}\) is a real-valued function of the form \(g(x)=g^{\mathcal{T}}(x^{\prime})g^{\mathcal{N}}(x_{n})\), where non-negative functions \(g^{\mathcal{T}}:\mathbb{R}^{n-1}\to\mathbb{R}\) and \(g^{\mathcal{N}}:\mathbb{R}_{+}\to\mathbb{R}\) satisfy_
\[g^{\mathcal{T}}\in C^{\infty}_{c}(\mathbb{R}^{n-1}),\qquad\operatorname{supp}g ^{\mathcal{T}}\Subset B^{{}^{\prime}}_{1}=\left\{x^{\prime}\in\mathbb{R}^{n-1} :|x^{\prime}|<1\right\},\]
\[g^{\mathcal{N}}\in C^{\infty}(\mathbb{R}_{+}),\quad\operatorname{supp}g^{ \mathcal{N}}\subset(0,2),\quad g^{\mathcal{N}}(x_{n})=x_{n}^{1-\beta},\ \text{ for }\ x_{n}\in(0,1].\]
_Let \(a>0\) be a constant. Furthermore, we suppose that a vector field \(f=(f_{1},\cdots,f_{n}):\mathbb{R}^{n}_{+}\times\mathbb{R}_{+}\to\mathbb{R}^{n}\) is given as \(f_{2}=a\frac{\partial g}{\partial x_{n}}(x)h(t)\), \(f_{n}=-a\frac{\partial g}{\partial x_{2}}(x)h(t)\) and \(f_{i}=0\) for \(i\neq 2\), \(i\neq n\), i.e._
\[f(x,t)=\left(0,a\frac{\partial g}{\partial x_{n}}(x)h(t),0,\cdots,0,-a\frac{ \partial g}{\partial x_{2}}(x)h(t)\right), \tag{1.5}\]
_where a non-negative function \(h:\mathbb{R}_{+}\to\mathbb{R}\) is given by_
\[h(t)=(t-\frac{1}{2})^{-\alpha}\chi_{(\frac{1}{2},\infty)}(t).\]
**Remark 1.1**.: _We note that the vector field \(f\) in (1.5) is divergence free in \(\mathbb{R}^{n}_{+}\) and the normal component vanishes at the boundary, namely \(\operatorname{div}f=0\) and \(\left.f_{n}\right|_{x_{n}=0}=0\). It is straightforward that_
\[f\in L^{q_{1}}_{t}L^{p_{1}}_{x}(\mathbb{R}^{n}_{+}\times(0,\infty)),\qquad q_{ 1}\in[1,\frac{1}{\alpha}),\quad p_{1}\in[1,\frac{1}{\beta}). \tag{1.6}\]
_We remind that \(f_{2}\) and \(f_{n}\) near \((x_{n},t)=(0,1/2)\) behave as follows:_
\[f_{2}\sim x_{n}^{-\beta}\left(t-\frac{1}{2}\right)^{-\alpha},\qquad f_{n}\sim x _{n}^{1-\beta}\left(t-\frac{1}{2}\right)^{-\alpha}.\]
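The structural reason for \(\operatorname{div}f=0\) is the symmetry of mixed partial derivatives of \(g\); the following short symbolic check (an added illustration with a generic smooth \(g\) and \(h\)) confirms this. The factor \(x_{n}^{1-\beta}\) in \(g^{\mathcal{N}}\) then makes \(f_{n}\) vanish on the boundary, since \(1-\beta>0\).

```python
import sympy as sp

x2, xn, t, a = sp.symbols('x2 x_n t a', real=True)
g = sp.Function('g')(x2, xn)        # generic smooth profile; only x2 and x_n matter here
h = sp.Function('h')(t)

f2 = a * sp.diff(g, xn) * h         # second component of f in (1.5)
fn = -a * sp.diff(g, x2) * h        # normal component of f in (1.5)

div_f = sp.diff(f2, x2) + sp.diff(fn, xn)   # the remaining components of f vanish
print(sp.simplify(div_f))                   # prints 0: mixed partials of g commute
```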
It was shown in [11] that, in case a given vector field \(f\) in a half space satisfies \(\operatorname{div}f=0\) and \(\left.f_{n}\right|_{x_{n}=0}=0\), the solution \(w\) and the associated pressure \(\Pi\) in (1.1) are represented by
\[w(x,t)=\int_{0}^{t}\int_{\mathbb{R}^{n}_{+}}K(x,y,t-s)f(y,s)dyds, \tag{1.7}\]
\[\Pi(x,t)=\int_{0}^{t}\int_{\mathbb{R}^{n}_{+}}P(x,y,t-s)\cdot f(y,s)dyds, \tag{1.8}\]
where the Green tensor \(K=(K_{ij})\) and the pressure vector \(P=(P_{j})\) are given as
\[K_{ij}(x,y,t)=\delta_{ij} \big{(}\Gamma(x-y,t)-\Gamma(x-y^{*},t)\big{)}\] \[-4(1-\delta_{jn})D_{x_{j}}\int_{0}^{x_{n}}\int_{\mathbb{R}^{n-1} }\Gamma(x-y^{*}-z,t)D_{z_{i}}N(z)dz, \tag{1.9}\]
\[P_{j}(x,y,t)=4(1-\delta_{jn})D_{x_{j}}(D_{x_{n}}+D_{y_{n}})\int_{\mathbb{R}^{n -1}}N(x-z^{\prime})\Gamma(z^{\prime}-y,t)dz^{\prime}, \tag{1.10}\]
where \(y^{*}=(y^{\prime},-y_{n})\), \(\delta_{jn}\) is Kronecker delta function, and \(\Gamma(x,t)=(4\pi t)^{-\frac{n}{2}}e^{-\frac{|x|^{2}}{4t}}\) and \(N(x)=-c_{n}|x|^{2-n}\) with \(c_{n}=(n(n-2)\omega_{n})^{-1}\) denote Gaussian kernel and Newtonian kernel in \(n\) dimensions, \(n\geq 3\), respectively.
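For reference, the two kernels entering (1.9) and (1.10) are easy to evaluate directly; the sketch below (an added illustration) implements them, where \(\omega_{n}\) is taken to be the volume of the unit ball in \(\mathbb{R}^{n}\), an assumption under which \(c_{3}=1/(4\pi)\).

```python
import numpy as np
from math import gamma, pi

def gaussian_kernel(x, t, n):
    """Gamma(x,t) = (4*pi*t)^(-n/2) * exp(-|x|^2/(4t)), the heat kernel in R^n."""
    x = np.asarray(x, dtype=float)
    return (4 * pi * t) ** (-n / 2) * np.exp(-np.dot(x, x) / (4 * t))

def newtonian_kernel(x, n):
    """N(x) = -c_n |x|^(2-n) with c_n = 1/(n*(n-2)*omega_n); omega_n = volume of the unit ball."""
    x = np.asarray(x, dtype=float)
    omega_n = pi ** (n / 2) / gamma(n / 2 + 1)
    c_n = 1.0 / (n * (n - 2) * omega_n)
    return -c_n * np.linalg.norm(x) ** (2 - n)

# Example: values at a sample point in dimension n = 3.
print(gaussian_kernel([1.0, 0.0, 0.5], 0.2, 3), newtonian_kernel([1.0, 0.0, 0.5], 3))
```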
Next, we introduce the notion of weak solution for the Stokes system (1.1).
**Definition 1.2**.: _Let \(T\in(0,\infty)\) and \(f\in L^{q}_{t}L^{p}_{x}(\mathbb{R}^{n}_{+}\times(0,T))\) for \(1<p,q<\infty\). We say that a vector field \(w\in L^{2}(0,T;\dot{H}^{1}(\mathbb{R}^{n}_{+}))\cap L^{\infty}(0,T;L^{2}( \mathbb{R}^{n}_{+}))\) is a weak solution of the Stokes system (1.1), if the following equality is satisfied:_
\[\int_{0}^{T}\int_{\mathbb{R}^{n}_{+}}\nabla w:\nabla\Phi dxdt=\int_{0}^{T} \int_{\mathbb{R}^{n}_{+}}\left(w\cdot\Phi_{t}+f\cdot\Phi\right)dxdt \tag{1.11}\]
_for every vector field \(\Phi\in C^{2}_{c}(\mathbb{R}^{n}_{+}\times[0,T))\) with \(\mathrm{div}\,\Phi=0\), and in addition, for every scalar function \(\Psi\in C^{1}_{c}(\overline{\mathbb{R}^{n}_{+}})\)_
\[\int_{\mathbb{R}^{n}_{+}}w(x,t)\cdot\nabla\Psi(x)dx=0\quad\text{for all}\quad 0 <t<T.\]
_Furthermore, for every vector field \(\varphi\in C^{0}_{c}(\overline{\mathbb{R}^{n}_{+}})\)_
\[\lim_{t\to 0}\int_{\mathbb{R}^{n}_{+}}w(x,t)\cdot\varphi(x)dx=0\]
From now on, for simplicity, we assume that \(T=1\), without loss of generality, in Definition 1.2. The concept of weak solutions can be relaxed by removing the restriction that solutions belong to energy class, i.e. \(L^{2}(0,1;\dot{H}^{1}(\mathbb{R}^{n}_{+}))\cap L^{\infty}(0,1;L^{2}(\mathbb{R }^{n}_{+}))\). Indeed, for comparison, we also introduce a notion of _very weak solutions_ (see Definition 2.1 in Section 2).
The main objective of the paper is to construct a weak solution of the Stokes system (1.1) with singular behavior near the boundary. To be more precise, normal derivatives of the weak solutions are unbounded at the boundary away from the support of \(f\), although the solutions are in the energy class and even locally bounded.
**Notation 1**.: _Let \(i\) be an integer with \(1\leq i\leq n-1\) and \(i\neq 2\). We introduce, for convenience, a set \(A_{i}\subset\mathbb{R}^{n-1}\) defined by_
\[A_{i}=\left\{x^{\prime}\in\mathbb{R}^{n-1}:\frac{1}{2}|x_{i}|\leq|x_{2}|\leq 2 |x_{i}|,\ |x^{\prime}|^{2}\leq 2\big{(}|x_{i}|^{2}+|x_{2}|^{2}\big{)},\ |x_{i}|,|x_{2}|>2\right\}. \tag{1.12}\]
_We split \(A_{i}\) into two disjoint sets, denoted by \(A_{i1}\) and \(A_{i2}\), as follows:_
\[A_{i1}=A_{i}\cap\left\{x^{\prime}\in\mathbb{R}^{n-1}:x_{i}x_{2}>0\right\}, \qquad A_{i2}=A_{i}\cap\left\{x^{\prime}\in\mathbb{R}^{n-1}:x_{i}x_{2}<0 \right\}. \tag{1.13}\]
_We also denote \(B_{i}:=B_{i1}\cup B_{i2}\), where \(B_{i1}\) and \(B_{i2}\) are defined by_
\[B_{i1}=\left\{x^{\prime}\in\mathbb{R}^{n-1}\,\big{|}\,\frac{1}{4\sqrt{n}}\,|x^ {\prime}|>|x_{2}|,\ 2<|x_{i}|<\infty\right\}, \tag{1.14}\]
\[B_{i2}=\left\{x^{\prime}\in\mathbb{R}^{n-1}\,\big{|}\,4\sqrt{n}\,\big{|}x^{\prime} \big{|}<|x_{2}|,\ 2<|x_{2}|<\infty\right\}. \tag{1.15}\]
A two dimensional cartoon of sets defined above, for example, is pictured in the Appendix 7.3.
Now we are ready to state the first main result.
**Theorem 1.3**.: _Let \(f\) be given in Assumption 1, and \(A_{i}\) and \(B_{i}\) disjoint sets defined in Notation 1._
* _If_ \(0<\beta<\frac{1}{2}\)_, then the solution_ \(w\) _defined in (_1.7_) of the Stokes system (_1.1_) becomes the weak solution satisfying_ (1.16) \[\|w\|_{L^{\infty}_{t}L^{2}_{x}(\mathbb{R}^{n}_{+}\times(0,1))}+\|D_{x}w\|_{L^{ 2}(\mathbb{R}^{n}_{+}\times(0,1))}\leq c=c(\|f\|_{L^{q_{1}}_{t}L^{p_{1}}_{x}( \mathbb{R}^{n}_{+}\times(0,1))}),\] _where_ \(q_{1}\in[1,\frac{1}{\alpha})\) _and_ \(p_{1}\in[1,\frac{1}{\beta})\)_._
* _If_ \(q>6\) _and_ \(2+\frac{3}{q}<2\alpha+\beta\)_, then normal derivatives of_ \(w\) _are singular on any subset of_ \(A_{i}\cup B_{i}\)_, i.e._ (1.17) \[\|\partial_{x_{n}}w\|_{L^{l}(D\times(0,1)\times(0,1))}=\infty,\qquad\text{ for any $l\geq q$ and $D\subset A_{i}\cup B_{i}$},\]
**Remark 1.4**.:
* _We note that in case_ \(6<q<\infty\)_, there are_ \(\alpha\in(0,1)\) _and_ \(\beta\in(0,1/2)\) _such that_ \(2+\frac{3}{q}<2\alpha+\beta\)_, and thus_ \[\left\{(\alpha,\beta)\in(0,1)\times(0,\frac{1}{2})\,:\,2+\frac{3}{q}<2\alpha+ \beta\right\}\neq\emptyset.\]
* _We remark that if we do not require that the solution belongs to the energy class and instead allow it to be a very weak solution (see Definition_ 2.1_), then it is unnecessary to assume that_ \(0<\beta<\frac{1}{2}\)_, and thus it is possible to construct a very weak solution_ \(u\) _such that_ \(\nabla u\) _becomes unbounded in_ \(L^{q}_{\mathrm{loc}}\)_,_ \(q>3\)_, near the boundary away from the support of_ \(f\)_. It turns out that such examples show singular behaviors similar to those of the example constructed in_ _[_10_]__. Since our concern is about weak solutions, we are not going to pursue the matter of very weak solutions in this paper._
Secondly, we similarly analyze pressure both globally and locally, and obtain the following:
**Theorem 1.5**.: _Let \(\alpha\), \(\beta\) and \(f\) be the numbers and the vector field in Theorem 1.3. Let \(\Pi\) be a pressure associated with the weak solution \(w\) of the Stokes system in Theorem 1.3, defined by (1.8)._
* _Let_ \(p>\frac{2n}{n-1}\) _and_ \(q>1\)_. Suppose that_ \(\alpha\in(0,1)\) _and_ \(\beta\in(0,\frac{1}{2})\) _are numbers satisfying_ \[\beta>\frac{n}{(n-1)p},\qquad 2\alpha+n\beta<\frac{2}{q}+\frac{n}{p}+1.\] _Then, the pressure_ \(\Pi\) _is globally bounded in_ \(L^{q}_{t}L^{p}_{x}\) _and satisfies_ (1.18) \[\|\Pi\|_{L^{q}(0,1;L^{p}(\mathbb{R}^{n}_{+}))}\leq c=c(\|f\|_{L^{q_{1}}_{t}L^{p_{1}}_{x}(\mathbb{R}^{n}_{+}\times(0,1))}),\] _where_ \(q_{1}\in[1,\frac{1}{\alpha})\) _and_ \(p_{1}\in[1,\frac{1}{\beta})\)_._
* _If_ \(q\in(1,\infty)\) _satisfies_ (1.19) \[1+\frac{2}{q}<2\alpha+\beta,\] _then_ \(\Pi\) _is locally unbounded in_ \(L^{q}_{x,t}\)_, that is_ (1.20) \[\|\Pi\|_{L^{q}(\{|x^{\prime}|>2\}\times(a,b)\times(0,1))}=\infty\] _for any_ \(a,b\) _with_ \(0\leq a<b<\infty\)_._
**Remark 1.6**.: _As mentioned earlier, in the proof of Theorem 1.3, \(\beta\in(0,\frac{1}{2})\) and \(\alpha\in(0,1)\) are imposed, which implies \(q>\frac{4}{3}\) in (ii) of Theorem 1.5. If, however, the result is extended to very weak solutions, the condition \(\beta\in(0,\frac{1}{2})\) can be relaxed to \(\beta\in(0,1)\), and thus (1.19) is valid for any \(q>1\). In addition, the condition \(p>\frac{2n}{n-1}\) in (i) can be relaxed to \(p>\frac{n}{n-1}\) in that case._
**Remark 1.7**.: _It is not difficult to see that (1.18) and (1.20) are compatible. Indeed, suppose that \(\frac{2n}{n-1}<p<\infty\) and \(1<q<\infty\). Then, one can easily check that the two sets \(C\) and \(D\) below have no intersection, i.e. \(C\cap D=\emptyset\), where_
\[C=\left\{(\alpha,\beta)\in(0,1)\times(0,1)\ :\ 2\alpha+n\beta< \frac{2}{q}+\frac{n}{p}+1\right\},\] \[D=\left\{(\alpha,\beta)\in(0,1)\times(0,1)\ :\ \beta>\frac{n}{(n-1)p},\ 1+ \frac{2}{q}<2\alpha+\beta\right\}.\]
_Since its verification is straightforward, we skip its details._
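One way to carry out the omitted verification is the following short computation: if \((\alpha,\beta)\in D\), then
\[2\alpha+n\beta=(2\alpha+\beta)+(n-1)\beta>1+\frac{2}{q}+(n-1)\cdot\frac{n}{(n-1)p}=1+\frac{2}{q}+\frac{n}{p},\]
so \((\alpha,\beta)\notin C\), and hence \(C\cap D=\emptyset\).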
The weak solutions constructed in Theorem 1.3 are not \(C^{1}\) but are indeed Hölder continuous up to the boundary. Optimal regularity up to the boundary is stated in the next theorem for the solution of the Stokes system under consideration.
**Theorem 1.8**.: _Let \(f\) be given in Assumption 1, and let \(0<\alpha,\ \beta<1\) be such that \(\epsilon_{0}:=3-2\alpha-\beta\in(0,2)\). Set \(\mathcal{R}=\left\{(x^{\prime},x_{n})\in\overline{\mathbb{R}_{+}^{n}}\,:\,|x^{\prime}|\geq 2,\ x_{n}\geq 0\right\}\). Suppose that \(w\) is the solution of the Stokes system (1.1) defined by (1.7). Then, \(w\) is Hölder continuous in \(\mathcal{R}\times(0,1)\) with the optimal exponent \(\epsilon_{0}\), that is,_
\[w\in C^{\epsilon_{0},\frac{1}{2}\epsilon_{0}}(\mathcal{R}\times(0,1)), \tag{1.21}\]
_and, in case that \(\epsilon>\epsilon_{0}\), we have_
\[w\notin L^{\infty}(0,1;C^{\epsilon}(\mathcal{R}))\quad\text{and}\quad w\notin C^{\frac{\epsilon}{2}}(0,1;L^{\infty}(\mathcal{R})). \tag{1.22}\]
**Remark 1.9**.: _The solution \(w\) constructed in (ii) of Theorem 1.3 satisfies \(w\in C^{\epsilon_{0},\frac{1}{2}\epsilon_{0}}(\mathcal{R}\times(0,1))\) with \(0<\epsilon_{0}<1\) and \(w\notin C^{\epsilon,\frac{1}{2}\epsilon}(\mathcal{R}\times(0,1))\) for all \(\epsilon>\epsilon_{0}\), because \(2\alpha+\beta>2\). On the other hand, in the case of (ii) of Theorem 1.5, since \(\epsilon_{0}:=3-2\alpha-\beta<2(1-\frac{1}{q})\), \(q\in(1,\infty)\), it follows that if \(q\leq 2\), then \(w\) is Hölder continuous in \(\mathcal{R}\times(0,1)\) with the exponent \(\epsilon_{0}\in(0,1)\)._
_On the other hand, in case \(q>2\), \(\nabla w\) can even be Hölder continuous, because \(\epsilon_{0}\) possibly belongs to \((1,2(1-\frac{1}{q}))\). We remark that in the interior, the solution \(w\) is spatially smooth, although it is only Hölder continuous in the temporal variable (see Proposition 3.7)._
Lastly, we consider the Navier-Stokes equations in a half space.
\[\left\{\begin{array}{l}u_{t}-\Delta u+\operatorname{div}\left(u\otimes u \right)+\nabla p=f,\quad\operatorname{div}u=0\qquad\mathbb{R}_{+}^{n}\times(0,1),\\ u|_{x_{n}=0}=0,\quad u|_{t=0}=0.\end{array}\right. \tag{1.23}\]
Via the method of perturbation, we construct a weak solution of the Navier-Stokes equations whose normal derivatives are unbounded near the boundary. First we specify the values of some parameters used in the construction. More precisely, we choose positive numbers \(s\) and \(r\) satisfying
\[\max\left\{\frac{n+2}{2},4\right\}<s<n+2,\qquad\frac{s(n+2)}{n+2-s}<r<\infty. \tag{1.24}\]
Since \(2s<\frac{s(n+2)}{n+2-s}\), it is obvious that \(r>2s\). It also follows directly from \(n>2\) and (1.24) that
\[2+\frac{n+2}{r}<1+\frac{n+2}{s}<2+\frac{n}{2}. \tag{1.25}\]
We now fix \(\alpha\) and \(\beta\) as follows:
\[\alpha=1-\frac{n+2}{4r}+\frac{\delta}{2},\qquad\beta=\frac{n+2}{2r}-\epsilon, \tag{1.26}\]
where \(\delta\) and \(\epsilon\) are any numbers satisfying \(0<\epsilon<\delta<n\epsilon<\frac{n+2}{2r}\). It is immediate that \(\beta<\frac{1}{2}\), since \(r>2s\). We take \(r_{0}\in(1,\infty)\) with \(\frac{3}{r_{0}}<\delta-\epsilon\). Let \(f\) be the function introduced in Assumption 1 with \(\alpha\) and \(\beta\) defined above. Owing to (1.24)-(1.26), we can see that
\[2+\frac{3}{r_{0}}<2\alpha+\beta,\qquad 2\alpha+n\beta<2+\frac{n+2}{r}, \tag{1.27}\]
and, in addition, recalling (1.6), (1.17), (2.6) and (2.7), it follows for the solution \(w\) of the Stokes system (1.1) that
\[w\in L^{2}(0,1;\dot{H}^{1}(\mathbb{R}_{+}^{n}))\cap L^{\infty}(0,1;L^{2}( \mathbb{R}_{+}^{n})),\quad w\in L^{r}(\mathbb{R}_{+}^{n}\times(0,1)), \tag{1.28}\]
\[\nabla w\in L^{s}(\mathbb{R}_{+}^{n}\times(0,1)),\quad\nabla w\notin L^{r_{0} }((A_{i}\cup B_{i})\times(0,1)\times(0,1)). \tag{1.29}\]
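As a concrete sanity check of the constraints (1.24)-(1.27) (the numerical values below are illustrative choices made here, not values fixed in the text), take \(n=3\), \(s=\frac{9}{2}\), \(r=50\), \(\epsilon=\frac{1}{100}\), \(\delta=\frac{2}{100}\) and \(r_{0}=400\). Then \(4<s<5\), \(\frac{s(n+2)}{n+2-s}=45<r\), \(0<\epsilon<\delta<3\epsilon<\frac{n+2}{2r}=\frac{1}{20}\) and \(\frac{3}{r_{0}}<\delta-\epsilon\), while (1.26) gives
\[\alpha=1-\frac{5}{200}+\frac{1}{100}=0.985,\qquad\beta=\frac{5}{100}-\frac{1}{100}=0.04,\]
so that \(2+\frac{3}{r_{0}}=2.0075<2.01=2\alpha+\beta\) and \(2\alpha+n\beta=2.09<2.1=2+\frac{n+2}{r}\), in agreement with (1.27).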
We are now ready to state the main result for the Navier-Stokes equations.
**Theorem 1.10**.: _Let \(f\) be given in Assumption 1, and let \(\alpha,\beta,s,r\) and \(r_{0}\) be the numbers specified in (1.24)-(1.27). Then there exists a weak solution of the Navier-Stokes equations (1.23) such that_
\[\nabla u\in L^{s}(\mathbb{R}_{+}^{n}\times(0,1)),\qquad\nabla u\notin L^{r_{0 }}((A_{i}\cup B_{i})\times(0,1)\times(0,1)), \tag{1.30}\]
_where \(A_{i}\) and \(B_{i}\) are defined in Notation 1._
**Remark 1.11**.: _Here we do not provide the definition of weak solutions of the Navier-Stokes equations, since they can be defined similarly to the case of the Stokes system given in Definition 1.2. In that case, the convection term has to be taken into account, so that \(\int_{0}^{1}\int_{\mathbb{R}^{n}_{+}}u\otimes u:\nabla\Phi dxdt\) is included in (1.11)._
The paper is organized as follows: In Section 2, we review the solution formula of the Stokes system with nonzero force in a half space and prove a lemma that is useful for our analysis. Section 3 is devoted to stating a series of propositions that constitute parts of the main results. Proofs of the propositions are given in Section 4. In Section 5 we present the proofs of Theorem 1.3, Theorem 1.5 and Theorem 1.8. The case of the Navier-Stokes equations is considered and the proof of Theorem 1.10 is presented in Section 6. In the Appendix, a figure gives a two-dimensional cartoon of the sets, parts of the boundary, where singular solutions of the Stokes and Navier-Stokes equations are constructed. In addition, proofs of technical lemmas such as Lemma 2.2 and Lemma 3.1 are provided there.
## 2. Preliminaries
In this section, we recall the notion of very weak solutions by comparison with weak solutions. We then recall the solution formula for the Stokes system (1.1) and some related estimates for the solution. Finally, we state estimates for an integral quantity, which will be useful for our main results.
As a notational convention, when two functions \(f\) and \(g\) are comparable we write \(f\approx g\), which means \(c_{1}g\leq f\leq c_{2}g\) for some positive constants \(c_{1}\) and \(c_{2}\).
The notion of weak solutions has already been introduced; here we describe very weak solutions, which form a slightly more general class.
**Definition 2.1**.: _Let \(f\in L^{q}_{t}L^{p}_{x}(\mathbb{R}^{n}_{+}\times(0,1))\) for \(1<p,q<\infty\). A vector field \(w\in L^{1}_{\rm loc}(\mathbb{R}^{n}_{+}\times(0,1))\) is called a very weak solution of the Stokes system (1.1), if the following equality is satisfied:_
\[-\int_{0}^{1}\int_{\mathbb{R}^{n}_{+}}w\cdot\Delta\Phi dxdt=\int_{0}^{1}\int_{ \mathbb{R}^{n}_{+}}\left(w\cdot\Phi_{t}+f\cdot\Phi\right)dxdt\]
_for each \(\Phi\in C^{2}_{c}(\mathbb{R}^{n}_{+}\times[0,1))\) with \(\mbox{\rm div}\,\Phi=0\), and in addition, for each \(\Psi\in C^{1}_{c}(\overline{\mathbb{R}^{n}_{+}})\)_
\[\int_{\mathbb{R}^{n}_{+}}w(x,t)\cdot\nabla\Psi(x)dx=0\quad\mbox{ for all }\quad 0<t<1.\]
_Furthermore, for each vector field \(\varphi\in C^{1}_{c}(\overline{\mathbb{R}^{n}_{+}})\)_
\[\lim_{t\to 0}\int_{\mathbb{R}^{n}_{+}}w(x,t)\cdot\varphi(x)dx=0\]
For convenience, recalling the formula (1.10), we decompose the pressure \(\Pi(x,t)\) as follows:
\[\Pi(x,t)=4\left(\Pi^{\mathcal{G}}(x,t)+\Pi^{\mathcal{B}}(x,t)\right), \tag{2.1}\]
where
\[\Pi^{\mathcal{G}}(x,t)=\int_{0}^{t}\int_{\mathbb{R}_{+}^{n}}f_{2}(y,\tau)\int_ {\mathbb{R}^{n-1}}\Gamma(z^{\prime}-y,t-\tau)D_{x_{2}}D_{x_{n}}N(x-z^{\prime}) dz^{\prime}dyd\tau, \tag{2.2}\]
\[\Pi^{\mathcal{B}}(x,t)=\int_{0}^{t}\int_{\mathbb{R}_{+}^{n}}f_{2}(y,s)\int_{ \mathbb{R}^{n-1}}D_{y_{n}}\Gamma(z^{\prime}-y,t-s)D_{x_{2}}N(x-z^{\prime})dz^{ \prime}dyds. \tag{2.3}\]
For notational convenience, we write the second term on the right-hand side of (1.9) as
\[L_{ij}(x,y,t):=-4(1-\delta_{jn})D_{x_{j}}\int_{0}^{x_{n}}\int_{\mathbb{R}^{n-1 }}\Gamma(x-y^{*}-z,t)D_{z_{i}}N(z)dz. \tag{2.4}\]
It was shown in [11] that \(L_{ij}\) satisfies that for all \(k\in\mathbb{N}\cup\{0\}\), \(l=(l^{\prime},l_{n})\in(\mathbb{N}\cup\{0\})^{n}\),
\[|D_{t}^{k}D_{x_{n}}^{l_{n}}D_{x^{\prime}}^{l^{\prime}}L_{ij}(x,y,t)|\leq\frac{ce^{-\frac{y_{n}^{2}}{t}}}{t^{k}(t+x_{n}^{2})^{\frac{l_{n}}{2}}(|x-y^{*}|^{2}+t)^{\frac{n+|l^{\prime}|}{2}}},\quad 1\leq i,j\leq n. \tag{2.5}\]
From now on, we write \(Q=\mathbb{R}_{+}^{n}\times(0,1)\), unless any confusion is to be expected. Next, we recall the so-called maximal regularity of the Stokes system (1.1), which reads as follows: if \(f\in L_{t}^{q_{1}}L_{x}^{p_{1}}(Q)\) with \(1<p_{1},q_{1}<\infty,\ 1<p<\infty,\ 1<q\leq\infty\), then
\[\|w\|_{L_{t}^{q}L_{x}^{p}(Q)}\leq c\,\|f\|_{L_{t}^{q_{1}}L_{x}^{p_{1}}(Q)}\,,\qquad\frac{2}{q}+\frac{n}{p}>\frac{2}{q_{1}}+\frac{n}{p_{1}}-2, \tag{2.6}\] \[\|D_{x}w\|_{L_{t}^{q}L_{x}^{p}(Q)}\leq c\,\|f\|_{L_{t}^{q_{1}}L_{x}^{p_{1}}(Q)}\,,\qquad\frac{2}{q}+\frac{n}{p}>\frac{2}{q_{1}}+\frac{n}{p_{1}}-1. \tag{2.7}\]
In the next lemma we show upper and lower bound estimates of an integral quantity related to a singular integral of the one-dimensional heat kernel.
**Lemma 2.2**.: _Let \(\beta<1\), \(0<\alpha<1\) and \(\gamma\in\mathbb{R}\). For \(x_{n}>0\) and \(t>\frac{1}{2}\), we set_
\[\mathcal{G}(x_{n},t):=\int_{\frac{1}{2}}^{t}\int_{0}^{2}y_{n}^{-\beta}(\tau-\frac{1}{2})^{-\alpha}(t-\tau)^{\gamma}e^{-\frac{(x_{n}+y_{n})^{2}}{4(t-\tau)}}dy_{n}d\tau.\]
_If \(\gamma-\frac{\beta}{2}>-\frac{3}{2}\), then there exist positive constants \(c_{i}\), \(i=1,2\) such that_
\[c_{1}(t-\frac{1}{2})^{\frac{3}{2}-\frac{\beta}{2}-\alpha+\gamma}e^{-\frac{x_ {n}^{2}}{2(t-\frac{1}{2})}}\leq\mathcal{G}(x_{n},t)\leq c_{2}(t-\frac{1}{2})^{ \frac{3}{2}-\frac{\beta}{2}-\alpha+\gamma}e^{-\frac{x_{n}^{2}}{8(t-\frac{1}{2 })}}. \tag{2.8}\]
_If \(\gamma-\frac{\beta}{2}\leq-\frac{3}{2}\), then there exist positive constants \(c_{i}\), \(i=3,4\) such that_
\[c_{3}(t-\frac{1}{2})^{-\alpha}x_{n}^{3-\beta+2\gamma}e^{-\frac{x_{n}^{2}}{2(t -\frac{1}{2})}}\leq\mathcal{G}(x_{n},t)\leq c_{4}(t-\frac{1}{2})^{-\alpha}x_{n }^{3-\beta+2\gamma}e^{-\frac{x_{n}^{2}}{8(t-\frac{1}{2})}}. \tag{2.9}\]
The proof of Lemma 2.2 will be presented in Appendix 7.1.
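Before proceeding, we record a heuristic scaling computation (not a substitute for the proof in Appendix 7.1) indicating where the exponent in (2.8) comes from. The \(y_{n}\)-integral effectively lives on the scale \(y_{n}\lesssim\sqrt{t-\tau}\), so
\[\int_{0}^{2}y_{n}^{-\beta}e^{-\frac{(x_{n}+y_{n})^{2}}{4(t-\tau)}}dy_{n}\approx(t-\tau)^{\frac{1-\beta}{2}}e^{-c\frac{x_{n}^{2}}{t-\tau}},\]
and therefore
\[\mathcal{G}(x_{n},t)\approx\int_{\frac{1}{2}}^{t}(\tau-\tfrac{1}{2})^{-\alpha}(t-\tau)^{\gamma+\frac{1-\beta}{2}}e^{-c\frac{x_{n}^{2}}{t-\tau}}d\tau\approx(t-\tfrac{1}{2})^{\frac{3}{2}-\frac{\beta}{2}-\alpha+\gamma}e^{-c\frac{x_{n}^{2}}{t-\frac{1}{2}}},\]
provided \(\gamma+\frac{1-\beta}{2}>-1\), that is \(\gamma-\frac{\beta}{2}>-\frac{3}{2}\); this is exactly the threshold separating (2.8) from (2.9).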
## 3. Stokes equations with external force in a half-space
Let \(f\) be the external force defined in **Assumption 1**. For convenience of computations, we decompose \(w\) by \(w=V+W\), where
\[V_{i}(x,t):=\int_{0}^{t}\int_{\mathbb{R}_{+}^{n}}\big{(}\Gamma(x-y,t-s)-\Gamma(x-y^{*},t-s)\big{)}f_{i}(y,s)dyds, \tag{3.1}\] \[W_{i}(x,t):=\int_{0}^{t}\int_{\mathbb{R}_{+}^{n}}L_{i2}(x,y,t-s)f_{2}(y,s)dyds. \tag{3.2}\]
We note that
\[D_{x_{n}}L_{i2}(x,y,t)=D_{x_{2}}L_{ni}(x,y,t)-4D_{x_{2}}\int_{\mathbb{R}^{n-1}} \Gamma(x-y^{*}-z^{\prime},t)D_{z_{i}}N(z^{\prime},0)dz^{\prime}. \tag{3.3}\]
Denoting, for convenience,
\[W_{i}^{\mathcal{G}}(x,t)=\int_{0}^{t}\int_{\mathbb{R}_{+}^{n}}L_{ni}(x,y,t- \tau)f_{2}(y,\tau)dyd\tau,\qquad i=1,2,\cdots,n-1, \tag{3.4}\]
it follows that
\[D_{x_{n}}w_{i}=D_{x_{n}}V_{i}+D_{x_{n}}W_{i}=D_{x_{n}}V_{i}+D_{x_{2}}W_{i}^{ \mathcal{G}}+\mathcal{B}_{i}^{w},\qquad i=1,2,\cdots,n-1, \tag{3.5}\]
where \(\mathcal{B}_{i}^{w}\) is defined as
\[\mathcal{B}_{i}^{w}(x,t)=-4D_{x_{2}}\int_{0}^{t}\int_{\mathbb{R}_{+}^{n}}f_{2 }(y,s)\int_{\mathbb{R}^{n-1}}\Gamma(x-y^{*}-z^{\prime},t-s)D_{z_{i}}N(z^{ \prime},0)dz^{\prime}dyds. \tag{3.6}\]
It turns out that the above term \(\mathcal{B}_{i}^{w}\) in (3.6) is the worst term in estimating derivatives for \(w\) (see Proposition 3.2, Proposition 3.3, Proposition 3.7 and Proposition 3.8). For the pressure, \(\Pi^{\mathcal{B}}\) is more singular than \(\Pi^{\mathcal{G}}\) (see Proposition 3.4, Proposition 3.5 and Proposition 3.8).
On the other hand, since \(V\) solves the heat equation in a half space with homogeneous boundary condition, it is worth noting, due to classical regularity theory, that for \(x\in\mathcal{R}\) (\(\mathcal{R}\) is defined in Theorem 1.8), \(t>\frac{1}{2}\) and \(i=1,2,\cdots,n\),
\[|D_{x}^{l}D_{t}^{m}V_{i}(x,t)| =\left|\int_{\frac{1}{2}}^{t}\int_{B_{1}^{+}}D_{x}^{l}D_{t}^{m} \Gamma(x-y,t-s)f_{i}(y,s)dyds\right|\] \[\leq c\int_{\frac{1}{2}}^{t}(s-\frac{1}{2})^{-\alpha}(t-s)^{- \frac{n}{2}-m-\frac{l}{2}}e^{-\frac{c}{t-s}}ds\ \|g\|_{L^{1}(\mathbb{R}_{+}^{n})} \tag{3.7}\] \[\leq c_{k}(t-\frac{1}{2})^{k}\|g\|_{L^{1}(\mathbb{R}_{+}^{n})} \quad\text{ for all }\quad k\geq 0.\]
We start with a key lemma, which will be useful in the propositions below. Its verification will be given in Appendix 7.2.
**Lemma 3.1**.: _Let \(\Gamma^{\prime}\) and \(N\) be the \((n-1)\)-dimensional Gaussian kernel and the \(n\)-dimensional Newtonian kernel, respectively. For \(x^{\prime}\neq 0\), \(x_{n}\geq 0\), \(t>0\), \(1\leq i\leq n-1\), \(k\in(\mathbb{N}\cup\{0\})^{n-1}\) and \(l\geq 0\), if \(|x^{\prime}|\geq\max\big{\{}1,\sqrt{t}\big{\}}\), it follows that_
\[D^{k}_{x^{\prime}}D^{l}_{x_{n}}\int_{\mathbb{R}^{n-1}}\Gamma^{\prime}(x^{\prime}- z^{\prime},t)N(z^{\prime},x_{n})dz^{\prime}=D^{k}_{x^{\prime}}D^{l}_{x_{n}}N(x^{ \prime},x_{n})+J_{kl}(x,t), \tag{3.8}\]
_such that there exists \(c=c(k,l)>0\), independent of \(x\) and \(t\), satisfying_
\[|J_{kl}(x,t)|\leq ct^{\frac{1}{2}}. \tag{3.9}\]
Next, we consider a convolution of second derivatives of the Newtonian kernel \(N\) with \(g^{\mathcal{T}}\) defined in **Assumption 1**. More precisely, for \(x\in\mathcal{R}\), we define
\[\phi_{i}(x^{\prime},x_{n})=\int_{|y^{\prime}|<1}D_{x_{i}}D_{x_{2}}N(x^{\prime }-y^{\prime},x_{n})g^{\mathcal{T}}(y^{\prime})dy^{\prime}. \tag{3.10}\]
We will show that \(\phi_{i}\) is strictly positive or negative, depending on the regions under consideration.
Firstly, in case of \(x^{\prime}\in A_{i1}\) with \(i\neq 2\), since \(y^{\prime}\in B^{\prime}_{1}\), we note that
\[(x_{i}-y_{i})(x_{2}-y_{2})\geq\frac{1}{4}x_{i}x_{2}\geq\frac{1}{64}(x_{i}^{2}+ x_{2}^{2})\geq\frac{1}{128}|x^{\prime}|^{2}\geq\frac{1}{512}|x^{\prime}-y^{ \prime}|^{2}. \tag{3.11}\]
Conversely, if \(x^{\prime}\in A_{i2}\), then we can see that
\[(x_{i}-y_{i})(x_{2}-y_{2})\leq\frac{1}{4}x_{i}x_{2}\leq-\frac{1}{64}(x_{i}^{2 }+x_{2}^{2})\leq-\frac{1}{128}|x^{\prime}|^{2}\leq-\frac{1}{512}|x^{\prime}-y^ {\prime}|^{2}. \tag{3.12}\]
On the other hand, in case that \(x^{\prime}\in B_{i1}\) and \(y^{\prime}\in B^{\prime}_{1}\), it follows that
\[|x^{\prime}-y^{\prime}|^{2}-n(x_{2}-y_{2})^{2}\geq\frac{1}{64}|x^{\prime}-y^{ \prime}|^{2}. \tag{3.13}\]
Indeed, recalling (1.14), it is straightforward that
\[|x^{\prime}-y^{\prime}|^{2}-n(x_{2}-y_{2})^{2}\geq\frac{1}{4}|x^{\prime}|^{2} -2n|x_{2}|^{2}\geq\frac{1}{16}|x^{\prime}|^{2}\geq\frac{1}{64}|x^{\prime}-y^{ \prime}|^{2}.\]
Similarly, we can see that if \(x^{\prime}\in B_{i2}\), then
\[|x^{\prime}-y^{\prime}|^{2}-n(x_{2}-y_{2})^{2}\leq-\frac{1}{64}|x^{\prime}-y^{ \prime}|^{2}.\]
Hence, for \(x^{\prime}\in A_{i1}\) with \(i\neq 2\), we observe that
\[\phi_{i}(x^{\prime},x_{n}) =-c\int_{\mathbb{R}^{n-1}}g^{\mathcal{T}}(y^{\prime})\frac{(x_{i} -y_{i})(x_{2}-y_{2})}{|x-y^{\prime}|^{n+2}}dy^{\prime} \tag{3.14}\] \[\leq-c\int_{\mathbb{R}^{n-1}}g^{\mathcal{T}}(y^{\prime})\frac{1}{ |x-y^{\prime}|^{n}}dy^{\prime},\]
where we used (3.11). Analogously, in case that \(x^{\prime}\in A_{i2}\) with \(i\neq 2\), it follows that
\[\phi_{i}(x^{\prime},x_{n}) =-c\int_{\mathbb{R}^{n-1}}g^{\mathcal{T}}(y^{\prime})\frac{(x_{i}-y_{i})(x_{2}-y_{2})}{|x-y^{\prime}|^{n+2}}dy^{\prime} \tag{3.15}\] \[\geq c\int_{\mathbb{R}^{n-1}}g^{\mathcal{T}}(y^{\prime})\frac{1}{|x-y^{\prime}|^{n}}dy^{\prime},\]
where (3.12) is used. Meanwhile, for \(x^{\prime}\in B_{i1}\) with \(i\neq 2\) we also note via (3.13) that
\[\phi_{2}(x^{\prime},x_{n}) =c\int_{\mathbb{R}^{n-1}}g^{\mathcal{T}}(y^{\prime})\frac{|x-y^{\prime}|^{2}-n(x_{2}-y_{2})^{2}}{|x-y^{\prime}|^{n+2}}dy^{\prime} \tag{3.16}\] \[\geq c\int_{\mathbb{R}^{n-1}}g^{\mathcal{T}}(y^{\prime})\frac{1}{|x-y^{\prime}|^{n}}dy^{\prime}.\]
Likewisely, for \(x^{\prime}\in B_{i2}\) with \(i\neq 2\) we observe that
\[\phi_{2}(x^{\prime},x_{n}) =-c\int_{\mathbb{R}^{n-1}}g^{\mathcal{T}}(y^{\prime})\frac{|x-y^{\prime}|^{2}-n(x_{2}-y_{2})^{2}}{|x-y^{\prime}|^{n+2}}dy^{\prime} \tag{3.17}\] \[\leq-c\int_{\mathbb{R}^{n-1}}g^{\mathcal{T}}(y^{\prime})\frac{1}{|x-y^{\prime}|^{n}}dy^{\prime}.\]
The next proposition shows pointwise estimates of the worst term \(\mathcal{B}_{i}^{w}\) defined in (3.6). In particular, with the aid of (3.14)-(3.17), lower or upper bounds are provided on mutually disjoint sets near the boundary. All proofs of the propositions in this section will be provided in Section 4.
**Proposition 3.2**.: _Let \(1\leq i\leq n-1\). Suppose that \(\mathcal{B}^{w}\) is defined in (3.6) and \(\phi_{i}\) is defined in (3.10). Then, for \(x\in\mathcal{R}\) and \(t>\frac{1}{2}\)_
\[|\mathcal{B}_{i}^{w}(x,t)|\ \geq\left\{\begin{array}{l}c(t-\frac{1}{2})^{1-\frac{ \beta}{2}-\alpha}e^{-\frac{x_{n}^{2}}{2(t-\frac{1}{2})}}\phi_{i}(x^{\prime},0) \chi_{(A_{i1}\cup B_{i1})}+O\left((t-\frac{1}{2})^{\frac{3}{2}-\frac{\beta}{2} -\alpha}\right),\\ c(t-\frac{1}{2})^{1-\frac{\beta}{2}-\alpha}e^{-\frac{x_{n}^{2}}{8(t-\frac{1}{2 })}}\phi_{i}(x^{\prime},0)\chi_{(A_{i2}\cup B_{i2})}+O\left((t-\frac{1}{2})^{ \frac{3}{2}-\frac{\beta}{2}-\alpha}\right).\end{array}\right. \tag{3.18}\]
_More precisely, for \(i\neq 2\) and \(t>\frac{1}{2}\),_
\[\mathcal{B}_{i}^{w}(x,t) \leq-c(t-\frac{1}{2})^{1-\frac{\beta}{2}-\alpha}e^{-\frac{x_{n}^{2}}{2(t-\frac{1}{2})}}\phi_{i}(x^{\prime},0)+c(t-\frac{1}{2})^{\frac{3}{2}-\frac{\beta}{2}-\alpha},\quad x^{\prime}\in A_{i1}, \tag{3.19}\] \[\mathcal{B}_{i}^{w}(x,t) \geq c(t-\frac{1}{2})^{1-\frac{\beta}{2}-\alpha}e^{-\frac{x_{n}^{2}}{8(t-\frac{1}{2})}}\phi_{i}(x^{\prime},0)-c(t-\frac{1}{2})^{\frac{3}{2}-\frac{\beta}{2}-\alpha},\quad x^{\prime}\in A_{i2},\] (3.20) \[\mathcal{B}_{2}^{w}(x,t) \leq-c(t-\frac{1}{2})^{1-\frac{\beta}{2}-\alpha}e^{-\frac{x_{n}^{2}}{2(t-\frac{1}{2})}}\phi_{2}(x^{\prime},0)+c(t-\frac{1}{2})^{\frac{3}{2}-\frac{\beta}{2}-\alpha},\quad x^{\prime}\in B_{i1},\] (3.21) \[\mathcal{B}_{2}^{w}(x,t) \geq c(t-\frac{1}{2})^{1-\frac{\beta}{2}-\alpha}e^{-\frac{x_{n}^{2}}{8(t-\frac{1}{2})}}\phi_{2}(x^{\prime},0)-c(t-\frac{1}{2})^{\frac{3}{2}-\frac{\beta}{2}-\alpha},\quad x^{\prime}\in B_{i2}, \tag{3.22}\]
_where \(A_{i1},A_{i2},B_{i1}\) and \(B_{i2}\) are sets introduced in_ **Notation 1**_._
We can also obtain estimates of higher order derivatives for the term \(\mathcal{B}_{i}^{w}\).
**Proposition 3.3**.: _Suppose that \(\mathcal{B}^{w}\) is defined in (3.6). Let \(1\leq i\leq n-1\). Then, for \(x\in\mathcal{R}\) and \(t>\frac{1}{2}\)_
\[|D_{x^{\prime}}^{k}D_{x_{n}}^{l}\mathcal{B}_{i}^{w}(x,t)|\leq c\left\{ \begin{array}{ll}(t-\frac{1}{2})^{1-\frac{1+\beta+2\alpha}{2}}e^{-\frac{x_{n} ^{2}}{8(t-\frac{1}{2})}},&l=1\\ (t-\frac{1}{2})^{-\alpha}x_{n}^{2-\beta-l}e^{-\frac{x_{n}^{2}}{8(t-\frac{1}{2 })}},&l\geq 2,\end{array}\right. \tag{3.23}\]
_where \(c_{2}\) and \(c_{4}\) are constants in Lemma 2.2. For a given positive integer \(l\), there exists \(c_{l}>0\) such that if \(\sqrt{t-\frac{1}{2}}\leq c_{l}x_{n}\), then_
\[|D^{k}_{x^{\prime}}D^{l}_{x_{n}}\mathcal{B}^{w}_{i}(x,t)|\geq c(t-\frac{1}{2})^ {1-\frac{l+\beta+2\alpha}{2}}e^{-\frac{x_{n}^{2}}{2(t-\frac{1}{2})}}. \tag{3.24}\]
So far we have not mentioned any integrability of the pressure corresponding to the weak solution. In the next proposition, we show that the pressure belongs to certain Lebesgue spaces, in particular globally in space.
**Proposition 3.4**.: _Suppose that \(\Pi\) is given in (1.8) and \(f\in L^{q_{1}}(0,1;L^{p_{1}}(\mathbb{R}^{n}_{+}))\) with \(1<p_{1},q_{1}<\infty\). If \(p\) and \(q\) satisfy \(p>\frac{n}{n-1}p_{1}\) and \(\frac{2}{q_{1}}+\frac{n}{p_{1}}-1\leq\frac{2}{q}+\frac{n}{p}\), then_
\[\|\Pi\|_{L^{q}(0,1;L^{p}(\mathbb{R}^{n}_{+}))}<c\|f_{2}\|_{L^{q_{1}}(0,1;L^{p_ {1}}(\mathbb{R}^{n}_{+}))}. \tag{3.25}\]
The next proposition shows pointwise estimates of spatial derivatives of \(\Pi^{\mathcal{B}}\) near the boundary. For simplicity, \(\mathrm{sgn(a)}\) denotes \(1\) if \(a>0\) and \(-1\) otherwise, and we also denote
\[\psi(x):=\int_{\mathbb{R}^{n-1}}D_{x_{2}}N(x^{\prime}-y^{\prime},x_{n})g^{ \mathcal{T}}(y^{\prime})dy^{\prime}. \tag{3.26}\]
**Proposition 3.5**.: _Suppose \(\Pi^{\mathcal{B}}\) is defined in (2.3). Then, for \(|x_{2}|\geq 2\) and \(\frac{1}{2}<t\leq 1\),_
\[\left|\left|D^{k}_{x^{\prime}}D^{l}_{x_{n}}\Pi^{\mathcal{B}}(x,t)\right|-c(t- \frac{1}{2})^{\frac{1}{2}-\frac{\beta}{2}-\alpha}\left|D^{k}_{x^{\prime}}D^{l} _{x_{n}}\psi(x)\right|\right|\leq c(t-\frac{1}{2})^{1-\frac{\beta}{2}-\alpha}, \qquad k\geq 0. \tag{3.27}\]
_In particular, in case that \(k=l=0\), we have_
\[\left|\Pi^{\mathcal{B}}(x,t)+c\,\mathrm{sgn}(x_{2})\,(t-\frac{1}{2})^{\frac{1 }{2}-\frac{\beta}{2}-\alpha}\psi(x)\right|\leq c(t-\frac{1}{2})^{1-\frac{\beta }{2}-\alpha}, \tag{3.28}\]
_where \(\chi_{A}\) is the characteristic function supported on \(A\) and \(\psi\) is defined in (3.26)._
**Remark 3.6**.: _Let \(1<p_{1}<\frac{n-1}{n}p<\infty\) and \(1<q_{1}<q<\infty\). We denote_
\[C =\{(\alpha,\beta)\in(0,1)\times(0,1)\,|\,\frac{1}{q_{1}}-\frac{1} {q}+\frac{n}{2p_{1}}-\frac{n}{2p}\leq\frac{1}{2},\ \alpha<\frac{1}{q_{1}},\ \beta<\frac{1}{p_{1}}\},\] \[D =\{(\alpha,\beta)\in(0,1)\times(0,1)\,|\,\frac{1}{2}+\frac{1}{q} <\alpha+\frac{\beta}{2}\}.\]
_Then, \(C\cap D=\emptyset\), which indicates that (3.28) does not conflict with (3.25)._
In the next proposition, we state some estimates of the velocity field and the pressure excluding \(\mathcal{B}^{w}_{i}\) and \(\Pi^{\mathcal{B}}\), respectively. These estimates turn out to be less singular than the worst terms \(\mathcal{B}^{w}_{i}\) and \(\Pi^{\mathcal{B}}\), as mentioned earlier.
**Proposition 3.7**.: _Let \(1\leq i\leq n-1\). Suppose that \(w\) and \(\Pi\) are given in (1.7) and (1.8), and \(\mathcal{B}^{w}\) and \(\Pi^{\mathcal{B}}\) are defined in (3.6) and (2.3), respectively. Then, for any \(k,\,l\geq 0\) and \(x\in\mathcal{R}\), \(t>\frac{1}{2}\), we
have_
\[\left|D_{x^{\prime}}^{k}D_{x_{n}}^{l}\left(D_{x_{n}}w_{i}(x,t)-\mathcal{B}_{i}^{ w}(x,t)\right)\right|\leq c\left\{\begin{array}{ll}(t-\frac{1}{2})^{1-\frac{l-2+ \beta+2\alpha}{2}}e^{-\frac{x_{n}^{2}}{8(t-\frac{1}{2})}},&1\leq l\leq 3,\\ (t-\frac{1}{2})^{-\alpha}x_{n}^{2-\beta-l}e^{-\frac{x_{n}^{2}}{8(t-\frac{1}{2} )}},&l\geq 4,\end{array}\right. \tag{3.29}\]
\[\left|D_{x^{\prime}}^{k}D_{x_{n}}^{l+1}w_{n}(x,t)\right|\leq c\left\{ \begin{array}{ll}(t-\frac{1}{2})^{1-\frac{l-1+\beta+2\alpha}{2}}e^{-\frac{x_ {n}^{2}}{8(t-\frac{1}{2})}},&l\leq 1,\\ (t-\frac{1}{2})^{-\alpha}x_{n}^{3-\beta-l}e^{-\frac{x_{n}^{2}}{8(t-\frac{1}{2 })}},&l\geq 2,\end{array}\right. \tag{3.30}\]
_and_
\[\left|D_{x}^{k}\Big{(}\Pi(x,t)-\Pi^{\mathcal{B}}(x,t)\Big{)}\right|\leq ct^{ 1-\frac{\beta}{2}-\alpha},\qquad k\geq 0. \tag{3.31}\]
The next proposition shows some estimates involving the temporal derivative of the velocity.
**Proposition 3.8**.: _Suppose that \(w\) and \(\Pi\) are given in (1.7) and (1.8), and \(\mathcal{B}^{w}\) and \(\Pi^{\mathcal{B}}\) are defined in (3.6) and (2.3), respectively. Then, for \(1\leq i\leq n-1\) and for any \(k,l\geq 0\) and \(x\in\mathcal{R},t>\frac{1}{2}\),_
\[\left|D_{x^{\prime}}^{k}D_{x_{n}}^{l}\big{(}D_{t}w_{i}-D_{x_{n}}\mathcal{B}_{i}^{w}+D_{x_{i}}\Pi^{\mathcal{B}}\big{)}(x,t)\right|\leq c\max\left\{(t-\frac{1}{2})^{-\alpha}x_{n}^{2-\beta-l}e^{-\frac{x_{n}^{2}}{8(t-\frac{1}{2})}},(t-\frac{1}{2})^{1-\frac{\beta}{2}-\alpha}\right\}. \tag{3.32}\]
_The normal component \(w_{n}\) satisfies_
\[\left|D_{x^{\prime}}^{k}D_{x_{n}}^{l}\big{(}D_{t}w_{n}-D_{x_{n}}^{2}w_{n}+D_{x_{n}}\Pi^{\mathcal{B}}\big{)}(x,t)\right|\leq c\max\left\{(t-\frac{1}{2})^{-\alpha}x_{n}^{2-\beta-l}e^{-\frac{x_{n}^{2}}{8(t-\frac{1}{2})}},(t-\frac{1}{2})^{1-\frac{\beta}{2}-\alpha}\right\}, \tag{3.33}\]
**Remark 3.9**.: _The meaning of (3.32) is that the control of \(D_{t}w_{i}-\Delta w_{i}-D_{x_{i}}\Pi\), \(i\neq n\), near the boundary is majorized by \(D_{t}w_{i}-D_{x_{n}}\mathcal{B}_{i}^{w}+D_{x_{i}}\Pi^{\mathcal{B}}\). Similarly, the main profile of \(D_{t}w_{n}-\Delta w_{n}+D_{x_{n}}\Pi\) is \(D_{t}w_{n}-D_{x_{n}}^{2}w_{n}+D_{x_{n}}\Pi^{\mathcal{B}}\) for the normal component of the equation._
## 4. Proofs of Propositions
### Proof of Proposition 3.2
Note that since \(|x^{\prime}|\geq 2\), it follows that \(|x^{\prime}-y^{\prime}|\geq 1\) for \(y^{\prime}\in\operatorname{supp}g^{\mathcal{T}}\subset B_{1}^{{}^{\prime}}\). Using (3.8) in Lemma 3.1, we decompose \(\mathcal{B}_{i}^{w}\) as follows:
\[\mathcal{B}_{i}^{w}(x,t) =-c_{n}\int_{0}^{t}\int_{\mathbb{R}_{+}^{n}}f_{2}(y,\tau)\frac{1} {(t-\tau)^{\frac{1}{2}}}e^{-\frac{(x_{n}+y_{n})^{2}}{4(t-\tau)}}\] \[\quad\times D_{x_{2}}\int_{\mathbb{R}^{n-1}}\Gamma^{\prime}(x^{ \prime}-y^{\prime}-z^{\prime},t-\tau)D_{z_{i}}N(z^{\prime},0)dz^{\prime}dyd\tau \tag{4.1}\] \[:=I_{i1}(x,t)+I_{i2}(x,t),\]
where
\[I_{i1}(x,t) =-c_{n}\int_{0}^{t}\int_{\mathbb{R}_{+}^{n}}f_{2}(y,\tau)\frac{1}{(t-\tau)^{\frac{1}{2}}}e^{-\frac{(x_{n}+y_{n})^{2}}{4(t-\tau)}}D_{x_{2}}D_{x_{i}}N(x^{\prime}-y^{\prime},0)dyd\tau,\] \[I_{i2}(x,t) =-c_{n}\int_{0}^{t}\int_{\mathbb{R}_{+}^{n}}f_{2}(y,\tau)\frac{1}{(t-\tau)^{\frac{1}{2}}}e^{-\frac{(x_{n}+y_{n})^{2}}{4(t-\tau)}}J_{10}(x^{\prime}-y^{\prime},t-\tau)dyd\tau.\]
By (3.9), we have
\[\int_{\mathbb{R}^{n-1}}g^{\mathcal{T}}(y^{\prime})J_{10}(x^{\prime }-y^{\prime},t-\tau)dy^{\prime} \leq\int_{|y^{\prime}|\leq 1}g^{\mathcal{T}}(y^{\prime})(t-\tau)^{ \frac{1}{2}}dy^{\prime}\] \[\leq\|g^{\mathcal{T}}\|_{L^{\infty}}(t-\tau)^{\frac{1}{2}}.\]
Note that \(|g^{\mathcal{N}}(y_{n})|\leq cy_{n}^{-\beta}\) for \(0<y_{n}<2\). From Lemma 2.2, for \(x\in\mathcal{R}\) and \(t>\frac{1}{2}\), we obtain
\[\|I_{i2}(t)\|_{L^{\infty}(|x^{\prime}|>2)} \leq c\|g^{\mathcal{T}}\|_{L^{\infty}(\mathbb{R}^{n-1})}\int_{ \frac{1}{2}}^{t}\int_{0}^{2}(\tau-\frac{1}{2})^{-\alpha}y_{n}^{-\beta}e^{- \frac{(x_{n}+y_{n})^{2}}{8(t-\tau)}}dy_{n}d\tau \tag{4.2}\] \[\leq c\|g^{\mathcal{T}}\|_{L^{\infty}(\mathbb{R}^{n-1})}(t-\frac{ 1}{2})^{\frac{3}{2}-\frac{\beta}{2}-\alpha}e^{-\frac{x_{n}^{2}}{8(t-\frac{1}{ 2})}}.\]
By Lemma 2.2, for \(|x^{\prime}|\geq 2\), we have
\[I_{i1}(x,t) =-c\int_{\frac{1}{2}}^{t}\int_{0}^{2}(\tau-\frac{1}{2})^{-\alpha} y_{n}^{-\beta}(t-\tau)^{-\frac{1}{2}}e^{-\frac{(x_{n}+y_{n})^{2}}{4(t-\tau)}}dy_{n }d\tau\phi_{i}(x^{\prime},0) \tag{4.3}\] \[\left\{\begin{array}{l}\leq-c(t-\frac{1}{2})^{1-\frac{\beta}{2 }-\alpha}e^{-\frac{x_{n}^{2}}{2(t-\frac{1}{2})}}\phi_{i}(x^{\prime},0),\\ \geq-c(t-\frac{1}{2})^{1-\frac{\beta}{2}-\alpha}e^{-\frac{x_{n}^{2}}{8(t-\frac {1}{2})}}\phi_{i}(x^{\prime},0).\end{array}\right.\]
Summing up all estimates and taking absolute values, we obtain (3.18). Using (3.14)-(3.17), we get (3.19)-(3.22). This completes the proof of Proposition 3.2.
### Proof of Proposition 3.3
Let \(\Gamma_{1}(x_{n},t)\) be the heat kernel in one dimension. For simplicity, we denote \(\eta=\frac{x_{n}}{2\sqrt{t}}\). We then note that
\[D_{x_{n}}^{l}\Gamma_{1}(x_{n},t)=\frac{1}{2\sqrt{\pi}}t^{-\frac{l+1}{2}}P_{l} (\eta)e^{-\eta^{2}},\qquad P_{l}(\eta)=\sum_{i=0}^{\left[\frac{l}{2}\right]}c _{li}\eta^{2(i+\frac{l}{2}-\left[\frac{l}{2}\right])}, \tag{4.4}\]
where \(c_{li}\) is a nonzero constant. For example,
\[P_{0}(\eta)=1,\quad P_{1}(\eta)=-2\eta,\quad P_{2}(\eta)=-2+4\eta^{2},\quad P _{3}(\eta)=12\eta-8\eta^{3}.\]
In fact, \(P_{l}\) satisfies the following recursive formula:
\[P_{l}(\eta)=P_{l-1}^{\prime}(\eta)-2\eta P_{l-1}(\eta),\quad P_{0}(\eta)=1, \qquad l\geq 1.\]
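As a quick consistency check (not part of the original argument), two iterations of this recursion reproduce the polynomials listed above:
\[P_{2}(\eta)=P_{1}^{\prime}(\eta)-2\eta P_{1}(\eta)=-2-2\eta(-2\eta)=-2+4\eta^{2},\]
\[P_{3}(\eta)=P_{2}^{\prime}(\eta)-2\eta P_{2}(\eta)=8\eta-2\eta(-2+4\eta^{2})=12\eta-8\eta^{3}.\]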
Again, we only compute normal derivatives, since the tangential derivatives are rather easy. With aid of Lemma 3.1, for \(x\in\mathcal{R}\) and \(t>\frac{1}{2}\), we have
\[D_{x_{n}}^{l}\mathcal{B}_{i}^{w}(x,t)= -4\int_{\frac{1}{2}}^{t}\int_{0}^{2}g^{\mathcal{N}}(y_{n})h(\tau)D_{x_{n}}^{l}\Gamma_{1}(x_{n}+y_{n},t-\tau)dy_{n}d\tau\,\phi_{i}(x^{\prime},0)\] \[-4\int_{\frac{1}{2}}^{t}\int_{0}^{2}g^{\mathcal{N}}(y_{n})h(\tau)D_{x_{n}}^{l}\Gamma_{1}(x_{n}+y_{n},t-\tau)J_{0l}(x^{\prime},0,t-\tau)dy_{n}d\tau \tag{4.5}\] \[= I_{1}+I_{2}.\]
Using (4.4), by Lemma 2.2, we obtain
\[|I_{2}| \leq c\int_{\frac{1}{2}}^{t}(\tau-\frac{1}{2})^{-\alpha}(t-\tau)^{-\frac{l}{2}}\int_{0}^{2}y_{n}^{-\beta}e^{-\frac{(x_{n}+y_{n})^{2}}{8(t-\tau)}}dy_{n}d\tau \tag{4.6}\] \[\leq c\left\{\begin{array}{ll}(t-\frac{1}{2})^{\frac{3-l-\beta-2\alpha}{2}}e^{-\frac{x_{n}^{2}}{8(t-\frac{1}{2})}},&l\leq 2\\ (t-\frac{1}{2})^{-\alpha}x_{n}^{3-\beta-l}e^{-\frac{x_{n}^{2}}{8(t-\frac{1}{2})}},&l\geq 3,\end{array}\right.\]
where we used that \(x_{n}^{2}+y_{n}^{2}\leq(x_{n}+y_{n})^{2}\leq 2(x_{n}^{2}+y_{n}^{2})\) for \(x_{n},\ y_{n}\geq 0\). On the other hand, to estimate \(I_{1}\), we compute, using Lemma 2.2,
\[|I_{1}| =c\left|\int_{0}^{t}\int_{0}^{2}y_{n}^{-\beta}(\tau-\frac{1}{2}) ^{-\alpha}D_{x_{n}}^{l}\Gamma_{1}(x_{n}+y_{n},t-\tau)dy_{n}d\tau\right||\phi_ {i}(x^{\prime},0)| \tag{4.7}\] \[\leq c\int_{0}^{t}(\tau-\frac{1}{2})^{-\alpha}(t-\tau)^{-\frac{1+ l}{2}}\int_{0}^{1}y_{n}^{-\beta}e^{-\frac{(x_{n}+y_{n})^{2}}{2(t-\tau)}}dy_{n}d \tau|\phi_{i}(x^{\prime},0)|\] \[\leq c\left\{\begin{array}{ll}(t-\frac{1}{2})^{1-\frac{l+\beta +2\alpha}{2}}e^{-\frac{x_{n}^{2}}{8(t-\frac{1}{2})}}|\phi_{i}(x^{\prime},0)|,& l\leq 1\\ (t-\frac{1}{2})^{-\alpha}x_{n}^{2-\beta-l}e^{-\frac{x_{n}^{2}}{8(t-\frac{1}{2} )}}|\phi_{i}(x^{\prime},0)|,&l\geq 2.\end{array}\right.\]
Summing up the above estimates (4.6) and (4.7), we obtain (3.23).
For the lower bound, note that there is \(c_{l}>0\) such that if \(\eta>c_{l}\), then
\[|P_{l}(\eta)|\geq\eta^{l}. \tag{4.8}\]
Let \(t-\frac{1}{2}<c_{l}^{-2}x_{n}^{2}\). We note that if \(\frac{1}{2}<\tau\), then \(c_{l}\sqrt{t-\tau}-x_{n}<0\). Hence, due to (4.8), we have
\[|I_{1}| \geq c\int_{\frac{1}{2}}^{t}(\tau-\frac{1}{2})^{-\alpha}(t-\tau)^ {-\frac{1+l}{2}}\int_{0}^{1}y_{n}^{-\beta}(\frac{x_{n}+y_{n}}{t-\tau})^{l}e^{- \frac{(x_{n}+y_{n})^{2}}{4(t-\tau)}}dy_{n}d\tau|\phi_{i}(x^{\prime},0)| \tag{4.9}\] \[\geq c\int_{\frac{1}{2}}^{t}(\tau-\frac{1}{2})^{-\alpha}(t-\tau)^ {-\frac{1+l}{2}}e^{-\frac{x_{n}^{2}}{2(t-\tau)}}\int_{0}^{1}y_{n}^{-\beta}( \frac{y_{n}}{t-\tau})^{l}e^{-\frac{y_{n}^{2}}{2(t-\tau)}}dy_{n}d\tau|\phi_{i}( x^{\prime},0)|\] \[=c\int_{\frac{1}{2}}^{t}(\tau-\frac{1}{2})^{-\alpha}(t-\tau)^{- \frac{1+l}{2}+\frac{1}{2}-\frac{\beta}{2}}e^{-\frac{x_{n}^{2}}{2(t-\tau)}}\int_ {0}^{\frac{1}{\sqrt{t-\tau}}}y_{n}^{l-\beta}e^{-y_{n}^{2}}dy_{n}d\tau|\phi_{i}( x^{\prime},0)|\] \[\geq c(t-\frac{1}{2})^{1-\frac{l+\beta+2\alpha}{2}}e^{-\frac{x_{n }^{2}}{2(t-\frac{1}{2})}}|\phi_{i}(x^{\prime},0)|.\]
By (4.6) and (4.9), we obtain (3.24). We complete the proof of Proposition 3.3.
### Proof of Proposition 3.4
Note that \(\Pi=\Pi^{\mathcal{G}}+\Pi^{\mathcal{B}}\), where
\[\Pi^{\mathcal{G}}(x,t)=P_{n}\left(D_{x_{2}}\left.\Gamma*f_{2}\right|_{x_{n}=0} \right)(x^{\prime},t),\qquad\Pi^{\mathcal{B}}(x,t)=P_{n}R_{2}^{\prime}\left(D_ {x_{n}}\left.\Gamma*f_{2}\right|_{x_{n}=0}\right)(x^{\prime},t),\]
where \(P_{n}\) is the Poisson kernel of the Laplace equation in \(\mathbb{R}_{+}^{n}\) and \(R_{2}^{\prime}\) is the Riesz transform in \(\mathbb{R}^{n-1}\) in the \(x_{2}\) variable. Let \(0<\epsilon<1\) be a sufficiently small positive number and \(\frac{1}{r_{1}}=\frac{n}{p(n-1)}+\epsilon\) satisfying \(p_{1}<r_{1}<\frac{n-1}{n}p\). Take \(\epsilon>0\) small such that \(\theta:=\frac{n}{r_{1}}-\frac{n}{p}=\frac{n}{p(n-1)}+\epsilon n=\frac{1}{r_{1}}+\epsilon(n-1)\) satisfies \(0<\theta<1\). Choose \(1<r_{2},\ q_{2}<\infty\) satisfying \(\frac{1}{q_{1}}=\frac{1-\theta}{q_{2}}+\frac{\theta}{q}\) and \(\frac{1}{p_{1}}=\frac{1-\theta}{r_{2}}+\frac{\theta}{r_{1}}\). If \(\epsilon>0\) is sufficiently small, then \(1<r_{2}<p_{1}\) and \(1<q_{2}<q_{1}\). Note that \(\frac{n-1}{r_{1}}-\frac{n}{p}>0\). From well-known estimates for harmonic functions, Besov embedding and the trace theorem, we have
\[\|\Pi(t)\|_{L^{p}(\mathbb{R}_{+}^{n})} \leq c\big{(}\|D_{x_{n}}\Gamma*f_{2}(t)|_{x_{n}=0}\|_{\dot{B}_{p}^{-\frac{1}{p}}(\mathbb{R}^{n-1})}+\|D_{x_{2}}\Gamma*f_{2}(t)|_{x_{n}=0}\|_{\dot{B}_{p}^{-\frac{1}{p}}(\mathbb{R}^{n-1})}\big{)}\] \[\leq c\|D_{x}\Gamma*f_{2}(t)|_{x_{n}=0}\|_{\dot{B}_{r_{1}}^{\frac{n-1}{r_{1}}-\frac{n}{p}}(\mathbb{R}^{n-1})}\] \[\leq c\|D_{x}\Gamma*f_{2}(t)\|_{\dot{H}_{r_{1}}^{\frac{n}{r_{1}}-\frac{n}{p}}(\mathbb{R}_{+}^{n})}.\]
Hence, we have
\[\|\Pi\|_{L^{q}(0,\infty;L^{p}(\mathbb{R}_{+}^{n}))}\leq c\|D_{x}\Gamma*f_{2}\| _{L^{q}(0,\infty;\dot{H}_{r_{1}}^{\frac{n}{r_{1}}-\frac{n}{p}}(\mathbb{R}_{+}^ {n}))}.\]
Note that \(\frac{1}{q}+1=\frac{1}{q_{2}}+\frac{1}{2}+\frac{n}{2r_{2}}-\frac{n}{2r_{1}}\). Then, we have
\[\|D_{x}\Gamma*f_{2}\|_{L^{q}(0,\infty;L^{r_{1}}(\mathbb{R}_{+}^{n}))}\leq c\| f_{2}\|_{L^{q_{2}}(0,\infty;L^{r_{2}}(\mathbb{R}_{+}^{n}))},\]
\[\|D_{x}\Gamma*f_{2}\|_{L^{q}(0,\infty;\dot{H}_{r_{1}}^{1}(\mathbb{R}_{+}^{n})) }\leq c\|f_{2}\|_{L^{q}(0,\infty;L^{r_{1}}(\mathbb{R}_{+}^{n}))}.\]
Using the complex interpolation property, we have \([L^{q}(0,\infty;L^{r_{1}}(\mathbb{R}_{+}^{n})),L^{q}(0,\infty;\dot{H}_{r_{1}}^{1}(\mathbb{R}_{+}^{n}))]_{\theta}=L^{q}(0,\infty;\dot{H}_{r_{1}}^{\theta}(\mathbb{R}_{+}^{n}))\) and \([L^{q_{2}}(0,\infty;L^{r_{2}}(\mathbb{R}_{+}^{n})),L^{q}(0,\infty;L^{r_{1}}(\mathbb{R}_{+}^{n}))]_{\theta}=L^{q_{1}}(0,\infty;L^{p_{1}}(\mathbb{R}_{+}^{n}))\) (see [1]). Hence, we have
\[\|\Pi\|_{L^{q}(0,\infty;L^{p}(\mathbb{R}_{+}^{n}))}\leq c\|D_{x}\Gamma*f_{2}\| _{L^{q}(0,\infty;\dot{H}_{r_{1}}^{\theta}(\mathbb{R}_{+}^{n}))}\leq c\|f_{2}\| _{L^{q_{1}}(0,\infty;L^{p_{1}}(\mathbb{R}_{+}^{n}))}.\]
Hence, we complete the proof of Proposition 3.4.
### Proof of Proposition 3.5
With the aid of (3.8), we decompose \(\Pi^{\mathcal{B}}\) as follows:
\[D_{x^{\prime}}^{k}D_{x_{n}}^{l}\Pi^{\mathcal{B}}(x,t)= -4c_{n}\int_{0}^{t}\int_{\mathbb{R}_{+}^{n}}f_{2}(y,\tau)(t-\tau) ^{-\frac{3}{2}}y_{n}e^{-\frac{y_{n}^{2}}{t-\tau}}D_{x^{\prime}}^{k}D_{x_{n}}^ {l}D_{x_{2}}N(x^{\prime}-y^{\prime},x_{n})dyd\tau\] \[-4c_{n}\int_{0}^{t}\int_{\mathbb{R}_{+}^{n}}f_{2}(y,\tau)(t-\tau )^{-\frac{3}{2}}y_{n}e^{-\frac{y_{n}^{2}}{t-\tau}}J_{k+1,l}(x^{\prime}-y^{ \prime},t-\tau)dyd\tau\] \[:= I+J.\]
For \(|x_{2}|\geq 2\), \(t>\frac{1}{2}\) and \(y^{\prime}\in B_{1}^{\prime}\), by (3.9) and (2.8), we have
\[|J|\leq c\int_{\frac{1}{2}}^{t}(\tau-\frac{1}{2})^{-\alpha}(t-\tau)^{-\frac{ \beta}{2}}d\tau\leq c(t-\frac{1}{2})^{1-\frac{\beta}{2}-\alpha}.\]
Recalling (3.26), we note for \(|x^{\prime}|>2\), by (2.8) that
\[I =-c\int_{0}^{t}h(\tau)\int_{0}^{2}g^{\mathcal{N}}(y_{n})y_{n}(t-\tau )^{-\frac{3}{2}}e^{-\frac{y_{n}^{2}}{t-\tau}}dy_{n}d\tau D_{x^{\prime}}^{k}D_{x _{n}}^{l}\psi(x^{\prime},x_{n})\] \[\approx-(t-\frac{1}{2})^{\frac{1}{2}-\frac{\beta}{2}-\alpha}D_{x^ {\prime}}^{k}D_{x_{n}}^{l}\psi(x^{\prime},x_{n}),\]
which implies (3.27) and (3.28). Therefore, we complete the proof of Proposition 3.5.
### Proof of Proposition 3.7
We begin with spatial estimates of \(D_{x_{n}}w_{i}(x,t)-\mathcal{B}_{i}^{w}(x,t)\). Due to (3.5) and (3.7), it is enough to estimate spatial derivatives of \(D_{x_{2}}W_{i}^{\mathcal{G}}\).
\(\bullet\) (case \(l=0\)) It suffices to estimate the \(k\)-th order tangential derivatives of \(D_{x_{2}}W_{i}^{\mathcal{G}}\). Recalling the **Assumption 1** and using the estimates (2.5) and (2.8), it follows for \(x\in\mathcal{R}\) and \(t>\frac{1}{2}\) that
\[\left|D_{x^{\prime}}^{k}D_{x_{2}}W_{i}^{\mathcal{G}}(x,t)\right|\leq c\int_{ \frac{1}{2}}^{t}\int_{0}^{2}y_{n}^{-\beta}(\tau-\frac{1}{2})^{-\alpha}e^{- \frac{y_{n}^{2}}{t-\tau}}dy_{n}d\tau\leq c(t-\frac{1}{2})^{\frac{3}{2}-\frac{ \beta}{2}-\alpha}. \tag{4.10}\]
Thus, combining (3.5), (3.7) and (4.10), we obtain
\[|(D_{x_{n}}w_{i}(x,t)-\mathcal{B}_{i}^{w}(x,t))|\leq c(t-\frac{1}{2})^{\frac{ 3}{2}-\frac{\beta}{2}-\alpha}.\]
\(\bullet\) (case \(l=1\)) We first note that normal derivatives change the estimates while tangential derivatives do not make any difference; therefore, we compute only normal derivatives. We observe that
\[D_{x_{n}}L_{ni}(x,t) =4D_{x_{n}}D_{x_{i}}\int_{0}^{x_{n}}\int_{\mathbb{R}^{n-1}}\Gamma (x-y^{*}-z,t)D_{z_{n}}N(z)dz\] \[=-\sum_{m=1}^{n-1}D_{x_{i}}L_{mm}(x,y,t)+2D_{x_{i}}\Gamma(x-y^{*},t),\]
which implies again via (3.4) that
\[D_{x_{n}}D_{x_{2}}W_{i}^{\mathcal{G}}(x,t)=-\sum_{1\leq m\leq n-1}D_{x_{2}}D_ {x_{i}}W_{mm}(x,t)+2D_{x_{2}}D_{x_{i}}\Gamma^{*}*f_{2}(x,t),\]
where
\[\Gamma^{*}*f_{2}(x,t) =\int_{0}^{t}\int_{\mathbb{R}_{+}^{n}}\Gamma(x-y^{*},t-\tau)f_{2} (y,\tau)dyd\tau,\] \[W_{m_{1}m_{2}}(x,t) =\int_{0}^{t}\int_{\mathbb{R}_{+}^{n}}L_{m_{1}m_{2}}(x,y,t-\tau)f_ {2}(y,\tau)dyd\tau,\quad 1\leq m_{1},m_{2}\leq n.\]
To sum up, we obtain
\[D_{x_{n}}\big{(}D_{x_{n}}w_{i}-\mathcal{B}_{i}^{w}\big{)}=D_{x_{n}}^{2}V_{i}- \sum_{1\leq m\leq n-1}D_{x_{2}}D_{x_{i}}W_{mm}(x,t)+2D_{x_{2}}D_{x_{i}}\Gamma^ {*}*f_{2}(x,t). \tag{4.12}\]
Repeating similar computations as (4.10), for \(x\in\mathcal{R}\) and \(t>\frac{1}{2}\), we have
\[|D_{x^{\prime}}^{k}D_{x_{n}}\big{(}D_{x_{n}}w_{i}-\mathcal{B}_{i}^{w} \big{)}(x,t)|\leq c\int_{0}^{t}\int_{0}^{2}y_{n}^{-\beta}(\tau-\frac{1}{2})^{-\alpha}e^ {-\frac{y_{n}^{2}}{t-\tau}}dy_{n}d\tau\] \[\leq c(t-\frac{1}{2})^{\frac{3}{2}-\frac{\beta}{2}-\alpha}. \tag{4.13}\]
\(\bullet\) (case \(l\geq 2\)) Direct computations show that
\[D_{x_{n}}W_{mm}(x,t)=D_{x_{m}}\int_{0}^{t}\int_{\mathbb{R}_{+}^{n}}L_{nm}(x,y, t-\tau)f_{2}(y,\tau)dyd\tau+\tilde{w}_{m}^{\mathcal{B}}, \tag{4.14}\]
where
\[\tilde{w}_{m}^{\mathcal{B}}:=-4D_{x_{m}}^{2}\int_{0}^{t}\int_{\mathbb{R}_{+}^ {n}}f_{2}(y,\tau)\int_{\mathbb{R}^{n-1}}\Gamma(x^{\prime}-y^{\prime}-z^{\prime },x_{n}+y_{n},t-\tau)N(z^{\prime},0)dz^{\prime}dyd\tau.\]
From (4.12) and (4.14), for \(l\geq 2\), we have
\[D_{x_{n}}^{l}\big{(}D_{x_{n}}w_{i}-\mathcal{B}_{i}^{w}\big{)}= D_{x_{n}}^{l+1}V_{i}-\sum_{1\leq m\leq n-1}D_{x_{n}}^{l-2}D_{x_{2}}D_{x_{ i}}D_{x_{m}}W_{nm}(x,t)\] \[+2D_{x_{n}}^{l-1}D_{x_{2}}D_{x_{i}}\Gamma^{*}*f_{2}(x,t)+D_{x_{n} }^{l-2}\tilde{w}_{m}^{\mathcal{B}}. \tag{4.15}\]
Following similar computations as in Proposition 3.3, we see that for any \(k\geq 0\) and \(l\geq 2\)
\[\Big{|}D_{x^{\prime}}^{k}D_{x_{n}}^{l-2}\tilde{w}_{i}^{\mathcal{B}}(x,t)\Big{|} \leq c\left\{\begin{array}{ll}(t-\frac{1}{2})^{\frac{3-l-\beta-2\alpha}{2}} e^{-\frac{x_{n}^{2}}{2(t-\frac{1}{2})}},&l\leq 3\\ (t-\frac{1}{2})^{-\alpha}x_{n}^{3-\beta-l}e^{-c_{4}\frac{x_{n}^{2}}{t-\frac{1} {2}}},&l\geq 4.\end{array}\right. \tag{4.16}\]
The verification of (4.16) is very similar to that of Proposition 3.3, and thus the details are omitted.
Using the relations (4.11) and (4.14), from (2.5), (2.8) and the estimate of \(\mathcal{B}_{i}^{w}\), we have
\[|D_{x_{n}}^{l-2}D_{x_{2}}D_{x_{i}}D_{x_{m}}W_{nm}(x,t)|\leq c(t-\frac{1}{2})^{\frac{3}{2}-\frac{\beta}{2}-\alpha}+\left\{\begin{array}{ll}c(t-\frac{1}{2})^{1-\frac{l+\beta+2\alpha}{2}}e^{-\frac{x_{n}^{2}}{2(t-\frac{1}{2})}},&l\leq 3\\ c(t-\frac{1}{2})^{-\alpha}x_{n}^{2-\beta-l}e^{-c_{4}\frac{x_{n}^{2}}{t-\frac{1}{2}}},&l\geq 4.\end{array}\right. \tag{4.17}\]
Combining estimates (4.15), (4.16) and (4.17), it follows that for \(l\geq 2\),
\[\Big{|}D_{x^{\prime}}^{k}D_{x_{n}}^{l}\big{(}D_{x_{n}}w_{i}-\mathcal{B}_{i}^{w }\big{)}(x,t)\Big{|}\leq c\left\{\begin{array}{ll}(t-\frac{1}{2})^{1-\frac{l -2+\beta+2\alpha}{2}}e^{-\frac{x_{n}^{2}}{2(t-\frac{1}{2})}},&l\leq 3\\ (t-\frac{1}{2})^{-\alpha}x_{n}^{2-\beta-l}e^{-c_{4}\frac{x_{n}^{2}}{t-\frac{1 }{2}}},&l\geq 4.\end{array}\right. \tag{4.19}\]
This completes the proof of the first estimate (3.29).
\(\bullet\) (case \(i=n\)) As mentioned earlier, the tangential derivatives are rather easy to control and thus we skip its details. Using \(\mathrm{div}w=0\), it is also straightforward that
\[\Big{|}D_{x^{\prime}}^{k}D_{x_{n}}w_{n}(x,t)\Big{|}=\left|D_{x^{\prime}}^{k} \sum_{j=1}^{n-1}D_{x_{j}}w_{j}(x,t)\right|\leq c(t-\frac{1}{2})^{\frac{3}{2}- \frac{\beta}{2}-\alpha}.\]
Higher derivative of \(w_{n}\) in \(x_{n}\) variable can be rewritten as
\[D_{x_{n}}^{l+1}w_{n}(x,t)=-D_{x_{n}}^{l}\sum_{j=1}^{n-1}D_{x_{j}}w_{j}(x,t)\]
\[=-\sum_{j=1}^{n-1}D_{x_{j}}D_{x_{n}}^{l-1}\left(D_{x_{n}}w_{j}(x,t)-\mathcal{B} _{j}^{w}(x,t)\right)-\sum_{j=1}^{n-1}D_{x_{j}}D_{x_{n}}^{l-1}\mathcal{B}_{j}^ {w}(x,t). \tag{4.20}\]
Using the estimates (3.29) and Proposition 3.3, we obtain via (4.20)
\[\left|D_{x_{n}}^{l+1}w_{n}(x,t)\right|\leq c\left\{\begin{array}{ll}(t-\frac{ 1}{2})^{1-\frac{l-1+\beta+2\alpha}{2}}e^{-\frac{x_{n}^{2}}{2(t-\frac{1}{2})}}, &l\leq 1\\ (t-\frac{1}{2})^{-\alpha}x_{n}^{3-\beta-l}e^{-c_{4}\frac{x_{n}^{2}}{t-\frac{ 1}{2}}},&l\geq 2.\end{array}\right.\]
This completes the proof of (3.30).
Recalling the formulae of the pressure and its decomposition (2.1)-(2.3) and using (3.8), we get
\[|D_{x}^{k}\Pi^{\mathcal{G}}(x,t)| \leq c_{n}\int_{0}^{t}\int_{\mathbb{R}_{+}^{n}}f_{2}(y,\tau)(t- \tau)^{-\frac{1}{2}}e^{-\frac{y_{n}^{2}}{t-\tau}}|D_{x}^{k}D_{x_{2}}D_{x_{n}}N (x^{\prime}-y^{\prime},x_{n})|dyd\tau\] \[\quad+c_{n}\int_{0}^{t}\int_{\mathbb{R}_{+}^{n}}f_{2}(y,\tau)(t- \tau)^{-\frac{1}{2}}e^{-\frac{y_{n}^{2}}{t-\tau}}|J_{1}^{2}(x^{\prime}-y^{ \prime},t-\tau)|dyd\tau\] \[:=\Phi_{1}^{\mathcal{G}}(x,t)+\Phi_{2}^{\mathcal{G}}(x,t).\]
Since \(|x^{\prime}-y^{\prime}|\geq 1\) for \(y^{\prime}\in B_{1}^{\prime}\) and \(|x_{2}|\geq 2\), from (3.8), we have
\[|\Phi_{1}^{\mathcal{G}}(x,t)| \leq c\int_{0}^{t}(\tau-\frac{1}{2})^{-\alpha}\int_{0}^{1}y_{n}^{- \beta}\int_{|y^{\prime}|\leq 1}g^{\mathcal{T}}(y^{\prime})(t-\tau)^{-\frac{1}{2}}e^ {-\frac{y_{n}^{2}}{t-\tau}}|x^{\prime}-y^{\prime}|^{-n-k}dy^{\prime}dy_{n}d\tau\] \[\leq c\int_{0}^{t}(\tau-\frac{1}{2})^{-\alpha}(t-\tau)^{-\frac{1}{ 2}}\int_{0}^{1}y_{n}^{-\beta}e^{-\frac{y_{n}^{2}}{t-\tau}}dy_{n}d\tau\leq c(t- \frac{1}{2})^{1-\frac{\beta}{2}-\alpha}\]
and
\[|\Phi_{2}^{\mathcal{G}}(x,t)|\leq c\int_{0}^{t}(\tau-\frac{1}{2})^{-\alpha} \int_{0}^{1}y_{n}^{-\beta}\int_{|y^{\prime}|\leq 1}g^{\mathcal{T}}(y^{\prime})e^{- \frac{y_{n}^{2}}{t-\tau}}dy^{\prime}dy_{n}d\tau\leq c(t-\frac{1}{2})^{\frac{3 }{2}-\frac{\beta}{2}-\alpha}.\]
Thus, we deduce (3.31), and the proof of Proposition 3.7 is completed.
**Remark 4.1**.: _We remark that from the same estimate in (4.10), for \(x\in\mathcal{R}\) and \(t>\frac{1}{2}\), we obtain the following estimate:_
\[|w(x,t)|\leq c(t-\frac{1}{2})^{\frac{3}{2}-\frac{\beta}{2}-\alpha}. \tag{4.21}\]
_Since its verification is rather straightforward, the details are omitted._
### Proof of Proposition 3.8
From the equations for \(i<n\), it follows that
\[D_{t}w_{i}-D_{x_{n}}w_{i}^{\mathcal{B}}+D_{x_{i}}\Pi^{\mathcal{B}}=f_{i}+\Delta^{ \prime}w_{i}+D_{x_{n}}w_{i}^{\mathcal{G}}-D_{x_{i}}\Pi^{\mathcal{G}}.\]
Using Proposition 3.7 and Proposition 3.3, we obtain (3.32). On the other hand, the equation for \(w_{n}\) can be rewritten as
\[D_{t}w_{n}-D_{x_{n}}^{2}w_{n}+D_{x_{n}}\Pi^{\mathcal{B}}=f_{n}+\Delta^{\prime} w_{n}-D_{x_{n}}\Pi^{\mathcal{G}},\]
which yields, due to Proposition 3.7 and the divergence-free condition, the estimate (3.33). This completes the proof of Proposition 3.8.
## 5. Proofs of Theorems for Stokes system
### Proof of Theorem 1.3
Assume that \(0<\alpha<1\) and \(0<\beta<\frac{1}{2}\). Let \(\frac{1}{q_{1}}=1-\delta\) and \(\frac{1}{p_{1}}=\frac{1}{2}+\epsilon\) for sufficiently small \(0<\frac{n\epsilon}{2}<\delta\) such that \(\frac{1}{2}<\frac{1}{p_{1}}\), \(\max(\frac{1}{2},\alpha)<\frac{1}{q_{1}}\) and
\[\frac{2}{q_{1}}+\frac{n}{p_{1}}=2-2\delta+\frac{n}{2}+n\epsilon<\frac{n}{2}+2. \tag{5.1}\]
Note that \(f_{2}\in L^{q_{1}}(0,\infty;L^{p_{1}}(\mathbb{R}^{n}_{+}))\). Then, from (2.6), (2.7) and (1.6), we get
\[\|w\|_{L^{\infty}(0,1;L^{2}(\mathbb{R}^{n}_{+}))}<\infty, \tag{5.2}\]
\[\|D_{x}w\|_{L^{2}(\mathbb{R}^{n}_{+}\times(0,1))}<\infty. \tag{5.3}\]
From (5.3) and (5.2), we obtain (1.16). Since \(q>6\) with \(1+\frac{3}{2q}<\alpha+\frac{\beta}{2}\), using Proposition 3.7 for \(i\neq 2\) and Proposition 3.2, we obtain
\[\min\left\{\|D_{x_{n}}w_{i}\|_{L^{q}_{t}(0,1;L^{q}_{x}(A_{i}\times(0,1)))}^{q},\ \|D_{x_{n}}w_{2}\|_{L^{q}_{t}(0,1;L^{q}_{x}(B_{i}\times(0,1)))}^{q}\right\}\] \[\geq c\int_{\frac{1}{2}}^{1}(t-\frac{1}{2})^{(1-\frac{\beta}{2}-\alpha)q+\frac{1}{2}}dt-\int_{\frac{1}{2}}^{1}(t-\frac{1}{2})^{(\frac{3}{2}-\frac{\beta}{2}-\alpha)q}dt=\infty.\]
Hence, we obtain (1.17). We complete the proof of Theorem 1.3.
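For the reader's convenience, the elementary exponent check behind the last display (a restatement of the hypothesis rather than new information) is the following: since \(1+\frac{3}{2q}<\alpha+\frac{\beta}{2}\), we have
\[\Big{(}1-\frac{\beta}{2}-\alpha\Big{)}q+\frac{1}{2}<-\frac{3}{2}+\frac{1}{2}=-1,\]
so the first integral diverges at \(t=\frac{1}{2}\), while \((\frac{3}{2}-\frac{\beta}{2}-\alpha)q>0\) because \(\alpha<1\) and \(\beta<\frac{1}{2}\), so the subtracted integral is finite.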
### Proof of Theorem 1.5
Let \(\frac{2}{q}+\frac{n}{p}+1>2\alpha+n\beta\) for \(\frac{n}{n-1}<p\) and \(\frac{1}{\beta}<\frac{n-1}{n}p\). We take \(q_{1}<\frac{1}{\alpha}\) and \(p_{1}<\frac{1}{\beta}\) satisfying \(\frac{2}{q}+\frac{n}{p}+1>\frac{2}{q_{1}}+\frac{n}{p_{1}}\), and then by Proposition 3.4, it follows that
\[\|\Pi\|_{L^{q}(0,1;L^{p}(\mathbb{R}^{n}_{+}))}<c\|f_{2}\|_{L^{q_{1}}(0,1;L^{p_{ 1}}(\mathbb{R}^{n}_{+}))}.\]
Hence, we obtain (1.18). We note from (3.31) and (3.28) that there exists \(\delta\in(\frac{1}{2},1)\) such that if \(t\in(\frac{1}{2},\delta)\), then
\[|\Pi(x,t)|\geq c(t-\frac{1}{2})^{\frac{1}{2}-\frac{\beta}{2}-\alpha}(1-c(t- \frac{1}{2})^{\frac{1}{2}})\geq c(t-\frac{1}{2})^{\frac{1}{2}-\frac{\beta}{2}- \alpha}.\]
Therefore, we obtain
\[\|\Pi\|_{L^{q}(\{|x^{\prime}|>2\}\times(a,b)\times(0,1))}^{q}\geq c\int_{\frac {1}{2}}^{\delta}(t-\frac{1}{2})^{(\frac{1}{2}-\frac{\beta}{2}-\alpha)q}dt=\infty\]
for \(\frac{1}{2}+\frac{1}{q}<\alpha+\frac{\beta}{2}\). Hence, we obtain (1.20). We complete the proof of Theorem 1.5.
### Proof of Theorem 1.8
Let \(0<\alpha,\)\(\beta<1\) and \(w\) be a solution of (1.1) defined by (1.7). We recall the decomposition of \(w\) in (3.1) and (3.2), i.e. \(w=V+W\). We assume that \(\epsilon_{0}=3-\beta-2\alpha\in(0,2)\). It is clear that
\[V\in C_{x,t}^{\infty}(\mathcal{R}\times(0,1)).\]
Applying the proof of Proposition 3.2, Proposition 3.3 and Proposition 3.7, for \(x\in\mathcal{R}\) and \(t>\frac{1}{2}\), if \(\epsilon_{0}\in(0,1)\), then we have
\[x_{n}^{1-\epsilon_{0}}|D_{x}W(x,t)| \leq cx_{n}^{1-\epsilon_{0}}(t-\frac{1}{2})^{1-\frac{\beta}{2}- \alpha}e^{-\frac{x_{n}^{2}}{t-\frac{1}{2}}}+cx_{n}^{1-\epsilon_{0}}\] \[\leq c(\frac{x_{n}^{2}}{t-\frac{1}{2}})^{\alpha+\frac{\beta}{2}- 1}e^{-\frac{x_{n}^{2}}{t-\frac{1}{2}}}+cx_{n}^{2\alpha+\beta-2}\leq c,\]
and if \(\epsilon_{0}\in(1,2)\), then we have
\[x_{n}^{2-\epsilon_{0}}|D_{x}^{2}W(x,t)| \leq cx_{n}^{2-\epsilon_{0}}(t-\frac{1}{2})^{\frac{1}{2}-\frac{ \beta}{2}-\alpha}e^{-\frac{x_{n}^{2}}{t-\frac{1}{2}}}+cx_{n}^{2-\epsilon_{0}}\] \[\leq c(\frac{x_{n}^{2}}{t-\frac{1}{2}})^{\alpha+\frac{\beta}{2}- \frac{1}{2}}e^{-\frac{x_{n}^{2}}{t-\frac{1}{2}}}+cx_{n}^{2\alpha+\beta-2}\leq c,\]
which implies that
\[W\in L^{\infty}(0,1;C^{\epsilon_{0}}(\mathcal{R})) \tag{5.4}\]
(see the proof of Theorem 4.1 in [5]).
For Hölder continuity in the temporal variable, we take \(s,t\) with \(t>s>\frac{1}{2}\).
\[W_{i}(x,t)-W_{i}(x,s) =\int_{\frac{1}{2}}^{s}\int_{\mathbb{R}_{+}^{n}}\big{(}L_{i2}(x, y,t-\tau)-L_{i2}(x,y,s-\tau)\big{)}f_{2}(y,\tau)dyd\tau\] \[\quad+\int_{s}^{t}\int_{\mathbb{R}_{+}^{n}}L_{i2}(x,y,t-\tau)f_{2 }(y,\tau)dyd\tau\] \[=I_{1}+I_{2}.\]
We firstly estimate \(I_{2}\).
\[|I_{2}| \leq\int_{s}^{t}\int_{\mathbb{R}_{+}^{n}}\frac{e^{-\frac{y_{n}^{2}}{t-\tau}}}{(|x-y|^{2}+t-\tau)^{\frac{n}{2}}}y_{n}^{-\beta}(\tau-\frac{1}{2})^{-\alpha}dyd\tau\] \[\leq\int_{s}^{t}\int_{\mathbb{R}_{+}^{n}}e^{-\frac{y_{n}^{2}}{t-\tau}}y_{n}^{-\beta}(\tau-\frac{1}{2})^{-\alpha}dyd\tau\] \[\leq\int_{s}^{t}(t-\tau)^{\frac{1}{2}-\frac{\beta}{2}}(\tau-\frac{1}{2})^{-\alpha}d\tau\leq c(t-s)^{\frac{3}{2}-\frac{\beta}{2}-\alpha}. \tag{5.5}\]
Next, we estimate \(I_{1}\).
\[I_{1}=\int_{\frac{1}{2}}^{s}\int_{\mathbb{R}_{+}^{n}}\int_{0}^{1}D_{t}L_{i2}(x,y, \theta t+(1-\theta)s-\tau)(t-s)f_{2}(y,\tau)d\theta dyd\tau\]
\[\leq c(t-s)\int_{\frac{1}{2}}^{s}\int_{\mathbb{R}_{+}^{n}}\int_{0}^{1}\frac{e^{ -\frac{y_{0}^{2}}{\theta t+(1-\theta)s-\tau}}f_{2}(y,\tau)}{(\theta t+(1- \theta)s-\tau)(|x-y|^{2}+\theta t+(1-\theta)s-\tau)^{\frac{n}{2}}}d\theta dyd\tau\]
\[\leq c(t-s)\int_{\frac{1}{2}}^{s}\int_{0}^{1}\int_{0}^{1}\frac{e^{-\frac{y_{0}^ {2}}{\theta t+(1-\theta)s-\tau}}}{(\theta t+(1-\theta)s-\tau)}y_{n}^{-\beta}( \tau-\frac{1}{2})^{-\alpha}d\theta dy_{n}d\tau\]
\[\leq c(t-s)\int_{\frac{1}{2}}^{s}\int_{0}^{1}(\theta t+(1-\theta)s-\tau)^{- \frac{1}{2}-\frac{\beta}{2}}(\tau-\frac{1}{2})^{-\alpha}d\theta d\tau\]
\[=\int_{\frac{1}{2}}^{s}(\tau-\frac{1}{2})^{-\alpha}(s-\tau)^{\frac{1}{2}- \frac{\beta}{2}}\int_{0}^{\frac{t-s}{s-\tau}}(\theta+1)^{-\frac{1}{2}-\frac{ \beta}{2}}d\theta d\tau\]
\[\leq\int_{\frac{1}{2}}^{2s-t}\int_{\frac{1}{2}}^{\frac{t-s}{s-\tau}}\cdots d \theta d\tau+\int_{2s-t}^{s}\int_{0}^{\frac{t-s}{s-\tau}}\cdots d\theta d \tau:=I_{11}+I_{12}.\]
Using \(\frac{t-s}{s-\tau}\leq 1\) for \(\tau<2s-t\), it follows for \(I_{11}\) that
\[I_{11} \leq c(t-s)\int_{\frac{1}{2}}^{2s-t}(\tau-\frac{1}{2})^{-\alpha}( s-\tau)^{-\frac{1}{2}-\frac{\beta}{2}}d\tau \tag{5.6}\] \[\leq c(t-s)^{\frac{1}{2}-\frac{\beta}{2}}\int_{\frac{1}{2}}^{2s-t }(\tau-\frac{1}{2})^{-\alpha}d\tau\leq c(t-s)^{\frac{3}{2}-\frac{\beta}{2}- \alpha}.\]
On the other hand, since \(\int_{0}^{\frac{t-s}{s-\tau}}(\theta+1)^{-\frac{1}{2}-\frac{\beta}{2}}d\theta \leq c(\frac{t-\tau}{s-\tau})^{\frac{1}{2}-\frac{\beta}{2}}\), we have
\[I_{12}\leq c(t-s)^{\frac{1}{2}-\frac{\beta}{2}}\int_{2s-t}^{s}(\tau-\frac{1}{2})^{-\alpha}d\tau\leq c(t-s)^{\frac{3}{2}-\frac{\beta}{2}-\alpha}. \tag{5.7}\]
Adding up estimates (5.6) and (5.7), we obtain
\[I_{1}\leq c(t-s)^{\frac{1}{2}\epsilon_{0}}. \tag{5.8}\]
Hence, it follows from (5.8) and (5.5) that
\[W\in L^{\infty}(\mathcal{R};C^{\frac{1}{2}\epsilon_{0}}(0,1)). \tag{5.9}\]
From (5.4) and (5.9), we obtain (1.21).
Next, we prove (1.22). We note first due to (3.8) that
\[L_{i2}(x,y,t-s)=D_{x_{i}}\int_{0}^{x_{n}}\int_{\mathbb{R}^{n-1}} \Gamma(x-y^{*}-z,t-s)D_{z_{2}}N(z)dz\] \[\quad=c(t-s)^{-\frac{1}{2}}\int_{0}^{x_{n}}e^{-\frac{(x_{n}+y_{n} -z_{n})^{2}}{t-s}}\big{(}D_{x_{i}}D_{x_{2}}N(x^{\prime}-y^{\prime},z_{n})+J_{2 0}(x^{\prime}-y^{\prime},t-s)\big{)}dz_{n}.\]
For \(x^{\prime}\in A_{i}\) and \(|y^{\prime}|<1\), we estimate the second term in the above as follows:
\[(t-s)^{-\frac{1}{2}}\left|\int_{0}^{x_{n}}e^{-\frac{(x_{n}+y_{n}-z_{n})^{2}}{t- s}}J_{20}(x^{\prime}-y^{\prime},t-s)dz_{n}\right|\leq c\int_{0}^{x_{n}}e^{- \frac{(y_{n}+z_{n})^{2}}{t-s}}dz_{n}\leq c(t-s)^{\frac{1}{2}}e^{-\frac{y_{n}^{2 }}{t-s}}.\]
On the other hand, the first term has a lower bound. Indeed, since \(D_{x_{i}}D_{x_{2}}N(x^{\prime}-y^{\prime},z_{n})\geq\phi_{i}(x^{\prime},x_{n})\) for \(x^{\prime}\in A_{i}\), \(|y^{\prime}|\leq 1\) and \(0<z_{n}\leq x_{n}\), the first term is lower bounded by
\[(t-s)^{-\frac{1}{2}}\int_{0}^{x_{n}}e^{-\frac{(x_{n}+y_{n}-z_{n}) ^{2}}{t-s}}\phi_{i}(x,z_{n})dz_{n}\] \[\geq c(t-s)^{-\frac{1}{2}}\int_{0}^{x_{n}}e^{-\frac{(yn+z_{n})^{2}}{t -s}}dz_{n}\phi_{i}(x^{\prime},x_{n})\] \[\geq ce^{-\frac{y_{n}^{2}}{t-s}}\int_{0}^{\frac{x_{n}}{\sqrt{t-s}}}e^{ -z_{n}^{2}}dz_{n}\phi_{i}(x^{\prime},x_{n}).\]
Recall \(W_{i}\) in (3.2). Since \(W_{i}(x^{\prime},0,t)=0\), we have for \(x^{\prime}\in A_{i}\)
\[|W_{i}(x,t)-W_{i}(x^{\prime},0,t)| \geq c\int_{\frac{1}{2}}^{t}\int_{0}^{1}e^{-\frac{y_{n}^{2}}{t-s} }y_{n}^{-\beta}(s-\frac{1}{2})^{-\alpha}dy_{n}ds|\phi_{i}(x,x_{n})|\] \[\quad-c\int_{\frac{1}{2}}^{t}\int_{0}^{1}(t-s)^{\frac{1}{2}}e^{- \frac{y_{n}^{2}}{t-s}}y_{n}^{-\beta}(s-\frac{1}{2})^{-\alpha}dy_{n}ds\] \[\geq c(t-\frac{1}{2})^{\frac{3}{2}-\frac{\beta}{2}-\alpha}-c(t- \frac{1}{2})^{2-\frac{\beta}{2}-\alpha}.\]
Therefore, for \(x^{\prime}\in A_{i}\) and \(x_{n}^{2}<t-\frac{1}{2}<4x_{n}^{2}\), we obtain
\[|W_{i}(x,t)-W_{i}(x^{\prime},0,t)|\geq c(x_{n}^{3-\beta-2\alpha}-x_{n}^{4- \beta-2\alpha}),\]
which implies that
\[W_{i}\notin L_{t}^{\infty}C_{x}^{\epsilon}(A_{i}\times(0,1)\times(0,1)) \tag{5.10}\]
for \(\epsilon>\epsilon_{0}=3-\beta-2\alpha\) if \(\epsilon_{0}\in(0,1)\).
Let \(\epsilon_{0}\in(1,2)\). From the proof of Proposition 3.7, for \(x\in\mathcal{R}\) and \(t>\frac{1}{2}\), we have
\[x_{n}^{2-\epsilon}|D_{x}D_{x_{2}}W_{i}^{\mathcal{G}}(x,t)| \leq cx_{n}^{2-\epsilon}(t-\frac{1}{2})^{1-\frac{\beta}{2}-\alpha}e^{-\frac{x_{n}^{2}}{8(t-\frac{1}{2})}}\] \[\leq cx_{n}^{4-\epsilon-\beta-2\alpha}\] \[<\infty\quad\text{for}\quad\epsilon_{0}<\epsilon<2<4-\beta-2\alpha.\]
This implies \(D_{x_{2}}W_{i}^{\mathcal{G}}\in L^{\infty}(0,1;C^{\epsilon}(\mathcal{R}))\).
From the mean value theorem and Proposition 3.3, for \(\sqrt{t-\frac{1}{2}}\leq c_{l}x_{n}\), we have
\[|\mathcal{B}_{i}^{w}(x^{\prime},x_{n},t)-\mathcal{B}_{i}^{w}(x^{ \prime},\frac{1}{2}x_{n},t)| =\frac{1}{2}x_{n}|D_{x_{n}}\mathcal{B}_{i}^{w}(x^{\prime},\xi_{n},t)|\quad\text{for some}\quad\xi_{n}\in[\frac{1}{2}x_{n},x_{n}]\] \[\geq c\frac{1}{2}x_{n}(t-\frac{1}{2})^{\frac{1}{2}-\frac{\beta}{2 }-\alpha}e^{-\frac{x_{n}^{2}}{2(t-\frac{1}{2})}}.\]
Therefore, for \(|x^{\prime}|\geq 2\) and \(c_{l}^{\prime}x_{n}<\sqrt{t-\frac{1}{2}}<c_{l}x_{n}\), we obtain
\[|\mathcal{B}_{i}^{w}(x^{\prime},x_{n},t)-\mathcal{B}_{i}^{w}(x^{\prime},\frac{1 }{2}x_{n},t)|\geq cx_{n}^{2-\beta-2\alpha}.\]
This implies that \(\mathcal{B}_{i}^{w}\notin L^{\infty}(0,1;C^{\epsilon-1}(\mathcal{R}))\) for \(\epsilon>\epsilon_{0}\). Since \(D_{x_{n}}W_{i}=D_{x_{2}}W_{i}^{\mathcal{G}}+\mathcal{B}_{i}^{w}\), we conclude that \(W_{i}\notin L^{\infty}(0,1;C^{\epsilon}(\mathcal{R}))\) for \(\epsilon>\epsilon_{0}\) if \(\epsilon_{0}\in(1,2)\).
Next, we will show that \(W_{i}\notin C_{t}^{\frac{\epsilon}{2}}L_{x}^{\infty}(A_{i}\times(0,1)\times(0,1))\). Indeed, for \(t>\frac{1}{2}\) we consider
\[D_{x_{i}}D_{x_{2}}\int_{0}^{x_{n}}\int_{\mathbb{R}^{n-1}}\Gamma(x-y^{*}-z,t)N(z)dz\] \[= \int_{0}^{x_{n}}\Gamma_{1}(x_{n}+y_{n}-z_{n},t)D_{x_{i}}D_{x_{2}}N(x^{\prime}-y^{\prime},z_{n})dz_{n}+\int_{0}^{x_{n}}\Gamma_{1}(x_{n}+y_{n}-z_{n},t)J_{20}(x^{\prime}-y^{\prime},t)dz_{n}\] \[= I_{1}+I_{2},\]
where we used Lemma 3.1. We first estimate \(I_{2}\) as follows:
\[|I_{2}|\leq c\int_{0}^{x_{n}}e^{-\frac{(x_{n}+y_{n}-z_{n})^{2}}{t}}dz_{n}\leq ce ^{-\frac{y_{n}^{2}}{t}}\int_{0}^{x_{n}}e^{-\frac{z_{n}^{2}}{t}}dz_{n}\leq ct^ {\frac{1}{2}}e^{-\frac{y_{n}^{2}}{t}}.\]
Secondly, for \(x_{n}^{2}\geq t-\frac{1}{2}\), we have a lower bound for \(I_{1}\).
\[I_{1}\geq ct^{-\frac{1}{2}}\int_{0}^{x_{n}}e^{-\frac{(x_{n}+y_{n}-z_{n})^{2}}{ t}}dz_{n}\geq ce^{-\frac{y_{n}^{2}}{t}}\int_{0}^{\frac{x_{n}}{\sqrt{t}}}e^{-z_{n}^ {2}}dz_{n}\geq ce^{-\frac{y_{n}^{2}}{t}}.\]
Collecting above estimates and noting that \(W_{i}(x,\frac{1}{2})=0\), we obtain for \(x_{n}^{2}\geq t-\frac{1}{2}\)
\[|W_{i}(x,t)|= \left|W_{i}(x,t)-W_{i}(x,\frac{1}{2})\right|=\left|\int_{\frac{1}{2}}^{t}\int_{\mathbb{R}_{+}^{n}}L_{i2}(x,y,t-\tau)y_{n}^{-\beta}(\tau-\frac{1}{2})^{-\alpha}dyd\tau\right|\] \[\geq c\int_{\frac{1}{2}}^{t}\int_{0}^{1}y_{n}^{-\beta}(\tau-\frac{1}{2})^{-\alpha}e^{-\frac{y_{n}^{2}}{(t-\tau)}}dy_{n}d\tau-c\int_{\frac{1}{2}}^{t}\int_{0}^{1}y_{n}^{-\beta}(\tau-\frac{1}{2})^{-\alpha}(t-\tau)^{\frac{1}{2}}e^{-\frac{y_{n}^{2}}{(t-\tau)}}dy_{n}d\tau\] \[\geq c\int_{\frac{1}{2}}^{t}(\tau-\frac{1}{2})^{-\alpha}(t-\tau)^{\frac{1}{2}-\frac{\beta}{2}}d\tau-c\int_{\frac{1}{2}}^{t}(\tau-\frac{1}{2})^{-\alpha}(t-\tau)^{1-\frac{\beta}{2}}d\tau \tag{5.11}\] \[\geq c\big{(}(t-\frac{1}{2})^{\frac{3}{2}-\frac{\beta}{2}-\alpha}-c(t-\frac{1}{2})^{2-\frac{\beta}{2}-\alpha}\big{)},\]
where we used (2.8). The estimate (5.11) implies that
\[W_{i}\notin L^{\infty}(A_{i}\times(0,1);\dot{C}^{\frac{1}{2}\epsilon}(0,1)) \tag{5.12}\]
for \(\epsilon>\epsilon_{0}\). From (5.10) and (5.12), we obtain (1.22). This completes the proof of Theorem 1.8.
## 6. Singular solutions for Navier-Stokes equations
In this section we provide the proof of Theorem 1.10.
We look for a solution of the Navier-Stokes equations (1.23) of the form \(u=w+v\) and \(p=\Pi+q\), where \((w,\Pi)\) is the singular solution of the Stokes system (1.1) satisfying (1.29). Thus, it suffices to establish the existence of a solution that is less singular than \((w,\Pi)\) for the following perturbed Navier-Stokes equations in \(\mathbb{R}_{+}^{n}\times(0,1)\):
\[v_{t}-\Delta v+\nabla q+\operatorname{div}\,\,(v\otimes v+v\otimes w+w\otimes v )=-\operatorname{div}\,(w\otimes w),\quad\operatorname{div}v=0 \tag{6.1}\]
with homogeneous initial and boundary data, i.e.
\[v(x,0)=0,\qquad v(x,t)=0\text{ on }\{x_{n}=0\}. \tag{6.2}\]
The first step is to construct a solution of (6.1)-(6.2) in the class of functions \(L^{r}(\mathbb{R}^{n}_{+}\times(0,1))\cap L^{\infty}(0,1;L^{2}(\mathbb{R}^{n}_{+ }))\), where \(r\) is the number imposed in (1.24). In order to do that, we use the following proposition (see [2, Theorem 1.2]).
**Proposition 6.1**.: _Let \(1<p<\infty,\,1<q<\infty\), \(p\geq p_{1}\) and \(q\geq q_{1}\). Suppose that \(F\in L^{q_{1}}(0,1;L^{p_{1}}(\mathbb{R}^{n}_{+}))\) with \(F|_{x_{n}=0}=0\) and \((\frac{n}{2p_{1}}-\frac{n}{2p})+\frac{1}{q_{1}}-\frac{1}{q}\leq\frac{1}{2}\). Then, there is a unique weak solution \(v\in L^{q}(0,1;L^{p}(\mathbb{R}^{n}_{+}))\) to the Stokes equation (1.1) with \(f=\mathrm{div}F\), which satisfies the estimate_
\[\|v\|_{L^{q}(0,1;L^{p}(\mathbb{R}^{n}_{+}))}\leq c\|F\|_{L^{q_{1}}(0,1;L^{p_{1 }}(\mathbb{R}^{n}_{+}))}. \tag{6.3}\]
Now, we adopt an iterative scheme for (6.1), which is formulated as follows: For a positive integer \(m\geq 1\)
\[v^{m+1}_{t}-\Delta v^{m+1}+\nabla q^{m+1}=-\mathrm{div}\,\left(v ^{m}\otimes v^{m}+v^{m}\otimes w+w\otimes v^{m}+w\otimes w\right),\] \[\mathrm{div}\,v^{m+1}=0\]
with conditions (6.2) i.e.
\[v^{m+1}(x,0)=0\qquad v^{m+1}(x,t)=0\text{ on }\{x_{n}=0\}.\]
We set \(v^{1}=0\).
For convenience, we denote \(L^{q}(0,1;L^{p}(\mathbb{R}^{n}_{+}))=L^{q}_{t}L^{p}_{x}\). Furthermore, if \(p=q\), then we abbreviate \(L^{q}_{t}L^{q}_{x}\) as \(L^{q}\) when no confusion arises. Due to Proposition 6.1, we have
\[\|v^{2}\|_{L^{2}}\leq c\||w|^{2}\|_{L^{2}}\leq c\|w\|^{2}_{L^{4}},\qquad\|v^{2}\|_{L^{r}}\leq c\||w|^{2}\|_{L^{\frac{(n+2)r}{n+2+r}}}\leq c\|w\|^{2}_{L^{\frac{2(n+2)r}{n+2+r}}}, \tag{6.4}\]
\[\|v^{m+1}\|_{L^{r}}\leq c\big{(}\|w\otimes w\|_{L^{\frac{(n+2)r}{n+2+r}}}+\|v ^{m}\otimes w\|_{L^{\frac{(n+2)r}{n+2+r}}}+\|v^{m}\otimes v^{m}\|_{L^{\frac{(n +2)r}{n+2+r}}}\big{)}, \tag{6.5}\]
\[\|v^{m+1}\|_{L^{2}}\leq c\big{(}\|w\otimes w\|_{L^{2}}+\|v^{m}\otimes w\|_{L^{2}}+\|v^{m}\otimes v^{m}\|_{L^{2}}\big{)}. \tag{6.6}\]
Since \(n+2<r\), we take \(0<\eta_{1}=\frac{nr}{(r-2)(n+2)}\in(0,1)\). Then, we have
\[\|v^{m}\otimes v^{m}\|_{L^{\frac{(n+2)r}{n+2+r}}}\leq\|v^{m}\|_{L^{r}}\|v^{m}\|_{L^{n+2}}\leq\|v^{m}\|_{L^{r}}^{1+\eta_{1}}\|v^{m}\|_{L^{2}}^{1-\eta_{1}}.\]
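For the reader's convenience, the value of \(\eta_{1}\) comes from the Lebesgue interpolation identity
\[\frac{1}{n+2}=\frac{\eta_{1}}{r}+\frac{1-\eta_{1}}{2},\qquad\text{that is,}\qquad\eta_{1}\Big{(}\frac{1}{2}-\frac{1}{r}\Big{)}=\frac{1}{2}-\frac{1}{n+2}=\frac{n}{2(n+2)},\]
which gives \(\eta_{1}=\frac{nr}{(r-2)(n+2)}\); the assumption \(n+2<r\) guarantees \(\eta_{1}<1\).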
Let \(r_{2}\) be the number with \(\frac{1}{4}=\frac{1}{r}+\frac{1}{r_{2}}\). Since \(r>8\), it follows that \(2<r_{2}<r\) and thus
\[\|v^{m}\otimes v^{m}\|_{L^{2}}\leq\|v^{m}\|_{L^{4}}^{2}\leq\|v^{m}\|_{L^{r}}^{2(1-\eta_{2})}\|v^{m}\|_{L^{2}}^{2\eta_{2}},\]
where \(\eta_{2}=\frac{r(r_{2}-2)}{r_{2}(r-2)}\). We note that \(\eta_{2}\geq\frac{1}{2}\). Using computations above, (6.5) and (6.6) are controlled as follows:
\[\|v^{m+1}\|_{L^{r}}\leq c\big{(}\|w\|_{L^{r}}^{1+\eta_{1}}\|w\|_{L^{2}}^{1-\eta_{1}}+\|v^{m}\|_{L^{r}}^{1+\eta_{1}}\|w\|_{L^{2}}^{1-\eta_{1}}+\|v^{m}\|_{L^{r}}^{1+\eta_{1}}\|v^{m}\|_{L^{2}}^{1-\eta_{1}}\big{)}, \tag{6.7}\] \[\|v^{m+1}\|_{L^{2}}\leq c\big{(}\|w\|_{L^{4}}^{2}+\|v^{m}\|_{L^{r}}^{2(1-\eta_{2})}\|v^{m}\|_{L^{2}}^{2\eta_{2}}+\|w\|_{L^{r}}^{2(1-\eta_{2})}\|v^{m}\|_{L^{2}}^{2\eta_{2}}\big{)}. \tag{6.8}\]
By (2.6), we have \(A:=\|w\|_{L^{r}}+\|w\|_{L^{2}}\leq ca\), where \(a>0\) is defined in **Assumption** 1.
We take \(a>0\) small such that \(A<\frac{1}{4c}\), where \(c\) is the constant in (6.4)-(6.8), so that
\[\|v^{2}\|_{L^{2}}+\|v^{2}\|_{L^{r}}<A.\]
Moreover, iterative arguments show for any \(m\) that
\[\|v^{m+1}\|_{L^{2}}+\|v^{m+1}\|_{L^{r}}\] \[\quad\leq c\big{(}\|w\|_{L^{r}}^{2(1-\eta_{2})}\|w\|_{L^{2}}^{2 \eta_{2}}+\|v^{m}\|_{L^{r}}^{2(1-\eta_{2})}\|v^{m}\|_{L^{2}}^{2\eta_{2}}+\|w\|_ {L^{r}}^{2(1-\eta_{2})}\|v^{m}\|_{L^{2}}^{2\eta_{2}}\big{)} \tag{6.9}\] \[\quad\leq 4cA^{2}<A.\]
Next, we will show that \(v^{m}\) converges in \(L^{2}\cap L^{r}\). For simplicity, we denote \(V^{m+1}:=v^{m+1}-v^{m}\) and \(Q^{m+1}:=q^{m+1}-q^{m}\) for \(m\geq 1\). We then see that \((V^{m+1},Q^{m+1})\) solves
\[V^{m+1}_{t}-\Delta V^{m+1}+\nabla Q^{m+1}=-\mathrm{div}\,\left(V ^{m}\otimes v^{m}+v^{m-1}\otimes V^{m}+V^{m}\otimes w+w\otimes V^{m}\right),\] \[\mathrm{div}\,V^{m+1}=0\]
with homogeneous initial and boundary data, i.e. \(V^{m+1}(x,0)=0\) and \(V^{m+1}(x,t)=0\) on \(\{x_{n}=0\}\). We take \(a>0\) sufficiently small such that \(A<\frac{1}{6c}\). Since \(\eta_{2}\geq\frac{1}{2}\), it follows from (6.7), (6.8) and (6.9) that
\[\|V^{m+1}\|_{L^{2}}+\|V^{m+1}\|_{L^{r}} \leq c\big{(}\|v^{m}\|_{L^{r}}^{2(1-\eta_{2})}\|V^{m}\|_{L^{2}}^{ 2\eta_{2}}+\|w\|_{L^{r}}^{2(1-\eta_{2})}\|V^{m}\|_{L^{2}}^{2\eta_{2}}\big{)}\] \[\leq 3cA^{2(1-\eta_{2})}\|V^{m}\|_{L^{2}}^{2\eta_{2}}\] \[\leq 3cA\|V^{m}\|_{L^{2}}<\frac{1}{2}\left\|V^{m}\right\|_{L^{2}}. \tag{6.10}\]
Hence, we obtain
\[\|V^{m+1}\|_{L^{2}}+\|V^{m+1}\|_{L^{r}}<\frac{1}{2}\big{(}\left\|V^{m}\right\| _{L^{2}}+\|V^{m}\|_{L^{r}}\big{)}.\]
Therefore, there exists \(v\in L^{r}\cap L^{2}\) such that \(v^{m}\) converges to \(v\) in \(L^{r}\cap L^{2}\); moreover, \(v\) solves (6.1)-(6.2) in the sense of distributions with a corresponding pressure \(q\).
For the uniqueness, we assume that \(v_{1}\) is another solution of (6.1) with initial-boundary condition (6.2) such that \(\|v_{1}\|_{L^{r}}+\|v_{1}\|_{L^{2}}<A\). We denote \(V:=v_{1}-v\) and \(Q:=q_{1}-q\). We then see that \((V,Q)\) solves
\[V_{t}-\Delta V+\nabla Q=-\mathrm{div}\,\left(V\otimes v_{1}+v_{1}\otimes V+V \otimes w+w\otimes V\right),\qquad\mathrm{div}\,V=0\]
with homogeneous initial and boundary data, i.e. \(V(x,0)=0\) and \(V(x,t)=0\) on \(\{x_{n}=0\}\). By the same estimate as in (6.10), we obtain
\[\|V\|_{L^{2}}+\|V\|_{L^{r}}<\frac{1}{2}\big{(}\left\|V\right\|_{L^{2}}+\|V\|_{L^{ r}}\big{)}.\]
Hence, we obtain \(V\equiv 0\).
By uniqueness, \(v\) is represented by
\[v(x,t)=\int_{0}^{t}\int_{\mathbb{R}^{n}_{+}}K(x,y,t-s)\mathbb{P}\mathrm{div}\, F(y,s)dyds, \tag{6.11}\]
where \(F=(v\otimes v+v\otimes w+w\otimes v+w\otimes w)\) and \(K\) is introduced in (1.9). Since \(\left.F\right|_{x_{n}=0}=0\), it is known that there exists a tensor \(\mathcal{F}\) such that \(\mathbb{P}\mathrm{div}\,F=\nabla\cdot\mathcal{F}\) with \(\mathcal{F}_{in}|_{x_{n}=0}=0,\,1\leq i\leq n\), and \(\|\mathcal{F}\|_{L^{p}(\mathbb{R}^{n}_{+})}\leq c\|F\|_{L^{p}(\mathbb{R}^{n}_{+})},\,1<p<\infty\) (see [9] and [12]). Hence, we have
\[v(x,t)=-\int_{0}^{t}\int_{\mathbb{R}^{n}_{+}}\nabla K(x,y,t-s)\mathcal{F}(y,s )dyds\]
and from (2.5), we have
\[|v(x,t)| \leq c\int_{0}^{t}(t-s)^{-\frac{1}{2}-\frac{n}{r}}\|\mathcal{F}(s)\|_{L^{\frac{r}{2}}(\mathbb{R}^{n}_{+})}ds\] \[\leq c\int_{0}^{t}(t-s)^{-\frac{1}{2}-\frac{n}{r}}\|F(s)\|_{L^{\frac{r}{2}}(\mathbb{R}^{n}_{+})}ds\] \[\leq ct^{1-\frac{n+2}{r}}\|F\|_{L^{\frac{r}{2}}}.\]
Since \(r>n+2\), we get \(\|v\|_{L^{\infty}}<\infty\). Using the complex interpolation, we get \(\|v\|_{L^{p}}<\infty\) for \(2\leq p\leq\infty\). This implies that
\[\|\nabla v\|_{L^{\frac{r}{2}}}\leq c\|w\otimes w\|_{L^{\frac{r}{2} }}+\|v\otimes w\|_{L^{\frac{r}{2}}}+\|v\otimes v\|_{L^{\frac{r}{2}}}<\infty,\] \[\|\nabla v\|_{L^{2}}\leq c\|w\otimes w\|_{L^{2}}^{2}+\|w\otimes v \|_{L^{2}}^{2}+\|v\otimes v\|_{L^{2}}^{2}<\infty. \tag{6.12}\]
From representation (6.11) and estimate (2.5), we have
\[\|v(t)\|_{L^{2}(\mathbb{R}^{n}_{+})} \leq c\int_{0}^{t}\|\mathbb{P}\mathrm{div}F(s)\|_{L^{2}(\mathbb{R} ^{n}_{+})}ds\] \[\leq c\int_{0}^{t}\big{(}\|(\nabla w)w\|_{L^{2}(\mathbb{R}^{n}_{+ })}+\|(\nabla v)w\|_{L^{2}(\mathbb{R}^{n}_{+})}+\|(\nabla w)v\|_{L^{2}(\mathbb{ R}^{n}_{+})}+\|(\nabla v)v\|_{L^{2}(\mathbb{R}^{n}_{+})}\big{)}ds\] \[\leq c\big{(}\|\nabla w\|_{L^{2}_{t}L^{4}_{x}}\|w\|_{L^{2}L^{4}_{ x}(\mathbb{R}^{n}_{+})}+\|\nabla w\|_{L^{2}_{t}L^{4}_{x}}\|v\|_{L^{2}L^{4}_{x}( \mathbb{R}^{n}_{+})}\] \[\qquad+\|\nabla v\|_{L^{2}_{t}L^{4}_{x}}\|w\|_{L^{2}L^{4}_{x}( \mathbb{R}^{n}_{+})}+\|\nabla v\|_{L^{2}_{t}L^{4}_{x}}\|v\|_{L^{2}L^{4}_{x}( \mathbb{R}^{n}_{+})}\big{)}\] \[\leq c\big{(}\|\nabla w\|_{L^{4}}\|w\|_{L^{4}(\mathbb{R}^{n}_{+})}+ \|\nabla w\|_{L^{4}}\|v\|_{L^{4}(\mathbb{R}^{n}_{+})}\] \[\qquad+\|\nabla v\|_{L^{4}}\|w\|_{L^{4}(\mathbb{R}^{n}_{+})}+\| \nabla v\|_{L^{4}}\|v\|_{L^{4}(\mathbb{R}^{n}_{+})}\big{)}.\]
Since \(r>8\), from (6.12), we have \(v\in L^{\infty}_{t}L^{2}_{x}\). Hence, \(v\) is weak solution, i.e. \(v\in L^{\infty}_{t}L^{2}_{x}\cap L^{2}_{t}H^{1}_{x}\).
We take \(s_{0}\) with \(\frac{n+2}{2}<s_{0}<s\) such that \(\frac{1}{r}<\frac{1}{r_{1}}:=\frac{1}{s_{0}}-\frac{1}{s}\). We note that \(s_{0}<r_{1}<r\). Then, we have
\[\|\nabla^{2}v\|_{L^{s_{0}}}+\|D_{t}v\|_{L^{s_{0}}}+\|\nabla\pi\|_{L ^{s_{0}}}\] \[\leq c\|(\nabla w)w\|_{L^{s_{0}}}+\|(\nabla w)v\|_{L^{s_{0}}}+\|( \nabla v)w\|_{L^{s_{0}}}+\|(\nabla v)v\|_{L^{s_{0}}}\] \[\leq c\|\nabla w\|_{L^{s}}\|w\|_{L^{r_{1}}}+\|\nabla w\|_{L^{s}}\|v\|_ {L^{r_{1}}}+\|\nabla v\|_{L^{s}}\|w\|_{L^{r_{1}}}+\|\nabla v\|_{L^{s}}\|v\|_{L^ {r_{1}}}<\infty. \tag{6.13}\]
We note that
\[w,v\in L^{\infty}(A\times(0,1)),\qquad\nabla v\in L^{s_{0}}(A\times(0,1)),\] \[\nabla w\in L^{r_{-}}(A\times(0,1))\quad\text{ for all }\ r_{-}<r_{0}. \tag{6.14}\]
Since \(5<r_{0}\), we take \(r_{-}\) with \(5<r_{-}<r_{0}\) such that \(\operatorname{div}F\in L^{r_{-}}(A\times(0,1))\). Then, applying (6.13) and (6.14) in the proof of Theorem 1.5 in [4], we have
\[\nabla v\in C_{t}^{\frac{1}{2}(1-\frac{5}{r_{-}})}C_{x}^{1-\frac{5}{r_{-}}}(A \times(0,1)). \tag{6.15}\]
This implies that \(\nabla v\in L^{r_{0}}(A\times(0,1))\). We then set \(u:=v+w\) and \(p=\Pi+q\), which is a weak solution of the Navier-Stokes equations in \(\mathbb{R}_{+}^{n}\times(0,1)\). However,
\[\|\nabla u\|_{L^{r_{0}}(A\times(0,1))}\geq\|\nabla w\|_{L^{r_{0}}(A \times(0,1))}-\|\nabla v\|_{L^{r_{0}}(A\times(0,1))}\geq\|\nabla w\|_{L^{r_{0} }(A\times(0,1))}-c.\]
The right-hand side is unbounded, and thus we obtain \(\|\nabla u\|_{L^{r_{0}}(A\times(0,1))}=\infty\).
## 7. Appendix
### Proof of Lemma 2.2
Since \(\frac{1}{2}(x_{n}+y_{n})^{2}\leq x_{n}^{2}+y_{n}^{2}\leq 2(x_{n}+y_{n})^{2}\), for \(t>\frac{1}{2}\), we note that
\[\int_{\frac{1}{2}}^{t}\int_{0}^{2}y_{n}^{-\beta}(\tau-\frac{1}{2} )^{-\alpha}(t-\tau)^{\gamma}e^{-\frac{(x_{n}+y_{n})^{2}}{t-\tau}}dy_{n}d\tau\] \[= \int_{\frac{1}{2}}^{t}(\tau-\frac{1}{2})^{-\alpha}(t-\tau)^{\frac {1}{2}-\frac{\beta}{2}+\gamma}e^{-\frac{x_{n}^{2}}{t-\tau}}\int_{0}^{\frac{2} {\sqrt{t-\tau}}}y_{n}^{-\beta}e^{-y_{n}^{2}}dy_{n}d\tau\] \[\approx \int_{\frac{1}{2}}^{t}(\tau-\frac{1}{2})^{-\alpha}(t-\tau)^{\frac {1}{2}-\frac{\beta}{2}+\gamma}e^{-\frac{x_{n}^{2}}{t-\tau}}d\tau\] \[= \int_{\frac{1}{2}}^{\frac{1}{2}(t+\frac{1}{2})}\cdots d\tau+\int _{\frac{1}{2}(t+\frac{1}{2})}^{t}\cdots d\tau:=I_{1}+I_{2}, \tag{7.1}\]
where we used that \(\int_{0}^{\frac{2}{\sqrt{t-\tau}}}y_{n}^{-\beta}e^{-y_{n}^{2}}dy_{n}\approx 1\), since \(0\leq t-\tau\leq 1\). It is direct that \(\frac{1}{2}(t-\frac{1}{2})\leq t-\tau\leq t-\frac{1}{2}\) for \(\frac{1}{2}<\tau<\frac{1}{2}(t+\frac{1}{2})\). Therefore, the term \(I_{1}\) can be estimated as
\[c(t-\frac{1}{2})^{\frac{1}{2}-\frac{\beta}{2}+\gamma}e^{-\frac{2x_{n}^{2}}{t-\frac{1}{2}}}\int_{\frac{1}{2}}^{\frac{1}{2}(t+\frac{1}{2})}(\tau-\frac{1}{2})^{-\alpha}d\tau\leq I_{1}\leq c(t-\frac{1}{2})^{\frac{1}{2}-\frac{\beta}{2}+\gamma}e^{-\frac{x_{n}^{2}}{t-\frac{1}{2}}}\int_{\frac{1}{2}}^{\frac{1}{2}(t+\frac{1}{2})}(\tau-\frac{1}{2})^{-\alpha}d\tau.\]
Since \(\int_{\frac{1}{2}}^{\frac{1}{2}(t+\frac{1}{2})}(\tau-\frac{1}{2})^{-\alpha}d\tau\approx(t-\frac{1}{2})^{1-\alpha}\), we obtain
\[c(t-\frac{1}{2})^{\frac{3}{2}-\alpha-\frac{\beta}{2}+\gamma}e^{-2\frac{x_{n}^{2 }}{t-\frac{1}{2}}}\leq I_{1}\leq c(t-\frac{1}{2})^{\frac{3}{2}-\alpha-\frac{ \beta}{2}+\gamma}e^{-\frac{x_{n}^{2}}{t-\frac{1}{2}}}. \tag{7.2}\]
On the other hand, noting that \(\frac{1}{2}(t-\frac{1}{2})\leq\tau-\frac{1}{2}\leq t-\frac{1}{2}\) for \(\frac{1}{2}(t+\frac{1}{2})<\tau<t\), we have
\[I_{2} \approx c(t-\frac{1}{2})^{-\alpha}\int_{\frac{1}{2}(t+\frac{1}{2})}^{t}(t-\tau)^{\frac{1}{2}-\frac{\beta}{2}+\gamma}e^{-\frac{x_{n}^{2}}{t-\tau}}d\tau\] \[=c(t-\frac{1}{2})^{-\alpha}\int_{0}^{\frac{1}{2}(t-\frac{1}{2})}\tau^{\frac{1}{2}-\frac{\beta}{2}+\gamma}e^{-\frac{x_{n}^{2}}{\tau}}d\tau\] \[=c(t-\frac{1}{2})^{-\alpha}x_{n}^{3-\beta+2\gamma}\int_{\frac{2x_{n}^{2}}{t-\frac{1}{2}}}^{\infty}s^{-\frac{5}{2}+\frac{\beta}{2}-\gamma}e^{-s}ds,\]
where we used the change of variables \(\tau=\frac{x_{n}^{2}}{s}\) in the last equality.
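More explicitly, under the substitution \(\tau=\frac{x_{n}^{2}}{s}\) (so that \(d\tau=-\frac{x_{n}^{2}}{s^{2}}ds\) and the limits \(\tau\in(0,\frac{1}{2}(t-\frac{1}{2}))\) become \(s\in(\frac{2x_{n}^{2}}{t-\frac{1}{2}},\infty)\)), the last equality reads
\[\int_{0}^{\frac{1}{2}(t-\frac{1}{2})}\tau^{\frac{1}{2}-\frac{\beta}{2}+\gamma}e^{-\frac{x_{n}^{2}}{\tau}}d\tau=\int_{\frac{2x_{n}^{2}}{t-\frac{1}{2}}}^{\infty}\Big{(}\frac{x_{n}^{2}}{s}\Big{)}^{\frac{1}{2}-\frac{\beta}{2}+\gamma}e^{-s}\,\frac{x_{n}^{2}}{s^{2}}\,ds=x_{n}^{3-\beta+2\gamma}\int_{\frac{2x_{n}^{2}}{t-\frac{1}{2}}}^{\infty}s^{-\frac{5}{2}+\frac{\beta}{2}-\gamma}e^{-s}ds.\]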
\(\bullet\) (**Case \(\gamma-\frac{\beta}{2}\geq-\frac{3}{2}\)**) We treat the cases \(\frac{2x_{n}^{2}}{t-\frac{1}{2}}\leq 1\) and \(\frac{2x_{n}^{2}}{t-\frac{1}{2}}\geq 1\) separately. Firstly, if \(\frac{2x_{n}^{2}}{t-\frac{1}{2}}\leq 1\), it follows that
\[I_{2}=c(t-\frac{1}{2})^{-\alpha}x_{n}^{3-\beta+2\gamma}\int_{\frac{2x_{n}^{2}}{t-\frac{1}{2}}}^{1}s^{-\frac{5}{2}+\frac{\beta}{2}-\gamma}e^{-s}ds+c(t-\frac{1}{2})^{-\alpha}x_{n}^{3-\beta+2\gamma}\int_{1}^{\infty}s^{-\frac{5}{2}+\frac{\beta}{2}-\gamma}e^{-s}ds\]
\[\approx c(t-\frac{1}{2})^{-\alpha}x_{n}^{3-\beta+2\gamma}\big{(}\frac{2x_{n}^ {2}}{t-\frac{1}{2}}\big{)}^{-\frac{3}{2}+\frac{\beta}{2}-\gamma}+c(t-\frac{1} {2})^{-\alpha}x_{n}^{3-\beta+2\gamma}\]
\[\approx c(t-\frac{1}{2})^{\frac{3}{2}-\frac{\beta}{2}-\alpha+\gamma}\approx c (t-\frac{1}{2})^{\frac{3}{2}-\frac{\beta}{2}-\alpha+\gamma}e^{-\frac{2x_{n}^ {2}}{t-\frac{1}{2}}}, \tag{7.3}\]
where we used \(e^{-1}\leq e^{-\frac{x_{n}^{2}}{t-\frac{1}{2}}}\leq 1\).
Next, we consider the case \(\frac{2x_{n}^{2}}{t-\frac{1}{2}}\geq 1\). We note via l'Hospital's rule that
\[\lim_{a\to\infty}\frac{\int_{a}^{\infty}\tau^{-\frac{5}{2}+\frac{\beta}{2}-\gamma}e^{-\tau}d\tau}{a^{-\frac{5}{2}+\frac{\beta}{2}-\gamma}e^{-a}}=\lim_{a\to\infty}\frac{-a^{-\frac{5}{2}+\frac{\beta}{2}-\gamma}e^{-a}}{(-\frac{5}{2}+\frac{\beta}{2}-\gamma)a^{-\frac{7}{2}+\frac{\beta}{2}-\gamma}e^{-a}-a^{-\frac{5}{2}+\frac{\beta}{2}-\gamma}e^{-a}}=1. \tag{7.4}\]
Hence, due to (7.4), it is straightforward that
\[I_{2}\approx(t-\frac{1}{2})^{\frac{3}{2}-\alpha-\frac{\beta}{2}+\gamma}e^{- \frac{2x_{n}^{2}}{t-\frac{1}{2}}}.\]
\(\bullet\) (**Case \(\gamma-\frac{\beta}{2}<-\frac{3}{2}\)**) In case that \(\frac{2x_{n}^{2}}{t-\frac{1}{2}}\leq 1\), it is direct that
\[I_{2}\approx c(t-\frac{1}{2})^{-\alpha}x_{n}^{3-\beta+2\gamma}.\]
If \(\frac{2x_{n}^{2}}{t-\frac{1}{2}}\geq 1\), it follows from (7.4) that
\[I_{2} \approx c(t-\frac{1}{2})^{-\alpha}x_{n}^{3-\beta+2\gamma}(\frac{x_{n }^{2}}{t-\frac{1}{2}})^{-\frac{5}{2}+\frac{\beta}{2}-\gamma}e^{-\frac{2x_{n}^ {2}}{t-\frac{1}{2}}}\] \[\approx c(t-\frac{1}{2})^{-\alpha}x_{n}^{3-\beta+2\gamma}e^{- \frac{2x_{n}^{2}}{t-\frac{1}{2}}}. \tag{7.5}\]
Hence, from (7.2), (7.3) and (7.5), we complete the proof of Lemma 2.2.
### Proof of Lemma 3.1
First, we assume that \(l=0\) and \(D_{x^{\prime}}^{k}=D_{x^{\prime}}^{k-1}D_{x_{i}},\ k\geq 1,\ 1\leq i\leq n-1\). So,
\[D_{x^{\prime}}^{k}\int_{\mathbb{R}^{n-1}}\Gamma^{\prime}(x^{ \prime}-z^{\prime},t)N(z^{\prime},x_{n})dz^{\prime}=D_{x^{\prime}}^{k-1}\int_ {\mathbb{R}^{n-1}}\Gamma^{\prime}(x^{\prime}-z^{\prime},t)D_{z_{i}}N(z^{\prime },x_{n})dz^{\prime}.\]
We divide \(\mathbb{R}^{n-1}\) by three disjoint sets \(D_{1},D_{2}\) and \(D_{3}\), which are defined by
\[D_{1}=\left\{z^{\prime}\in\mathbb{R}^{n-1}:|x^{\prime}-z^{\prime }|\leq\frac{1}{10}|x^{\prime}|\right\},\]
\[D_{2}=\left\{z^{\prime}\in\mathbb{R}^{n-1}:|z^{\prime}|\leq\frac{ 1}{10}|x^{\prime}|\right\},\qquad D_{3}=\mathbb{R}^{n-1}\setminus(D_{1}\cup D _{2}).\]
We then split the following integral into three terms as follows:
\[\int_{\mathbb{R}^{n-1}}D_{z^{\prime}}^{k-1}\Gamma^{\prime}(x^{ \prime}-z^{\prime},t)D_{z_{i}}N(z^{\prime},x_{n})dz^{\prime}=\int_{D_{1}} \cdots+\int_{D_{2}}\cdots+\int_{D_{3}}\cdots:=J_{1}+J_{2}+J_{3}. \tag{7.6}\]
Noting that \(|D_{z^{\prime}}^{k}\Gamma^{\prime}(x^{\prime}-z^{\prime},t)|\leq ct^{-\frac{ n+k-1}{2}}e^{-\frac{|x^{\prime}|^{2}}{2t}}\) for \(z^{\prime}\in D_{2}\) and \(\int_{D_{2}}D_{z_{i}}N(z^{\prime},x_{n})dz^{\prime}=0\), we have
\[|J_{2}| =\left|\int_{D_{2}}D_{z_{i}}N(z^{\prime},x_{n})\big{(}D_{z^{\prime}}^{k}\Gamma^{\prime}(x^{\prime}-z^{\prime},t)-D_{x^{\prime}}^{k}\Gamma^{\prime}(x^{\prime},t)\big{)}dz^{\prime}\right|\] \[\leq ct^{-\frac{n-1}{2}-\frac{k}{2}}e^{-\frac{|x^{\prime}|^{2}}{2t}}\int_{D_{2}}\frac{1}{|z^{\prime}|^{n-2}}dz^{\prime}\] \[\leq ct^{-\frac{n+k-1}{2}}|x^{\prime}|e^{-\frac{|x^{\prime}|^{2}}{2t}}\leq c|x^{\prime}|^{-\frac{n+k-1}{2}}t^{\frac{1}{2}}\leq ct^{\frac{1}{2}}, \tag{7.7}\]
where the mean value theorem is used. Here we also used that \(e^{-\frac{|x^{\prime}|^{2}}{2t}}\leq c(\frac{|x^{\prime}|^{2}}{t})^{-\frac{n+k} {2}}\). On the other hand, the term \(J_{3}\) is controlled as follows:
\[|J_{3}|\leq\frac{c}{|x^{\prime}|^{n-1}}t^{-\frac{n-1}{2}-\frac{k- 1}{2}}\int_{\{|z^{\prime}|\geq\frac{1}{10}|x^{\prime}|\}}e^{-\frac{|x^{\prime}| ^{2}}{t}}dz^{\prime}\leq\frac{c}{|x^{\prime}|^{n-1}}t^{-\frac{k-1}{2}}e^{-\frac {|x^{\prime}|^{2}}{t}}\leq ct^{\frac{1}{2}}, \tag{7.8}\]
where we used that \(e^{-\frac{|x^{\prime}|^{2}}{2t}}\leq c(\frac{|x^{\prime}|^{2}}{t})^{-\frac{k}{2}}\).
Now, it remains to estimate \(J_{1}\). Due to the integration by parts, it follows that
\[J_{1} =\int_{D_{1}}D_{x^{\prime}}^{k-1}\Gamma^{\prime}(x^{\prime}-z^{ \prime},t)D_{z_{i}}N(z^{\prime},x_{n})dz^{\prime}\] \[=(-1)^{k-1}\int_{D_{1}}D_{z^{\prime}}^{k-1}\Gamma^{\prime}(x^{ \prime}-z^{\prime},t)D_{z_{i}}N(z^{\prime},x_{n})dz^{\prime}\] \[=(-1)^{k-1}\sum_{1\leq k^{\prime}\leq k-2}(-1)^{k^{\prime}}\int_{ \partial D_{1}}D_{z^{\prime}}^{k^{\prime}}\Gamma^{\prime}(x^{\prime}-z^{\prime },t)D_{z^{\prime}}^{k-1-k^{\prime}}D_{z_{i}}N(z^{\prime},x_{n})\mathbf{n}_{k^{ \prime}}d\sigma(z^{\prime})\] \[+\int_{D_{1}}\Gamma^{\prime}(x^{\prime}-z^{\prime},t)D_{z^{ \prime}}^{k-1}D_{z_{i}}N(z^{\prime},x_{n})dz^{\prime}\] \[:=J_{11}+J_{12}.\]
Since the magnitude of \(z^{\prime}\in\partial D_{1}\) is comparable to \(|x^{\prime}|\), \(J_{11}\) is controlled as follows:
\[|J_{11}| \leq c\sum_{1\leq k^{\prime}\leq k-2}t^{-\frac{n-1}{2}-\frac{k^{ \prime}}{2}}e^{-\frac{|z^{\prime}|^{2}}{t}}\int_{\partial D_{1}}|D_{z^{\prime} }^{k-1-k^{\prime}}D_{z_{i}}N(z^{\prime},x_{n})\mathbf{n}_{k^{\prime}}|d\sigma (z^{\prime})\] \[\leq c\sum_{1\leq k^{\prime}\leq k-2}t^{-\frac{n-1}{2}-\frac{k^{ \prime}}{2}}e^{-\frac{|z^{\prime}|^{2}}{t}}\int_{\partial D_{1}}|x^{\prime}|^{ -n-k+k^{\prime}}d\sigma(z^{\prime}) \tag{7.9}\] \[\leq c\sum_{1\leq k^{\prime}\leq k-2}t^{-\frac{n-1}{2}-\frac{k^{ \prime}}{2}}|x^{\prime}|^{-k+1+k^{\prime}}e^{-\frac{|x^{\prime}|^{2}}{t}}\leq ct ^{\frac{1}{2}},\]
where we used that \(e^{-\frac{|x^{\prime}|^{2}}{t}}\leq c(\frac{|x^{\prime}|^{2}}{t})^{-\frac{n+k^ {\prime}}{2}}\). Meanwhile, we decompose \(J_{12}\) in the following way.
\[J_{12} =\int_{D_{1}}\Gamma^{\prime}(x^{\prime}-z^{\prime},t)D_{z^{\prime }}^{k-1}D_{z_{i}}N(z^{\prime},x_{n})dz^{\prime}\] \[=\int_{\{[z^{\prime}|\leq\frac{1}{10}\frac{|x^{\prime}|}{\sqrt{t }}\}}\Gamma^{\prime}(z^{\prime},1)\Big{(}D_{z^{\prime}}^{k-1}D_{x_{i}}N(x^{ \prime}-\sqrt{t}z^{\prime},x_{n})-D_{x^{\prime}}^{k-1}D_{x_{i}}N(x^{\prime},x_ {n})\Big{)}dz^{\prime}\] \[\quad+D_{x^{\prime}}^{k-1}D_{x_{i}}N(x^{\prime},x_{n})\int_{ \mathbb{R}^{n-1}}\Gamma^{\prime}(z^{\prime},1)dz^{\prime}-D_{x^{\prime}}^{k-1} D_{x_{i}}N(x^{\prime},x_{n})\int_{\{[z^{\prime}|\geq\frac{1}{10}\frac{|x^{ \prime}|}{\sqrt{t}}\}}\Gamma^{\prime}(z^{\prime},1)dz^{\prime}\] \[=J_{121}+D_{x^{\prime}}^{k-1}D_{x_{i}}N(x^{\prime},x_{n})+J_{122},\]
where we used \(\int_{\mathbb{R}^{n-1}}\Gamma^{\prime}(z^{\prime},1)dz^{\prime}=1\). Since \(\frac{\sqrt{t}}{|x^{\prime}|}\leq 1\), we observe that
\[|J_{121}(x^{\prime},t)|\leq c|x^{\prime}|^{-n-k+1}t^{\frac{1}{2}}\int_{\{|z^{\prime}|\leq\frac{1}{10}\frac{|x^{\prime}|}{\sqrt{t}}\}}e^{-|z^{\prime}|^{2}}|z^{\prime}|dz^{\prime}\leq ct^{\frac{1}{2}}, \tag{7.10}\]
\[|J_{122}(x^{\prime},t)|\leq c|x^{\prime}|^{-n-k+2}\int_{\{|z^{\prime}|\geq\frac{1}{10}\frac{|x^{\prime}|}{\sqrt{t}}\}}e^{-|z^{\prime}|^{2}}dz^{\prime}\leq ce^{-\frac{|x^{\prime}|^{2}}{2t}}\leq ct^{\frac{1}{2}}. \tag{7.11}\]
Setting \(J=J_{2}+J_{3}+J_{11}+J_{121}+J_{122}\) and adding up (7.7) -(7.11), we deduce (3.8) for \(k\geq 1\) and \(l=0\).
Next, we consider the case where normal derivatives are taken into account, i.e. \(l\geq 1\). We first note that \(N(z^{\prime},x_{n})\) is regular in the regions \(D_{1}\) and \(D_{3}\), where normal derivatives of any order can be applied directly to \(N(z^{\prime},x_{n})\). Therefore, \(J_{1}\) and \(J_{3}\) in (7.6) can be computed similarly to the case \(l=0\) above. Since the verification is a tedious repetition, it suffices to estimate only \(J_{2}\) in (7.6).
Firstly, in case that \(l=1\),
\[|J_{2}|\leq ct^{-\frac{n-1}{2}-\frac{k}{2}}e^{-\frac{|x^{\prime}|^{2}}{t}}\int_ {|z^{\prime}|\leq\frac{1}{10}|x^{\prime}|}\frac{x_{n}}{\left(|z^{\prime}|^{2}+x _{n}^{2}\right)^{\frac{n}{2}}}dz^{\prime}\leq ct^{\frac{1}{2}}. \tag{7.12}\]
Secondly, if \(l=2\), then it follows due to \(D_{x_{n}}^{2}N(z^{\prime},x_{n})=-\Delta_{z^{\prime}}^{\prime}N(z^{\prime},x_{ n})\) that
\[J_{2} =c\int_{|z^{\prime}|\leq\frac{1}{10}|x^{\prime}|}D_{x^{\prime}}^{ k}\Gamma^{\prime}(x^{\prime}-z^{\prime},t)D_{x_{n}}D_{x_{n}}N(z^{\prime},x_{n})dz^{\prime}\] \[=-c\int_{|z^{\prime}|\leq\frac{1}{10}|x^{\prime}|}D_{x^{\prime}} ^{k}\Gamma^{\prime}(x^{\prime}-z^{\prime},t)\Delta_{z^{\prime}}^{\prime}N(z^{ \prime},x_{n})dz^{\prime}\] \[=c\int_{|z^{\prime}|\leq\frac{1}{10}|x^{\prime}|}\nabla_{z^{ \prime}}D_{x^{\prime}}^{k}\Gamma^{\prime}(x^{\prime}-z^{\prime},t)\cdot\nabla _{z^{\prime}}N(z^{\prime},x_{n})dz^{\prime}\] \[\quad-c\int_{|z^{\prime}|=\frac{1}{10}|x^{\prime}|}D_{x^{\prime}} ^{k}\Gamma^{\prime}(x^{\prime}-z^{\prime},t)\nabla_{z^{\prime}}N(z^{\prime},x _{n})\cdot\nu^{\prime}dz^{\prime} \tag{7.13}\] \[=J_{1a}+J_{1b}.\]
Recalling that \(\int_{|z^{\prime}|\leq\frac{1}{10}|x^{\prime}|}\nabla_{z^{\prime}}N(z^{\prime },x_{n})dz^{\prime}=0\), we observe that
\[|J_{1a}| =c\left|\int_{|z^{\prime}|\leq\frac{1}{10}|x^{\prime}|}\left( \nabla_{z^{\prime}}D_{x^{\prime}}^{k}\Gamma^{\prime}(x^{\prime}-z^{\prime},t) -\nabla_{x^{\prime}}D_{x^{\prime}}^{k}\Gamma(x^{\prime},t)\right)\cdot\nabla _{z^{\prime}}N(z^{\prime},x_{n})dz^{\prime}\right|\] \[\leq ct^{-\frac{n-1}{2}-\frac{k+2}{2}}e^{-\frac{|x^{\prime}|^{2}} {2t}}\int_{D_{2}}\frac{1}{|z^{\prime}|^{n-2}}dz^{\prime}\] \[\leq ct^{-\frac{n-1}{2}-\frac{k+2}{2}}|x^{\prime}|e^{-\frac{|x^{ \prime}|^{2}}{2t}}\leq ct^{\frac{1}{2}},\]
where \(e^{-\frac{|x^{\prime}|^{2}}{2t}}\leq c(\frac{|x^{\prime}|^{2}}{t})^{-\frac{n+ k-2}{2}}\) is used. The second term can be similarly estimated as follows:
\[|J_{1b}|\leq ct^{-\frac{n-1}{2}-\frac{k}{2}}e^{-\frac{|x^{\prime}|^{2}}{2t}}|x^{\prime}|^{-1}\leq ct^{\frac{1}{2}}.\]
For the case \(l>2\), we can convert normal derivatives into tangential derivatives by using \(D_{x_{n}}^{2m}N(z^{\prime},x_{n})=(-1)^{m}(\Delta_{z^{\prime}}^{\prime})^{m}N(z^{\prime},x_{n})\) for \(m\leq[\frac{l}{2}]\), which reduces the problem to the case \(l=0\) or \(l=1\). Repeating the above process, we obtain (3.8)-(3.9). Since the computations are straightforward, we omit the details. This proves the lemma.
### A Figure of disjoint sets
The two-dimensional figure of the disjoint sets \(A_{i1},A_{i2},B_{i1}\) and \(B_{i2}\) is given as follows:
## Acknowledgement
T. Chang is partially supported by NRF-2020R1A2C1A01102531. K. Kang is supported by NRF-2019R1A2C1084685.
|
2308.07223 | Distance Matters For Improving Performance Estimation Under Covariate
Shift | Performance estimation under covariate shift is a crucial component of safe
AI model deployment, especially for sensitive use-cases. Recently, several
solutions were proposed to tackle this problem, most leveraging model
predictions or softmax confidence to derive accuracy estimates. However, under
dataset shifts, confidence scores may become ill-calibrated if samples are too
far from the training distribution. In this work, we show that taking into
account distances of test samples to their expected training distribution can
significantly improve performance estimation under covariate shift. Precisely,
we introduce a "distance-check" to flag samples that lie too far from the
expected distribution, to avoid relying on their untrustworthy model outputs in
the accuracy estimation step. We demonstrate the effectiveness of this method
on 13 image classification tasks, across a wide-range of natural and synthetic
distribution shifts and hundreds of models, with a median relative MAE
improvement of 27% over the best baseline across all tasks, and SOTA
performance on 10 out of 13 tasks. Our code is publicly available at
https://github.com/melanibe/distance_matters_performance_estimation. | Mélanie Roschewitz, Ben Glocker | 2023-08-14T15:49:19Z | http://arxiv.org/abs/2308.07223v1 | # Distance Matters For Improving Performance Estimation Under Covariate Shift
###### Abstract
Performance estimation under covariate shift is a crucial component of safe AI model deployment, especially for sensitive use-cases. Recently, several solutions were proposed to tackle this problem, most leveraging model predictions or softmax confidence to derive accuracy estimates. However, under dataset shifts confidence scores may become ill-calibrated if samples are too far from the training distribution. In this work, we show that taking into account distances of test samples to their expected training distribution can significantly improve performance estimation under covariate shift. Precisely, we introduce a "distance-check" to flag samples that lie too far from the expected distribution, to avoid relying on their untrustworthy model outputs in the accuracy estimation step. We demonstrate the effectiveness of this method on 13 image classification tasks, across a wide-range of natural and synthetic distribution shifts and hundreds of models, with a median relative MAE improvement of 27% over the best baseline across all tasks, and SOTA performance on 10 out of 13 tasks. Our code is publicly available at [https://github.com/melanibe/distance_matters_performance_estimation](https://github.com/melanibe/distance_matters_performance_estimation).
## 1 Introduction
Machine learning models are sensitive to variations in their deployment environments [19, 63, 40, 26, 57, 38]. Due to the unavailability of ground truth labels for continuous performance monitoring at deployment time, real-time and accurate performance estimation is crucial to detect any unexpected behavior or model failure, particularly in distribution-shifted settings. This is especially important for sensitive use cases such as clinical decision making.
The difficulty in estimating model performance arises from the lack of reliability of model outputs under covariate shift [40, 28]. Recently, several attempts have been made at addressing this problem, many of them based on confidence estimates [16, 15, 32]. For example, Average Thresholded Confidence (ATC) [15] leverages softmax outputs for estimating classification accuracy, considering that all outputs whose confidence does not reach a certain threshold are incorrectly classified. While this method has been shown to be effective at estimating the performance under mild shifts (e.g. on synthetically corrupted images), experiments show that the method under-performs in more substantial shifts such as natural sub-population shifts. In particular, current approaches tend to _overestimate_ accuracy in natural real-world distribution shift settings [15, 23]. This can notably be explained by a deterioration of model calibration when going further from the training distribution [25], with softmax outputs becoming over-confident.
Figure 1: **Performance estimation under covariate shift needs to take into account different sources of errors.** Distance to the source distribution in the embedding space matters as confidence estimates become unreliable with increased distance.
If test samples are too far from training samples, relying on the output of the classification layer for performance estimation is insufficient. From an uncertainty point of view, softmax outputs can be seen as capturing aleatoric uncertainty, arising from overlapping class boundaries [24, 9]. However, under dataset shifts, errors may also arise from the fact that the model has never seen this type of input data and does not know how to respond to such inputs. This is referred to as epistemic uncertainty [9] and is not well captured by softmax outputs [24, 53], as demonstrated by their poor performance on the related out-of-distribution (OOD) detection task [40, 28, 48, 53, 30]. Note that in OOD detection, the goal is to separate ID from OOD inputs, regardless of the downstream classification performance, often considering inputs completely unrelated to the task. This differs from performance estimation under covariate shift, where we assume that the classification task still applies to the shifted inputs and we focus on estimating performance, not on detecting shifts.
**Methodological contributions** In this paper, we argue that performance estimators should identify samples far away from the training set in the embedding space, for which softmax estimates are most likely unreliable. By measuring the distance in the embedding space, we are able to measure how well the model "understood" the sample when projecting the input to the classification space. This idea is illustrated in fig. 1. Following this intuition, we propose a simple yet effective method to improve the quality of current SOTA performance estimators. Specifically, we use nearest-neighbours distance in the embedding space to reject samples that lie too far from the training distribution. We then only use confidence-based performance estimators on the remaining samples, considering all previously rejected samples as mis-classified. Our distance check approach is versatile and can be used to improve the quality of various existing performance estimators (e.g. [15, 23]).
**Main results** We evaluate our approach on 13 classification tasks ranging from cancer cell to animal classification. The distribution shifts studied cover a wide range: synthetic corruption, acquisition shift, real-world population shift and representation shift. For each task we evaluate between 18 and 259 models, covering various training strategies and network architectures. These experiments demonstrate that integrating distance into accuracy estimators significantly improves the quality of the estimation. For example, our proposed estimator ATC-DistCS is significantly better than previous SOTA ATC [15] on all but one task, with a median relative MAE improvement of 30% across all tasks. Furthermore, compared to the most recent COT method [34], we demonstrate a 27% median relative performance improvement across all tasks, with new SOTA performances on 10 out of 13 tasks. We also demonstrate significant improvements across all datasets for agreement-based accuracy estimation when integrating our distance check. Ablation studies yield further insights into the method and its limitations. Finally, to the best of our knowledge, we provide the first comprehensive publicly available codebase of current SOTA baselines for accuracy estimation, along with the complete code to reproduce our experiments.
## 2 Background
### Performance estimation without ground truth
Current methods for performance estimation under covariate shift can be broadly grouped in 4 categories:
**Estimating performance via auxiliary task performance** These methods modify the main classification model to incorporate a (sufficiently informative) auxiliary task for which ground truth labels are available at test time: accuracy on the main task is then approximated by the computed accuracy on the auxiliary task. For example, [11] trains a multi-task model for predicting the class at hand as well as the rotation applied to the input image. The main limitation of this line of work is the requirement to build a multi-task model, making it unusable as a post-hoc tool.
**Training a regressor between ID and OOD accuracy** This regressor can be trained based on model outputs or on measures of distance between datasets [13, 47, 16, 35, 12]. One major drawback of this class of estimators is their requirement for access to labelled OOD data for training the regression model. This is not always available in practice, in particular in data-scarce domains such as healthcare. In the absence of such OOD datasets, regressors are sometimes trained using corrupted versions of the original validation set as "OOD" sets. However, this cannot guarantee the robustness of the estimator against other shifts, e.g. natural subpopulation shift [46].
**Agreement-based estimators** These are based on the idea that agreement between members of a model ensemble correlates with model accuracy. For example, generalised disagreement equality (GDE) [23] uses pairs of models trained with different random seeds to compute disagreement. Others use more intricate methods for training specialised models to align disagreement and accuracy further [10, 6]. However, these procedures often require expensive additional training steps to derive the sibling models and are not applicable to post-hoc scenarios where only the final model is available to the end user. In [1], the authors go as far as training dozens of models to fit a regressor between agreement
and accuracy, whereas [6] requires training a new ensemble for every single test set requiring performance estimation.
**Confidence-based estimators** These methods, in contrast to the ones above, only require the final model's outputs to perform accuracy estimation and do not require any OOD data for calibration. As such, they are versatile and can be used with any classification model. For example, Difference of Confidence (DOC) [16] approximates the difference in accuracy between the evaluation set and the in-distribution (ID) validation set by the difference in average model confidence. ATC [15] introduces a confidence threshold such that all test samples for which the confidence is lower than this threshold are considered wrong and all samples meeting the minimum confidence requirement are considered correct (see Methods). Finally, concurrently to our work, COT [34] proposed to estimate accuracy based on optimal transport of model confidences. Precisely, they measure the Wasserstein distance between the source label distribution and the target softmax distribution to estimate the test error. Note that this is expected to perform well if the source label distribution matches the target label distribution but might fail if this assumption breaks.
### Distance-based out-of-distribution detection
The idea that OOD samples should lie far from the training samples in the embedding space is at the core of distance-based methods for OOD detection. For example, [30] propose to fit multi-variate Gaussians on the training embedding distribution of each class and use the Mahalanobis distance [36] to characterise how far test samples are from this expected distribution. If a sample is far from all class clusters, it is considered OOD. This method has shown some success at various OOD detection tasks [4, 30] and extensions of this work have since further improved its capabilities [43]. However, this method suffers from one major limitation: it has a strong assumption that the class embeddings clusters can be accurately modelled by a Gaussian Multivariate distribution. Without any constraints on the training procedure or the embedding space at training time, this assumption may not hold [48]. This is the motivation for the work of [48] who proposed a non-parametric alternative OOD detection method. The authors still focus on the idea of using distances in the embedding space to detect OOD samples, but they leverage nearest-neighbours distances instead of the Mahalanobis distances, removing the normality assumption on the embedding. Precisely, they use the distance to the K\({}^{th}\) nearest-neighbour to classify samples as OOD. They derive the classification threshold for OOD versus ID task such that 95-99% of the training samples are classified as ID.
## 3 Methods
In this section, we begin by reminding the reader of the core principles of two base performance estimators which we build on top of: ATC [15] and GDE [23]. We then introduce our plug-in distance checker designed to flag untrustworthy samples, and discuss how to incorporate this distance check into these performance estimators to yield our proposed estimators "ATC-DistCS" and "GDE-DistCS".
### Base estimators
**Average Thresholded Confidence (ATC) [15]** approximates accuracy by the proportion of OOD predictions whose confidence exceeds a certain threshold (derived from the ID validation set, where confidence is defined as temperature-scaled [17] softmax confidence). Precisely, the threshold ATC is defined such that on source data \(D_{s}\) the expected number of points that obtain a confidence less than ATC matches the error of the model on \(D_{s}\). This method has been further refined by [32], where the authors propose to apply class-wise temperature scaling and to define class-wise confidence thresholds to improve the quality of the estimation, in particular for class-imbalanced problems.
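As an illustration only (this is our own sketch, not the authors' released implementation), the ATC threshold and the resulting accuracy estimate can be computed from temperature-scaled softmax confidences in a few lines; `val_conf`, `val_correct` and `test_conf` are assumed to be NumPy arrays of validation confidences, validation correctness indicators and target-set confidences.

```python
import numpy as np

def fit_atc_threshold(val_conf, val_correct):
    # Choose the threshold so that the share of validation samples with
    # confidence below it matches the validation error rate.
    val_error = 1.0 - val_correct.mean()
    return np.quantile(val_conf, val_error)

def atc_accuracy_estimate(test_conf, atc_threshold):
    # Target samples whose confidence clears the threshold are counted as correct.
    return float((test_conf >= atc_threshold).mean())
```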
**Generalised Disagreement Equality (GDE) [23]** Assuming access to two models \(g\) and \(g^{\prime}\) trained with different random seeds (but identical architecture and training paradigms), GDE estimates model accuracy by \(\frac{1}{N}\sum_{i\in\text{test set}}\left[g(x_{i})=g^{\prime}(x_{i})\right]\), where N is the size of the OOD test set, \(x_{i}\) the inputs to the model, and \(g(x_{i})\) denotes the model prediction.
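In code, GDE reduces to measuring prediction agreement between the two sibling models (again a sketch; `preds_1` and `preds_2` are assumed to be arrays of predicted labels on the OOD test set).

```python
import numpy as np

def gde_accuracy_estimate(preds_1, preds_2):
    # Estimated accuracy = fraction of OOD samples on which the two seeds agree.
    return float(np.mean(preds_1 == preds_2))
```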
### Integrating distance to training set
**Average Distance Check** Inspired by the OOD detection work in [48], we propose to improve standard performance estimators with a "distance checker". Instead of simply rejecting samples with low confidence or model disagreement, we argue that the distance of any given sample to the in-distribution training set should also be taken into account to determine whether its confidence (and prediction) is likely to be trustworthy for estimation purposes or not. In simple terms, we "reject" samples whose penultimate-layer embeddings lie in a region "far" from the ID embedding space. The distance from a sample to the in-distribution set is determined by the average distance between the sample and all of its K-nearest-neighbours:
\[\text{AD}_{i}=\frac{1}{K}\sum_{k}\norm{f_{i}-n_{i}^{(k)}}_{2}, \tag{1}\]
where \(K\) is the number of nearest-neighbours to consider, \(f_{i}\) is the embedding of the \(i^{th}\) test sample and \(n_{i}^{(k)}\) the \(k^{th}\) nearest neighbour to \(f_{i}\) in the embedding space; nearest neighbours are searched for in the training set. The acceptable threshold is determined on the in-distribution validation set as the \(99^{th}\) percentile of the average distances observed on this set, i.e.
\[\text{DistThreshold}=\text{quantile}_{.99}\left\{AD_{i},\forall i\in\text{val set}\right\}. \tag{2}\]
Note that our distance criterion differs from [48] in that (i) we use the average of all K distances instead of the distance to the K\({}^{th}\) neighbour only (to be less sensitive to outliers); (ii) we do not normalise the embeddings (see ablation study in section 4); (iii) we do not use a contrastive loss for training our models, as this requirement may restrict the scope of application of the method. The fitting procedure for the distance checker can be found in algorithm 1.
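A minimal sketch of eqs. (1)-(2) using scikit-learn is given below (variable names are ours and not taken from the released codebase); `train_feats` and `val_feats` are assumed to be the penultimate-layer embeddings of the training and ID validation sets.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def fit_distance_checker(train_feats, val_feats, k=25, q=0.99):
    # Fit K-NN on (a subsample of) the training embeddings.
    knn = NearestNeighbors(n_neighbors=k).fit(train_feats)
    # Eq. (1): average Euclidean distance of each validation embedding
    # to its K nearest training neighbours.
    val_dists, _ = knn.kneighbors(val_feats)
    avg_dist_val = val_dists.mean(axis=1)
    # Eq. (2): accept samples closer than the 99th percentile of the
    # validation distances.
    dist_threshold = np.quantile(avg_dist_val, q)
    return knn, avg_dist_val, dist_threshold
```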
**Using distance to improve the quality of performance estimators** Our proposed "distance-checker" can be used as a plug-in method to improve the estimation results of different existing accuracy estimators. Specifically, first we propose "ATC-Dist", where we combine both criteria to estimate the accuracy under shift: a sample is estimated as being correct if it is (i) of high enough confidence, and (ii) not too far from the in-distribution embeddings. Similarly, we extend GDE with our distance criterion to get "GDE-Dist". There, the correctness of a sample is estimated by (i) agreement between both models, and (ii) distance to the in-distribution embeddings. The estimation procedure for ATC-Dist and GDE-Dist is shown in algorithm 1.
**Class-wise distance thresholds** As the tightness of class clusters may differ for different classes, we argue that the quality of the distance threshold can be further improved by defining class-wise distance thresholds. Concretely, for each class \(c\) we compute \(\text{DistThreshold}_{c}\) by taking the \(99^{th}\) percentile of the average distance distribution of the subset of cases labelled as \(c\) in the validation set. At test time, we use the distance threshold associated with the predicted class to determine the validity of a given sample prediction. In cases where fewer than 20 samples were present in the validation set for any given class, we use the global threshold for this class. Replacing the global distance threshold by the class-wise thresholds in the procedure described above yields our proposed "ATC-DistCS" and "GDE-DistCS" estimators.
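The class-wise thresholds can be obtained analogously (illustrative sketch only; `avg_dist_val` and `dist_threshold` come from the global fitting sketch above and `val_labels` are the validation ground-truth labels).

```python
import numpy as np

def fit_classwise_thresholds(avg_dist_val, val_labels, dist_threshold,
                             q=0.99, min_samples=20):
    # One threshold per class; under-represented classes fall back
    # to the global threshold.
    thresholds = {}
    for c in np.unique(val_labels):
        class_dists = avg_dist_val[val_labels == c]
        thresholds[c] = (dist_threshold if len(class_dists) < min_samples
                         else np.quantile(class_dists, q))
    return thresholds
```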
```
procedure FitDistanceChecker(X_train, X_val)
    f_train ← GetFeatures(X_train)
    f_val ← GetFeatures(X_val)
    KNN ← FitNearestNeighbors(f_train)
    AD_val ← AverageNNDistances(KNN, f_val)
    DistThreshold ← Quantile(AD_val, 0.99)
    return DistThreshold, KNN
end procedure
```
**Algorithm 1** Fitting procedure for the distance checker.
Motivating examplePrior to diving into quantitative analysis of our results, let's start with an illustrative example of our main idea: _in the embedding space, regions of the shifted test set not covered by the ID set are likely to be regions of very low accuracy_. This pattern appears distinctively in the example in fig. 2, where we show the T-SNE [54] representation of the embeddings of the ID validation set as well as the OOD test set, on a model trained on the WILDS-CameLyon [26, 2] dataset. We can clearly see how the region in black \(-\) which is not well represented in the ID validation set \(-\) contains an extremely high proportion of errors in the OOD test set.
DatasetsWe validate our proposed method on a wide range of tasks and covering various natural and synthetic distribution shifts (more details in supplement):
* ImageNet [45] to ImageNet-Sketch [57] where the distribution shifts from photographs to sketch; ImageNet-A [21], where the distribution shifts adversarially; ImageNet-V2 [41] a setting with only mild shifts, designed to mimic ImageNet test set.
* CIFAR10 [27] to CIFAR10-C [20] covering various synthetic corruptions, yielding 95 OOD datasets.
* MNIST [29] to SVHN [39], classic digit classification, shifting from binary digit images to house numbers.
* WILDS [26] benchmark, designed to study natural shifts occuring "in the wild". WILDS-Camelyon17 [2] defines a histopathology binary task, with staining protocol shifts. WILDS-iCam [3] is a 182-classes animal classification task from camera traps, with shifts in camera location. WILDS-FMoW [8] is a satellite image 62-class task, with temporal and geographical shifts. WILDS-RxRx1 [51] is a 1,139 genetic treatment classification task on fluorescent microscopy images, where the shift occurs from so-called experimental "batch-effect".
* The BREEDS [46] benchmark defines various tasks (Entity30, Entity13, Living17, NonLiving26) based on ImageNet subsets and superclasses. The main task consisting of predicting the super-class and the train-val-test split defined such that the subpopulations covered by the OOD test set are disjoint from the ones represented in the training and validation set.
* PACS [31] a 7-class task, where models are trained and validated on photographs and tested on 3 other domains (painting, sketches and cartoon).
* PathMNIST [62] histopathology 9-class task, where training and test splits are taken from different sites.
Experimental setup and modelsFor each evaluated model, we fit our nearest neighbours algorithm on the training set, using K=25 neighbours for the distance check. Distance thresholds are computed on the in-distribution validation set. For each task, we evaluate our accuracy estimator on all available OOD sets, as described above, and measure estimation quality in terms of Mean Absolute Error (MAE) between predicted and true accuracy across all models. Note that if there were more than N=50,000 training samples, we randomly subsampled N samples in the K-NN fitting step to speed up inference. Moreover, for ImageNet to avoid doing a full inference pass on the extremely large training set, we fitted the K-NN algorithm directly on the validation set (discarding distance to self to get the distance threshold). For each task, we evaluate the quality of performance estimation on a large variety of models. For ImageNet, we test on 259 pretrained models from the timm [59] package, covering a range of 14 family of model architectures. For all other datasets, we trained models ourselves using various architectures, training setups, random seeds and initialising models both from ImageNet and random weights (except for BREEDS datasets as they are build from ImageNet images, hence pretrained weights would violate the OOD assumptions of the testing subpopulations); amounting to 18 models for BREEDS tasks and 30 models for all other tasks. More details can be found in Supp Note 2 and in our codebase.
Choice of baselinesOur first analysis focuses on single-model accuracy estimation. We compare our method to established ATC [15] and DoC [16] baselines as well as their improved class-wise version [32]. For class-wise estimation, if any given target class was not present in the validation set, or if less than 20 samples were predicted for that class, we used the global temperature and ATC-threshold for that particular class. This may happen for some classes in imbalanced datasets or with an extremely high number of classes (e.g. WILDS RxRx1 or WILS iCam). We also compare our method to the recently proposed COT estimator [34]. Note that, to date, in their pre-print, the authors only tested their method on a very limited set of tasks, as such our evaluation considerably extends the assessment of COT's capabilities. Regression-like methods are not included as we assume that no OOD dataset is available at training and validation time, similarly we do not include methods that require external metadata such as Mandoline [7] as it was not available. Methods such as self-training ensembles [6] which require model retraining for every single test set, were also not considered as they were computationally much more heavy (it would require training over 3,000 ensembles in our experimental setting) and do not allow for real time monitoring. Weaker baselines such that simply using the average softmax confidence as accuracy estimation are not included as the extensive analysis in the ATC paper [15] already clearly demonstrates the superiority of ATC as a baseline. In a second analysis, we place ourselves in the scenario where we have access to two models for each task for accuracy estimation and compare agreement-based estimator GDE [23] to our improved version GDE-DistCS.
Results for single-model performance estimationIn table 1, we compare DoC, ATC, COT and our method in two different settings, one where temperature scaling (TS) [17] and ATC threshold are optimised globally for the entire dataset (left column group) and the second where we apply class-wise TS and ATC thresholds (rightmost columns). Temperature scaling is applied as previous studies have shown better results over raw model outputs [32, 15]. Results are presented in terms of MAE over all shifted test sets and all models for each task. We can see that our method ATC-Dist achieves lower MAE than its counterpart ATC across all but one dataset (Wilds iCam, see discussion section). Furthermore, on all these datasets, ATC-DistCS using class-wise distance thresholds further improves the results over ATC-Dist. The overall median relative MAE reduction of ATC-DistCS over all datasets is of 30% compared to standard ATC in the global setting and 13% in
the class-wise setting. Additionally, our experimental results confirm the preliminary findings of [32] i.e. ATC with class-wise optimisation of temperature and thresholds outperforms ATC with global optimisation for the majority of datasets (only performance on heavily imbalanced CIFAR10-C had been reported so far). To measure statistical significance, for each dataset, we use the Wilcoxon signed-rank test [60] to compare the best method (i.e. with lowest MAE on that task) against all other methods, with Bonferroni [5] correction to account for multiple testing. We highlight in bold the method with lowest MAE and all methods that are not significantly different to this method (at the level 0.05 after correction). We can see that ATC-DistCS achieves SOTA results on 10 out of 13 tasks, with a median relative MAE improvement of 27% for ATC-DistCS over COT with global TS and 30% with class-wise TS. We discuss differences between COT and ATC-DistCS in more details in section 5. Finally, we detail the performance comparison on CIFAR10-C in fig. 3, we can see that our method outperforms the baselines at all levels of corruption
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{Global TS \& ATC thresholds} & \multicolumn{3}{c}{Classwise TS \& ATC thresholds} \\ \cline{2-9} & DoC & COT & ATC & ATC-Dist & ATC-DistCS & COT & ATC & ATC-DistCS \\ Dataset family & [16] & [34] & [15] & (ours) & (ours) & [34] & [32] & (ours) \\ \hline ImageNet-Sketch & 19.26** & 4.62** & 6.05** & 4.75** & **3.28** & 5.31** & 5.36** & **3.42** \\ ImageNet-A & 38.66** & 31.73** & 26.81** & 23.20** & 17.95** & 21.77** & 35.67** & **15.10** \\ ImageNet-V2 & 4.94** & 5.70** & 1.90** & 1.43** & **0.64** & 1.83** & 5.39** & 3.51** \\ Living17 & 21.08** & 18.95** & 17.98** & 15.86** & 14.45* & 20.18** & 15.02** & **11.82** \\ NonLiving26 & 24.31** & 21.38* & 16.71** & 15.65** & **14.53** & 21.85* & 15.84** & **13.87** \\ Entity13 & 13.55** & 12.78** & 8.96** & 8.30* & **8.15** & 12.99** & 8.64** & **7.84** \\ Entity30 & 17.75** & 15.45** & 12.31** & 11.65* & **11.31** & 15.98** & 12.15** & **11.15** \\ WILDS CameLyon & 7.57** & 3.07** & 6.86** & 4.90** & 4.71** & **2.99** & 6.82** & 4.69** \\ WILDS iCam & **8.14** & 7.72* & **7.15** & 7.92* & 9.13** & **6.48** & **5.39** & **6.95** \\ WILDS FMoW & 3.54** & 2.04** & 2.72** & 2.06* & **1.91** & 1.94* & **1.36** & **1.58** \\ WILDS RxRx1 & 7.47** & **2.36** & 6.02** & 5.01** & 3.86** & **2.54** & 8.87** & 9.62** \\ MNIST & 61.41** & **15.17** & 49.52** & **17.41** & **15.96** & **15.33** & 41.44** & **16.12** \\ PACS & 55.38** & **12.61** & 45.98** & 26.25** & 26.21** & 13.23* & 49.45** & 26.65** \\ PathMNIST & 3.68** & 9.90** & 2.67** & **1.31** & **1.14** & 9.92** & 2.37** & **1.09** \\ CIFAR10 & 2.73** & 1.53** & 1.20** & 1.11* & **1.08** & 1.59** & 1.24** & **1.07** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Improving confidence-based accuracy estimation - summary table.** Results are reported in terms of Mean Absolute Error (in %) across all models and OOD datasets. * denotes a p-value after Bonferroni correction \(<\)0.05, ** p-value \(<\)1e-3 for Wilcoxon signed-rank test [60] to test for difference between the best method versus all the others, for each dataset. Bold denotes the best model, all methods not significantly different from the best are highlighted.
Figure 2: **Why distance matters, an example.** Joint TSNE [54] representation of the ID validation set and OOD test set plotted separately for a ResNet18 model on the WILDS CameLyon dataset. We can clearly distinguish a region with low density on the validation set and high density on the OOD set, where most points are misclassified.
strength. Additional scatterplots detailing predicted versus true accuracy can be found in Supp. Note 3.
Results for agreement-based accuracy estimation.Our distance check is not only tailored for improving ATC but rather is a general addition that can be "plugged-in" to various estimators. We demonstrate this by showing how our method improves the quality of agreement-based estimator GDE [23], another well established baseline. For every training configuration, we repeat training with 3 different seeds. To estimate the accuracy for model \(g_{1}\), we use another model \(g_{2}\) trained with a different seed to compute disagreement and deduce the predicted accuracy for \(g_{1}\). We then further improve the estimation with our proposed distance check i.e. we fit our distance checker to the validation features on \(g_{1}\) and use it on the corresponding OOD features to discard distant samples. We evaluate the error for every model using all possible pairs. Results are summarised in table 2. The proposed GDE-DistCS shows statistically significant improvements across all tasks (expect for one task where it is equivalent), with a median relative MAE improvement of 13% over the standard GDE method. However, it is worth noting that results obtained with the accuracy estimators from the previous paragraph are systematically better than these disagreement estimates.
**Ablation studies: choice of distance measure and K-NN hyperparameters** We ran additional experiments to justify our choice to use K-NN distance for detecting unreliable samples. As mentioned in section 2.2, other methods have been proposed to perform distance-based OOD detection. Most famous is the Mahalanobis criterion proposed by [30]. Hence, we compare the performance of the proposed ATC-DistCS to ATC-Maha, where we use Mahalanobis to compute the distance (all other steps the same). Results in fig. 4 show that (i) for most datasets adding the distance check helps, regardless of the distance choice; (ii) the K-NN distance performs better than the Mahalanobis distance (and is often computationally faster). Secondly, in Supp Note 5, we also investigate the impact of the number of neighbours and the effect of feature normalisation (as it improved OOD detection in [48]), showing that our method is robust to the choice of number of neighbours and does not require normalisation. Similarly, Supp. Note 6 shows that our \(99^{th}\)-percentile distance threshold choice for rejecting samples is generalisable across all datasets, alleviating the need for cumbersome hyperparameter tuning and allowing us to keep all parameters fixed across all experiments.
## 5 Discussion & Conclusion
Main take-awaysThe proposed "distance-check" significantly improves accuracy estimation results across datasets and tasks both for ATC and GDE; with a median relative MAE improvement of 30% for ATC versus ATC-DistCS in the global setting (resp. 13% in the class-wise setting) and 13% for GDE versus GDE-DistCS. Importantly, our method is versatile, can be applied to any model and does not require any OOD data to tune the performance estimator. In particular, we can apply the method even if we only have access to the final model at deployment time, enabling external performance monitoring (e.g. by regulators or local auditing teams). This contrasts with some recent methods that require dozens of models to improve upon ATC results [1]. Moreover, we would like to underline the demonstrated plug-in aspect of the proposed method, i.e. its ability to improve estimation quality across several "base" accuracy estimators. Indeed, this attests that distance to the expected distribution has to be taken into account for improved performance estimation and that estimators should not solely rely on model outputs. This is further corroborated by our ablation study comparing the use of K-NN versus Mahalanobis distances for the distance check step
\begin{table}
\begin{tabular}{r r r} \hline \hline & GDE [23] & +DistCS \\ Dataset family & & (ours) \\ \hline Living17 & 19.92** & **16.60** \\ NonLiving26 & 23.49** & **21.26** \\ Entity13 & 12.62** & **11.78** \\ Entity30 & 16.57** & **15.67** \\ WILDS Camelvom & 4.96** & **3.62** \\ WILDS iCam & **6.44** & **5.93** \\ WILDS FMoW & 9.39** & **8.56** \\ WILDS RxRx1 & 9.14** & **7.78** \\ MNIST & 34.63** & **15.63** \\ PACS & 42.41** & **27.76** \\ PathMNIST & 2.81** & **1.62** \\ CIFAR10 & 4.73** & **4.19** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Results for improving agreement-based estimates.** Best in bold, * denotes a p-value \(<\)0.05, ** \(<\)1e-5 for Wilcoxon-signed-rank test [60] for GDE-DistCS against GDE.
Figure 3: **Ablation study: MSE as a function of corruption strength for CIFAR10-C across all models; the shaded area depicts +/- one standard deviation.**
in the proposed accuracy estimation flow. Indeed, results show that regardless of the distance measure choice, our proposed ATC with distance check outperforms the standard ATC baseline for all but one dataset. This work is, to the best of our knowledge, the first proposing to combine confidence and distance based performance estimation, without requiring access to OOD data at calibration time.
**ATC-DistCS, COT and computational considerations** Results show that our method performs significantly better than (or equivalent to) the concurrently proposed COT method on 10 out of 13 datasets. Our extensive evaluation not only justifies our method but also allows us to gain more insight into COT, as it had only been evaluated on a few tasks in the original work [34]. Another important consideration is that COT's runtime increases in \(\mathcal{O}(n^{3})\) with the number of test samples and linearly with the number of classes, whereas the K-NN distance increases linearly with the number of training samples (here limited to 50,000). The authors in [34] propose to alleviate this problem by splitting the test set into batches and averaging accuracy estimates. Despite following this and limiting the number of test samples to 25,000, we still observed a runtime penalty of approximately one order of magnitude compared to ATC-DistCS for datasets with a high number of classes and where the transport optimisation problem needed a large number of iterations before convergence (e.g. on our CPU for an ImageNet ResNet150 model it took over 3500s to get one COT estimate versus 300s for ATC-DistCS, and 450s vs 50s for WILDS-RxRx1). Finally, the lightweight aspect of ATC-DistCS (and the ATC baseline) is in stark contrast with other proposed methods such as self-training ensembles [6], which require training new ensembles for every single evaluation set and are highly impractical in real-world monitoring scenarios.
**Limitations** The proposed method relies on the representativeness of the in-distribution validation set to calibrate the distance threshold. In other words, the validation set should cover the expected set of possibilities encountered in-distribution. If the validation data is not sufficiently representative of the ID setting, e.g. not all classes are represented in the validation set, then the distance check is expected to be sub-optimal. This is what happens with the WILDS-iCam results in the section above. For this heavily data-imbalanced task, not all possible targets are present in the validation set. This led to the distance check not improving the results due to a sub-optimal distance threshold choice. Moreover, because classes were missing in the validation set, we were not able to compute class-wise thresholds for many classes and had to use the global threshold for them; class-wise thresholds are especially important in heavily imbalanced settings, as argued by [32]. Similarly, in WILDS-RxRx1 many classes had only a few samples in the given in-distribution validation set, leading to sub-optimal class-wise thresholds for this equally imbalanced task.
Finally, our method, by design, generates more conservative performance estimates than their counterparts without the distance check. As it considers that any point that lies too far from the expected embedding space is wrong, it will reduce the estimated accuracy. In most cases, this assumption holds in practice as shown by our experimental results. However, in some settings with extremely heavy distributional shift such as PACS, this assumption may lead to rejecting a lot of samples that appear "too" OOD. This may in turn yield excessively conservative performance estimates. Nevertheless, we argue that in practice if the input data is very far from the expected training distribution, having a low accuracy estimate triggering an auditing alert is a more desirable behaviour than having over-confident accuracy estimates which may mislead the user and generate unsafe AI use. Once the system is validated on the new data, it can easily be included in the calibration set.
**Conclusion** Taking into account distance to the training distribution substantially improves performance estimation on a wide range of tasks. Our proposed estimators implementing a distance check demonstrate SOTA performance
Figure 4: **Ablation study for the choice of distance estimation method: K-NN (DistCS) versus Mahalanobis distance (Maha). Each boxplot shows the distribution of the Mean Absolute Error for accuracy estimation. Whiskers denote the [5%;95%]-percentiles of the distribution, outliers omitted for readability. Using distance improves the results for all but one dataset, no matter if K-NN or Mahalanobis distance. However, K-NN distance is better than Mahalanobis overall. For additional datasets, see Supp. Note 4.**
on a large variety of tasks and significant improvements over previous SOTA baselines. Our method offers a practical and versatile approach to performance estimation on new data distributions and thus enables important safety checks for AI model deployment in critical applications. Importantly, our work clearly demonstrates the need to bridge the gap between performance estimation and the traditional OOD detection literature, and proposes a first step towards this end.
## 6 Acknowledgements
M.R. is funded through an Imperial College London President's PhD Scholarship. B.G. received support from the Royal Academy of Engineering as part of his Kheiron/RAEng Research Chair.
|
2302.04771 | Designing Fairness in Autonomous Peer-to-peer Energy Trading | Several autonomous energy management and peer-to-peer trading mechanisms for
future energy markets have been recently proposed based on optimization and
game theory. In this paper, we study the impact of trading prices on the
outcome of these market designs for energy-hub networks. We prove that, for a
generic choice of trading prices, autonomous peer-to-peer trading is always
network-wide beneficial but not necessarily individually beneficial for each
hub. Therefore, we leverage hierarchical game theory to formalize the problem
of designing locally-beneficial and network-wide fair peer-to-peer trading
prices. Then, we propose a scalable and privacy-preserving price-mediation
algorithm that provably converges to a profile of such prices. Numerical
simulations on a 3-hub network show that the proposed algorithm can indeed
incentivize active participation of energy hubs in autonomous peer-to-peer
trading schemes. | Varsha Behrunani, Andrew Irvine, Giuseppe Belgioioso, Philipp Heer, John Lygeros, Florian Dörfler | 2023-02-09T17:03:17Z | http://arxiv.org/abs/2302.04771v1 | # Designing Fairness in Autonomous Peer-to-peer Energy Trading
###### Abstract
Several autonomous energy management and peer-to-peer trading mechanisms for future energy markets have been recently proposed based on optimization and game theory. In this paper, we study the impact of trading prices on the outcome of these market designs for energy-hub networks. We prove that, for a generic choice of trading prices, autonomous peer-to-peer trading is always network-wide beneficial but not necessarily individually beneficial for each hub. Therefore, we leverage hierarchical game theory to formalize the problem of designing locally-beneficial and network-wide fair peer-to-peer trading prices. Then, we propose a scalable and privacy-preserving price-mediation algorithm that provably converges to a profile of such prices. Numerical simulations on a 3-hub network show that the proposed algorithm can indeed incentivize active participation of energy hubs in autonomous peer-to-peer trading schemes.
of each hub are derived based on their internal operation strategy to ensure that their costs are recovered. In (Daryan et al., 2022), a two-stage mechanism is proposed to obtain the optimal payment that incentivizes active participation in the market design. Finally, (Sorin et al., 2019) considers product differentiation and preferences to define effective transaction prices.
In this paper, we leverage hierarchical game theory and distributed optimization to design a novel scalable, locally-beneficial, and fair pricing scheme for autonomous peer-to-peer trading in energy hub networks that incentivizes active participation. Our contribution is three-fold:
1. We formulate the problem of designing locally beneficial and network-wide fair trading prices as a bilevel game. At the lower level, the "optimal" energy trades and operational setpoints for the energy hubs are formulated as interdependent economic dispatch problems. At the upper level, the desired P2P trading prices are defined as the minimizers of a fairness metric (namely, the sample variance of the local cost reductions of the hubs) which depends on the optimal setpoints of the lower-level dispatch problem.
2. We leverage the special structure of the bilevel game to decouple the two levels and design an efficient 2-step solution algorithm. In the first step, an ADMM-based algorithm is used for distributively solving the economic dispatch problems. In the second step, a semi-decentralized price mediation algorithm is used to compute fair P2P trading prices in a scalable way.
3. We illustrate and validate the proposed autonomous P2P trading and pricing mechanism via extensive numerical simulations on a 3-hub network, using realistic models of energy hubs and demand data.
## 2 Modelling the hubs
We consider a network of \(N\) interconnected energy hubs, labeled by \(i\in\mathcal{N}:=\{1,\ldots,N\}\). Each hub is connected to the electricity and gas grid, and can trade electrical energy with the other hubs via the electricity grid. As an example, a system of three hubs is illustrated in Fig. 1 and will be used to fix ideas throughout the paper.
To fulfil its electrical and thermal demand, each hub is equipped with different energy conversion and storage devices that draw electricity and gas directly from the grid. The heating devices can include gas boilers, heat pumps, as well as thermal energy storage. Electricity can be locally produced via photovoltaic (PV), Combined Heat and Power (CHP) and micro-CHP (\(\mu\)CHP) devices and locally stored using batteries. In addition to the heating and electricity demand, cooling demand may also be present which is not considered in this work. It can be added with suitable devices, such as chillers, ice storage, and HVAC, without conceptual changes to our model.
The electricity grid acts as an infinite source and sink, namely, electricity can be directly drawn from the grid and excess electrical energy produced in the hub can be fed to the grid. Additionally, the hubs can exchange energy via peer-to-peer (or bilateral) trading through the grid. The hubs are connected to the heating and electricity demand via a downstream distribution network. The demands of all the downstream entities supplied by each hub are aggregated into a single demand. For the sake of simplicity, in this study, we assume that a perfect forecast is available for this demand. Similarly, we assume that perfect forecasts for the temperature and the solar radiation are also available. Forecast uncertainties can play a crucial role in optimally operating energy hubs, and integrating them into our model is a topic of current work.
In the next sections, we provide an overview of the models used for the components in the energy hubs.
### Energy Conversion
Combined Heat and Power (CHP):The CHP simultaneously generates heat and power using natural gas. The output is limited by its feasible operation region (Figure 2) as defined by a polyhedron with vertices A-D and its corresponding electrical and thermal output, \(p_{\text{a}}\), \(p_{\text{b}}\), \(p_{\text{c}}\), \(p_{\text{d}}\) and \(q_{\text{a}}\), \(q_{\text{b}}\), \(q_{\text{c}}\), \(q_{\text{d}}\), respectively (Navarro et al., 2018; Alipour et al., 2014). The electrical and thermal output of the CHP for hub \(i\) are \(p_{\text{chp,i}}\) and \(q_{\text{chp,i}}\), respectively, characterized as a convex combination of the vertices with weights \(w_{\text{a,i}}\), \(w_{\text{b,i}}\), \(w_{\text{c,i}}\), and \(w_{\text{d,i}}\), respectively. The fuel consumed by the CHP unit, \(f_{\text{chp}}\), depends only on the electrical output subject to the fuel efficiency, \(\eta_{\text{chp}}\). CHP is modelled by the following equations:
Figure 1: A network of three interconnected energy hubs. Each hub can import energy from the electricity (green) and gas (brown) grids, and can feed-in electricity to the electricity grid; additionally, each hub can also trade electrical energy with the other hubs.
\[p_{\rm chp,i} =\sum_{j\in\mathcal{K}}w_{\rm j,i}\cdot p_{\rm j,i} \tag{1}\] \[q_{\rm chp,i} =\sum_{j\in\mathcal{K}}w_{\rm j,i}\cdot q_{\rm j,i}\] \[f_{\rm chp,i} =\frac{1}{\eta_{\rm chp,i}}\cdot p_{\rm chp,i}\] \[1 =\sum_{j\in\mathcal{K}}w_{\rm j,i}\] \[0 \leq w_{\rm j,i}\leq 1\quad\mathcal{K}=\{\text{A,B,C,D}\}\]
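For illustration, the convex-combination model (1) can be written directly in a convex modelling language. The sketch below uses cvxpy with made-up vertex coordinates, efficiency, and heat demand; only the structure of the constraints follows the paper.

```python
import cvxpy as cp
import numpy as np

# Hypothetical vertices (A, B, C, D) of the CHP feasible region [kW].
p_vert = np.array([0.0, 50.0, 200.0, 250.0])   # electrical output at each vertex
q_vert = np.array([0.0, 120.0, 180.0, 40.0])   # thermal output at each vertex
eta_chp = 0.35                                  # assumed electrical fuel efficiency

w = cp.Variable(4, nonneg=True)                 # convex-combination weights
p_chp = p_vert @ w                              # electrical output, eq. (1)
q_chp = q_vert @ w                              # thermal output
f_chp = p_chp / eta_chp                         # fuel consumption

# Toy dispatch: meet a 100 kW heat demand at minimum fuel use.
constraints = [cp.sum(w) == 1, q_chp >= 100.0]
prob = cp.Problem(cp.Minimize(f_chp), constraints)
prob.solve()
print(p_chp.value, q_chp.value, f_chp.value)
```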
Heat Pump (HP): The heat pump uses electricity to extract heat from the ground. The relation between the heat pump electrical input \(p_{\rm hp,i}\) and thermal output \(q_{\rm hp,i}\), defined by its coefficient of performance (COP), is given by
\[q_{\rm hp,i}=\text{COP}\cdot p_{\rm hp,i}. \tag{2}\]
Gas boiler (GB): The gas boiler uses natural gas, \(f_{\rm gb,i}\), to generate heat, \(q_{\rm gb,i}\). The thermal output of the boiler, with efficiency \(\eta_{\rm gb,i}\), is modelled by
\[q_{\rm gb,i}=\eta_{\rm gb,i}\cdot f_{\rm gb,i}. \tag{3}\]
Solar Photovoltaic (PV): The energy output of the solar photovoltaic system at any time, \(p_{\rm pv,i}\), is quantified by the incident solar irradiance \(I_{\rm solar}\) [\(\rm kW/m^{2}\)] along with the total area \(a_{\rm pv}\) [\(\rm m^{2}\)] and the efficiency \(\eta_{\rm pv}\) [14]. The solar irradiance \(I_{\rm solar}\) depends on the forecast of the global solar irradiation (which is assumed to be known) and the orientation of the PV on the building.
\[p_{\rm pv,i}=\eta_{\rm pv,i}\cdot I_{\rm solar,i}\cdot a_{\rm pv,i} \tag{4}\]
The outputs of all energy converters are also limited by the following capacity constraints:
\[p_{\rm j,i}^{\rm min} \leq p_{\rm j,i}\leq p_{\rm j,i}^{\rm max}\quad\text{j}=\{\text{pv, chp}\}. \tag{5}\] \[q_{\rm k,i}^{\rm min} \leq q_{\rm k,i}\leq q_{\rm k,i}^{\rm max}\quad\text{k}=\{\text{gb, hp, chp}\}.\]
### Energy Storage System
The dynamics of the electrical storage (ES) is modelled by the following discrete-time linear time-invariant system:
\[p_{\rm s,i}(h)=\gamma_{\rm e,i}\cdot p_{\rm s,i}(h-1)+\eta_{\rm e,i}\cdot p_{ \rm s,i}^{\rm ch}(h)-\left(\frac{1}{\eta_{\rm e,i}}\right)\cdot p_{\rm s,i}^{ \rm dc}(h), \tag{6}\]
where \(h\in\mathbb{Z}_{\geq 0}\) is the time index, \(p_{\rm s}\) is the battery state of charge, \(p_{\rm s}^{\rm dc}\) and \(p_{\rm s}^{\rm ch}\) are the energy discharged from and charged into the battery, and \(\gamma_{\rm e}\) and \(\eta_{\rm e}\) are the standby and cycle efficiencies.
The storage levels must satisfy the battery capacity limits
\[p_{\rm j,i}^{\rm min}\leq p_{\rm j,i}\leq p_{\rm j,i}^{\rm max}\quad\text{j}= \{\text{s, dc, ch}\}. \tag{7}\]
The heat storage (TS) dynamics are modelled analogously, with the corresponding state of charge, \(q_{\rm s}\), the heat discharged from and charged into the thermal storage, \(q_{\rm s}^{\rm dc}\) and \(q_{\rm s}^{\rm ch}\), and standby and cycle efficiencies, \(\gamma_{\rm h}\) and \(\eta_{\rm h}\), respectively. Thermal energy storage units consist of devices such as borehole fields, water tanks, etc.
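For intuition, the storage model (6) can be simulated forward in time; the efficiencies and charging profile below are placeholder values.

```python
import numpy as np

gamma_e, eta_e = 0.999, 0.95           # standby and cycle efficiency (assumed values)
H = 24
p_ch = np.full(H, 5.0)                 # charging power [kW], placeholder profile
p_dc = np.zeros(H); p_dc[18:] = 8.0    # discharging in the evening

soc = np.zeros(H + 1)                  # state of charge p_s, with soc[0] = 0
for h in range(1, H + 1):
    soc[h] = gamma_e * soc[h - 1] + eta_e * p_ch[h - 1] - p_dc[h - 1] / eta_e

print(np.round(soc, 2))
```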
### Network modelling
The network and internal connections describe the input-output equations of different energy carriers. For hub \(i\), the energy balance constraint of electricity and heat read:
\[L_{\rm e,i} =p_{\rm chp,i}+p_{\rm pv,i}-p_{\rm hp,i}+\left(p_{\rm e,i}^{\rm out }-p_{\rm e,i}^{\rm in}\right)+\left(p_{\rm s,i}^{\rm dc}-p_{\rm s,i}^{\rm ch}\right)\] \[+\sum_{j\in\mathcal{N}\setminus\{i\}}p_{ij}^{\rm tr}\;, \tag{8}\] \[L_{\rm h,i} =q_{\rm chp,i}+q_{\rm gb,i}+q_{\rm hp,i}+\left(q_{\rm s,i}^{\rm dc }-q_{\rm s,i}^{\rm ch}\right), \tag{9}\]
where \(L_{\rm e,i}\) and \(L_{\rm h,i}\) are the electrical and thermal demand of energy hub \(i\), respectively, and \(p_{\rm e,i}^{\rm out}\) and \(p_{\rm e,i}^{\rm in}\) are the energy imported from and fed into the electricity grid, respectively. While a district heating network may also be present in some places, it is not included here. In the absence of a heating grid, we assume the heat demand can be met exactly at all times by conversion or storage.
The energy hubs can also trade electrical energy amongst each other. The power traded between hub \(i\) and hub \(j\) is \(p_{ij}^{\rm tr}\). The value is positive if the energy is imported by the hub \(i\) and negative otherwise. The total energy exchanged between hub \(i\) and the other hubs is \(\sum_{j\in\mathcal{N}\setminus\{i\}}p_{ij}^{\rm tr}\). Trading agreement is enforced via the so-called reciprocity constraints [1]
\[p_{ij}^{\rm tr}+p_{ji}^{\rm tr}=0,\quad\forall i,j\in\mathcal{N},\;i\neq j, \tag{10a}\] \[p_{ij}^{\rm tr}\leq\kappa_{ij},\quad\forall i,j\in\mathcal{N},\;i\neq j, \tag{10b}\]
The reciprocity constraints (10a) ensure that the energy exported from hub \(i\) to hub \(j\) is the same as the energy imported by hub \(j\) from hub \(i\); additionally, the constraints (10b) limit the trade between hubs, where \(\kappa_{ij}\) is the maximum amount that can be traded between hubs \(i\) and \(j\). In this study, we assume that each hub can trade with all other hubs. Specific trading networks can be defined by restricting some of the trading limits (10b) to zero.
## 3 Autonomous P2P Trading
### Economic Dispatch as a Noncooperative Game
For each hub, the economic dispatch problem consists of choosing the local operational set points \(p_{i}\), over a horizon \(\mathcal{H}:=\{1,\ldots,H\}\), to minimize its energy cost. The cost of each hub \(i\in\mathcal{N}\) is the sum over all costs of its available assets (including the energy exchanged with the electricity grid), \(\ell\in\mathcal{A}_{i}=\{\text{chp, gb, gshp, pv, grid}\}\) (namely, CHP unit, gas boiler, solar photovoltaic, electricity grid etc.), and the bilateral trades with the other hubs in the network.
We model the cost of each asset \(\ell\in\mathcal{A}_{i}\) as a strongly convex function \(f_{i}^{\ell}(p_{i}^{\ell})\), where \(p_{i}^{\ell}\in\mathbb{R}^{H}\) is the vector of setpoints of asset \(\ell\) over the horizon \(\mathcal{H}\). Typical choices in the literature are quadratic and linear functions [1, 2, 13]. The total cost of bilateral trades with hub \(j\) is given by
\[c_{(i,j)}^{\rm\top}p_{ij}^{\rm tr}+\gamma\|p_{ij}^{\rm tr}\|_{2}^{2}, \tag{11}\]
Figure 2: Feasible region of combined heat and power (CHP)
where \(p_{ij}^{\text{tr}}\in\mathbb{R}^{H}\) collects the trades with hub \(j\) over \(\mathcal{H}\), \(c_{(i,j)}\in\mathbb{R}^{H}\) defines the prices of the bilateral trades with hub \(j\), while \(\gamma\) is a marginal trading tariff imposed by the grid operator to use the network for bilateral trades.
Overall, the economic dispatch problem of each hub \(i\) can be compactly written as the following convex program:
\[\min_{p_{i}} \quad\overbrace{\sum_{\ell\in\mathcal{A}_{i}}f_{i}^{\ell}\left(p _{i}^{\ell}\right)+\sum_{j\in\mathcal{N}}\left(c_{(i,j)}^{\top}p_{ij}^{\text{tr }}+\gamma\big{\|}p_{ij}^{\text{tr}}\big{\|}_{2}^{2}\right)}^{=:J_{i}(p_{i},c_{ i})}\] (12a) s.t. \[\quad p_{i}\in\mathscr{P}_{i} \tag{12b}\] \[\quad p_{ij}^{\text{tr}}+p_{ji}^{\text{tr}}=0,\quad\forall j\in \mathcal{N}\setminus\{i\}, \tag{12c}\]
where \(p_{i}\) collects all the decision variables of hub \(i\) (local set points \(p_{i}^{\ell}\), and import/exports from/to other hubs \(p_{ij}^{\text{tr}}\)), and \(\mathscr{P}_{i}:=\{p_{i}\mid(7)-(10\text{b})\text{ hold}\}\) all its operational constraints; finally, the cost function \(J_{i}(p_{i},c_{i})\) combines the costs of all the local assets and the bilateral trades, and its parametric dependency on the bilateral trading prices \(c_{i}=(c_{(i,1)},\ldots,c_{(i,N)})\in\mathbb{R}^{(N-1)H}\) is made explicit.
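To make the structure of (12) concrete, the sketch below instantiates a single hub's dispatch in cvxpy with one generic quadratic-cost asset, two trading partners, and a simplified energy balance; all numerical values are illustrative assumptions, and the coupling reciprocity constraints (12c) are deliberately left out since they link this problem to the neighbours' problems.

```python
import cvxpy as cp
import numpy as np

H, neighbours = 24, [2, 3]                 # horizon and trading partners of hub 1
gamma = 0.01                               # trading tariff (assumed)
demand = 50.0 + 10.0 * np.sin(np.linspace(0, 2 * np.pi, H))     # toy electrical demand

p_gen = cp.Variable(H, nonneg=True)                   # local generation setpoint
p_tr = {j: cp.Variable(H) for j in neighbours}        # bilateral trades p_1j^tr
c_tr = {j: 0.18 * np.ones(H) for j in neighbours}     # trading prices c_(1,j)

# Cost: quadratic generation cost + priced trades + quadratic trading tariff.
cost = 0.002 * cp.sum_squares(p_gen) + 0.1 * cp.sum(p_gen)
for j in neighbours:
    cost += c_tr[j] @ p_tr[j] + gamma * cp.sum_squares(p_tr[j])

constraints = [p_gen <= 200.0,
               # simplified balance: local generation plus imports meet demand
               p_gen + sum(p_tr[j] for j in neighbours) == demand]
cp.Problem(cp.Minimize(cost), constraints).solve()
print(round(cost.value, 2))
```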
Note that the economic dispatch problems (12) are coupled via the reciprocity constraints (12c) that enforce agreement on the bilateral trades. Therefore, the collection of \(N\) parametric inter-dependent optimization problems (12) constitutes a game with coupling constraints (Facchinei and Kanzow, 2010). A relevant solution concept for the game (12) is that of generalized Nash equilibrium (GNE). Namely, a feasible action profile \(p^{*}=(p_{1}^{*},\ldots,p_{N}^{*})\) for which no agent can reduce their cost by unilaterally deviating (Belgioioso et al., 2022b, Def. 1). Here, we focus on a special subclass of GNEs known as variational GNEs (v-GNEs), characterized by the solution set, \(S(c)\), of the variational inequality \(\text{VI}(F(\cdot,c),\mathscr{P})\). Namely, the problem of finding a vector \(p^{*}\in\mathscr{P}\) such that
\[F(p^{*},c)^{\top}(p-p^{*})\geq 0,\quad\forall p\in\mathscr{P}, \tag{13}\]
where \(\mathscr{P}:=\{p\mid(12\text{b})-(12\text{c})\text{ hold for all }i\in\mathcal{N}\}\), and \(F\) is the so-called pseudo-gradient mapping, defined as
\[F(p,c)=\text{col}(\nabla_{p_{1}}J_{1}(p_{1},c_{1}),\ldots,\nabla_{p_{N}}J_{N} (p_{N},c_{N}))\]
and parametrized by the bilateral trading prices \(c\). This subclass of GNEs enjoys the property of "economic fairness", namely, the shadow price (dual variable) due to the presence of the coupling reciprocity constraints is the same for each hub (Facchinei and Kanzow, 2010).
Interestingly, since the cost functions \(J_{i}\) are decoupled, solutions of the variational GNEs of (12) correspond to the minimizers of the social cost problem
\[\left\{\min_{p}\ \sum_{i\in\mathcal{N}}J_{i}\left(p_{i},c_{i}\right)\quad\text{s.t. }\quad p\in\mathscr{P}\right\}=:W(c). \tag{14}\]
This equivalence directly follows by comparing the KKT conditions of \(\text{VI}(F(\cdot,c),\mathscr{P})\) with that of the social cost problem (14), and was noted in a number of different works (Le Cadre et al., 2020; Moret et al., 2020). In the remainder, we exploit this connection in a number of ways.
First, we prove that the trading game (12) has a unique variational GNE which does not depend on the specific choice of the bilateral trading prices.
**Lemma 1**: _Let \(S(\cdot)\) be the price-to-variational GNE mapping, i.e., \(S(c)=\{p^{*}\mid F(p^{*},c)^{\top}(p-p^{*})\geq 0,\ \forall p\in\mathscr{P}\}\). Then, \(S(c)=\{p^{*}\}\) for all price profiles \(c\in\mathbb{R}^{N(N-1)H}\), for some unique profile \(p^{*}\in\mathscr{P}\) independent of \(c\)._
_Proof:_ A formal proof of the equivalence between \(S(c)\) and the solution set to the social cost problem (14), \(S^{\text{sc}}(c)\), can be found in (Le Cadre et al., 2020). Here, we show that \(S^{\text{sc}}(c)\) has a unique element independent of \(c\). First, note that the objective functions \(J_{i}(p_{i},c_{i})\) are decoupled (namely, depend only on local decision variables) and are strongly convex, for all \(c_{i}\). Hence, (14) has a unique solution \(p^{*}(c)\), that is, \(S^{\text{sc}}(c)=\{p^{*}(c)\}\). Next, we show that \(p^{*}(c)\) is the same for all \(c\). Note that each pair of twin trading terms in \(J_{i}(p_{i}^{*}(c_{i}),c_{i})\) and \(J_{j}(p_{j}^{*}(c_{j}),c_{j})\) satisfies \(c_{(i,j)}^{\top}p_{ij}^{\text{tr},*}=-c_{(i,j)}^{\top}p_{ji}^{\text{tr},*}\), due to the reciprocity constraints (12c). Hence, these terms cancel out when summed up in the objective function (14), for any choice of \(c_{(i,j)}\). It follows that the minimizers of (14) do not depend on \(c\). \(\square\)
**Remark 1**: _The dispatch game (12), and its optimization counterpart (14), are the most widespread mathematical formulations for full P2P market designs, and appear with some variations (e.g., trading tariff, price differentiation, and reciprocity) in a number of different works (Baroche et al., 2019; Le Cadre et al., 2020; Moret et al., 2020). \(\square\)_
### Distributed solution of the P2P trading game
To find a variational GNE of (12), we use a distributed version of the Alternating Direction Method of Multipliers (ADMM) (Boyd et al., 2011) on (14). The resulting algorithm is summarized in Algorithm 1.
```
Initialization (\(k=0\)): \(p_{ij}^{\text{tr},0}=0\), \(\lambda_{ij}^{0}=0\), for all trades \((i,j)\).
Iterate until convergence, for all hubs \(i\) (in parallel):
1. Compute \(p_{i}^{k+1}\), \(\widehat{p}_{ij,i}^{\text{tr},k+1}\) and \(\widehat{p}_{ji,i}^{\text{tr},k+1}\): given \(p_{ij}^{\text{tr},k}\), \(\lambda_{ij}^{k}\), minimize (16) s.t. (12b)-(12c),
2. Broadcast \(\widehat{p}_{ij,i}^{\text{tr},k+1}\), \(\widehat{p}_{ji,i}^{\text{tr},k+1}\) to hub \(j\), and receive \(\widehat{p}_{ij,j}^{\text{tr},k+1}\), \(\widehat{p}_{ji,j}^{\text{tr},k+1}\) from \(j\), \(\forall j\in\mathcal{N}\setminus\{i\}\),
3. Update trade \(p_{ij}^{\text{tr},k+1}\) as in (17a),
4. Update dual variable \(\lambda_{ij}^{k+1}\) as in (17b),
5. Set \(k\gets k+1\).
```
**Algorithm 1** Distributed Peer to Peer Trading
The social cost problem in (14) is reformulated as a global consensus problem wherein the hubs have to reach agreement on the bilateral trades. The power traded between hubs, \(p_{ij}^{\text{tr}}\), becomes a global decision variable and each hub \(i\) and \(j\) optimizes over a local copy of this value, thought of as their local estimate of the trade, \(\widehat{p}_{ij,i}^{\text{tr}}\) and \(\widehat{p}_{ij,j}^{\text{tr}}\), respectively. The economic dispatch problem of each hub \(i\) presented in (12) is therefore solved for the local estimates in addition to \(p_{i}\) subject to additional constraints given below. An iterative consensus procedure is then used to ensure that the local estimates adhere to the global decision,
\[\widehat{p}_{ij,i}^{\mathrm{tr}}-p_{ij}^{\mathrm{tr}}=0,\quad\forall j\in\mathcal{N}\setminus\{i\}, \tag{15a}\] \[\widehat{p}_{ji,i}^{\mathrm{tr}}-p_{ji}^{\mathrm{tr}}=0,\quad\forall j\in\mathcal{N}\setminus\{i\}. \tag{15b}\]
The augmented Lagrangian for the problem (12) with the added constraints (15) is given by
\[J_{i}\left(p_{i},c_{i}\right)+\sum_{j\in\mathcal{N}\setminus\{i\}}\left(\lambda_{ij}^{\top}(\widehat{p}_{ij,i}^{\mathrm{tr}}-p_{ij}^{\mathrm{tr}})+\frac{\rho}{2}\left\|\widehat{p}_{ij,i}^{\mathrm{tr}}-p_{ij}^{\mathrm{tr}}\right\|_{2}^{2}+\lambda_{ji}^{\top}(\widehat{p}_{ji,i}^{\mathrm{tr}}-p_{ji}^{\mathrm{tr}})+\frac{\rho}{2}\left\|\widehat{p}_{ji,i}^{\mathrm{tr}}-p_{ji}^{\mathrm{tr}}\right\|_{2}^{2}\right) \tag{16}\]
where \(\lambda_{ij}\) is the Lagrange dual variable and \(\rho\geq 0\) is the augmented Lagrangian penalty parameter. The resulting subproblem, minimizing (16) subject to (12b) and (12c), is solved independently at the hub level at each iteration to update the local setpoints and the local estimates of the bilateral trades. Hubs \(i\) and \(j\) then communicate their local estimates of the trade values, which are used to update the global trade decision \(p_{ij}^{\mathrm{tr}}\) by an averaging step, and the dual variable of (15), through
\[p_{ij}^{\mathrm{tr,k+1}} =\frac{1}{2}\cdot(\widehat{p}_{ij,i}^{\mathrm{tr,k+1}}+\widehat{ p}_{ij,j}^{\mathrm{tr,k+1}}), \tag{17a}\] \[\lambda_{ij}^{k+1} =\lambda_{ij}^{k}+\rho\cdot(\widehat{p}_{ij,i}^{\mathrm{tr,k+1}}- p_{ij}^{\mathrm{tr,k+1}}). \tag{17b}\]
This process continues until the local and global values of all trades converge, namely, consensus is achieved.
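A minimal numerical sketch of the consensus mechanism (15)-(17) is given below: the hubs' dispatch problems are replaced by toy scalar quadratics solved with scipy, so the numbers are meaningless, but the flow of one iteration (local minimization of the augmented Lagrangian, exchange of local estimates, averaging as in (17a), dual update as in (17b)) mirrors Algorithm 1.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Two hubs negotiate a single trade t = p_12^tr (hub 1 import = +, hub 2 export = -).
# Toy local objectives standing in for the dispatch costs appearing in (16):
local_obj = {1: lambda x: 0.5 * (x - 8.0) ** 2,    # hub 1 would like to import 8
             2: lambda x: 0.5 * (x - 4.0) ** 2}    # hub 2 would only like to supply 4
rho = 1.0
t, lam = 0.0, {1: 0.0, 2: 0.0}                     # global trade and dual variables

for k in range(50):
    # Step 1: each hub minimizes its augmented Lagrangian over its local copy.
    x = {i: minimize_scalar(lambda v, i=i: local_obj[i](v)
                            + lam[i] * (v - t)
                            + 0.5 * rho * (v - t) ** 2).x
         for i in (1, 2)}
    # Steps 2-3: exchange the local estimates and average them, cf. (17a).
    t = 0.5 * (x[1] + x[2])
    # Step 4: dual update, cf. (17b).
    lam = {i: lam[i] + rho * (x[i] - t) for i in (1, 2)}

print(round(t, 3))   # converges to the consensus trade (here 6.0)
```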
### Undesired Effects of Autonomous P2P Trading
In this section, we show that this autonomous peer-to-peer trading model provably leads to a social cost decrease, but not necessarily to a local cost decrease for each hub.
The economic dispatch problem without bilateral trading corresponds to (12) with \(p_{ij}^{\mathrm{tr}}=0\) for all hubs. Clearly, the corresponding social cost, \(W^{\mathrm{nt}}(c)\), will be greater than \(W(c)\) in (14), since the feasible set of the non-trading scenario \(\mathscr{P}^{\mathrm{nt}}=\mathscr{P}\cap\{p\mid p_{ij}^{\mathrm{tr}}=0,\; \forall j\in\mathcal{N}\setminus\{i\},\forall i\in\mathcal{N}\}\) is a subset of the feasible set with bilateral trades \(\mathscr{P}\). Hence, allowing bilateral trades cannot increase the social cost, regardless of the trading prices.
The decrease in social cost is not necessarily reflected in the individual costs of all agents. In other words, there may exist profiles of bilateral trading prices for which certain hubs are worse off than if they were not participating in the autonomous trading mechanism. We illustrate this phenomenon via a numerical example for a 3-hub network in Section 5. A natural question is whether there exists a price profile for which all the hubs benefit by participating in the autonomous bilateral trading mechanism. The following theorem gives an affirmative answer to this question.
**Theorem 1**: _Let \(p^{*}\) be the unique variational GNE of (12). There exists a price profile \(c^{*}\in\mathbb{R}^{N(N-1)H}\) such that_
\[J_{i}(p_{i}^{*},c_{i}^{*})\leq J_{i}^{\mathrm{nt}},\quad\forall i\in\mathcal{ N}, \tag{18}\]
_where \(J_{i}^{\mathrm{nt}}\) are the local costs of the non-trading scenario._
_Proof:_ Without loss of generality, we carry out the proof for \(H=1\). Consider the unique variational GNE \(p^{*}\) and label all \(E\) realized trades between hubs in \(p^{*}\) as \(t_{\ell}\), for \(\ell\in\mathcal{E}=\{1,\ldots,E\}\). Then, define a matrix \(V\in\mathbb{R}^{N\times E}\), whose \((m,\ell)\)-entry satisfies, for all \(m\in\mathcal{N}\), \(\ell\in\mathcal{E}\):
\[[V]_{m\ell}:=\begin{cases}\widehat{p}_{ij}^{\mathrm{tr,*}}&\text{if $t_{\ell}=(i,j)$ and $m=i$},\\ \widehat{p}_{ji}^{\mathrm{tr,*}}&\text{if $t_{\ell}=(i,j)$ and $m=j$},\\ 0&\text{otherwise},\end{cases}\]
Since \(p_{ij}^{\mathrm{tr,*}}=-p_{ji}^{\mathrm{tr,*}}\) by the trading reciprocity (12c), \(V\) is indeed a graph incidence matrix and satisfies
\[\mathrm{range}(V)\supseteq\mathrm{range}(VV^{\top})=\mathrm{null}(1_{N}^{\top}), \tag{19}\]
where the equality follows from (Godsil and Royle, 2013, Th. 8.3.1) assuming \(V\) describes a connected (trading) network. If the network is not connected, the remainder of the proof can be carried out for each connected component. Since the social cost of \(p^{*}\), \(W\), is no greater than that of the non-trading case \(W^{\mathrm{nt}}\), we can write
\[W-W^{\mathrm{nt}}+\kappa=0,\text{ for some }\kappa\geq 0, \tag{20}\]
Now, define \(\mathbb{J}=\mathrm{col}(J_{1},\ldots,J_{N})\), \(\mathbb{J}^{\mathrm{nt}}=\mathrm{col}(J_{1}^{\mathrm{nt}},\ldots,J_{N}^{ \mathrm{nt}})\). Then, it holds that \(\mathbf{1}_{N}^{\top}(\mathbb{J}-\mathbb{J}^{\mathrm{nt}}+\frac{\kappa}{N} \mathbf{1}_{N})=0\), which implies
\[\mathbb{J}-\mathbb{J}^{\mathrm{nt}}+\frac{\kappa}{N}\mathbf{1}_{N}\in\mathrm{ null}(\mathbf{1}_{N}^{\top})\subseteq\mathrm{range}(V), \tag{21}\]
where for the last inclusion we used (19). It follows by (21) that there exists a vector \(c^{*}\) such that
\[\mathbb{J}+Vc^{*}=\mathbb{J}^{\mathrm{nt}}-\frac{\kappa}{N}\mathbf{1}_{N}\leq \mathbb{J}^{\mathrm{nt}}, \tag{22}\]
where the \(i\)-th component of (22) is in fact (18). \(\blacksquare\)
In the following section, we develop a scalable mechanism to identify bilateral trading prices that are not only locally beneficial for each hub but also fair.
## 4 Designing Fairness via bilevel Games
To ensure that the equilibrium of the autonomous peer-to-peer trading game (12) is "fair", we set the prices of the bilateral trades by solving the following bilevel game
\[\min_{c,\,p} \varphi\left(p,c\right)\] (23a) s.t. \[c\in\mathcal{C}, \tag{23b}\] \[p\in S(c), \tag{23c}\]
where \(\mathcal{C}\) is a set of feasible trading prices that can be used to model price regulations, such as capping, and \(S\) as in Lemma 1. The lower level (23c) imposes the operational setpoints and the bilateral trades \(p\) to be in the variational GNE set \(S(c)\) of the price-parametrized game (12). At the higher level, the optimal trading prices are chosen to minimize a certain _fairness metric_\(\varphi\) which depends also on the operational setpoints of the lower level.
Here, we define the fairness metric as the sample variance of the normalized cost reductions \(d_{i}\) achieved by enabling peer-to-peer trading between hubs, namely
\[\varphi(p,c)=\sum_{i\in\mathcal{N}}\left(d_{i}(c_{i},p_{i})-\frac{1}{N}\sum_{j \in\mathcal{N}}d_{j}(c_{j},p_{j})\right)^{2}, \tag{24}\]
where the normalized cost reduction \(d_{i}\) is defined as
\[d_{i}\left(c_{i},p_{i}\right)=(J_{i}^{\mathrm{nt}}-J_{i}\left(p_{i},c_{i}\right))/J _{i}^{\mathrm{nt}},\quad\forall i\in\mathcal{N}. \tag{25}\]
The non-trading cost \(J_{i}^{\mathrm{nt}}\) can be locally computed by each hub by solving their optimal economic dispatch problem in which trading is disabled. This metric ensures that the social wealth generated by enabling peer-to-peer trading is "fairly" distributed across the hubs.
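Evaluating the fairness metric is straightforward once the dispatch costs are known; a small numpy sketch with made-up costs:

```python
import numpy as np

J_nt = np.array([120.0, 80.0, 30.0])   # non-trading costs J_i^nt (assumed values)
J_tr = np.array([112.0, 70.0, 29.0])   # costs at the v-GNE with trading enabled

d = (J_nt - J_tr) / J_nt               # normalized cost reductions, eq. (25)
phi = np.sum((d - d.mean()) ** 2)      # fairness metric, eq. (24)

print(np.round(d, 3), round(phi, 4))
```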
Large-scale bilevel games such as (23) are notoriously difficult to solve. Existing solution approaches are based on mixed-integer programming or nonlinear relaxations, and lack
either convergence guarantees or computational efficiency. Next, we show that a solution to (23) can instead be efficiently obtained by solving sequentially the game (23c) and the optimization (23a). By Lemma 1, the v-GNEs set \(S(c)\) is a singleton, independent on the bilateral trading prices \(c\). Hence, the bilevel game (23) boils down to
\[\min_{c}\quad\varphi\left(p^{*},c\right)\quad\text{s.t.}\quad c\in\mathcal{C}, \tag{26}\]
where \(p^{*}\) is the unique price-independent v-GNE of (12). Hence, a solution to (23) can be found in two steps:
1. Compute the unique v-GNE \(p^{*}\) of (12), for any fixed \(c\);
2. Compute a solution to (26), where \(p^{*}\) is fixed.
Solving (26) centrally requires global knowledge of the optimal operational setpoints \(p^{*}=(p_{1}^{*},\ldots,p_{N}^{*})\), the cost functions \(J_{i}\), and the non-trading costs, \(J_{i}^{\text{nt}}\), which is unrealistic. Additionally, the dimensionality of (26) increases quadratically with the number of hubs, thus making it rapidly intractable for large-scale hub networks. Motivated by these challenges, in the next section, we design a scalable and privacy-preserving algorithm to solve (26).
_Remark 2_.: The single-level reformulation (26) is obtained by exploiting the specific structure of the market model (12), for which the variational Nash equilibrium is price insensitive (by Lemma 1). Variations of this market design that consider price differentiation (Sorin et al., 2019) or trading reciprocity constraints with inequality (Le Cadre et al., 2020) do not enjoy this favourable property.
### A Semi-decentralized Price Mediation Protocol
We design a semi-decentralized projected-gradient algorithm which preserves privacy and achieves scalability with respect to the number of hubs. To distribute the computation, a mediator, \(M_{(i,j)}\), is introduced between each pair of hubs \(i\) and \(j\), whose objective is to determine a fair trading price, \(c_{(i,j)}\). The mediator updates the price according to
\[c_{(i,j)}^{k+1}=\Pi_{\mathcal{C}}\left(c_{(i,j)}^{k}-\beta\frac{\partial \varphi(p^{*},c^{k})}{\partial c_{(i,j)}}\right), \tag{27}\]
where \(\Pi_{\mathcal{C}}\) is the projection onto a feasible set of prices \(\mathcal{C}\). The gradient of \(\varphi(p,c)\) with respect to a price \(c_{(i,j)}\) is
\[\frac{\partial\varphi(p,c)}{\partial c_{(i,j)}}=\frac{2}{N}\left(\frac{p_{ij}^{\mathrm{tr},*}}{J_{i}^{\mathrm{nt}}}\left(d_{i}(c_{i},p_{i})-\bar{d}\left(c,p\right)\right)+\frac{p_{ji}^{\mathrm{tr},*}}{J_{j}^{\mathrm{nt}}}\left(d_{j}\left(c_{j},p_{j}\right)-\bar{d}\left(c,p\right)\right)\right)\]
where \(\bar{d}(c,p)=\frac{1}{N}\sum_{i=1}^{N}d_{i}\left(c_{i},p_{i}\right)\) is the average cost reduction. A central coordinator is introduced that gathers the \(d_{i}\) and broadcasts \(\bar{d}\) to all the mediators. Algorithms with this information structure are called semi-decentralized, see e.g. (Belgioioso and Grammatico, 2023).
The resulting algorithm is summarized in Algorithm 2 and its information structure is illustrated in Figure 3. At every iteration, each mediator \(M_{ij}\) receives the normalized cost reductions, \(d_{i}\) and \(d_{j}\), from the hubs it manages, and the average network cost reduction \(\bar{d}\) from the coordinator. Then, it updates the price \(c_{(i,j)}\) with a projected-gradient step (27). This process continues until the prices of all trades reach convergence. Since the objective function \(\varphi\left(p,c\right)\) is convex and \(L\)-smooth1, for some \(L>0\), uniformly in \(p\), taking the step size \(\beta\in(0,2/L)\), in the price update (27), guarantees convergence of Algorithm 2.
Footnote 1: The convexity of \(\varphi(p,\cdot)\) can be proven by showing that its Hessian is positive semidefinite. A formal proof is omitted here due to space limitations; smoothness follows since \(\varphi(p,\cdot)\) is quadratic.
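The following sketch emulates the full mediation loop on a toy 3-hub example. The trade profile, costs, price cap, and step size are assumed values, and the gradient is obtained by applying the chain rule to (24)-(25), using the fact that \(J_{i}\) depends linearly on \(c_{(i,j)}\) through the term \(c_{(i,j)}^{\top}p_{ij}^{\mathrm{tr}}\); it is a sketch of the protocol, not the implementation used for the reported results.

```python
import numpy as np

# Toy 3-hub example with a scalar horizon. p* is the price-independent v-GNE
# trade profile (Lemma 1); all numbers below are made up for illustration.
p_tr = {(1, 2): -6.0, (2, 1): 6.0, (1, 3): -4.0, (3, 1): 4.0}  # hub 1 exports
base = {1: 155.0, 2: 60.0, 3: 28.0}   # dispatch cost at p* excluding trade payments
J_nt = {1: 150.0, 2: 80.0, 3: 40.0}   # non-trading costs J_i^nt
hubs, trades = [1, 2, 3], [(1, 2), (1, 3)]
c = {t: 1.0 for t in trades}          # one price per trade, handled by mediator M_(i,j)
c_lo, c_hi, beta = 0.0, 5.0, 20.0     # price cap set C and step size (toy values)

def hub_costs(c):
    J = dict(base)
    for (i, j), price in c.items():
        J[i] += price * p_tr[(i, j)]   # c_(i,j)^T p_ij^tr
        J[j] += price * p_tr[(j, i)]   # reciprocity: p_ji^tr = -p_ij^tr
    return J

for _ in range(300):
    J = hub_costs(c)
    d = {i: (J_nt[i] - J[i]) / J_nt[i] for i in hubs}   # eq. (25), sent to coordinator
    d_bar = np.mean(list(d.values()))                   # broadcast by the coordinator
    for (i, j) in trades:                               # each mediator, in parallel
        # chain rule on (24)-(25): dJ_i/dc_(i,j) = p_ij^tr, dJ_j/dc_(i,j) = p_ji^tr
        grad = -2 * ((d[i] - d_bar) * p_tr[(i, j)] / J_nt[i]
                     + (d[j] - d_bar) * p_tr[(j, i)] / J_nt[j])
        c[(i, j)] = float(np.clip(c[(i, j)] - beta * grad, c_lo, c_hi))   # eq. (27)

final_J = hub_costs(c)
print({t: round(v, 2) for t, v in c.items()})                               # mediated prices
print({i: round((J_nt[i] - final_J[i]) / J_nt[i], 3) for i in hubs})        # equalised d_i
```

On this toy instance the prices converge to values for which the normalized cost reductions of all three hubs coincide, which is exactly the behaviour reported for the 3-hub network in the numerical results.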
## 5 Numerical Results
We illustrate the proposed price mediation mechanism on a 3-hub network. The configuration, parameters and capacities for the three hubs are presented in Appendix A. In general, the cost functions for electricity and gas are linear and the v-GNE is not unique. In this study, an
Figure 3: Flowchart of the price mediation scheme in Alg. 2.
additional regularization term that minimizes the total energy imported is added to the cost to find a unique solution. Alternatively, selection algorithms can be used instead of regularization to handle the non-uniqueness of the v-GNE (Ananduta and Grammatico, 2022). The price of input energy carriers (electricity and gas) and for utilizing the electricity grid are summarized in Table 1. We solve the optimization for a horizon \(H=24\,\mathrm{h}\) with a sampling resolution of \(1\,\mathrm{h}\).
The electricity demand and PV production for the three hubs over a span of 24 hours are shown in Figures 4(a) and (b), respectively. Hub 1 represents a larger industrial hub with a high production capacity, Hub 2 is a medium sized hub, and Hub 3 is a small residential hub with heat pump, PV, and small energy demand.
### The impact of autonomous peer-to-peer trading
First, we compare the performance of the system without and with autonomous peer-to-peer energy trading. All the bilateral trades and the operational setpoints of the converters are calculated by solving (12) with Algorithm 1. The results are illustrated in Figure 5(a).
At the beginning of the day (0:00-7:00) and at the end of the day (18:00-24:00), when there is no PV production, power is traded from the larger Hub 1, which can use its CHP to produce electricity, to Hub 2 and Hub 3, owing to its larger production capacity. In the absence of trading, Hub 2 and Hub 3 import electricity from the grid at a higher price than the price of the gas used to produce the energy in the CHP. Part of the power traded from Hub 1 to Hub 3 is transferred to Hub 2 to minimize the grid trading tariff incurred by the hubs; this tariff grows quadratically with the power transferred between any two hubs, making multiple small power trades more profitable than a single large one. When PV output is high, the trades drop to 0 and the cost is equivalent to that of the non-trading case, since the PV output of each hub is sufficient to fulfil the local demand.
### The impact of the bilateral trading prices
We verify that the optimal power traded between the hubs and the optimal setpoints, \(p^{*}\), are independent of the trading price (Lemma 1) by solving (12) with different trading prices. Figure 6 shows the cost reduction (25) achieved by each hub for three different trading prices (uniform across all peer-to-peer trades), namely, \(c=0.1,0.18,0.2\) CHF/kWh. The figure also shows the reduction in the social cost compared to when no trading occurs, and the benefit of autonomous trading to the hub network is evident by the 2.5% reduction of social cost, independently of the trading price. For \(c=0.1\) CHF/kWh, since the trading price is low (even lower than the feed-in tariff), Hub 2 and Hub 3 benefit by trading as they import cheap energy from Hub 1. However, this results in an increase of the cost for Hub 1, as the trading price does not cover the additional production costs of the power traded to the other hubs. For \(c=0.18\) CHF/kWh, although each of the hubs benefits from trading, the cost reduction varies drastically between the hubs. Hub 1 that exports much of its energy has a much smaller benefit than Hub 2 that only imports energy. Finally, for \(c=0.2\) CHF/kWh, Hub 1 and Hub 2 continue to benefit whereas the smaller Hub 3 loses since the higher trading price for import and the trading tariff increase the net price to more than the grid price.
Finally, the trading prices are calculated using the price mediation mechanism in Algorithm 2. The resulting cost reduction for the three hubs using the optimal price profile is shown in Figure 6. Interestingly, the trading price found by Algorithm 2 is different for each trade and also varies at different times within the control horizon (shown in Figure 5(b)). The normalized cost reduction achieved by each of the hubs is nearly equal and matches the social cost reduction achieved by the network.
## 6 Conclusion
Energy trading prices play a major role in incentivizing participation in autonomous peer-to-peer trading mechanisms. We proposed a privacy-preserving and scalable
\begin{table}
\begin{tabular}{l c} \hline Tariff & Price (CHF/kWh) \\ \hline Electricity output & 0.22 \\ Electricity feed-in & 0.12 \\ Gas & 0.115 \\ \hline \end{tabular}
\end{table}
Table 1: Tariffs for electricity and gas utility.
Figure 4: Electric power demand, \(L_{\mathrm{e,i}}\), and the PV production, \(p_{\mathrm{pv,i}}\), for the three hubs (\(i=1,2,3\)) over 24 hours.
Figure 5: (a) Bilateral power traded and (b) trading price for the bilateral trades in the 3-hub network.
price-mediation algorithm that provably finds price profiles that are not only locally-beneficial for each hub but also network-wide fair. Numerical simulation on a 3-hub network supported this theoretical result.
|
2301.12443 | Pipe-BD: Pipelined Parallel Blockwise Distillation | Training large deep neural network models is highly challenging due to their
tremendous computational and memory requirements. Blockwise distillation
provides one promising method towards faster convergence by splitting a large
model into multiple smaller models. In state-of-the-art blockwise distillation
methods, training is performed block-by-block in a data-parallel manner using
multiple GPUs. To produce inputs for the student blocks, the teacher model is
executed from the beginning until the current block under training. However,
this results in a high overhead of redundant teacher execution, low GPU
utilization, and extra data loading. To address these problems, we propose
Pipe-BD, a novel parallelization method for blockwise distillation. Pipe-BD
aggressively utilizes pipeline parallelism for blockwise distillation,
eliminating redundant teacher block execution and increasing per-device batch
size for better resource utilization. We also extend to hybrid parallelism for
efficient workload balancing. As a result, Pipe-BD achieves significant
acceleration without modifying the mathematical formulation of blockwise
distillation. We implement Pipe-BD on PyTorch, and experiments reveal that
Pipe-BD is effective on multiple scenarios, models, and datasets. | Hongsun Jang, Jaewon Jung, Jaeyong Song, Joonsang Yu, Youngsok Kim, Jinho Lee | 2023-01-29T13:38:43Z | http://arxiv.org/abs/2301.12443v1 | # Pipe-BD: Pipelined Parallel Blockwise Distillation
###### Abstract
Training large deep neural network models is highly challenging due to their tremendous computational and memory requirements. Blockwise distillation provides one promising method towards faster convergence by splitting a large model into multiple smaller models. In state-of-the-art blockwise distillation methods, training is performed block-by-block in a data-parallel manner using multiple GPUs. To produce inputs for the student blocks, the teacher model is executed from the beginning until the current block under training. However, this results in a high overhead of redundant teacher execution, low GPU utilization, and extra data loading. To address these problems, we propose Pipe-BD, a novel parallelization method for blockwise distillation. Pipe-BD aggressively utilizes pipeline parallelism for blockwise distillation, eliminating redundant teacher block execution and increasing per-device batch size for better resource utilization. We also extend to hybrid parallelism for efficient workload balancing. As a result, Pipe-BD achieves significant acceleration without modifying the mathematical formulation of blockwise distillation. We implement Pipe-BD on PyTorch, and experiments reveal that Pipe-BD is effective on multiple scenarios, models, and datasets.
Distributed Training, Knowledge Distillation, Neural Architecture Search, Model Compression
## I Introduction
Modern deep neural network models are known to incur huge computational and memory requirements, especially with large-scale datasets [1]. With the continuing growth in model size, it takes tens, if not hundreds, of GPU days to train them [2], and the model size often exceeds the GPU memory capacity. Especially for methods that explore large solution spaces such as the neural architecture search (NAS) [3, 4], the problem becomes even more significant. This problem mandates the use of model parallelism [5, 6], which creates substantial throughput loss with inevitable pipeline bubbles.
Blockwise distillation [7, 8, 9] is one promising approach to mitigate such problems. As illustrated in Fig. 1, blockwise distillation splits the model into multiple smaller blocks. As opposed to traditional knowledge distillation methods that rely on input data and output labels from both ends, blockwise distillation uses the intermediate activation values of pretrained blocks of a 'teacher' to train each'student' block. As a result, each block converges faster (i.e., fewer epochs) due to the smaller solution space.
Contrary to the earlier belief that teachers must be larger than students, recent studies have revealed that smaller teachers can be used to train larger students [10]. With such findings, blockwise distillation is used in various fields such as model compression [7, 11] and NAS [9, 12]. Since training a small teacher for a new task is quick and easy, blockwise distillation can be applied in most cases where traditional training is used.
However, the existing state-of-the-art methods for blockwise distillation [7, 9] exhibit several inefficiencies. Relying on the traditional data-parallel training scheme, they train each student block one by one independently. While this fully exploits the independent nature of the blocks, it is not the best choice for training throughput. First, to train a single intermediate student block, the teacher blocks must be executed from the beginning to the designated block. As a result, the teacher blocks exhibit substantial redundant execution, especially with blocks closer to the output. Second, with data parallelism, a batch of data is split among multiple GPUs, which leads to a smaller batch size per GPU, often resulting in resource under-utilization. Some approaches use a larger batch size to mitigate this [2], but it is known to be difficult to ensure model convergence [13]. Last, the data must be redundantly loaded for each student block. Unless the entire dataset fits into the GPU memory, the data are loaded from the CPU memory or disks. As the memory and disks are shared system-wide, the extra data loading becomes another significant overhead in training.
To address the issues, we propose _Pipe-BD_, a novel parallel training method for blockwise distillation. We assign individual student blocks to different devices and compute a teacher network in a relayed manner, which can reduce teacher redundancy. Inspired by approaches with pipeline parallelism [5, 14, 15, 16], we restructure the training schedule of the student blocks such that the training time is greatly improved.
Fig. 1: Conceptual diagram of blockwise distillation.
Pipe-BD comprises three components: First, we propose _teacher relaying_. Instead of relying on data parallelism, we spread the student model to multiple training devices (i.e., GPUs) at block granularity. Then, blocks of the teacher model are executed by relaying the intermediate activation values between the devices. This approach has the advantages of eliminating extra data loading and increasing resource utilization thanks to a larger per-device batch size. Second, we propose _decoupled parameter update_ to remove the scheduling bubbles and enhance the overall utilization. With teacher relaying, devices have to wait for the intermediate activation values from previous devices, creating scheduling bubbles. Decoupled parameter update performs model parameter updates in a misaligned manner and starts the next step right away, so those bubbles can be removed. Third, we suggest _automatic hybrid distribution_. Achieving a balance between devices is difficult with blockwise distillation because of the limited number of blocks available in typical neural network structures. Automatic hybrid distribution enables fine-grained balancing by further splitting blocks along the batch dimension.
Pipe-BD is implemented on PyTorch and can automatically make all scheduling decisions to improve the throughput. Our extensive set of experiments shows Pipe-BD achieves a significant speedup over the state-of-the-art methods on multiple use cases and environments ranging from 2.37\(\times\) to 7.38\(\times\).
## II Background and Related Work
### _Blockwise Distillation_
Blockwise distillation [7, 8, 9] is a promising direction for training a neural network. In traditional knowledge distillation, a student model is trained against a pre-trained teacher model. Because the solution space size is identical to that of conventional supervised training, it faces convergence and training time problems. Blockwise distillation splits the larger teacher model into smaller ones and trains them blockwise, as depicted in Fig. 1. Each teacher block (\(T_{i}\)) and student block (\(S_{i}\)) pair obtains activation values from the previous teacher block (\(T_{i-1}\)). The pair performs a forward pass using this activation as input and produces a teacher output activation and a student output activation. Blockwise distillation minimizes a loss function (\(L(\Delta output)\)), which measures the difference between these two activations, to distill knowledge from a teacher block to the dedicated student block. This blockwise distillation process makes the target problem spaces smaller and is known to converge faster. Many applications such as NAS [9, 12] and model compression [7, 11] use blockwise distillation because of these characteristics.
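As a concrete picture of the per-block objective, the following PyTorch sketch trains one student block against the matching frozen teacher block; the tiny blocks, tensor shapes, and the mean-squared-error choice for \(L(\Delta output)\) are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical i-th blocks of a teacher and a (smaller) student.
teacher_block = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                              nn.Conv2d(16, 16, 3, padding=1))
student_block = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1))
for p in teacher_block.parameters():
    p.requires_grad_(False)              # the teacher is pretrained and frozen

opt = torch.optim.SGD(student_block.parameters(), lr=0.1)
loss_fn = nn.MSELoss()                   # stands in for L(delta output)

# act_in stands for the activation produced by the previous teacher block T_{i-1}.
act_in = torch.randn(8, 16, 32, 32)
for step in range(10):
    with torch.no_grad():
        t_out = teacher_block(act_in)    # teacher output activation
    s_out = student_block(act_in)        # student output activation
    loss = loss_fn(s_out, t_out)         # distillation loss for this block only
    opt.zero_grad(); loss.backward(); opt.step()
```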
### _Parallelization Baseline of Blockwise Distillation_
State-of-the-art methods of blockwise distillation [9] use the traditional data-parallel scheme to further accelerate the training, as illustrated in Fig. 2(a). This scheme trains a student block (\(S_{i}\)) with all devices in a data-parallel manner for a fixed number of epochs \(n\), then moves on to train the next student block (\(S_{i+1}\)). It redundantly loads the data multiple times because of this iterative training. Each student block (\(S_{i}\)) requires the activation values from the previous teacher block (\(T_{i-1}\)), so it also entails redundant teacher executions. Furthermore, it uses a smaller batch size per device, which leads to under-utilization. Due to these inefficiencies, data-parallel blockwise distillation suffers from poor scalability. An alternative scheme [7] regards the training of each layer as a single task and adopts a bin-packing algorithm to balance the workload. However, it still has redundant teacher executions and suffers from workload imbalance when there are insufficient layers in the model.
## III Motivation
In this section, we provide a motivational study highlighting the inefficiency of the existing parallel blockwise distillation training scheme and the need for a new approach. Fig. 2 depicts the breakdown of time spent in parallel blockwise distillation with four RTX A6000 GPUs (NAS with Cifar-10; see Section VI-B for the detailed setup). 'Baseline' refers to the state-of-the-art parallel blockwise distillation method [9], where each block is trained sequentially using four devices with data parallelism. As displayed in the chart, the training time is spent on data loading, teacher execution (forward pass), and student execution (forward/backward pass).
However, all three parts exhibit significant inefficiency, slowing down the training. To demonstrate the inefficiencies, we plot the 'ideal' bar in Fig. 2 by measuring the training time of each part separately with a single GPU and dividing each time by four. This represents an imaginary system with perfect parallelization and infinite device memory.
The large gaps in teacher execution and data loading time occur because the baseline has many redundant teacher executions and extra data loading. Because training each student block requires executing the teacher model from the beginning, the earlier teacher blocks are redundantly executed multiple times (see Fig. 2(a)). Similarly, block-by-block training forces the data to be loaded as many times as there are blocks. In addition, data parallelism leads to a smaller batch size per device, resulting in lower resource utilization. As demonstrated in several empirical studies [17, 18], a sufficient per-device batch size is critical for training throughput, which is the cause of the gap in student execution time. Pipe-BD targets these inefficiencies. As presented in Fig. 2, Pipe-BD reduces the training time close to the ideal case, with only a small overhead (idle).
Fig. 2: Motivational experiment. The breakdown demonstrates three major inefficiencies of baseline; redundant teacher execution, extra data loading, and low resource utilization.
## IV Pipe-BD Method
### _Teacher Relaying_
Pipe-BD starts by restructuring the training pipeline of blockwise distillation with teacher relaying. As opposed to the baseline (Fig. 2(a)) where a single block is fully trained in a data-parallel manner before moving on to the next, teacher relaying exclusively distributes the teacher and student blocks to all training devices. Then, each device relays the intermediate teacher activation values to the next device as depicted in Fig. 2(b). The received activation is the input for both the teacher and the student block. The teacher block is executed first, whose output activation is sent to the next device such that the execution of the next block can start. Overlapped with the transmission, the forward pass execution of the student starts, taking the same input as the teacher block. After calculating the loss by comparing the output activations of the teacher and the student, the backward pass of the student follows. After all the backward passes are finished, parameter updates are performed on each block, completing the training step.
The teacher relaying scheme has two advantages over the existing approach. First, each device executes the stages with larger batches and enjoys better resource utilization. For example, in the baseline using four devices with an effective batch size of 256, each device executes with a batch size of 64, which is often too small to fully utilize the hardware resources. In contrast, with teacher relaying, each device would run with a full batch size of 256, increasing resource utilization. Second, the overhead of data loading is reduced. When the dataset is large, the data must come from the main memory or the disk, where both are system-wide shared resources. Because teacher relaying does not go through multiple training passes, the number of data loading decreases, leading to higher throughput.
One minor trade-off is communication overhead. In the baseline, gradient sharing must occur after every backward pass. With teacher relaying, there is some communication delay from relaying the intermediate activation values from one device to another. However, the communication time is almost negligible in our target settings of single-node multi-device training. Furthermore, in both cases, most of the communication is overlapped with computation.
### _Decoupled Parameter Update_
Although teacher relaying removes the redundant teacher executions, the removed redundancy is not directly translated to speedup. At the beginning of each step, each device has to wait until the previous device delivers the intermediate activation. Fig. 2(c) illustrates how decoupled parameter update addresses this problem. As soon as the backward pass of each block is complete, the parameter updates are performed without waiting for the other devices. Then, the teacher execution of the next step can start earlier, increasing the training throughput. This does not harm the training accuracy by any means because the student blocks have no dependency on the weight parameters of the other blocks, which is a special characteristic of blockwise knowledge distillation training.
Decoupled parameter update successfully hides the teacher waiting time except for the beginning of each epoch, where full synchronization is needed for validating the whole model. Because there are usually tens to hundreds of steps per epoch, such overhead is amortized to a negligible amount.
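The contrast with a synchronized update can be sketched as below; this is illustrative PyTorch-style pseudocode, not the framework's actual code.

```python
import torch.distributed as dist

def synchronized_update(optimizer):
    # Baseline-style update: every device waits at a barrier (or for a gradient
    # all-reduce) before stepping, so the next step's teacher forward is delayed.
    dist.barrier()
    optimizer.step()
    optimizer.zero_grad()

def decoupled_update(optimizer):
    # DPU: each student block depends only on its own parameters, so the local
    # update runs immediately after the local backward pass, with no barrier.
    optimizer.step()
    optimizer.zero_grad()
```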
### _Automatic Hybrid Distribution_
With teacher relaying and the decoupled parameter update, the system throughput is determined by the throughput of the slowest device. Because of this, load balancing between devices plays a critical role in performance. One straightforward and intuitive load-balancing method is to distribute the workload in contiguous blocks. This distribution is simple because there are only \({}_{B-1}C_{N-1}\) choices for \(B\) blocks and \(N\) devices. Unfortunately, this naive distribution scheme often fails to provide a good balance. In blockwise distillation, the number of blocks \(B\) is determined by the neural network architecture. Usually, \(B\) is around ten [3, 19] and \(N\) is four to eight within a single server. Because there are not enough blocks to distribute among the devices, the naive distribution is likely to end up with a severe workload imbalance.
Automatic hybrid distribution provides another degree of freedom for workload distribution, as presented in Fig. 2(d). Instead of relying only on block granularity, we allow further splitting of each block along the batch dimension. Thus, when a block is too long, it can be split into two or more smaller effective blocks. Because a batch is split, the total workload can become larger due to GPU under-utilization. However, this slight increase in the total workload is often dwarfed by the gain from workload balancing.
Fig. 3: Illustration of the techniques in Pipe-BD.
Automatic hybrid distribution introduces a larger design space for workload distribution, which is difficult to tune manually. To estimate the throughput of possible schedules, we measure the time consumed by a few test executions of each block under feasible batch sizes. Then, considering the practical problem sizes of both \(B\) and \(N\) at around ten, the optimal solution can be found using an exhaustive search. Because the decision is made only once at the beginning, its overhead is amortized over the entire training and is negligible in our experiments.
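A rough sketch of such a schedule search is shown below: measured per-block times are combined over contiguous partitions, optionally splitting the heaviest block along the batch dimension with a simple utilization penalty. The penalty value and the heaviest-block-only restriction are simplifying assumptions for illustration, not the exact search used by the framework.

```python
from itertools import combinations

def best_contiguous(times, n_devices):
    """Smallest possible max-device time over contiguous partitions of the blocks."""
    best = float("inf")
    for cuts in combinations(range(1, len(times)), n_devices - 1):
        bounds = (0,) + cuts + (len(times),)
        cost = max(sum(times[bounds[i]:bounds[i + 1]]) for i in range(n_devices))
        best = min(best, cost)
    return best

def ahd_costs(block_times, n_devices, split_penalty=1.15):
    """Estimated step time without splitting (k=0) and with the heaviest block
    split into k batch-wise shares, each share carrying a utilization penalty."""
    block_times = list(block_times)
    candidates = {0: block_times}
    heavy = max(range(len(block_times)), key=block_times.__getitem__)
    for k in range(2, n_devices + 1):
        share = block_times[heavy] * split_penalty / k
        candidates[k] = block_times[:heavy] + [share] * k + block_times[heavy + 1:]
    return {k: best_contiguous(t, n_devices) for k, t in candidates.items()}

# Example: a heavy first block benefits from splitting on 4 devices.
# ahd_costs([8.0, 2.0, 2.0, 1.5, 1.5, 1.0], n_devices=4)
```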
## V Pipe-BD Framework
### _Overall Procedure_
```
Algorithm: Overall procedure of Pipe-BD (device-local view)

Notation:
  G   : number of devices;  D_i : i-th device
  T_i : teacher blocks assigned to D_i
  S_i : student blocks assigned to D_i

Initialization: decide T_i and S_i of each device                   // AHD
for each epoch do
  for parallel i = 0, 1, ..., G-1 do
    for each step do
      if D_i.prev == ∅ then  act_in_i = load_data()
      else                   act_in_i = receive(from = D_i.prev)    // TR
      act_out_i = T_i.forward(act_in_i)
      if D_i.next != ∅ then  send(act_out_i, to = D_i.next)         // TR
      act_out_s_i = S_i.forward(act_in_i)
      loss_i = distillation_loss(act_out_s_i, act_out_i)
      S_i.backward(loss_i)
      S_i.update()                                                  // DPU
```
## VII Experimental Results
### _Speedup and Ablation_
Fig. 4 shows the speedup of Pipe-BD over the baselines, together with an ablation study of the proposed techniques, using four RTX A6000 GPUs. Each colored bar shows the speedup of Pipe-BD when 1) only teacher relaying is applied (**TR**), 2) the decoupled parameter update is further applied (**TR+DPU**), and 3) all three schemes are applied, including automatic hybrid distribution (**TR+DPU+AHD**). In addition, we tested an alternative method named _Internal Relaying_ (**TR+IR**). With internal relaying, each device trains all existing blocks in every step, and parallelization is obtained via data parallelism. Instead of re-executing the teacher blocks or relaying activations between devices, the teacher activations are stored internally in memory and retrieved for the next block. This approach removes the redundancies of teacher execution and data loading as well as the load imbalance. However, it has the disadvantage of using a small batch size per device. In fact, internal relaying is a special case of Pipe-BD with TR+DPU+AHD in which all blocks are split only along the batch dimension.
Among the baselines, LS performs better than DP on Cifar-10 but worse on ImageNet. Because the neural networks for ImageNet typically contain a few heavy blocks, LS suffers from severe load imbalance. Nevertheless, both perform worse than Pipe-BD. TR provides speedup in all cases by eliminating extra data loading and redundant teacher execution and by enhancing resource utilization. Furthermore, DPU provides additional speedup by removing synchronization barriers, which improves the overlap of the teacher waiting time with student executions. Finally, AHD removes the pipeline bubbles by balancing workloads, which yields an additional speedup over TR+DPU.
With the ImageNet dataset, the larger spatial dimension of the images (224\(\times\)224 vs. 32\(\times\)32) leads to a heavy workload in the first block. As a result, with TR only, the execution time of block 0 dominates all the others. Because of this, DPU has little room for improvement, whereas splitting the workload of the first block with AHD has a large impact on reducing the bubbles. In contrast, in the Cifar-10 case, the workload is already well balanced with the TR+DPU version alone, and the gain from further balancing is offset by the loss from the lower resource utilization caused by AHD.
### _Sensitivity and Scheduling_
Fig. 4(b) and Fig. 4(c) show how Pipe-BD automatically determines the appropriate schedule under two different environmental settings for the same NAS-on-ImageNet workload. While the speedup trends in Fig. 4(a) are similar, they come from different schedules. The execution time of block 0 is the longest among the six blocks in both settings; however, the gap is larger on the A6000 than on the 2080Ti. To mitigate the imbalance, Pipe-BD settles on a schedule where the first three blocks (0-2) are shared across three devices (0-2) for the A6000, while on the 2080Ti block 0 is shared between two devices (0-1) and two blocks (1-2) are assigned to device 2.
In Fig. 6, we demonstrate the sensitivity to the batch size on the NAS workload, normalized against DP at each batch size. In general, the advantage of Pipe-BD is not very sensitive to the batch size. One common trend is that the speedup is larger at smaller batch sizes, because the difference in resource utilization becomes more significant with smaller batches. One exception is AHD for ImageNet, where the speedup is better at larger batch sizes. The reason is found in the schedule depicted in Fig. 4(c), which uses three-way data parallelism to balance workloads. Because the training time for the student is shorter in both the baseline and Pipe-BD at larger batch sizes, the reduction in teacher redundancy and extra data loading accounts for more of the overall speedup.
### _Memory Overhead_
Fig. 7 depicts the memory overhead of Pipe-BD on the NAS task for each rank (GPU). Due to the characteristics of CNN-based models, lower-indexed teacher blocks generally have
Fig. 4: Speedup and ablation of baselines and Pipe-BD.
Fig. 5: GPU type sensitivity of Pipe-BD on NAS.
| Category | Item | Value |
| --- | --- | --- |
| HW – Default (w/ A6000) | GPU | 4\(\times\) NVIDIA RTX A6000 |
| | CPU | 1\(\times\) EPYC 7302, 16 cores |
| | Memory | 256 GB DDR4 ECC |
| HW (w/ 2080Ti) | GPU | 4\(\times\) NVIDIA RTX 2080Ti |
| | CPU | 2\(\times\) Xeon 4214 Silver, 12 cores |
| | Memory | 256 GB DDR4 ECC |
| SW (Common) | CUDA | 11.6 |
| | PyTorch | 1.3 |
| Workload – NAS | Teacher Model | MobileNetV2 |
| | Kernel Size | 3, 6 |
| | Expansion Ratio | 3, 6 |
| Workload – Model Compression | Teacher | VGG-16 |
| | Replacement | DS-Conv |

TABLE I: Experimental Environment
larger feature map sizes. TR and DPU consume more memory than DP because of this characteristic, especially on rank 0. This outcome is also demonstrated in Fig. 6(b), because models for ImageNet contain even larger feature map sizes in their lower-indexed blocks. However, AHD successfully addresses this issue by using data parallelism in a hybrid manner, which lessens the memory overhead of the earlier ranks, as depicted in Fig. 4(c). As a result, Pipe-BD provides superior multi-fold speedups with minor additional memory overheads of 8.7% and 21.3% over DP on average for Cifar-10 and ImageNet, respectively.
### _Training Quality_
Pipe-BD has no component that can hurt accuracy, because it only alters the scheduling strategy. Nonetheless, we report the accuracy in Table II to demonstrate that the Pipe-BD framework faithfully reproduces the end-to-end training results of the prior art with a much shorter training time. For all use cases under evaluation, Pipe-BD achieves a significant speedup with the same accuracy.
## VIII Conclusion
We propose Pipe-BD, a novel parallelization method for blockwise distillation. By restructuring the existing parallelization scheme, we achieve multi-fold speedups on various use cases. In this study, we focused on a single-node, multi-GPU setting since it is the most common setup. However, if the method is to be scaled to a multi-node setting, the communication overhead needs to be addressed. Along with support for heterogeneous GPUs and servers, this will be our future direction.
## Acknowledgement
This work was partly supported by the National Research Foundation of Korea (NRF) grants (2022R1C1C1011307, 2022R1C1C1008131) and Samsung Electronics Co., Ltd (I02221213-04119-01) and Institute of Information & communications Technology Planning & Evaluation (IITP) grants (2020-0-01361) funded by the Korean government (MSIT).
|
2306.06491 | Online learning for X-ray, CT or MRI | Medical imaging plays an important role in the medical sector in identifying
diseases. X-ray, computed tomography (CT) scans, and magnetic resonance imaging
(MRI) are a few examples of medical imaging. Most of the time, these imaging
techniques are utilized to examine and diagnose diseases. Medical professionals
identify the problem after analyzing the images. However, manual identification
can be challenging because the human eye is not always able to recognize
complex patterns in an image. Because of this, it is difficult for any
professional to recognize a disease with rapidity and accuracy. In recent
years, medical professionals have started adopting Computer-Aided Diagnosis
(CAD) systems to evaluate medical images. This system can analyze the image and
detect the disease very precisely and quickly. However, this system has certain
drawbacks in that it needs to be processed before analysis. Medical research has
already entered a new era of research which is called Artificial Intelligence
(AI). AI can automatically find complex patterns from an image and identify
diseases. Methods for medical imaging that use AI techniques will be covered
in this chapter. | Mosabbir Bhuiyan, MD Abdullah Al Nasim, Sarwar Saif, Kishor Datta Gupta, Md Jahangir Alam, Sajedul Talukder | 2023-06-10T17:14:41Z | http://arxiv.org/abs/2306.06491v1 | # Online learning for X-ray, CT or MRI
###### Abstract
Medical imaging plays an important role in the medical sector in identifying diseases. X-ray, computed tomography (CT) scans, and magnetic resonance imaging (MRI) are a few examples of medical imaging. Most of the time, these imaging techniques are utilized to examine and diagnose diseases. Medical professionals identify the problem after analyzing the images. However, manual identification can be challenging because the human eye is not always able to recognize complex patterns in an image. Because of this, it is difficult for any professional to recognize a disease rapidly and accurately. In recent years, medical professionals have started adopting Computer-Aided Diagnosis (CAD) systems to evaluate medical images. This system can analyze the image and detect the disease very precisely and quickly. However, this system has certain drawbacks in that it needs to be processed before analysis. Medical research has already entered a new era of research which is called Artificial Intelligence (AI). AI can automatically find complex patterns from an image and identify diseases. Methods for medical imaging that use AI techniques will be covered in this chapter.
Medical imaging, Conventional system, Machine Learning, Deep Learning
## 1 Introduction
Many people pass away every year because of substandard care and facilities. The lives of many people could be saved if a disease is discovered early. Therefore, a branch called radiology was introduced in the field of medicine that uses imaging techniques to diagnose and treat disorders [1, 2, 3]. Diagnostic radiology and interventional radiology are two subcategories of radiology. Diagnostic radiology allows radiologists to view internal body structures. In this area, radiologists identify the underlying source of symptoms, track how effectively the body responds to treatment, and perform disease screenings. The most prevalent diagnostic radiology procedures include computed tomography (CT), magnetic resonance imaging (MRI) and magnetic resonance angiography (MRA), mammography, X-ray, positron emission tomography (PET) scans, ultrasound, etc. Interventional radiologists employ imaging techniques like CT and MRI to help direct treatments. Doctors can insert catheters, wires, or other small instruments into the body with the use of
imaging. Angiography, cancer treatment, needle organ biopsies, and uterine artery embolization are some examples of interventional radiology techniques [2]. X-rays are a form of electromagnetic radiation with wavelengths ranging from 10 picometers to 10 nanometers, corresponding to frequencies in the range of 30 petahertz to 30 exahertz. X-ray wavelengths are generally longer than those of gamma rays but shorter than those of UV rays. In 1895, the German scientist Wilhelm Conrad Roentgen first discovered and documented X-rays. An X-ray is a rapid, painless diagnostic procedure that generates pictures of the internal organs and structures of the body, especially the bones. The body absorbs X-rays as they pass through it, depending on the density of the parts. Dense substances like bone and metal appear white in an X-ray image. Muscle and fat appear as shades of gray, whereas the air in the lungs is black. In the medical field, X-rays can be used to find fractures and infections in bones or teeth, bone cancer, lung infections, breast cancer, blocked blood vessels, etc. Figure 1 shows an X-ray image.
A computed tomography (CT) scan is a medical imaging technique that uses a combination of X-ray equipment and computer technology to produce cross-sectional images of the body, both horizontally and vertically. Compared to X-ray images, these cross-sectional images provide better detail [4]. A CT scan turns two-dimensional X-ray images into three-dimensional ones to obtain more information. Specialists examine the images to determine the patient's condition. Radiologists can more quickly identify conditions like cancer, cardiovascular disease, infectious disease, trauma, and musculoskeletal diseases by using CT scans. In 1979, the South African-American physicist Allan MacLeod Cormack and the British electrical engineer Godfrey Hounsfield were awarded the Nobel Prize in Physiology or Medicine for the development of computed tomography (CT) [5]. Figure 2 shows a modern CT scanner.
In the late 1970s, the physicists Peter Mansfield and Paul Lauterbur introduced MRI-related techniques such as the echo-planar imaging (EPI) technique [6]. Earlier, in 1971 at Stony Brook University, Paul Lauterbur applied magnetic field
Figure 1: An X-ray photo of a one-year-old girl who swallowed a sewing pin [2].
Figure 2: Modern CT scanner located at the Lochotin University Hospital in Pilsen, Czech Republic [2].
gradients in all three dimensions and a back-projection technique to create NMR images, and he published the first images of two tubes of water in the journal Nature [7]. The non-invasive medical imaging procedure known as magnetic resonance imaging (MRI) creates detailed images of the body's organs and tissues by combining computer-generated radio waves with a magnetic field. When a body is placed into the machine, the MRI scanner can detect the water molecules inside it. Water is distributed all over the body. MRI machines can differentiate tissues and construct an image by passing radio waves throughout the body. MRI provides better quality images than CT scans for the diagnosis of a disease. Figure 3 represents an MRI-scanned image.
Ultrasound is a type of medical imaging that creates images of the body's organs, tissues, and other structures using high-frequency sound waves. It is also known as sonography. It uses sound waves above 20 kHz. Ultrasound can be used in different ways, such as finding an unborn baby's position and condition, detecting abnormalities in blood flow, examining problems with the structure of the heart, and finding blockages in the gallbladder. Because the sound waves that ultrasound produces are not very hazardous, it is a safe diagnostic procedure. There are some limitations to using ultrasound. Since sound waves cannot pass through air and bone, ultrasound is ineffective at imaging body parts that contain gas or are covered by bone. To view and examine such areas, CT scans, MRI, or X-rays are used. Figure 4 represents an ultrasound image of an unborn baby.
## 2 Related Works
The use of medical imaging to identify and diagnose diseases is expanding quickly. Scientists have already explored a variety of methods to accurately identify an illness. Radiography using X-rays is essential in the diagnosis of diseases like cancer, pneumonia, and COVID-19 [9]. Chest X-ray images have low variance, limited information, and low contrast. That is why image enhancement plays a vital role in extracting more information from a low-quality
Figure 4: Ultrasound image (sonogram) of a fetus in the womb, viewed at 12 weeks of pregnancy (bidimensional scan) [8].
Figure 3: Examples of T1-weighted, T2-weighted, and PD-weighted MRI scans [2].
image. Sharma et al. [10] proposed image enhancement techniques such as contrast-limited adaptive histogram equalization (CLAHE), decorrelation stretch, morphological operations, and median filtering, along with noise reduction techniques such as the median filter, DCT, and DWT. A total of 6334 chest X-ray images were evaluated by the authors using these methods. The enhanced images were then evaluated with different image quality assessment parameters such as MSE, PSNR, and AMBE. The optimum outcome was obtained by combining the CLAHE and DWT techniques. Meenakshi et al. [11] proposed another computer-aided diagnostic method combining a Gaussian filter, CLAHE, and Sobel edge detection. The authors applied this strategy to X-ray images of pneumonia. Breast cancer is a prevalent form of cancer among women. Lu et al. [12] developed a computer-aided diagnosis system for breast MRI. Feature extraction techniques such as morphological and Gabor features were applied to the dataset. The classification of breast cancer was then done using an ensemble learning technique based on weighted averages and majority voting.
The medical industry is always evolving in terms of technology and procedures. In the past, researchers attempted to obtain more information by enhancing low-quality images into higher-quality ones using conventional techniques. They used several image enhancement methods in medical imaging, such as CLAHE, histogram equalization, and morphological procedures. With these traditional methods, information must be extracted manually. Several statistical features, such as bias, variance, mean, and perimeter, can be computed from medical imaging data to identify a disease. However, researchers are now experimenting with new technologies that eliminate the requirement for manual feature extraction. Deep learning is a technology in which features are extracted automatically.
Albahli et al. [13] published work on COVID-19 using chest X-ray images. A transfer learning model was developed with a 5-fold cross-validation technique. Three models were presented by the authors: DenseNet, InceptionV3, and Inception-ResNetV4. The DenseNet model had the highest classification accuracy (92%), followed by Inception-ResNetV4 (85.57%) and InceptionV3 (83.47%). Chakraborty et al. [14] proposed a 7-layer Convolutional Neural Network (CNN) model. The authors used chest X-ray images of children aged 1 to 5 to identify pneumonia and also utilized optical coherence tomography (OCT) to identify eye problems. Sourab et al. [15] suggested a novel hybrid approach. They proposed a CNN architecture with 22 layers to extract features from chest X-ray images for pneumonia. Then, machine learning algorithms such as Random Forest (RF), k-nearest neighbors (KNN), and support vector machine (SVM) were utilized to classify pneumonia. The CNN-RF hybrid model provided the best result, with 99.82% accuracy and 98.7% AUC. Ensembling is another technique in which two or more models are combined to increase performance.
Ravi et al. [16] developed a stacking ensemble technique to detect lung disease from chest X-ray images. First, they used the EfficientNetB0, EfficientNetB1, and EfficientNetB2 pre-trained models to extract features. The extracted features were then combined, and a non-linear fully connected layer was added. After that, a stacking ensemble technique was applied to classify lung disease. In the stacking ensemble, random forest and SVM are used in the first stage and logistic regression in the second stage. The accuracy of the proposed method for detecting pediatric pneumonia, tuberculosis, and COVID-19 lung disease was 98%, 99%, and 98%, respectively. Sejuti et al. [17] proposed a hybrid CNN-KNN approach for the identification of COVID-19 from computed tomography (CT) scan images. The proposed method was applied to a dataset that includes 4085 CT scan images. After performing 5-fold cross-validation, the proposed method's average accuracy, precision, recall, and F\({}_{1}\) score are 98.26%, 99.42%, 97.2%, and 98.19%, respectively. Gao et al. [18] proposed a CNN-based model for the diagnosis of Alzheimer's disease (AD) from CT images. They developed 2D and 3D CNN architectures for detecting the disease. The image dataset is divided into three categories: AD, lesions, and normal. The proposed methodology offers an accuracy of 87.6%. A deep neural network was created by Jalali et al. [19] to segment lung CT images automatically. They utilized a ResNet-34 architecture with BConvLSTM (Bidirectional Convolutional Long Short-Term Memory) as an advanced integrator module. The suggested approach yields a Dice coefficient of 97.31%. Schmauch et al. [20] proposed a deep-learning-based transfer learning model for the diagnosis of focal liver lesions. They employed ResNet50 to extract features after removing the final two layers. To classify the disease, a logistic regression classifier producing scores between 0 and 1 is used. The model achieves a weighted mean ROC-AUC score of 0.891.
Pourasad et al. [21] developed a novel architecture for diagnosing breast cancer and identifying its location in ultrasound images. A fractal approach is used to extract features from the images. The images are classified using KNN, SVM, decision tree, and Naive Bayes classifiers. Then, a convolutional neural network is developed to classify the breast cancer images. In validation, the sensitivity was found to be 88.5%, and the accuracy on the training set was found to be 99.8%. A morphological operation is performed to locate the tumor's position and volume from the image data.
Advances in technology and federated learning make it possible for healthcare organizations to train machine learning models on private data without compromising patient confidentiality [22, 23]. Hossain et al. [24] propose a collaborative federated learning system that enables deep-learning image analysis and classification of diabetic retinopathy without transferring patient data between healthcare organizations. Along with image data,
healthcare patients' statistical data can also be used to train machine learning models in order to predict disease exposure [25, 26].
## 3 Methodology
Computer-aided diagnostic systems are widely used by researchers to identify diseases. In the medical field, there are many ways to detect a disease, of which machine learning and deep learning are the most widely used nowadays.
### Image Processing
Image processing is the initial stage of disease detection in medical imaging. The extraction of more information from an image is the main objective of image processing. The more information obtained, the easier it is to detect any abnormality in a medical image. Several image processing techniques are available, such as denoising, image enhancement, segmentation, and morphological operations. Medical images like CT and MRI scans contain noisy elements that reduce the image's overall usefulness. Image denoising techniques reduce noise and improve the
| Author | Methodology | Disease | Performance |
| --- | --- | --- | --- |
| Sharma et al. [10] | Contrast-limited adaptive histogram equalization (CLAHE), decorrelation stretch, morphological operation, median filter, DCT, and DWT | Chest X-ray images | – |
| Meenakshi et al. [11] | Gaussian filter, CLAHE, and Sobel edge detection | X-ray images of pneumonia | – |
| Lu et al. [12] | Morphological operation, Gabor filter, and ensemble learning | Breast MRI | ROC-AUC score – 0.9617 |
| Albahli et al. [13] | 5-fold cross-validation; transfer learning: DenseNet, InceptionV3, and Inception-ResNetV4 | COVID-19 chest X-ray images | Accuracy – 92% |
| Chakraborty et al. [15] | Convolutional Neural Network (CNN) | Chest X-ray of pneumonia | Accuracy – 63% |
| Sourab et al. [16] | CNN with machine learning algorithms: Random Forest (RF), K-Nearest Neighbors (KNN), and Support Vector Machine (SVM) | Chest X-ray images of pneumonia | Accuracy – 99.82%, AUC – 98.7% |
| Ravi et al. [17] | Stacking ensemble: EfficientNetB0, EfficientNetB1, EfficientNetB2, RF, SVM, and logistic regression | Pneumonia, TB, and COVID-19 lung disease | Accuracy – 98% (pneumonia), 99% (TB), 98% (COVID-19) |
| Sejuti et al. [18] | Hybrid CNN-KNN | COVID-19 computed tomography (CT) scan images | Accuracy – 98.26%, Precision – 99.42%, Recall – 97.2%, F\({}_{1}\) score – 98.19% |
| Gao et al. [19] | CNN architecture | Alzheimer's disease (AD) CT images | Accuracy – 87.6% |
| Jalali et al. [20] | ResNet-34 architecture and BConvLSTM (Bidirectional Convolutional Long Short-Term Memory) | Lung CT images | Dice coefficient – 97.31% |
| Schmauch et al. [21] | ResNet50 and logistic regression | Focal liver lesions | ROC-AUC score – 0.891 |
| Pourasad et al. [27] | CNN with KNN, SVM, Decision Tree, Naive Bayes, and morphological operation | Breast cancer locations in ultrasound images | Sensitivity – 88.5%, Accuracy (training set) – 99.8% |

Table 1: Related work in a nutshell
amount of information in the resulting image. Common techniques to remove noise from medical images include Gaussian averaging, mean, median [27], Lee [28], and diffusion filters [29]. These filters and their variations are designed to eliminate particular types of noise in various medical imaging modalities [30, 31, 32]. These filters typically perform low-pass filtering by eliminating distinct peaks and replacing the suspected values with a local average or other locally relevant measurements.
Image enhancement includes making adjustments to digital images to make them more suitable for display or further image analysis. Image enhancement improves the visual quality of the image, supports the clinician's decisions, and ultimately protects patients' lives. Some useful methods of image enhancement are: 1. morphological operations, 2. linear contrast adjustment, 3. histogram equalization, 4. contrast-limited adaptive histogram equalization (CLAHE), 5. the Wiener filter, 6. median filtering, and 7. decorrelation stretch. Medical image segmentation is the procedure used to separate regions of interest (ROIs) from image data, such as CT or MRI scans. Identifying the anatomical regions required for a particular study is the main goal of segmenting this data. One of the key benefits of medical image segmentation is that separating only the essential regions allows for more precise interpretation of anatomical data. Segmentation also has the advantage of removing any irrelevant information from a scan, such as air, and enabling the separation of different tissues, such as bone and soft tissue. Edge-based, region-based, thresholding, and other types of medical image segmentation methods are available. Figures 5, 6, 7, and 8 represent some image processing techniques.
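For illustration, several of these steps (median-filter denoising, CLAHE enhancement, Otsu-threshold segmentation, and a morphological opening) can be chained with OpenCV. This is a minimal sketch; the parameter values below are arbitrary examples, not recommended settings.

```python
import cv2
import numpy as np

def preprocess(img_u8: np.ndarray) -> dict:
    """img_u8: 8-bit grayscale image, e.g. a chest X-ray slice."""
    denoised = cv2.medianBlur(img_u8, 5)                          # remove salt-and-pepper noise
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(denoised)                              # contrast-limited histogram eq.
    _, mask = cv2.threshold(enhanced, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # simple ROI segmentation
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)         # morphological clean-up
    return {"denoised": denoised, "enhanced": enhanced, "mask": mask}

# Example usage:
# img = cv2.imread("chest_xray.png", cv2.IMREAD_GRAYSCALE)
# out = preprocess(img)
```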
### Machine Learning

Machine learning approaches rely on hand-crafted features such as cluster shade, energy, Entropy1, Homogeneity1, Homogeneity2, maximum probability, sum of squares, sum entropy, sum average, sum variance, difference variance, difference entropy, etc. The texture features are area, mean, standard deviation, Entropy2, RMS, kurtosis, skewness, variance, IDM, and smoothness. In this method, features are extracted manually, and only a limited number of features are available in the manual feature selection process. These features are then applied to a machine learning algorithm to classify diseases.
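A small sketch of this manual-feature route is shown below: a handful of the statistics listed above are computed per image and fed to an SVM. The feature subset and classifier settings are illustrative choices, not the exact pipeline of any study reviewed here.

```python
import numpy as np
from scipy.stats import skew, kurtosis, entropy
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def manual_features(img: np.ndarray) -> np.ndarray:
    """Compute a few hand-crafted statistics from a grayscale image."""
    flat = img.astype(np.float64).ravel()
    hist, _ = np.histogram(flat, bins=64)
    p = hist / max(hist.sum(), 1)
    return np.array([
        flat.mean(), flat.std(), flat.var(),
        skew(flat), kurtosis(flat),
        np.sqrt(np.mean(flat ** 2)),      # RMS
        entropy(p + 1e-12),               # histogram entropy
    ])

def train_classifier(images, labels):
    X = np.stack([manual_features(im) for im in images])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, labels)
    return clf
```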
### Deep Learning
Deep learning is a branch of machine learning that simulates the behavior of neurons. Several types of deep learning methods are available, such as custom CNNs, transfer learning models, and custom CNNs combined with machine learning algorithms (CNN-ML). Deep learning can extract features automatically from input images [33]. This is the component of the method that matters the most: it is capable of automatically picking up complex patterns and characteristics of objects. Deep learning outperforms machine learning in terms of performance because of this automatic feature extraction. When deciding to use deep learning rather than machine learning, note that a powerful GPU and a large amount of data are required; deep learning would not be the best option if either is lacking. Some of the deep learning approaches are given below.
#### 3.3.1 Custom CNN
A custom CNN deep learning model is built by adding layers from scratch. Different types of layers are available to develop a custom CNN model, such as the convolutional layer, the activation layer, and the batch normalization layer. Every layer has specific parameters, and these parameters must be adjusted to achieve optimum results. The main objective of a convolutional layer is to extract features from an image. In the convolutional layer, it is necessary to specify the number of filters, which determines how many feature maps are extracted, as well as the kernel size of each filter. Stride is another parameter, applied to compress the size of the feature maps; the number of pixels by which the filter is shifted across the input matrix is known as the stride. Padding is the technique of surrounding the input image with additional pixel values. Zero padding is the addition of zero-valued pixels to the input image's border when the padding value is set to zero. Padding enlarges the input matrix to improve the accuracy of the analysis. The activation layer consists of the activation function. Whether a neuron is activated or not is determined by a mathematical operation known as the activation function. Several activation functions are available, such as the sigmoid, tanh, ReLU, and softmax activation functions. When data is provided to the model, the weights and biases apply a linear transformation to it. For a neuron, learning linear relations is quite straightforward; however, real-world data is more complicated. The activation function introduces non-linearity and helps neurons learn more complex patterns. The simplest activation function is the step function, which applies a threshold: neurons become active above a particular value and inactive below it. The step function is mainly used for binary classification, but it cannot handle multi-class classification. Its zero gradient is another limitation: as a result, weights cannot be updated, and no improvement is added to the model's performance. Softmax is another type of activation function, used in the classification layer of a neural network. It is used in multinomial logistic regression and normalizes the output of a network into a probability distribution over the predicted output classes. In general, a neuron applies its activation function \(f\) to a weighted sum of its inputs:
Figure 7: Original Image (left) and Image Segmentation (right)
\[f\left(\sum_{i=1}^{n}x_{i}w_{i}+b\right) \tag{1}\]
where \(b\) is the bias, \(x_{i}\) is an input to the neuron, \(w_{i}\) is the corresponding weight, \(n\) is the number of inputs from the incoming layer, and \(i\) is a counter from 1 to \(n\). Another element that makes up a CNN is the pooling layer. It reduces the number of parameters and the computation of the CNN model. Some examples are max pooling and average pooling: maximum values are extracted from a rectified feature map using max pooling, whereas average pooling takes the average values. Up to this point, the convolutional layers, which extract features from images, have been described. At this point, it is time to classify the image. The classification function is carried out via fully connected layers, which operate similarly to a feed-forward neural network. The convolutional layers produce a three-dimensional array of data, but the data must be delivered as a one-dimensional array to a fully connected layer. The flatten layer reduces the data from three dimensions to one dimension and feeds the first fully connected layer. How many layers to utilize depends on the application, and the required number of neurons can be chosen for each layer. The classification layer is the topmost and last layer, and the required output determines how many neurons are needed in this final layer. For example, if we wish to classify dogs and cats, the last layer will have 2 neurons. Some hyperparameters are employed to achieve the best results, such as input size, batch size, number of epochs, learning rate, and optimizer. Figure 8 shows a custom CNN architecture [34].
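A minimal PyTorch version of the custom CNN described above is sketched below; the channel counts, the assumed 1×224×224 grayscale input, and the two-class head are example choices rather than prescribed values.

```python
import torch.nn as nn

class CustomCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=1, padding=1),  # feature extraction
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(2),                                       # downsampling
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                              # 3-D feature maps -> 1-D vector
            nn.Linear(32 * 56 * 56, 128),              # assumes 1x224x224 inputs
            nn.ReLU(),
            nn.Linear(128, num_classes),               # e.g. 2 neurons for a cat/dog task
        )

    def forward(self, x):
        # Returns raw logits; softmax (or CrossEntropyLoss, which applies it
        # internally) is used on top to obtain class probabilities.
        return self.classifier(self.features(x))
```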
#### 3.3.2 Transfer Learning
Transfer learning uses a pre-trained network, where a model trained for one task can be reused for another task. Researchers have already developed different transfer learning models for multiclass classification. Several pre-trained networks are available, for example AlexNet [36], VGG16 [37], and ResNet50 [38]. Figure 9 represents the VGG16 architecture.
Transfer learning models can be reused by applying different strategies. Train the entire model: because transfer learning models are pre-trained networks with existing weights, they can be retrained with new weights for a specific application. The architecture remains unchanged, but the weights are updated using a new dataset, and the final layer must be modified based on how many classes the application has. For example, AlexNet [36] was trained on 1000 categories, but the model can be retrained on five categories, depending on the application. Freeze some layers and train others: another method is to freeze some layers of the transfer learning model. The weights of the frozen layers remain constant, while the other layers are trained starting from the existing weights; the number of trained and frozen layers can be adjusted based on the application. Freeze all layers and change only the final layer: in this technique, a transfer learning model with existing weights is employed, the layers are frozen so their weights are not updated, and only the top layer is altered according to the new classes. The strategies of the transfer learning model are shown in Figure 10.
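The three strategies can be sketched as follows with a torchvision ResNet-50. The choice of which layers to freeze (here, everything before `layer4`) and the exact `weights` argument are illustrative and may vary across torchvision versions.

```python
import torch.nn as nn
from torchvision import models

def build_model(num_classes: int, strategy: str):
    model = models.resnet50(weights="IMAGENET1K_V1")       # pre-trained backbone
    if strategy == "train_all":
        pass                                               # fine-tune every layer
    elif strategy == "freeze_some":
        for name, p in model.named_parameters():
            if not name.startswith("layer4"):              # freeze all earlier stages
                p.requires_grad = False
    elif strategy == "freeze_all_but_head":
        for p in model.parameters():
            p.requires_grad = False
    # The new classification head is always trainable and sized for the new task.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# Example: model = build_model(num_classes=5, strategy="freeze_all_but_head")
```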
Figure 8: Custom CNN architecture [35].
Figure 11: Simple architecture for a deep hybrid network model [39].
Figure 10: Transfer learning strategies.
Figure 9: VGG16 architecture [37].
#### 3.3.3 CNN-ML
In this approach, machine learning algorithms are utilized as the classifier and a CNN is used as the feature extractor. Any transfer learning model or a custom CNN can be used as the CNN model. SVM, KNN, Decision Tree, and Random Forest are a few machine learning algorithms that can be used as classifiers. A simple Deep Hybrid Network model design is shown in Figure 11. Here, the DNN part is composed of only four layers, and the ML classification layer follows. Figure 12 represents a CNN-ML architecture, in which the classification layer uses ML algorithms like AdaBoost, XGBoost, and Random Forest. It is advised to use a more sophisticated DNN part for more difficult problems and better model performance.
Ketu et al. [39] introduce a CNN-LSTM hybrid deep learning prediction model, which can correctly forecast the COVID-19 epidemic across India. The proposed model uses convolutional layers to extract meaningful information and learn from a given time-series dataset. By fusing deep learning with machine learning, a fusion network known as deep hybrid learning can be created. In deep hybrid learning, the model extracts features from unstructured data using deep learning techniques and then utilizes traditional machine learning techniques to build highly accurate classification models from those features. Thus, combining DL and ML through Deep Hybrid Learning (DHL) may address each approach's shortcomings while also delivering more accurate and computationally efficient results.
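A minimal sketch of such a deep hybrid pipeline is shown below, using a torchvision backbone as the feature extractor and a Random Forest as the classical classifier. The specific backbone, its 512-dimensional feature output, and the hyperparameters are illustrative assumptions, not the configuration of any cited work.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.ensemble import RandomForestClassifier

backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = nn.Identity()            # drop the classification head -> 512-d features
backbone.eval()

@torch.no_grad()
def extract_features(batch):
    """batch: float tensor of shape (N, 3, 224, 224)."""
    return backbone(batch).cpu().numpy()

def fit_hybrid(train_batches, train_labels):
    X = np.concatenate([extract_features(b) for b in train_batches])
    clf = RandomForestClassifier(n_estimators=200)
    clf.fit(X, np.asarray(train_labels))
    return clf
```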
## 4 Performance Analysis
There are various evaluation metrics available for both regression and classification tasks. Some of the performance metrics for medical image classification are accuracy, precision, recall, the F1 score, the confusion matrix, the ROC (Receiver Operating Characteristic) curve, and the AUC (Area Under the ROC Curve). Accuracy is the ratio of the number of correct predictions to the total number of predictions. The accuracy measure should be used when the target variable classes in the data are fairly balanced; it is not recommended when the target variable primarily belongs to one class. Precision is another performance metric: it is the ratio of correctly predicted positive observations to the total predicted positive observations, and it overcomes this limitation of accuracy. Recall is comparable to the precision metric, but it measures the proportion of actual positives that were correctly identified. Precision provides information on how well a classifier performs with respect to false positives, whereas recall measures how well it performs with respect to false negatives. The F\({}_{1}\) score is the harmonic mean of recall and precision; it is used when the classes of the dataset are highly imbalanced.
True positive (TP), false positive (FP), true negative (TN), and false negative (FN) are the four key parameters. True Positive (TP) is the number of samples the model correctly predicted as positive. True Negative (TN) is the number of negative-class samples the model correctly predicted. False Positive (FP) is the number of negative-class samples the model incorrectly predicted as positive. False Negative (FN) is the number of positive-class samples the model incorrectly predicted as negative. In equations 2-7, the different performance metrics are shown along with a few details [10, 40].
**Precision** is calculated using the following equation number 2.
\[Precision=\frac{TP}{TP+FP} \tag{2}\]
Figure 12: CNN-ML architecture.
\[Precision=\frac{TP}{\text{Total Predicted Positive}} \tag{3}\]
We see that precision tells us how exact the model is: out of the samples predicted as positive, how many are actually positive?
**Recall** can be calculated using the following equation (equation 4).
\[Recall=\frac{TP}{TP+FN} \tag{4}\]
From Figure 14 we see that the denominator in equation 4 is the total number of actual positives. So, we can rewrite the equation as follows:
\[Recall=\frac{TP}{\text{Total Actual Positive}} \tag{5}\]
Here we see that recall calculates the ratio between the number of correctly predicted positives and the number of actual positives.
| n = 165 | Predicted: No | Predicted: Yes |
| --- | --- | --- |
| Actual: No | 50 | 10 |
| Actual: Yes | 5 | 100 |

Table 2: Confusion matrix example
Figure 13: Precision.
\(\mathbf{F_{1}}\) **score** is calculated using the following equation (equation 6).
\[F_{1}score=2\times\frac{Precision\times Recall}{Precision+Recall} \tag{6}\]
\(\mathrm{F_{1}}\) score is a function of Precision and Recall. It is calculated from the harmonic mean of the precision and recall. \(\mathrm{F_{1}}\) score ranges from 0 to 1. \(\mathrm{F_{1}}\) score is also known as balanced F-score or F-measure.
**Accuracy** is calculated using the following equation (equation 7).
\[Accuracy=\frac{TP+TN}{TP+FP+TN+FN} \tag{7}\]
The ground-truth labels and model predictions are displayed in a table using the confusion matrix. In the confusion matrix shown in Table 2, each row represents the instances of an actual class, whereas each column represents the instances of a predicted class.
**ROC curve.** It can be used when it is necessary to visualize the performance of a classification model. It is a widely used metric that is important for assessing how well the classification model works. The ROC curve graphically represents the performance of a classification model at various threshold levels; the curve plots the true positive rate against the false positive rate.
**AUC curve.** The Area Under the ROC Curve is referred to as AUC. AUC measures performance across all thresholds and offers an aggregate measure. AUC values range from 0 to 1: an AUC of 0 means a model whose predictions are 100% wrong, and an AUC of 1 means 100% correct predictions. It assesses the quality of the model's predictions without taking the classification threshold into consideration.
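For reference, all of the metrics above can be computed with scikit-learn. The function below is a small illustrative wrapper for the binary-classification case (the argument names are placeholders), not code from any of the reviewed studies.

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix, roc_auc_score, roc_curve)

def report(y_true, y_pred, y_score):
    """y_true/y_pred: 0-1 label arrays; y_score: positive-class probabilities."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        "confusion_matrix": confusion_matrix(y_true, y_pred),
        "roc_auc": roc_auc_score(y_true, y_score),
        "roc_curve": roc_curve(y_true, y_score),   # (fpr, tpr, thresholds)
    }
```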
## 5 Conclusion
This study reviews a few research publications and discusses many sorts of methodologies for medical imaging [42], such as CT, MRI, and X-ray. Medical imaging has become increasingly popular in recent years for disease diagnosis, and researchers are looking for novel ways to quickly and precisely diagnose diseases. Radiologists can utilize a variety of computer-aided automatic technologies to find a disease. Feature extraction is made easier by image processing techniques that improve image quality. Machine learning requires the features to be extracted manually. Because of
Figure 16: AUC curve
Figure 17: ROC and AUC Curve [41].
Figure 15: ROC curve
this, the machine learning model performs worse than the deep learning approach. Compared to machine learning, the deep learning model offers more accurate results and can automatically extract features. Radiologists can make an accurate and quick diagnosis of an illness with the help of a computer-aided system.
|
2308.10987 | Northbound Lagrangian Pathways of the Mediterranean Outflow Water and
the Mechanism of Time-Dependent Chaotic Advection | The Mediterranean Sea releases approximately 1Sv of water into the North
Atlantic through the Gibraltar Straits, forming the saline Mediterranean
Outflow Water (MOW). Its impact on large-scale flow and specifically its
northbound Lagrangian pathways are widely debated, yet a comprehensive overview
of MOW pathways over recent decades is lacking. We calculate and analyze
synthetic Lagrangian trajectories in 1980-2020 reanalysis velocity data. 16\%
of the MOW follow a direct northbound path to the sub-polar gyre, reaching a
1000m depth crossing window at the southern tip of Rockall Ridge in about 10
years. Surprisingly, time-dependent chaotic advection, not steady currents,
drives over half of the northbound transport. Our results suggest a potential
15-20yr predictability in the direct northbound transport, which points to an
upcoming decrease of MOW northbound transport in the next couple of decades.
Additionally, monthly variability appears more significant than inter-annual
variability in mixing and spreading the MOW. | Ori Saporta-Katz, Nadav Mantel, Rotem Liran, Vered Rom-Kedar, Hezi Gildor | 2023-08-16T10:00:30Z | http://arxiv.org/abs/2308.10987v1 | Northbound Lagrangian Pathways of the Mediterranean Outflow Water and the Mechanism of Time-Dependent Chaotic Advection
###### Abstract
The Mediterranean Sea releases approximately 1Sv of water into the North Atlantic through the Gibraltar Straits, forming the saline Mediterranean Outflow Water (MOW). Its impact on large-scale flow and specifically its northbound Lagrangian pathways are widely debated, yet a comprehensive overview of MOW pathways over recent decades is lacking. We calculate and analyze synthetic Lagrangian trajectories in 1980-2020 reanalysis velocity data. 16% of the MOW follow a direct northbound path to the sub-polar gyre, reaching a 1000m depth crossing window at the southern tip of Rockall Ridge in about 10 years. Surprisingly, time-dependent chaotic advection, not steady currents, drives over half of the northbound transport. Our results suggest a potential 15-20yr predictability in the direct northbound transport, which points to an upcoming decrease of MOW northbound transport in the next couple of decades. Additionally, monthly variability appears more significant than inter-annual variability in mixing and spreading the MOW.
## I Introduction
The mid-depth salinity and temperature fields of the North Atlantic Ocean contain a distinct high-salinity, high-temperature tongue originating from the Mediterranean Sea (Fig. 1). This is a signature of the Mediterranean Outflow Water (MOW), which exits the Straits of Gibraltar into the Gulf of Cadiz as a 0.85 Sv inverse-estuary flow with an average salinity of about 38.4 psu at a depth of 300-500 meters [1; 2; 3]. Upon its entrance into the Gulf of Cadiz (GoC), the relatively salty and dense water sinks to 500-1500 meters, entraining the locally fresher and cooler North Atlantic Central Water, and exits the GoC upon crossing the Cape of Vicente as a relatively salty plume of approximately 1 Sv and salinity between \(36.3-37\) psu. Beyond this point, the MOW begins spreading in the North Atlantic Ocean via various pathways [2; 4; 5; 6].
The significant input of salinity into the North Atlantic Ocean is thought to have an important effect on the strength and stability of the Atlantic Meridional Overturning Circulation in current and past climates by contributing to the salinity preconditioning of polar waters for formation of the North Atlantic Deep Water (NADW) [7; 8; 9; 10; 11; 12; 13; 14]. The specific pathways taken by the MOW have been of interest and controversy at least since [7; 15], who used temperature, salinity, oxygen, and silica data from 1957-1971 to establish hydrographic evidence of a direct northbound pathway of MOW, extending from the Gulf of Cadiz well past Porcupine Bank at 53\({}^{\circ}\)N via a mid-depth eastern boundary current. Since then, our understanding of these pathways has evolved by studies of hydrographic data, actual, and virtual drifters [2; 16; 17; 18; 19; 20]. Several works found no evidence that a MOW core exists past Porcupine Bank [2; 16]. To show this, [2] used climatological mean fields from 1904-1990, and [16] used hydrographic methods on data from the 1980s.
In an attempt to reconcile the conflicting evidence, [19] suggested a temporal variability of flow fields, perhaps due to the North Atlantic Oscillation (NAO), that results in an east-west shift of the eastern limb of the sub-polar gyre (SPG). Using historical hydrographical data and salinity anomalies, they showed the temporal variability of the eastern limb of the SPG as well as a northward penetration of a MOW core between 1000-1500 m upon shrinking of the eastern limb. [21], using a basin-scale model of the North Atlantic, similarly used salinity anomalies to show a MOW core at intermediate depths reaching northward of Porcupine Bank in an extreme NAO low year (1996). [22], with an array of
Figure 1: Mediterranean Outflow Water (MOW) in the SODA3.4.2 reanalysis data for climatological annual data averaged 1980-2020. (a) Salinity at 1000m (b) Salinity at 37\({}^{\circ}\)N latitude.
ARGO floats and CTD data in the Northeast Atlantic between 1981-2018, showed periods of warming (cooling) and salinification (freshening) of the area lagging a negative (positive) NAO index indicating a northward (westward) MOW pathway. [20] were the first to use Lagrangian trajectories (both floats and synthetic) to try and map the specific MOW pathways. Instead of using an atmospheric index (the NAO), [20] opted to use an ocean-based measurement since it better depicts the gradual changes in the subpolar gyre, called the gyre index (hereinafter SPG index) using empirical orthogonal function analysis of sea surface height provided by [23]. While there were no clear specific pathways from the eastern North Atlantic to the Rockall Trough region, they found that two broadly defined pathways influenced by the SPG index can result in a greater amount of salty MOW-influenced eastern North Atlantic waters reaching the Rockall Trough. It is important to note that different SPG indices may arrive at different results [24]. [25] looked at decadal SPG indexes using EOF analysis and defined gyre size using the largest closed contour of each monthly SSH field, which revealed oscillations that do not show an impact of the SPG on salinity anomalies, while [24] showed that a density based index captures both salinity anomalies and SPG strength and size.
The time-dependent, 3D nature of oceanic flow over decadal timescales may result in significant chaotic advection of Lagrangian trajectories, that is accountable for some degree of the oceanic transport and mixing due to stretching of material lines [26; 27; 28; 29; 30]. The concept of chaos is asymptotic; in finite-time studies, oceanic chaotic advection refers to the divergence of trajectories and Lagrangian mixing on a predetermined timescale [31; 32; 33]. The associated coherent structures of transport, as exposed in finite times, may reveal the backbone hyperbolic structure of the flow. While a 3D steady incompressible flow is sufficient for chaotic advection (as opposed to a 2D incompressible steady flow), a special feature of time-dependent flows, both in 2D and 3D, is that streamlines do not equal material lines (Lagrangian trajectories), allowing what we denote "time-dependent chaotic advection": a transport mechanism that moves a tracer from point A to point B in a predefined timeframe despite there being no streamlines from point A to point B, i.e. no direct flow at any snapshot in time. A well-known 2D example of this phenomenon is the oscillating double-gyre example [26; 28], where the steady flow has a separatrix that separates the two gyres whereas the unsteady flow has a chaotic transport mechanism between the regions. These studies illustrate the importance of considering time-dependent dynamics to track transport. Roughly, this time-dependent mechanism is created in a flow with several instantaneous saddles (hyperbolic structures) that split nearby trajectories to different directions. If the splitting surfaces themselves are not stationary, a tracer can "jump" between cycles and reach places it could not reach in the predefined timescale (or, perhaps, ever). In 2D, it was shown that this mechanism is essential (due to the integrability of the steady trajectories): an average flow plus a reasonable isotropic diffusion will not recreate large-scale time-dependent transport [34].
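A common formulation of the oscillating double-gyre benchmark makes this mechanism concrete: for \(\epsilon=0\) the dividing streamline at \(x=1\) is fixed and no fluid crosses it, whereas for \(\epsilon>0\) the divide oscillates and tracers can migrate between gyres. The sketch below uses illustrative default parameters and is not the configuration of any particular study.

```python
import numpy as np

def double_gyre_velocity(x, y, t, A=0.1, eps=0.25, omega=2 * np.pi / 10):
    """Time-dependent double-gyre velocity on the domain [0, 2] x [0, 1]."""
    a = eps * np.sin(omega * t)
    b = 1.0 - 2.0 * eps * np.sin(omega * t)
    f = a * x**2 + b * x
    dfdx = 2.0 * a * x + b
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

# A tracer can then be advected with any time stepper (e.g. RK4) to visualize
# inter-gyre exchange that exists only in the time-dependent flow.
```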
In this work, we produce and analyze synthetic Lagrangian trajectories of the MOW in reanalysis velocity data to provide a comprehensive analysis of transport statistics and associated timescales. We examine the effect of seasonal, annual, and interannual variations on pathways of the MOW in the entire North Atlantic basin in the past four decades. To this aim, we compare virtual trajectories released at the GoC and advected by several types of oceanic models. Their analysis allows us to evaluate the effect of transient eddies, seasonal, annual, and interannual variability, and the effect of time-dependent chaotic advection on the northbound transport and overall mixing of the MOW in the North Atlantic.
## Methods
Virtual passive tracers released from the Gulf of Cadiz at 7\({}^{\circ}W\) are advected in the North Atlantic Ocean using velocity fields from the SODA3.4.2 reanalysis spanning 1980-2020 [35]. The SODA3.4.2 reanalysis uses the MOM4p1 code, which solves the Boussinesq hydrostatic primitive equations [36; 37; 38]. It is an eddy-permitting global ocean-sea ice model with a 1/4\({}^{\circ}\times\)1/4\({}^{\circ}\) horizontal resolution and 50 vertical levels with a finer resolution towards the surface, forced at the surface by the ERA-interim near-surface atmospheric variables [39]. The velocity fields we use for the Lagrangian trajectory integration have been regridded by the SODA team onto a 1/2\({}^{\circ}\times\)1/2\({}^{\circ}\) centered horizontal grid, and have either a 5-day or a monthly averaged temporal resolution. The tracking is performed using the Patato toolbox [40], with linear spatial and temporal interpolators, and an additional free-slip condition on the boundaries that follows the scheme presented in the OceanParcels toolbox tutorials (OceanParcels.com, [41]).
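As a rough illustration of this kind of Lagrangian integration (not the study's actual Patato-based setup), a comparable configuration can be sketched with the OceanParcels library referenced above. The file names, variable and dimension keys, and release positions below are placeholders.

```python
from datetime import timedelta
from parcels import FieldSet, ParticleSet, JITParticle, AdvectionRK4_3D

# Placeholder file names and NetCDF variable/dimension keys for a regridded product.
fieldset = FieldSet.from_netcdf(
    filenames={"U": "soda_regridded_*.nc", "V": "soda_regridded_*.nc", "W": "soda_regridded_*.nc"},
    variables={"U": "u", "V": "v", "W": "w"},
    dimensions={"lon": "longitude", "lat": "latitude", "depth": "depth", "time": "time"},
)

# Placeholder release positions near the 7W section in the Gulf of Cadiz.
release_lon = [-7.0] * 100
release_lat = [35.5] * 100
release_depth = [800.0] * 100

pset = ParticleSet(fieldset=fieldset, pclass=JITParticle,
                   lon=release_lon, lat=release_lat, depth=release_depth)
output = pset.ParticleFile(name="mow_trajectories.zarr", outputdt=timedelta(days=5))
pset.execute(AdvectionRK4_3D, runtime=timedelta(days=20 * 365),
             dt=timedelta(hours=6), output_file=output)
```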
We perform four distinct types of experiments, each corresponding to a different type of oceanographic model:
1. Full5day: 12\(\times\)20 monthly releases in the full reanalysis 5-day data, from January 1st, 1980 to December 1st, 1999, denoted Full5day_#year_#month_.
2. FullMonthly: 12\(\times\)20 monthly releases in the full reanalysis monthly-averaged data, from January 1st, 1980 to December 1st, 1999, denoted FullMonthly_#year_#month_.
3. RepeatYear: 12\(\times\)20 monthly releases in yearly-periodic velocity fields, denoted RepeatYear_#year_#month_. For example, RepeatYear_n_m follows trajectories released in year n, month m, into a yearly-periodic velocity field that repeats year n for 20 times.
4. RepeatMonth: 12\(\times\)20 monthly releases in steady velocity fields that consist of the release month only, denoted RepeatMonth_#year_#month. For example, RepeatMonth_n_m follows trajectories released in year n, month m, into a steady velocity field that repeats year n, month m, for 12\(\times\)20 times.
The first and second models are the closest to the real oceanic flow. The first has the maximal possible temporal-spatial resolution and includes interannual variability, and the second model is very similar to it, with a slight reduction in the time resolution. The third model corresponds to time-periodic velocity fields that have the same spatial resolution and kinetic energy as the second model. The fourth model is steady, with the same spatial resolution and kinetic energy as the second model. Traditional climatological models suffer from low kinetic energy content due to the averaging, and are therefore not included here, see App. A.
For each experiment, the particles are released once a month from the MOW section at the Gulf of Cadiz at 7\({}^{\circ}W\). This section is chosen since it is fully inside the Gulf of Cadiz, westwards enough that it is beyond the major MOW sinking plume and eastwards enough that the velocity field is still clearly separated into an eastbound flow and a distinct westbound channel, see Fig. 1, 2. At this section, 7\({}^{\circ}W\) between \(34-37^{\circ}N\), the salinity and depth threshold for defining the MOW is determined by the depth and salinity values at which the maximum salinity per depth is at its minimum, since any further rise in the salinity is attributed to the MOW (Fig. 2(a,b,c)). Finally, the MOW area is defined as the intersection between points with depth and salinity higher than the threshold values and points with a westbound velocity. In Fig. 2(d), the overall MOW transport in Sverdrup according to this definition is shown per month for the FullMonthly and Full5day datasets, showing an outflow of around 1 Sv with an interannual variability of up to 1.5 Sv. Every month, we release between 1500 and 15000 virtual particles, such that each particle carries 10\({}^{-4}\) Sv of water, distributed randomly on the MOW section at 7\({}^{\circ}W\) as defined above. The virtual particles are tracked in their corresponding velocity field for 20 years, for the four different oceanic models we consider.
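A minimal sketch of this transport-based seeding is given below; the array names, the uniform cell sizes, and the threshold values (taken from the climatological estimates in Fig. 2) are assumptions about the data layout, not the authors' implementation.

```python
import numpy as np

SV = 1.0e6  # 1 Sverdrup = 1e6 m^3/s

def mow_transport(u, salinity, depth, dy, dz, s_min=35.82, z_min=330.0):
    """Westward MOW transport (Sv) through the 7W section.

    u        : zonal velocity on the section, shape (nz, ny), in m/s
    salinity : salinity on the section, shape (nz, ny), in psu
    depth    : cell depths (m), shape (nz,); dy, dz: cell sizes in m
    Assumes uniform cell areas dy*dz for simplicity.
    """
    mow = (salinity > s_min) & (depth[:, None] > z_min) & (u < 0.0)
    return np.sum(-u[mow]) * dy * dz / SV  # positive = westward outflow

def n_particles(transport_sv, sv_per_particle=1.0e-4):
    """Number of particles to release so that each carries 1e-4 Sv."""
    return int(round(transport_sv / sv_per_particle))
```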
Figure 2: Mediterranean Outflow Water (MOW) in the SODA3.4.2 reanalysis data for climatological annual data averaged 1980-2020. (a) Maximum salinity per depth at 7\({}^{\circ}\)W; MOW is defined as the area with a westward velocity and salinity above 35.82 psu at a depth below 330 m, see main text for details. (b) Salinity at 7\({}^{\circ}\)W. (c) Velocity at 7\({}^{\circ}\)W. (d) MOW transport in Sverdrup for 5-day and monthly SODA data.
## Results
### Spreading and mixing of the different oceanic models
To study the spreading of the MOW in the North Atlantic, we calculate the density of the final locations of MOW particles released over a single year and measured 20 years after the first release time (Fig. 3). We measure the percentage of particles northwards of 53\({}^{\circ}\)N, entering the sub-polar gyre region; westwards of 35\({}^{\circ}\)W, extending beyond the mid-Atlantic ridge; and deeper than 1500 m, joining the NADW. In the Full5day and FullMonthly experiments, which are practically indistinguishable from each other, \(>30\%\) of particles venture far enough to reach (at least) one of these regions. Specifically, an average of 13.5% and up to 20% of MOW particles reach the SPG region after approximately 15 years, at which point the entrance into and flushing out of the region reaches a balance (see the plateau in Fig. 3(b)). While differing in specifics, a similar qualitative picture is obtained for RepeatYear. The SODA velocity fields do not resolve convective processes, and their vertical velocity is calculated diagnostically [36]; nevertheless, a significant fraction of approximately 35% of particles sinks below 1500 meters to join the NADW. The continuously positive slope of the curves in Fig. 3(h) implies that continuing the trajectories beyond 20 years would raise this percentage even more.
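The three benchmark fractions can be computed directly from the final particle positions; the sketch below is one possible implementation, with the array names assumed.

```python
import numpy as np

def spreading_stats(lat, lon, depth):
    """Fraction of particles beyond each benchmark, from final positions.

    lat, lon, depth : 1D arrays of the particles' final latitude (deg N),
                      longitude (deg E, negative west) and depth (m).
    """
    north = np.mean(lat > 53.0)      # entered the sub-polar gyre region
    west = np.mean(lon < -35.0)      # crossed the mid-Atlantic ridge
    deep = np.mean(depth > 1500.0)   # joined the NADW depth range
    return {"north_53N": north, "west_35W": west, "below_1500m": deep}
```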
The steady RepeatMonth runs, qualitatively different from the other experiments, exhibit significant fluctuations in their northbound transport. Over 80% of releases allow less than 5% of particles to cross 53\({}^{\circ}\)N, while certain months exhibit a massive northbound transport volume of over 25%. Specific RepeatMonth releases do exhibit statistics that are close to those of the full and RepeatYear runs; however, they do not mix well (Fig. 4), despite the steady velocity fields containing the same energy as the full runs. To evaluate the degree of mixing, we use the symmetric Kullback-Leibler divergence (KLD) [42] to calculate the distance between the spreading distributions. It is clear that the RepeatMonth runs are clustered far away and exhibit a much more limited spreading and mixing of the tracers. While Full5day and FullMonthly are again indistinguishable, the RepeatYear
Figure 3: Spreading of the MOW in the North Atlantic. (a,d,g) Probability density plots of MOW particles released in FullMonthly data from January 1992 to December 1992 once a month, measured 20 years after initial release, in January 2012. (b,e,h) show the percentage of particles beyond 53\({}^{\circ}\)N, 35\({}^{\circ}\)W, and 1500m, respectively, per year, where each color signifies a different release year, for FullMonthly data. Note the difference in the y-axis between subplots. (c,f,i) show the overall percentage of particles crossing the benchmarks per release year, for all data types. The RepeatMonth experiment (blue) has a data point for every month repeated.
experiments are clustered somewhat separately. Nevertheless, there are RepeatYear runs that are indistinguishable from the full data, e.g. see the run marked by a circle in Fig. 4(d). This suggests that a yearly-periodic velocity field, chosen with care, can provide an excellent imitation of the full dynamics, despite its lack of interannual variability.
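A sketch of how the symmetric KLD between two spreading distributions could be estimated from binned particle positions is given below; the common binning and the small regularisation constant are assumptions, not the exact procedure used for Fig. 4.

```python
import numpy as np

def symmetric_kld(pos_a, pos_b, bins=50, eps=1e-12):
    """Symmetric Kullback-Leibler divergence between two sets of final
    (lon, lat) particle positions, estimated on a common 2D histogram."""
    lon = np.concatenate([pos_a[:, 0], pos_b[:, 0]])
    lat = np.concatenate([pos_a[:, 1], pos_b[:, 1]])
    edges = [np.linspace(lon.min(), lon.max(), bins + 1),
             np.linspace(lat.min(), lat.max(), bins + 1)]
    p, _, _ = np.histogram2d(pos_a[:, 0], pos_a[:, 1], bins=edges)
    q, _, _ = np.histogram2d(pos_b[:, 0], pos_b[:, 1], bins=edges)
    p = p.ravel() + eps
    q = q.ravel() + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```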
#### The northward trajectories
The horizontal dynamics of the Lagrangian trajectories divide into three distinct groups (Fig. 5): 1. An average of 61% of MOW trajectories are stationary, defined as contained in the box 25\({}^{\circ}\)N-53\({}^{\circ}\)N, 5\({}^{\circ}\)W-35\({}^{\circ}\)W throughout their 20 years of evolution; 2. 23% of MOW trajectories take a direct westbound path, taking on average 11.3 years to cross the mid-Atlantic ridge at 35\({}^{\circ}\)W; 3. 16% of trajectories follow a direct northbound path along the eastern boundary of the North Atlantic, taking on average 10.4 years to cross the 53\({}^{\circ}\)N mark. A negligible fraction (\(<1\%\)) of trajectories also exits the box from the south. Of the northbound particles, most (55%) enter the northeastern box and stay there, whereas 34% continue into the subpolar gyre, reaching the northwestern box. Again, the FullMonthly and the Full5day trajectories are practically indistinguishable.
To study at which longitudes the MOW trajectories cross into the SPG region of the North Atlantic, we combine all FullMonthly trajectories that cross the 53\({}^{\circ}\)N line, regardless of pathway, and calculate the density plot on this latitude. A single window at 18\({}^{\circ}\)W and at a depth of approximately 1000 meters provides a subsurface pathway for the vast majority of northbound trajectories (see Fig. 6).
Figure 4: Mixing and statistics comparison. (a,b,c) For each release year 1980-2020, the spreading statistics vector is defined as (% north spreading after 20 years, % west spreading after 20 years, % sinking spreading after 20 years), and the pairwise Euclidean distances between each two releases are plotted, for FullMonthly from FullMonthly (symmetric matrix), RepeatYear, and RepeatMonth. (d,e) show the KLD distance of spreading distributions from fM_1992 (the year with statistics closest to their averages). The color scheme is the same as in Fig. 3(c). (f,g) show the longitude-latitude density plots of the RepeatYear (f) and RepeatMonth (g) releases with the smallest statistics vector distances, as marked by black x’s on (b,c,d,e). The black circle in (d) marks the RepeatYear experiment that has the smallest KLD distance from the origin.
The window is situated at the southern border of Rockall Ridge, which has a depth of less than 1000 meters. It is a saddle-point of the velocity field, from which northbound trajectories separate into two paths, one passing from the east, through the Rockall Trough, at 900 meters, as seen in [20]; and one from the west of the ridge, at 20\({}^{\circ}\)W and around 1100 meters. An average of 35% of trajectories choose the east over the west path; per date of crossing, the percentage of eastbound trajectories has a positive correlation of \(R=0.32\) with the SPG index calculated from the SODA data directly (the calculation is shown in App. B), with a statistical significance of \(\alpha<0.001\).
#### Chaotic advection and correlations
To identify the mechanism that transports the MOW into the SPG region of the North Atlantic, we compare the percentage of trajectories that take a northbound
Figure 5: Typical pathways of the MOW. (a-b) / (d-e) are typical examples of westbound / northbound trajectories, defined as pathways that exit the release box through its western / northern boundary, crossing the 35\({}^{\circ}\)W / 53\({}^{\circ}\)N mark, respectively. (c) / (f) show the percentage of particles that take the westbound / northbound path out of all the particles released in a single month, plotted as a function of the release month; the legend is the same as in Fig. 3. (g) shows the percentage of northbound particles that either stayed in the northeast box after exiting the initial box (blue), moved on to the north-center box (purple), or continued to the northwest box (pink). (h) / (i) show the average amount of years it took the particles released at a given date to cross, respectively, 35\({}^{\circ}\)W / 53\({}^{\circ}\)N. (j) Similar to (f), the percentage of particles that take a direct northbound path for FullMonthly, RepeatMonth, and RepeatYear, out of all particles released at a given date. 12-month moving averages (m.a.) are marked in bold. The black (red) prediction of the FullMonthly northbound percentage for 2000-2018 (2000-2012) is based on the 2-year (8-year) lag correlation between FullMonthly and RepeatYear; see App. C.
path in the FullMonthly, RepeatYear, and RepeatMonth datasets, see Fig. 5(j). While there is a statistically significant (\(\alpha<0.001\)) correlation of \(R=0.26\) between RepeatYear and RepeatMonth, the RepeatYear northbound trajectory statistics are systematically higher than those from the RepeatMonth dataset, with 14% of RepeatYear trajectories and only 6% of RepeatMonth trajectories taking a direct northbound route, as defined in the previous section. An even stronger result emerges from the FullMonthly data, of which 16% of trajectories take a direct northbound route. This implies that for most months, the main mechanism that transports particles from the Gulf of Cadiz to the sub-polar gyre is time-dependent chaotic advection, and not a steady, direct northbound current. The FullMonthly northbound trajectory statistics, in which each data point aggregates the 20 years following the release date, exhibit statistically significant (\(\alpha<0.001\)) correlations with both the 8-year-lagged RepeatYear northbound data (\(R=0.27\)) and the 2-year-lagged RepeatYear data (\(R=0.28\)); while a correlation with an 8-year lag is expected given the time it takes trajectories to cross the 53\({}^{\circ}\)N line, the 2-year-lag correlation is yet to be explained. We hypothesize that a dynamical bridge situated south of 53\({}^{\circ}\)N is typically reached after 2 years, and once a particle crosses it, it has a better chance of reaching the north; exploration of this idea is left to future studies. In any case, these correlations suggest some degree of predictability, as marked in Fig. 5(j). According to the 2-year correlation, a steady decrease in the northward transport of the MOW is expected in the next two decades, see details in App. C.
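A sketch of the lagged-correlation estimate between two monthly series (e.g., the FullMonthly and RepeatYear northbound percentages) is shown below; the series names, the lag convention (RepeatYear leading), and the use of scipy's Pearson test are assumptions about the implementation.

```python
import numpy as np
from scipy.stats import pearsonr

def lagged_correlation(reference, other, lag_months):
    """Pearson correlation between `reference` shifted forward by `lag_months`
    and `other`; both are sampled once per release month."""
    x = np.asarray(reference)[lag_months:]
    y = np.asarray(other)[: len(x)]
    r, p = pearsonr(x, y)
    return r, p

# e.g. the 2-year and 8-year lags discussed above:
# r2, p2 = lagged_correlation(fm_north_pct, ry_north_pct, 24)
# r8, p8 = lagged_correlation(fm_north_pct, ry_north_pct, 96)
```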
### Conclusions/discussion
In this work, we have created a thorough survey of the various Lagrangian pathways taken by the MOW over the past four decades. We have shown that in the course of 20 years, the MOW mixes into the entire North Atlantic basin, from 10\({}^{\circ}\)N-70\({}^{\circ}\)N, and between the coasts (Fig. 3, 4), with a greater concentration around the Gibraltar Straits, between 25-53\({}^{\circ}\)N, 10-35\({}^{\circ}\)W, and 500-1500 m. After 15 years, the northbound influx and outflux rates reach a balance with an average of 13% of MOW particles situated in the SPG region, i.e. beyond 53\({}^{\circ}\)N, indicating that this is the timeframe required to study MOW northbound transport. We identify a direct northbound path from the Gibraltar Straits along the eastern section of the North Atlantic that leads beyond 53\({}^{\circ}\)N into the SPG region (Fig. 5). Along this path, an average of 16% of MOW particles take a direct northbound route into the SPG region; in some release years, over 20% and up to 35% take the direct northbound path.
The 53\({}^{\circ}\)N line contains a relatively narrow window at the southern tip of the Rockall Ridge, between 14-17\({}^{\circ}\)W and 800-1200 m depth, through which the vast majority of northbound MOW particles cross into the SPG region (Fig. 6). At this point, they cross the ridge by one of two possible paths, around the east or around the west of the ridge. The two pathways converge after crossing the ridge. While most (65% on average) of these particles take the western path, there is a positive correlation between the SPG index and the percentage of particles that take the eastern path at a given date of crossing, supporting the idea that a high SPG index correlates with an eastward extension of the SPG's eastern boundary that blocks particles from taking the western path around the ridge. However, we have not yet found evidence that a high SPG index correlates with a decreased overall transport of the MOW into the SPG region.
Figure 6: (a,b) show two typical FullMonthly trajectories that cross 57\({}^{\circ}\)N, via either an eastern path (green) or a western path (red) around the Rockall Ridge, both continuing into the sub-polar gyre. (c,d) show density sections at, respectively, 57\({}^{\circ}\)N and 53\({}^{\circ}\)N, of all trajectories from the FullMonthly releases that cross these sections. (e) Orange line - the percentage of particles that cross from the east passage out of all particles that cross 57\({}^{\circ}\)N, vs. the date at which they cross this section. Black line - its yearly moving average. The blue line is the yearly moving average of the normalized SPG index, see App. B.
Over half of the direct northbound transport in the time-dependent experiments is a result of time-dependent chaotic advection, and not due to a steady northbound current, as indicated by the sharp decrease in direct northbound transport measured for the 3D streamlines of the snapshot monthly velocity fields of the RepeatMonth experiment (Fig. 5(j)). A statistically significant 2-year-lag correlation between the direct northbound transport in RepeatYear and FullMonthly indicates an expected decrease in the northbound transport of the MOW in the next two decades. Exploration of the dynamical origins of this correlation is left to future studies.
Throughout all the diagnostics we consider, the FullMonthly and Full5day statistics are practically indistinguishable. We expect that additional averaging, as done for example in climatological oceanic models (App. A), will begin to degrade the statistics at timescales longer than the typical lifetime of mesoscale eddies, around a few months. The yearly-periodic RepeatYear experiments differ from the FullMonthly experiments both in their statistics time series and in their averaged degree of mixing (Fig. 3(c,f,i) and Fig. 4). Since one RepeatYear run (the circled run in Fig. 4(d)) closely resembles the full runs, interannual variability is not essential for replicating the observed MOW spreading. Instead, an appropriately selected yearly-periodic flow can reproduce transport dynamics similar in quality and quantity to those observed in the complete dynamics. On the other hand, the spreading of the MOW in all the steady RepeatMonth oceanic models (as well as in the steady climatological model, see App. A) differs significantly from the spreading observed in all of the FullMonthly cases. Hence, we propose that temporal variability is essential for imitating the full flow. Finally, we suspect that the mesoscale eddies do play an important role in the MOW spreading: a climatologically averaged time-dependent flow has low kinetic energy content and its MOW spreading appears to be much more restricted than the FullMonthly spreading (see App. A). Future studies are needed to identify the relative importance of mesoscale eddies and the kinetic energy content of the velocity fields in mixing and spreading the MOW. Additionally, it will be interesting to explore, in this 3D context, the possible compensation of the mesoscale eddies and/or of the velocity field time dependence by an isotropic diffusion term.
OSK acknowledges the support of a research grant from the Yotam Project and the Weizmann Institute Sustainability and Energy Research Initiative; and the support of the Sephora Berrebi Scholarship in Mathematics. VRK and OSK acknowledge the support of the Israel Science Foundation, Grant 787/22. VRK also acknowledges the support of The Estrin Family Chair of Computer Science and Applied Mathematics. HG acknowledges the support of the Vigevani Research Project Prize.
|
2305.01876 | Causality-aware Concept Extraction based on Knowledge-guided Prompting | Concepts benefit natural language understanding but are far from complete in
existing knowledge graphs (KGs). Recently, pre-trained language models (PLMs)
have been widely used in text-based concept extraction (CE). However, PLMs tend
to mine the co-occurrence associations from massive corpus as pre-trained
knowledge rather than the real causal effect between tokens. As a result, the
pre-trained knowledge confounds PLMs to extract biased concepts based on
spurious co-occurrence correlations, inevitably resulting in low precision. In
this paper, through the lens of a Structural Causal Model (SCM), we propose
equipping the PLM-based extractor with a knowledge-guided prompt as an
intervention to alleviate concept bias. The prompt adopts the topic of the
given entity from the existing knowledge in KGs to mitigate the spurious
co-occurrence correlations between entities and biased concepts. Our extensive
experiments on representative multilingual KG datasets justify that our
proposed prompt can effectively alleviate concept bias and improve the
performance of PLM-based CE models. The code has been released on
https://github.com/siyuyuan/KPCE. | Siyu Yuan, Deqing Yang, Jinxi Liu, Shuyu Tian, Jiaqing Liang, Yanghua Xiao, Rui Xie | 2023-05-03T03:36:20Z | http://arxiv.org/abs/2305.01876v5 | # Causality-aware Concept Extraction
###### Abstract
Concepts benefit natural language understanding but are far from complete in existing knowledge graphs (KGs). Recently, pre-trained language models (PLMs) have been widely used in text-based concept extraction (CE). However, PLMs tend to mine the co-occurrence associations from massive corpus as pre-trained knowledge rather than the real causal effect between tokens. As a result, the pre-trained knowledge confounds PLMs to extract biased concepts based on spurious co-occurrence correlations, inevitably resulting in low precision. In this paper, through the lens of a Structural Causal Model (SCM), we propose equipping the PLM-based extractor with a knowledge-guided prompt as an intervention to alleviate concept bias. The prompt adopts the topic of the given entity from the existing knowledge in KGs to mitigate the spurious co-occurrence correlations between entities and biased concepts. Our extensive experiments on representative multilingual KG datasets justify that our proposed prompt can effectively alleviate concept bias and improve the performance of PLM-based CE models. The code has been released on [https://github.com/siyuyuan/KPCE](https://github.com/siyuyuan/KPCE).
## 1 Introduction
The concepts in knowledge graphs (KGs) enable machines to understand natural languages better, and thus benefit many downstream tasks, such as question answering Han et al. (2020), common-sense reasoning Zhong et al. (2021) and entity typing Yuan et al. (2022). However, the concepts in existing KGs, especially the fine-grained ones, are still far from complete. For example, in the widely used Chinese KG _CN-DBpedia_ Xu et al. (2017), there are nearly 17 million entities but only 0.27 million concepts in total, and more than 20% of entities have no concepts at all. Although _Probase_ Wu et al. (2012) is a large-scale English KG, the fine-grained concepts with two or more modifiers in it only account for 30% Li et al. (2021). We focus on extracting multi-grained concepts from texts to complete existing KGs.
Most of the existing text-based concept acquisition approaches adopt the extraction scheme, which can be divided into two categories: _1)_ pattern-matching approaches Auer et al. (2007); Wu et al. (2012); Xu et al. (2017), which can obtain high-quality concepts but only have low recall due to poor generalization; _2)_ learning-based approaches Luo et al. (2020); Ji et al. (2020); Yuan et al. (2021), which employ pre-trained language models (PLMs) fine-tuned with labeled data to extract concepts.
However, a notable drawback of these PLM-based learning approaches is **concept bias**. Concept bias means that concepts are extracted based on their contextual (co-occurrence) associations rather than the real causal effect between the entities and concepts, resulting in low extraction precision. For example, in Figure 1, PLMs tend to extract _novel_ and _writer_ together as concepts for the entity _Louisa May Alcott_ even if we explicitly input the entity _Louisa May Alcott_ to the model. Previous work demonstrates that causal inference is a promising technique for bias analysis
Figure 1: The example of concept bias. The PLM-based CE models are biased to extract _novel_ mistakenly as the concept of _Louisa May Alcott_ from the text.
(Lu et al., 2022). To analyze the reasons behind concept bias, we devise a Structural Causal Model (SCM) (Pearl, 2009) to investigate the causal effect in the PLM-based concept extraction (CE) system, and show that pre-trained knowledge in PLMs confounds them into extracting biased concepts. During pre-training, entities and biased concepts (_e.g._, _Louisa May Alcott_ and _novel_) often co-occur in many texts. Thus, PLMs tend to mine statistical associations from a massive corpus rather than the real causal effect between them (Li et al., 2022), which induces spurious co-occurrence correlations between entities (_e.g._, _Louisa May Alcott_) and biased concepts (_e.g._, _novel_). Since we cannot directly observe the prior distribution of pre-trained knowledge, the backdoor adjustment is intractable for our problem (Pearl, 2009). Alternatively, the frontdoor adjustment (Peters et al., 2017) can apply a mediator as an intervention to mitigate bias.
In this paper, we adopt language prompting (Gao et al., 2021; Li and Liang, 2021) as a mediator for the frontdoor adjustment to handle concept bias. We propose a novel Concept Extraction framework with **K**nowledge-guided **P**rompt, namely **KPCE** to extract concepts for given entities from text. Specifically, we construct a knowledge-guided prompt by obtaining the topic of the given entity (_e.g._, _person_ for _Louisa May Alcott_) from the knowledge in the existing KGs. Our proposed knowledge-guided prompt is independent of pre-trained knowledge and fulfills the frontdoor criterion. Thus, it can be used as a mediator to guide PLMs to focus on the right cause and alleviate spurious correlations. Although adopting our knowledge-guided prompt to construct the mediator is straightforward, it has been proven effective in addressing concept bias and improving the extraction performance of PLM-based extractors in the CE task.
In summary, our contributions include: _1)_ To the best of our knowledge, we are the first to identify the concept bias problem in the PLM-based CE system. _2)_ We define a Structural Causal Model to analyze the concept bias from a causal perspective and propose adopting a knowledge-guided prompt as a mediator to alleviate the bias via frontdoor adjustment. _3)_ Experimental results demonstrate the effectiveness of the proposed knowledge-guided prompt, which significantly mitigates the bias and achieves a new state-of-the-art for CE task.
## 2 Related Work
Concept AcquisitionMost of the existing text-based concept acquisition approaches adopt the extraction scheme, which can be divided into two categories: _1) Pattern-matching Approaches_: extract concepts from free texts with hand-crafted patterns (Auer et al., 2007; Wu et al., 2012; Xu et al., 2017). Although they can obtain high-quality concepts, they have low recall due to their poor generalization ability; _2) Learning-based Approaches_: mostly employ the PLM-based extraction models from other extraction tasks, such as the Named Entity Recognition (NER) models (Li et al., 2020; Luo et al., 2021; Lange et al., 2022) and Information Extraction models (Fang et al., 2021; Yuan et al., 2021) in the CE task. Although they can extract many concepts from a large corpus, the concept bias cannot be well handled.
Causality for Language ProcessingSeveral recent work studies causal inference combined with language models for natural language processing (NLP) (Scholkopf, 2022), such as controllable text generation (Hu and Li, 2021; Goyal et al., 2022) and counterfactual reasoning (Chen et al., 2022; Paranjape et al., 2022). In addition, causal inference can recognize spurious correlations via Structural Causal Model (SCM) (Pearl, 2009) for bias analysis and eliminate biases using causal intervention techniques (Weber et al., 2020; Lu et al., 2022). Therefore, there are also studies showing that causal inference is a promising technique to identify undesirable biases in the NLP dataset (Feder et al., 2022) pre-trained language models (PLMs) (Li et al., 2022). In this paper, we adopt causal inference to identify, understand, and alleviate concept bias in concept extraction.
Language PromptingLanguage prompting can distill knowledge from PLMs to improve the model performance in the downstream task. Language prompt construction methods can be divided into two categories (Liu et al., 2021): _1) Hand-crafted Prompts_, which are created manually based on human insights into the tasks (Brown et al., 2020; Schick and Schutze, 2021; Schick and Schutze, 2021). Although they obtain high-quality results, how to construct optimal prompts for a certain downstream task is an intractable challenge; _2) Automated Constructed Prompts_, which are generated automatically from natural language phrases (Jiang et al., 2020; Yuan et al., 2021) or vector space (Li
and Liang, 2021; Liu et al., 2021b). Although previous work analyzes the prompt from a causal perspective (Cao et al., 2022), relatively little attention has been paid to adopting the prompt to alleviate the bias in the downstream task.
## 3 Concept Bias Analysis
In this section, we first formally define our task. Then we investigate the concept bias issued by PLMs in empirical studies. Finally, we devise a Structural Causal Model (SCM) to analyze the bias and alleviate it via causal inference.
### Preliminary
**Task Definition** Our CE task addressed in this paper can be formulated as follows. Given an entity \(E=\{e_{1},e_{2},\cdots,e_{|E|}\}\) and its relevant text \(T=\{t_{1},t_{2},\cdots,t_{|T|}\}\), where \(e_{i}\) (or \(t_{i}\)) is a word token, our framework aims to extract one or multiple spans from \(T\) as the concept(s) of \(E\).
**Data Selection** The given text must be guaranteed to contain concepts. The abstract text of an entity expresses its concepts explicitly and can be obtained from online encyclopedias or knowledge bases. In this paper, we take the abstract text of an entity as its relevant text \(T\). The details of dataset construction are introduced in § 5.1. Since we aim to extract concepts from \(T\) for \(E\), it is reasonable to concatenate \(E\) and \(T\) to form the input text \(X=\{E,T\}\).
### Empirical Studies on Concept Bias
To demonstrate the presence of concept bias, we conduct empirical studies on the CN-DBpedia dataset (Xu et al., 2017). First, we randomly sample 1 million entities with their concepts from CN-DBpedia, and select the top 100 concepts with the most entities as the _typical concept_ set. Then we randomly select 100 entities with their abstracts for each typical concept to construct the input texts and run a BERT-based extractor to extract concepts. Details of the extraction process are introduced in § 4.2. We invite volunteers to assess whether the extracted concepts are biased. To quantify the degree of concept bias, we calculate the _bias rate_ of concept A to another concept B. The bias rate is defined as the number of entities of A for which B or the sub-concepts of B are mistakenly extracted by the extractor, divided by the total number of entities of A.
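A minimal sketch of the bias-rate computation described above is given below; the data structures (per-entity sets of extracted concepts and a sub-concept lookup for B) are assumptions about the bookkeeping, not the authors' code.

```python
def bias_rate(entities_of_a, extracted, subconcepts_of_b):
    """Bias rate of concept A towards concept B.

    entities_of_a   : list of entity ids whose gold concept is A
    extracted       : dict mapping entity id -> set of extracted concepts
    subconcepts_of_b: set containing B and all of its sub-concepts
    """
    biased = sum(1 for e in entities_of_a
                 if extracted.get(e, set()) & subconcepts_of_b)
    return biased / len(entities_of_a)
```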
The bias rates among 26 typical concepts are shown in Figure 2, where the concepts (dots) of the same topic are clustered in one rectangle. The construction of concept topics will be introduced in SS 4.1. From the figure, we can conclude that concept bias is widespread in the PLM-based CE system and negatively affects the quality of the results. Previous studies have proven that causal inference can analyze bias via SCM and eliminate bias with causal intervention techniques (Cao et al., 2022). Next, we will analyze concept bias from a causal perspective.
### The Causal Framework for Concept Bias Analysis
**The Structural Causal Model** We devise a Structural Causal Model (SCM) to identify the causal effect between the input text \(X\) of a given entity \(E\) and the concept span \(S\) that can be extracted from \(X\).
Figure 3: The proposed structural causal model (SCM). A hollow circle indicates the variable is latent, and a shaded circle indicates the variable is observed. Without causal intervention, the PLM-based CE model extracts _Novel_ due to the spurious correlation between the entities of _Writer_ and _Novel_ caused by the confounding variable \(K\). The constructed mediating variable \(P\) can block the backdoor paths for \(X\to S\) (opened by \(K\)) and help the model only extract the unbiased concept _Writer_.
Figure 2: Concept bias map for the entities of popular concepts in CN-DBpedia (better viewed in color).
As shown in Figure 3, our CE task aims to extract one or multiple spans \(S\) from \(X\) as the concept(s) of \(E\), where the causal effect can be denoted as \(X\to S\).
During the pre-training, the contextual embedding of one token depends on the ones that frequently appear nearby in the corpus. We extrapolate that the high co-occurrence between the entities of true concepts (_e.g._, _writer_) and biased concepts (_e.g._, _novel_) in the pre-trained knowledge induces spurious correlations between entities (_e.g._, _Louisa May Alcott_) and biased concepts (_e.g._, _novel_). Therefore, the PLM-based CE models can mistakenly extract biased concepts even if the entity is explicitly mentioned in \(X\). The experiments in SS 5.4 also prove our rationale. Based on the foregoing analysis, we define the pre-trained knowledge \(K\) from PLM-based extraction models as a confounder.
We cannot directly observe the latent space of the PLMs, and thus the backdoor adjustment Pearl (2009) is not applicable in our case. Alternatively, we adopt the frontdoor adjustment Peters et al. (2017) and design a mediator to mitigate the concept bias.
**Causal Intervention** To mitigate the concept bias, we construct a prompt \(P\) as a mediator for \(X\to S\), so that the frontdoor adjustment can be applied via the do-operation.
Specifically, to make the PLMs attend to the right cause and alleviate spurious co-occurrence correlation (_e.g._, _novel_ and _Louisa May Alcott_), we assign a topic as a knowledge-guided prompt \(P\) (_i.e._, _person_) to the input text \(X\) (The detailed operation is elaborated in SS 4.1). The topics obtained from KGs are independent of pre-trained knowledge, and thus \(P\) fulfills the frontdoor criterion.
For the causal effect \(X\to P\), we can observe that \(X\to P\to S\gets K\) is a collider that blocks the association between \(P\) and \(K\), and no backdoor path is available for \(X\to P\). Therefore, we can directly rely on the conditional probability after applying the do-operator for \(X\):
\[P(P=p|do(X=x))=P(P=p|X=x). \tag{1}\]
Next, for the causal effect \(P\to S\), \(P\gets X\gets K\to S\) is a backdoor path from \(P\) to \(S\), which we need to cut off. Since \(K\) is an unobserved variable, we can block the backdoor path through \(X\):
\[P(S|do(P))=\sum_{x}P(S|P,X=x)P(X=x). \tag{2}\]
Therefore, the underlying causal mechanism of our CE task is a combination of Eq.1 and Eq.2, which can be formulated as:
\[P(S|do(X))\] \[=\sum_{p}P(S|p,do(X))P(p|do(X))\] \[=\sum_{p}P(S|do(P),do(X))P(p|do(X))\] \[=\sum_{p}P(S|do(P))P(p|do(X)). \tag{3}\]
The theoretical details of the frontdoor adjustment are introduced in Appendix A.
We make the assumption of strong ignorability, _i.e._, there is only one confounder \(K\) between \(X\) and \(S\). One assumption of the frontdoor criterion is that the only way the input text \(X\) influences \(S\) is through the mediator \(P\). Thus, \(X\to P\to S\) must be the only path. Otherwise, the front-door adjustment cannot stand. Notice that \(K\) already represents all the knowledge from pre-trained data in PLMs. Therefore, it is reasonable to use the strong ignorability assumption that it already includes all possible confounders.
Through the frontdoor adjustment, we can block the backdoor path from input text to concepts and alleviate spurious correlation caused by the confounder, _i.e._, pre-trained knowledge. In practice, we can train a topic classifier to estimate Eq.1 (§ 4.1) and train a concept extractor on our training data to estimate Eq.2 (§ 4.2). Next, we will introduce the implementation of the frontdoor adjustment in detail.
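A minimal sketch of how Eq. 3 could be estimated from the two fitted models is shown below; the function names `topic_classifier` (for \(P(p|x)\), Eq. 1) and `concept_extractor` (for \(P(S|p,x')\)), and the averaging over a sample of training texts to approximate \(P(X)\), are assumptions rather than the authors' exact implementation.

```python
def frontdoor_score(x, span, topic_classifier, concept_extractor, texts):
    """Estimate P(span | do(X = x)) via the frontdoor formula (Eq. 3).

    topic_classifier(x)            -> dict: topic p -> P(p | x)        (Eq. 1)
    concept_extractor(span, p, xp) -> P(span | prompt p, text xp)
    texts                          -> sample of input texts approximating P(X)
    """
    score = 0.0
    for p, p_topic in topic_classifier(x).items():
        # Eq. 2: average over texts to block the backdoor path through K.
        p_span_do_p = sum(concept_extractor(span, p, xp) for xp in texts) / len(texts)
        score += p_span_do_p * p_topic
    return score
```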
## 4 Methodology
In this section, we present our CE framework KPCE and discuss how to perform prompting to alleviate concept bias. The overall framework of KPCE is illustrated in Figure 4, which consists of two major modules: _1) Prompt Constructor_: assigns the topic obtained from KGs for entities as a knowledge-guided prompt to estimate Eq.1; _2) Concept Extractor_: trains a BERT-based extractor with the constructed prompt to estimate Eq.2 and extract multi-grained concepts from the input text. Next, we will introduce the two modules of KPCE.
### Prompt Constructor
**Knowledge-guided Prompt Construction** To reduce the concept bias, we use the topic of a given entity as a knowledge-guided prompt, which is identified based on the external knowledge in existing KGs. Take _CN-DBpedia_[20] as an example 1. We randomly sample one million entities from this KG and obtain their existing concepts. Then, we select the top 100 concepts with the most entities to constitute the _typical concept_ set, which covers more than 99.80% of the entities in the KG. Next, we use spectral clustering [20] with the adaptive K-means [1] algorithm to cluster these typical concepts into several groups, each of which corresponds to a topic. To perform the spectral clustering, we use the following overlap coefficient [17] to measure the similarity between two concepts,
Footnote 1: In fact, the concepts of CN-DBpedia are inherited from Probase, so the typical topics are the same for CN-DBpedia and Probase.
\[Overlap(c_{1},c_{2})=\frac{|ent(c_{1})\cap ent(c_{2})|}{\min(|ent(c_{1})|,|ent(c_{2})|)+\delta} \tag{4}\]
where \(ent(c_{1})\) and \(ent(c_{2})\) are the entity sets of concept \(c_{1}\) and concept \(c_{2}\), respectively. We then construct a similarity matrix of typical concepts to perform spectral clustering. To determine the best number of clusters, we calculate the Silhouette Coefficient (SC) [1] and the Calinski-Harabasz Index (CHI) [13] from 3 to 30 clusters. The scores are shown in Figure 5, from which we find that the best number of clusters is 17. As a result, we cluster the typical concepts into 17 groups and define a topic name for each group. The 17 typical topics and their corresponding concepts are listed in Appendix B.1.
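A sketch of this clustering step using scikit-learn with a precomputed overlap-coefficient affinity is given below; the smoothing constant \(\delta\), the variable names, and the choice of `SpectralClustering` over a custom adaptive K-means are assumptions.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def overlap(ent_a, ent_b, delta=1.0):
    """Overlap coefficient between two concepts' entity sets (Eq. 4)."""
    return len(ent_a & ent_b) / (min(len(ent_a), len(ent_b)) + delta)

def cluster_concepts(entity_sets, n_clusters=17):
    """entity_sets: dict concept -> set of entity ids; returns concept -> topic id."""
    concepts = list(entity_sets)
    n = len(concepts)
    sim = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            sim[i, j] = overlap(entity_sets[concepts[i]], entity_sets[concepts[j]])
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed",
                                random_state=0).fit_predict(sim)
    return dict(zip(concepts, labels))
```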
**Identifying the Topic Prompt for Each Entity** We adopt a topic classifier to assign the topic prompt to the input text \(X\), which is one of the 17 typical topics in Table 6. To construct the training data, we randomly fetch 40,000 entities together with their abstract texts and existing concepts in the KG. According to the concept clustering results, we can assign a topic to each entity. We adopt a transformer encoder [21] followed by a two-layer perceptron (MLP) [1] activated by ReLU as our topic classifier 2. We train the topic classifier to predict the topic prompt \(P=\{p_{1},p_{2},\cdots,p_{|P|}\}\) for \(X\), which is calculated as 3:
Footnote 2: We do not employ the PLM-based topic classifier since it will bring a direct path from \(K\) to \(P\) in Figure 3.
Footnote 3: The detailed training operation of topic classifier can be found in Appendix B.1
\[P=\operatorname*{arg\,max}_{i}\big{(}P(P^{i}|X)\big{)},1\leq i\leq 17, \tag{5}\]
where \(P^{i}\) is the i-th topic among the 17 typical topics.
In our experiments, the topic classifier achieves more than 97.8% accuracy in 500 samples by human assessment. Through training the topic classifier, we can estimate Eq.1 to identify the causal effect \(X\to P\).
### Concept Extractor
Prompt-based BERTThe concept extractor is a BERT equipped with our proposed prompt followed by a pointer network [17]. The pointer network is adopted for extracting multi-grained concepts.
We first concatenate the token sequence with the tokens of \(P\) and \(X\) to constitute the input, _i.e._, {[CLS]P[SEP]X[SEP]}, where [CLS] and [SEP] are the special tokens in BERT. With multi-headed self-attention operations over the above input,
Figure 4: The overview of our CE framework.
Figure 5: The scores of Silhouette Coefficient (SC) and Calinski Harabaz Index (CHI) under different cluster numbers. The scores are normalized with feature scaling for a fair comparison.
the BERT outputs the final hidden state (matrix), _i.e._, \(\mathbf{H}^{N_{L}}\in\mathbb{R}^{(|P|+|X|+3)\times d^{\prime}}\), where \(d^{\prime}\) is the vector dimension and \(N_{L}\) is the total number of layers. Then the pointer network predicts the probability of a token being the start position and the end position of the extracted span. We use \(\mathbf{p}^{start},\mathbf{p}^{end}\in\mathbb{R}^{|P|+|X|+3}\) to denote the vectors storing the probabilities of all tokens to be the start position and end position, which are calculated as
\[[\mathbf{p}^{start};\mathbf{p}^{end}]=\text{softmax}(\mathbf{H}^{N_{L}} \mathbf{W}+\mathbf{B}) \tag{6}\]
where \(\mathbf{B}\in\mathbb{R}^{(|P|+|X|+3)\times 2}\) and \(\mathbf{W}\in\mathbb{R}^{d^{\prime}\times 2}\) are both trainable parameters. We only consider the probabilities of the tokens in the abstract \(T\). Given a span with \(x_{i}\) and \(x_{j}\) as the start token and the end token, its confidence score \(cs_{ij}\in\mathbb{R}\) can be calculated as
\[cs_{ij}=p_{i}^{start}+p_{j}^{end}. \tag{7}\]
Accordingly, the model outputs a ranked list of candidate concepts (spans) with their confidence scores. We only keep the concepts with confidence scores above the selection threshold. An example illustrating how the pointer network operates is provided in Appendix B.2.
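A minimal sketch of the span scoring in Eqs. 6-7 is shown below; the softmax normalization over the token axis, the restriction to tokens of \(T\) via a mask, and the maximum span length are assumptions based on the description above.

```python
import torch

def extract_spans(H, W, B, text_mask, threshold, max_len=10):
    """Score candidate spans following Eqs. 6-7.

    H: (L, d') final hidden states; W: (d', 2); B: (L, 2)
    text_mask: (L,) bool, True for tokens belonging to the abstract T
    Returns a list of (start, end, confidence) sorted by confidence.
    """
    logits = H @ W + B                     # (L, 2), Eq. 6
    probs = torch.softmax(logits, dim=0)   # normalize over the token axis
    p_start, p_end = probs[:, 0], probs[:, 1]
    spans = []
    for i in torch.nonzero(text_mask).flatten().tolist():
        for j in range(i, min(i + max_len, H.size(0))):
            if not text_mask[j]:
                break
            cs = (p_start[i] + p_end[j]).item()   # Eq. 7
            if cs > threshold:
                spans.append((i, j, cs))
    return sorted(spans, key=lambda s: s[2], reverse=True)
```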
During training, the concept extractor is fed with the input texts with topic prompts and outputs the probability (confidence scores) of the spans, and thus can estimate the causal effect \(P\to S\) in Eq.2.
**Model Training** We adopt the cross-entropy function \(CE(\cdot)\) as the loss function of our model. Specifically, suppose that \(\mathbf{y}_{start}\in\mathbb{N}^{|P|+|X|+3}\) (or \(\mathbf{y}_{end}\in\mathbb{N}^{|P|+|X|+3}\)) contains the real label (0 or 1) of each input token being the start (or end) position of a concept. Then, we have the following two training losses for the predictions:
\[\mathcal{L}_{start} =CE(\mathbf{p}^{start},\mathbf{y}_{start}), \tag{8}\] \[\mathcal{L}_{end} =CE(\mathbf{p}^{end},\mathbf{y}_{end}). \tag{9}\]
Then, the overall training loss is
\[\mathcal{L}=\alpha\mathcal{L}_{start}+(1-\alpha)\mathcal{L}_{end} \tag{10}\]
where \(\alpha\in(0,1)\) is the control parameter. We use Adam (Kingma and Ba, 2015) to optimize \(\mathcal{L}\).
## 5 Experiments
### Datasets
**CN-DBpedia** From the latest version of the Chinese KG CN-DBpedia (Xu et al., 2017) and Wikipedia, we randomly sample 100,000 instances to construct our sample pool. Each instance in the sample pool consists of an entity with its concept and abstract text 4. Then, we sample 500 instances from the pool as our test set and divide the rest of the instances into the training set and validation set with a 9:1 ratio.
Footnote 4: If one entity has multiple concepts, we randomly select one as the golden label.
**Probase** We obtain the English sample pool of 50,000 instances from Probase (Wu et al., 2012) and Wikipedia. The training, validation and test set construction are the same as for the Chinese dataset.
### Evaluation Metrics
We compare KPCE with seven baselines, including a pattern-matching approach, _i.e._, the Hearst pattern. Detailed information on the baselines and the experimental settings is given in Appendix C.1 and C.2. Some extracted concepts do not exist in the KG and cannot be assessed automatically. Therefore, we invite annotators to assess whether the extracted concepts are correct. Annotation details are given in Appendix C.3.
Please note that some extracted concepts may already exist in the KG for the given entity, which we denote as _ECs_ (existing concepts). However, our work aims to extract correct but new concepts (that do not exist in the KG) to complete the KGs, which we denote as _NCs_ (new concepts). Therefore, we record the number of new concepts (NC #) and report the ratio of correct concepts (ECs and NCs) as precision (Prec.). Since it is difficult to know all the correct concepts in the input text, we report the relative recall (Recall\({}_{R}\)). Specifically, suppose NCs # is the total number of new concepts extracted by all models. Then, the relative recall is calculated as NC # divided by NCs #5. Accordingly, the relative F1 (F1\({}_{R}\)) can be calculated from Prec. and Recall\({}_{R}\). In addition, we also record the average length of new concepts (Len\({}_{NC}\)) to investigate the effectiveness of the pointer network.
Footnote 5: Please note that NCs # is counted based on all models in one comparison. Therefore, Recall\({}_{R}\) can be different for one model when the compared models change.
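A small sketch of how these metrics could be computed per model is given below; the per-model counts passed in are assumptions about the evaluation bookkeeping.

```python
def relative_metrics(correct, total_extracted, new_correct, all_new_correct):
    """Prec., Recall_R and F1_R as defined above.

    correct          : # extracted concepts judged correct (ECs + NCs)
    total_extracted  : # extracted concepts for this model
    new_correct      : NC #, correct new concepts from this model
    all_new_correct  : NCs #, correct new concepts pooled over all models
    """
    prec = correct / total_extracted
    recall_r = new_correct / all_new_correct
    f1_r = 2 * prec * recall_r / (prec + recall_r)
    return prec, recall_r, f1_r
```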
### Overall Performance
We present the main results in Table 1. Generally, we have the following findings:
Our method outperforms previous baselines by large margins, including previous state-of-the-art (MRC-CE, Yuan et al., 2021). However, the
pattern-based approach still beats the learning-based ones in precision, leaving room for improvement. We find that KPCE achieves a more significant improvement in extracting new concepts, indicating that KPCE can be applied to KG completion (§ 5.5). We also compare KPCE with its ablated variant, and the results show that adding a knowledge-guided prompt guides BERT towards more accurate CE results.
We notice that almost all models have higher extraction precision on the Chinese dataset than on the English dataset. This is because modifiers are usually placed before nouns in Chinese syntactic structure, and thus it is easier to identify these modifiers and extract them together with the coarse-grained concepts to form the fine-grained ones. For the English dataset, not only adjectives but also subordinate clauses modify coarse-grained concepts, and thus identifying these modifiers is more difficult.
Compared with learning-based baselines, KPCE can extract more fine-grained concepts. Although the Hearst pattern can also extract fine-grained concepts, it cannot simultaneously extract multi-grained concepts when a coarse-grained concept term is a subsequence of another fine-grained concept term. For example, in Figure 4, if the Hearst pattern extracts _American novelist_ as a concept, it cannot extract _novelist_ simultaneously. KPCE solves this problem well with the aid of the pointer network and achieves a much higher recall.
### Analysis
In response to the motivations of KPCE, we conduct detailed analyses to further understand KPCE and why it works.
**How does KPCE alleviate the concept bias?** As mentioned in § 3.2, the concept bias occurs primarily among 26 concepts in CN-DBpedia. To verify that KPCE can alleviate concept bias with the aid of prompts, we randomly select five concepts and run KPCE and its ablated variant to extract concepts for 100 entities randomly selected from each of the five concepts. We then calculate the bias rates of each concept, and the results in Table 2 show that KPCE has a much lower bias rate than the vanilla BERT-based concept extractor. Thus, the knowledge-guided prompt can significantly mitigate the concept bias.
Furthermore, a case study of the entity _Korean alphabet_ is shown in Table 3. We find that the proposed prompts can mitigate the spurious co-occurrence correlation between entities and biased concepts by decreasing the confidence scores of biased concepts (_i.e._, _language_ and _alphabet_) and increasing the scores of correct concepts (_i.e._, _system_ and _writing system_). Thus, the knowledge-guided prompt can significantly alleviate the concept bias and lead to more accurate CE results.
**How does the prompt affect the spurious co-occurrence correlations?** To explore the rationale behind the prompt-based mediator, we focus on the attention distribution for the special token [CLS], since it is an aggregate representation of the sequence and can capture the sentence-level semantic meaning Devlin et al. (2019); Chang et al. (2022). Following previous work Clark et al. (2019), we calculate the attention probabilities of
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Model** & **NC** \# & \(\textbf{Len}_{NC}\) & **Prec.** & \(\textbf{Recall}_{R}\) & \(\textbf{F1}_{R}\) \\ \hline \multicolumn{5}{c}{_Trained on CN-DBpedia_} \\ Hearst & 222 & **5.95** & **95.24\%** & 21.66\% & 35.29\% \\ \multicolumn{5}{c}{_Trained on CN-DBpedia_} \\ Hearst & 64 & 3.09 & 95.71\% & 6.24\% & 11.72\% \\ XLNet & 47 & 2.66 & 88.48\% & 4.68\% & 8.90\% \\ KVMN & 254 & 4.03 & 64.45\% & 26.02\% & 37.08\% \\ XLM-R & 255 & 5.35 & 76.82\% & 24.78\% & 37.47\% \\ BBF & 26 & 4.34 & 88.28\% & 2.54\% & 4.93\% \\ GACEN & 346 & 3.58 & 84.89\% & 36.73\% & 51.27\% \\ MRC-CE & 323 & 5.33 & 92.12\% & 31.51\% & 46.96\% \\ KPCE & **482** & 5.52 & 94.20\% & **44.38\%** & **60.33\%** \\ _w/o P_ & 338 & 5.21 & 72.07\% & 34.05\% & 46.25\% \\ \hline \multicolumn{5}{c}{_Trained on Probase_} \\ Hearst & 287 & **2.43** & **89.04\%** & 17.10\% & 28.69\% \\ FLAIR & 140 & 1.68 & 84.31\% & 7.73\% & 14.16\% \\ XLNet & 342 & 1.51 & 79.30\% & 18.87\% & 30.49\% \\ KVMN & 403 & 1.97 & 47.39\% & 22.24\% & 30.27\% \\ XLM-R & 322 & 2.28 & 81.73\% & 17.77\% & 29.19\% \\ BBC & 154 & 1.68 & 81.13\% & 8.44\% & 15.30\% \\ GACEN & 486 & 1.75 & 76.93\% & 31.82\% & 45.02\% \\ MRC-CE & 598 & 2.23 & 88.59\% & 33.00\% & 48.09\% \\ KPCE & **752** & 2.31 & 88.69\% & **46.83\%** & **61.30\%** \\ _w/o P_ & 691 & 2.26 & 78.64\% & 40.62\% & 53.57 \% \\ \hline \hline \end{tabular}
\end{table}
Table 1: Concept extraction performance comparisons of 500 test samples. _w/o P_ is the ablation variants of KPCE without the knowledge-guided prompt (P)
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Concept\({}_{O}\)** & **Concept\({}_{B}\)** & **KPCE**\({}_{w\!\!
[CLS] to other tokens by averaging and normalizing the attention value in 12 attention heads in the last layers. The attention distributions of the KPCE and its ablation variant are visualized in Figure 6. We find that the tokens of _writer_ and _novel_ both have high attentions in the vanilla BERT-based concept extractor. However, after adopting our knowledge-guided prompt, the attention probabilities of _novel_ is lower than before, and thus can help the model to reduce the spurious co-occurrence correlations derived from pre-trained knowledge.
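A sketch of how the [CLS] attention distribution could be extracted with HuggingFace Transformers is shown below; averaging the 12 heads of the last layer follows the description above, while the model checkpoint and the final normalization are assumptions.

```python
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")  # assumed checkpoint
model = BertModel.from_pretrained("bert-base-cased", output_attentions=True)

def cls_attention(text):
    """Average last-layer attention from [CLS] to every token, normalized to sum to 1."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    last = out.attentions[-1][0]        # (num_heads, seq_len, seq_len)
    att = last[:, 0, :].mean(dim=0)     # [CLS] is token 0; average the 12 heads
    att = att / att.sum()
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    return list(zip(tokens, att.tolist()))
```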
**What if other knowledge injection methods are adopted?** We claim that topics obtained from external KGs are better than keyword-based topics derived from the text for guiding BERT in our CE task. To justify this, we compare KPCE with another variant, namely KPCE \({}_{LDA}\), where the topics are the keywords obtained by running Latent Dirichlet Allocation (LDA) [1] over the abstracts of all entities. Besides, we also compare KPCE with ERNIE [11], which implicitly learns the knowledge of entities during pre-training. Details about LDA and ERNIE are given in Appendix C.4. The comparison results are listed in Table 4. They show that our design of the knowledge-guided prompt in KPCE exploits the value of external knowledge more thoroughly than the two remaining schemes, thus achieving better CE performance.
### Applications
**KG Completion** We run KPCE over all entities in CN-DBpedia to supplement them with new concepts. KPCE extracts 7,623,111 new concepts for 6 million entities. Thus, our framework can achieve large-scale concept completion for existing KGs.
**Domain Concept Acquisition** We collect 117,489 Food & Delight entities with their descriptive texts from Meituan 6, and explore two application approaches. The first is to apply KPCE directly, and the second is to randomly select 300 samples as a small training set to fine-tune KPCE. The results in Table 5 show that: _1)_ the transfer ability of KPCE is greatly improved with the aid of prompts; _2)_ KPCE can extract high-quality concepts in the new domain with only a small number of training samples. Furthermore, when applied directly, KPCE extracts 81,800 new concepts with 82.66% precision. Thus, our knowledge-guided prompt can significantly improve the transfer ability of PLMs on the domain CE task.
Footnote 6: [http://www.meituan.com](http://www.meituan.com), a Chinese e-business platform.
## 6 Conclusion
In this paper, we identify the concept bias in the PLM-based CE system and devise a Structural
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multicolumn{4}{c}{**Output Results**} \\ \hline \multicolumn{4}{c}{KPCE} & \multicolumn{1}{c}{
\begin{tabular}{c} KPCE} \\ \hline **Span** & **C.S.** \\ \end{tabular} & **Span** & **C.S.** \\ \hline language & 0.238 & system & 0.240 \\ alphabet & 0.213 & writing system & 0.219 \\ system & 0.209 & system for the Korean language & 0.130 \\ \hline \hline \end{tabular}
\end{table}
Table 3: A case to verify the effectiveness of the proposed prompts on addressing concept bias. We display an entity _Korean alphabet_ with its top-3 extracted spans and the confidence scores (denoted as C.S.)
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Model** & **TS \#** & **NC \#** & **Prec.** & **Recall\({}_{R}\)** & **F1\({}_{R}\)** \\ \hline KPCE & 0 & 62 & 82.66\% & 48.44\% & 61.08\% \\ _w/o P_ & 0 & 55 & 69.62\% & 42.97\% & 53.14\% \\ KPCE & 300 & **107** & **82.95\%** & **83.59\%** & **83.27\%** \\ _w/o P_ & 300 & 89 & 81.65\% & 69.53\% & 75.10\% \\ \hline \hline \end{tabular}
\end{table}
Table 4: Concept extraction results with different knowledge utilization.
Figure 6: Visualization of the attention distribution of [CLS] to other tokens.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multicolumn{4}{c}{**Topic**: Technology. **Entity**: Korean alphabet.} \\ \multicolumn{4}{c}{**Abstract**: The Korean alphabet is a writing system for the Korean language created by King Sejong the Great in 1443.} \\ \hline \hline \multicolumn{4}{c}{**Output Results**} \\ \hline \multicolumn{4}{c}{KPCE \({}_{w/oP}\)} & \multicolumn{2}{c}{KPCE} \\ \hline \hline \multicolumn{4}{c}{**Span**} & **C.S.** & **Span** & **C.S.** \\ \hline language & 0.238 & system & 0.240 \\ alphabet & 0.213 & writing system & 0.219 \\ system & 0.209 & system for the Korean language & 0.130 \\ \hline \hline \end{tabular}
\end{table}
Table 3: A case to verify the effectiveness of the proposed prompts on addressing concept bias. We display an entity _Korean alphabet_ with its top-3 extracted spans and the confidence scores (denoted as C.S.)
Causal Model to analyze the bias. To alleviate concept bias, we propose a novel CE framework with knowledge-guided prompting that mitigates the spurious co-occurrence correlations between entities and biased concepts. We conduct extensive experiments to show that our prompt-based learning framework can significantly mitigate bias and achieves excellent performance in concept acquisition.
## 7 Limitations
Although we have proven that our work can significantly alleviate concept bias and extract high-quality and new concepts, it also has some limitations. In this section, we analyze three limitations and hope to advance future work.
**Model Novelty** Although KPCE can effectively mitigate the spurious co-occurrence correlations between entities and biased concepts, the proposed framework is not entirely novel. The novelty of our work is to conduct the first thorough causal analysis that exposes the spurious correlations between entities and biased concepts in the concept extraction task. After defining the problem and the SCM of concept extraction in § 3.1, we propose a prompt-based approach that implements interventions on the SCM to elicit unbiased knowledge from PLMs. Previous work on language prompting mostly guides PLMs with prompts but is unaware of the cause-effect relations in its task, which may hinder the effectiveness of the prompts. We hope our work can inspire future work to utilize language prompting from a causal perspective.
**Topic Classification** Although the topics obtained by clustering are mostly mutually exclusive, there are still cases where an entity can be classified into multiple topics. In such cases, considering only one topic for the entity may exclude some correct concepts.
**Threshold Selection** We only keep concepts with confidence scores above the selection threshold (§ 4.2), which can hardly achieve a satisfactory balance of precision and recall. If we select a relatively high threshold, we obtain more accurate concepts but may lose some correct ones; if recall is preferred, precision might be hurt.
We suggest that future work consider these three limitations to achieve better performance in the CE task.
## Acknowledgement
We would like to thank the anonymous reviewers for their valuable comments and suggestions for this work. This work is supported by the Chinese NSF Major Research Plan (No.92270121), Shanghai Science and Technology Innovation Action Plan (No.21511100401) and the Science and Technology Commission of Shanghai Municipality Grant (No. 22511105902).
|
2310.14773 | The Galactic neutron star population II -- Systemic velocities and
merger locations of binary neutron stars | The merger locations of binary neutron stars (BNSs) encode their galactic
kinematics and provide insights into their connection to short gamma-ray bursts
(SGRBs). In this work, we use the sample of Galactic BNSs with measured proper
motions to investigate their kinematics and predict their merger locations.
Using a synthetic image of the Milky Way and its Galactic potential we analyse
the BNS mergers as seen from an extragalactic viewpoint and compare them to the
location of SGRBs on and around their host galaxies. We find that the
Galactocentric transverse velocities of the BNSs are similar in magnitude and
direction to those of their Local Standards of Rest, which implies that the
present-day systemic velocities are not isotropically oriented and the peculiar
velocities might be as low as those of BNS progenitors. Both systemic and
peculiar velocities fit a lognormal distribution, with the peculiar velocities
being as low as $\sim 22-157$ km s$^{-1}$. We also find that the observed BNS
sample is not representative of the whole Galactic population, but rather of
systems born around the Sun's location with small peculiar velocities. When
comparing the predicted BNS merger locations to SGRBs, we find that they cover
the same range of projected offsets, host-normalized offsets, and fractional
light. Therefore, the spread in SGRB locations can be reproduced by mergers of
BNSs born in the Galactic disk with small peculiar velocities, although the
median offset match is likely a coincidence due to the biased BNS sample. | Nicola Gaspari, Andrew J. Levan, Ashley A. Chrimes, Gijs Nelemans | 2023-10-23T10:16:40Z | http://arxiv.org/abs/2310.14773v1 | The Galactic neutron star population II - Systemic velocities and merger locations of binary neutron stars
###### Abstract
The merger locations of binary neutron stars (BNSs) encode their galactic kinematics and provide insights into their connection to short gamma-ray bursts (SGRBs). In this work, we use the sample of Galactic BNSs with measured proper motions to investigate their kinematics and predict their merger locations. Using a synthetic image of the Milky Way and its Galactic potential we analyse the BNS mergers as seen from an extragalactic viewpoint and compare them to the location of SGRBs on and around their host galaxies. We find that the Galactocentric transverse velocities of the BNSs are similar in magnitude and direction to those of their Local Standards of Rest, which implies that the present-day systemic velocities are not isotropically oriented and the peculiar velocities might be as low as those of BNS progenitors. Both systemic and peculiar velocities fit a lognormal distribution, with the peculiar velocities being as low as \(\sim 22-157\) km s\({}^{-1}\). We also find that the observed BNS sample is not representative of the whole Galactic population, but rather of systems born around the Sun's location with small peculiar velocities. When comparing the predicted BNS merger locations to SGRBs, we find that they cover the same range of projected offsets, host-normalized offsets, and fractional light. Therefore, the spread in SGRB locations can be reproduced by mergers of BNSs born in the Galactic disk with small peculiar velocities, although the median offset match is likely a coincidence due to the biased BNS sample.
keywords: stars: neutron - gamma-ray burst: general - Galaxy: stellar content - Galaxy: structure - stars: binaries
## 1 Introduction
The merger of a binary neutron star (BNS) can manifest itself with a variety of transient phenomena. This includes short-duration gamma-ray bursts (SGRBs) and their afterglows, gravitational waves, and kilonovae, as shown by the multi-messenger observations of GW 170817 (Abbott et al., 2017). Although all of these transients can be used to inform the physics of the mergers (Nakar, 2007; Lee & Ramirez-Ruiz, 2007; Metzger, 2017), SGRBs occupy a privileged position due to their luminosity, which makes them the easiest to detect (Metzger & Berger, 2012; Burns, 2020). Consequently, SGRBs provide the largest sample for analysis, and to date we have more than three decades of literature that explores their connection to BNS mergers (Eichler et al., 1989; Narayan et al., 1992; for reviews see Berger, 2014). The evidence supporting BNS mergers as progenitors are both indirect, such as the lack of association with supernovae, the redshift distribution, the demographics and location of their host galaxies (Berger, 2014), as well as direct, namely the concurrent detection of GRB 170817A and GW 170817. It is important to note, however, that BNS mergers do not have a one-to-one relation with SGRBs, since it is likely that not all SGRBs are produced by BNS mergers (Thompson & Duncan, 1995; Qin et al., 1998; Levan et al., 2006; Metzger et al., 2008; Troja et al., 2008; Gompertz et al., 2020), and not all mergers produce a SGRB (Rastinejad et al., 2022; Sarin et al., 2022; Salafia et al., 2022).
A key piece of evidence connecting BNS mergers to SGRBs is their location within the host galaxy. Upon their formation in core-collapse supernovae, neutron stars (NSs) receive natal kicks, as evidenced by the observed peculiar velocities of young Galactic pulsars (Hobbs et al., 2005; Verbunt et al., 2017). When the NS is in a binary, the natal kick adds to the systemic recoil due to the mass loss (also known as Blaauw kick, Blaauw, 1961; Boersma, 1961), and results in a kick to the binary barycenter of up to several hundred km s\({}^{-1}\)(Tauris et al., 2017; Vigna-Gomez et al., 2018; Andrews & Zezas, 2019). Combined with the gravitational-wave in-spiral time, which can be as long as several Gyr or more, BNS can therefore migrate and merge well outside their host galaxy of origin (Portegies Zwart & Yungelson, 1998; Bagot et al., 1998; Fryer et al., 1999; Bloom et al., 1999; Perna & Belczynski, 2002; Voss & Tauris, 2003; Belczynski et al., 2006; Zemp et al., 2009; Church et al., 2011; Behroozi et al., 2014; Mandhai et al., 2022). This is observed in the host-offset distribution of SGRBs, which occur at larger projected radii from their hosts than any other class of transient (Fong et al., 2010; Fong & Berger, 2013; Zevin et al., 2020; Fong et al., 2022), and it is not expected in other
progenitor scenarios (e.g. Fryer et al., 1999; Berger, 2011; Behroozi et al., 2014). Nevertheless, a modest fraction (\(\sim\)20 per cent) of SGRBs are apparently 'hostless' (namely there is no underlying galaxy nor a single galaxy to clearly assign as their host), and although this is not at odds with BNS mergers, it leaves open questions about the nature of the largest offsets (Berger, 2010; Tunnicliffe et al., 2014; O'Connor et al., 2022). Possible explanations are that the BNSs received high systemic kicks (O'Connor et al., 2022), or received low systemic kicks (Benjamin and Piran, 2010) but were born either in globular clusters (Grindlay et al., 2006; Lee et al., 2010; Church et al., 2011) or in the outer regions of the host (Perets and Beniamini, 2021), or that they simply reside in a faint and/or distant host which has not been correctly identified (e.g. Levan et al., 2007).
Understanding the merger locations of BNSs is also important in the context of Galactic chemical enrichment. BNS mergers are thought to be important sites for the production of \(r\)-process elements (Eichler et al., 1989; Freiburghaus et al., 1999; Rosswog et al., 1999; Pian et al., 2017; Kasen et al., 2017; for reviews see Cowan et al., 2021) due to the neutron-rich ejecta and kilonovae they produce. There have been efforts to understand Galactic r-process enrichment in the context of BNS mergers in the Milky Way (Symbalisty and Schramm, 1982; Eichler et al., 1989; Freiburghaus et al., 1999; Argast et al., 2004; Matteucci et al., 2014; van de Voort et al., 2015; Shen et al., 2015; Wehmeyer et al., 2015; Beniamini et al., 2016; Hotokezaka et al., 2018; Cote et al., 2018, 2019; Kobayashi et al., 2023), and r-process deposition on Earth has also been linked to kilonovae in the local few kpc (Bartos and Marka, 2019; Wang et al., 2021). Therefore, understanding the locations of BNS mergers has wide-ranging implications.
In this paper, we combine two approaches to studying BNS mergers - the locations of SGRBs, and the Galactic BNS population. We evolve a sample of Galactic BNSs forwards in time through the Galactic potential to determine their future merger locations. We place the merger locations in the context of the Milky Way as seen externally, and compare these results with observations of SGRBs in and around their host galaxies. The paper is structured as follows. In Section 2 we describe our Galactic BNS sample, our model for the Galactic potential, and a prescription for producing a synthetic Milky Way image. Section 3 analyses the present-day velocities of observed Galactic BNSs and discusses their implications for birth locations and velocities. Section 4 presents results for BNS merger locations and their measurements as viewed from afar. Section 5 outlines possible systematics in the methodology, before we summarise and conclude in Section 6.
Throughout, magnitudes are reported in the AB system (Oke and Gunn, 1982), and a flat \(\Lambda\)CDM cosmology is adopted with \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\) and \(\Omega_{\rm m}=0.3\).
## 2 Models
### Merger locations of the Galactic BNSs
We collected our sample of Galactic BNSs from the ATNF catalogue (Manchester et al., 2005)1, and their properties are summarized in Table 1 along with the estimated merger times \(\tau_{\rm gw}\) for a gravitational-radiation driven inspiral (Peters, 1964). Out of the 15 confirmed BNSs (we exclude possible NS-WD binaries), 8 have measured proper motions, and 5 have both proper motions and \(\tau_{\rm gw}<14\) Gyr. Since our primary objective is to make predictions about BNSs merging within a Hubble time, only these 5 are employed in our fiducial models. The remaining BNSs are used to test for systematic effects in our methodology.
Footnote 1: [http://www.atnf.csiro.au/research/pulsar/psrcat](http://www.atnf.csiro.au/research/pulsar/psrcat)
To predict the BNSs merger locations we produce \(10^{4}\) realizations of the Galactic trajectory of each binary, starting from as many realizations of their present-day positions and velocities. These initial conditions are generated through a Monte Carlo (MC) simulation, which employs observational uncertainties and allows us to propagate them to the predicted merger locations.
#### 2.1.1 Initial positions and velocities
To compute the BNSs initial positions, we assume that right ascension, declination, and distance follow Gaussian distributions with mean equal to the estimated values and standard deviation equal to the respective uncertainties. These distributions are sampled \(10^{4}\) times for each binary, giving as many realizations of their initial position.
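As a concrete illustration of this sampling step, the sketch below draws \(10^{4}\) position realizations for a single binary and converts them to Galactocentric coordinates with astropy; the coordinate values and uncertainties are placeholders rather than the measured ones from Table 1.

```python
import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord

rng = np.random.default_rng(seed=0)
N = 10_000  # Monte Carlo realizations per binary

# Placeholder measurements (mean, 1-sigma uncertainty); not the catalogue values.
ra_deg,   ra_err   = 114.4635, 1e-6
dec_deg,  dec_err  = -30.6613, 1e-6
dist_kpc, dist_err = 1.1, 0.15   # e.g. a DM distance with ~20 per cent uncertainty

ra   = rng.normal(ra_deg,   ra_err,   N)
dec  = rng.normal(dec_deg,  dec_err,  N)
dist = rng.normal(dist_kpc, dist_err, N)

# Convert each realization to Galactocentric coordinates.
coords = SkyCoord(ra=ra * u.deg, dec=dec * u.deg, distance=dist * u.kpc)
galcen = coords.transform_to("galactocentric")
x = galcen.cartesian.x.to(u.kpc)
y = galcen.cartesian.y.to(u.kpc)
z = galcen.cartesian.z.to(u.kpc)
```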
The distances listed in Table 1 without uncertainties are estimates obtained from the dispersion measure (DM) using the electron-density model of Yao et al. (2017). For J0453+1559, J1411+2551, and J1518+4904, we also report the DM distances obtained with the model of Cordes and Lazio (2002), which will be used in Sec. 5.1 for comparison. On the DM distances we assume a conservative 20 per cent uncertainty, as done by Tauris et al. (2017). For B2127+11C, as it is likely bound to the globular cluster (GC) NGC 7078 (Kirsten et al., 2014), we assume its position to be that of the GC (see Table 2). Another exception is made for the distances of J0737-3039A/B and J1756-2251, for which we use the non-Gaussian probability distributions given by Verbiest et al. (2012)2.
Figure 1: Present-day positions of the Galactic BNSs over a synthetic Milky Way image. Triangles indicate binaries with measured proper motions, while red markers indicate binaries merging within a Hubble time. Dots indicate binaries without measured proper motions. The star-shaped marker indicates B2127+11C, which is associated with the globular cluster NGC 7078. The underlying image is our fiducial model of the Milky Way as it would appear if it were located at \(z=0.5\) and observed with a PSF FWHM of 0.1 arcsec, and a pixel size of 0.05 arcsec px\({}^{-1}\) (see Section 2.2 for details). The centre of the edge-on image reaches 18.3 mag arcsec\({}^{-2}\), and both images are cut at a limiting surface brightness of 25 mag arcsec\({}^{-2}\).
The BNS present-day positions are shown in Galactocentric coordinates in Fig. 1.
Footnote 2: [http://psrpop.phys.wwu.edu/LKbias/](http://psrpop.phys.wwu.edu/LKbias/)
To compute the BNS initial velocities, we apply the same MC approach to the proper motions in right ascension and declination. We assume that the proper motions have Gaussian uncertainties with standard deviation equal to the observational uncertainties, and we produce \(10^{4}\) realizations for each binary. Each realization is then converted to linear units (i.e. [km s\({}^{-1}\)]) using one of the distance realizations, and the transverse component of the Sun's velocity is added to obtain the BNS transverse velocity \(V_{\rm t}\) in the Galactocentric frame. Since we have no observational estimates for the radial velocities \(V_{\rm r}\) except for B2127+11C (for which we use the radial velocity of NGC 7078), we obtain the 3D velocities through an MC simulation of their orientations \(\theta\) with respect to the line-of-sight (LoS).
For our fiducial models, we assume that the BNSs systemic velocities are isotropically oriented in the Galactocentric frame, and we hence compute \(V_{\rm r}\) as
\[V_{\rm r}=V_{\rm t}\cot\theta \tag{1}\]
where \(\theta=\arccos u\) and \(u\) is a real value uniformly sampled \(10^{4}\) times between 0 and 1. Hereafter, we will refer to the systemic velocities obtained with this assumption as Galactocentric-isotropic velocities. If the sample were bigger, we could test the isotropy assumption, e.g. by comparing the mean value of 1D velocities to that of 2D velocities, as done for isolated pulsars by Hobbs et al. (2005). However, the small sample size prevents us from doing so, hence the sole purpose of this assumption is to best reflect our ignorance about the radial velocities.

Table 1: Properties of the Galactic BNSs: right ascension and declination [deg], proper motions \(\mu_{\alpha}\) and \(\mu_{\delta}\) [mas yr\({}^{-1}\)], distance [kpc], merger time \(\tau_{\rm gw}\) [Myr], and literature reference for each radio pulsar.

Table 2: Properties of the GCs associated with a Galactic BNS. The columns list the pulsar, the associated GC, the right ascension and declination, the distance from the Sun, the proper motion in right ascension and declination, and the mean radial velocity. Values in parentheses are the uncertainties in the preceding digits. Data are taken from Baumgardt et al. (2019).
We also provide a second estimate for the radial velocities, obtained assuming that the peculiar velocities in the BNSs Local Standards of Rest (LSRs) are isotropically oriented. Here we define as BNS LSR the frame of reference centered on the BNS location, and moving on a circular orbit around the \(Z\)-axis of the Galactocentric frame. Under this assumption, we get \(V_{\rm r}\) by first subtracting the LSR transverse velocity vector \(\mathbf{V}_{\rm t,\,LSR}\) from the BNS transverse velocity vector \(\mathbf{V}_{\rm t}\), and then computing \(V_{\rm r}\) from the residuals, namely
\[V_{\rm r}=\|\mathbf{V}_{\rm t}-\mathbf{V}_{\rm t,\,LSR}\|\cot\theta+V_{\rm r,\, LSR} \tag{2}\]
where \(V_{\rm r,\,LSR}\) is the BNS LSR radial velocity. Hereafter, we refer to these realizations as LSR-isotropic velocities. This second estimate is motivated by the possibility that Galactic BNSs might receive small systemic kicks from the second supernova (Beniamini & Piran, 2016), which would result in small peculiar velocities (Andrews & Zezas, 2019). The systemic velocities \(V\) and the peculiar velocities \(V^{\rm LSR}=\|\mathbf{V}-\mathbf{V}_{\rm LSR}\|\) obtained under the two assumptions are shown in Fig. 2.
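The following sketch shows how eqs. (1) and (2) can be implemented for the Monte Carlo realizations; the input arrays (`Vt`, `Vt_lsr`, `Vr_lsr`) are illustrative placeholders, not the measured velocities.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000

# Illustrative inputs (km/s), one value per Monte Carlo realization:
Vt     = rng.normal(250.0, 30.0, N)                 # |V_t| of the binary
Vt_vec = np.column_stack([Vt, np.zeros(N)])         # V_t as a 2D on-sky vector
Vt_lsr = np.column_stack([np.full(N, 200.0),        # V_t,LSR as a 2D on-sky vector
                          np.full(N, 30.0)])
Vr_lsr = rng.normal(0.0, 10.0, N)                   # radial velocity of the LSR

# Isotropic orientation: theta = arccos(u) with u ~ U(0, 1).
theta = np.arccos(rng.uniform(0.0, 1.0, N))

# Eq. (1): Galactocentric-isotropic radial velocity.
Vr_galcen_iso = Vt / np.tan(theta)

# Eq. (2): LSR-isotropic radial velocity, from the transverse residuals.
resid = np.linalg.norm(Vt_vec - Vt_lsr, axis=1)
Vr_lsr_iso = resid / np.tan(theta) + Vr_lsr
```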
The initial conditions are computed using the default values for the Sun's Galactocentric position and velocity from astropy v4.0\({}^{3}\) (Astropy Collaboration et al., 2022).
Footnote 3: [http://www.astropy.org](http://www.astropy.org)
#### 2.1.2 Galactic trajectories
The Galactic trajectories defined by each realization of initial position and velocity are computed with galpy4(Bovy, 2015) using the Galactic potential model of McMillan (2017). The trajectories are evolved up to the merger time \(\tau_{\rm gw}\) of the respective binary, and the final positions are assumed to be the merger location. As we start from \(10^{4}\) initial conditions for each BNS, we end up with the same number of merger locations.
Footnote 4: [https://github.com/jobovy/galpy](https://github.com/jobovy/galpy)
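A minimal sketch of this integration step is given below, assuming a galpy version (\(\geq 1.6\)) in which the McMillan (2017) potential is available as `galpy.potential.mwpotentials.McMillan17`; the initial state and merger time are placeholder values for a single realization.

```python
import numpy as np
from astropy import units as u
from galpy.orbit import Orbit
from galpy.potential.mwpotentials import McMillan17

# One Monte Carlo realization of the present-day state (placeholder values):
# [ra (deg), dec (deg), distance (kpc), pm_ra (mas/yr), pm_dec (mas/yr), v_los (km/s)]
init = [114.4635, -30.6613, 1.1, -3.82, 2.13, 50.0]
tau_gw = 0.085 * u.Gyr  # merger time of this binary

orbit = Orbit(init, radec=True)            # build the orbit from observed coordinates
ts = np.linspace(0.0, 1.0, 2001) * tau_gw  # time grid up to the merger time
orbit.integrate(ts, McMillan17)

# Galactocentric merger location (kpc) = final position along the orbit.
x_m, y_m, z_m = orbit.x(ts[-1]), orbit.y(ts[-1]), orbit.z(ts[-1])
```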
### Synthetic image of the Milky Way
To analyze the BNS merger locations in the same way as SGRBs on their host, we need to reproduce how the Milky Way appears from a cosmological distance. To do so, we employ the Milky Way synthetic image of Chrimes et al. (2021), upgrading their 2D face-on model to 3D so that we can include the effects of a random viewing angle.
#### 2.2.1 Model structure: bulge, disc, and spiral arms
In the following, we provide a brief overview of the Galactic components combined to create the synthetic image, while a thorough description can be found in Chrimes et al. (2021).
The bar-bulge is modelled with the triaxial boxy Gaussian distribution fitted to Mira variables by Grady et al. (2020), i.e.
\[\rho_{\rm bar}=\rho_{\rm b,0}\,\exp(-0.5m^{2}) \tag{3}\]
where
\[m=\left\{\left[\left(\frac{x}{X_{\rm b}}\right)^{2}+\left(\frac{y}{Y_{\rm b} }\right)^{2}\right]^{2}+\left(\frac{z}{Z_{\rm b}}\right)^{4}\right\}^{\frac{ 1}{4}} \tag{4}\]
with \((X_{\rm b},Y_{\rm b},Z_{\rm b})=(2.05,0.95,0.73)\) kpc, and \(\rho_{\rm b,0}\) is the normalization factor. The angle between the bar-bulge semi-major axis and the Galactic-centre LoS is assumed to be \(27^{\circ}\)(Wegg & Gerhard, 2013).
The disc is modelled with a double exponential disc
\[\rho_{\rm disc}=\rho_{\rm d,0}\,\exp\left(-\frac{\sqrt{x^{2}+y^{2}}}{R_{\rm d }}\right)\,\exp\left(-\frac{|z|}{Z_{\rm d}}\right) \tag{5}\]
where \(R_{\rm d}=2.6\) kpc and \(Z_{\rm d}=0.3\) kpc. These scale values are typical estimates for the Milky Way thin disc, which is the dominant component of the disc stellar light (Bland-Hawthorn & Gerhard, 2016).
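A minimal sketch of these two analytic components, eqs. (3)-(5), is given below; the normalizations \(\rho_{\rm b,0}\) and \(\rho_{\rm d,0}\) are left at unity here rather than tuned to the quoted luminosities, and the 27\({}^{\circ}\) bar rotation and the spiral-arm component are omitted for brevity.

```python
import numpy as np

# Structural parameters from the text (kpc).
XB, YB, ZB = 2.05, 0.95, 0.73   # bar-bulge scale lengths, eq. (4)
RD, ZD = 2.6, 0.3               # disc scale length and height, eq. (5)

def rho_bar(x, y, z, rho_b0=1.0):
    """Triaxial bar-bulge density, eqs. (3)-(4), up to the normalization rho_b0."""
    m = (((x / XB) ** 2 + (y / YB) ** 2) ** 2 + (z / ZB) ** 4) ** 0.25
    return rho_b0 * np.exp(-0.5 * m ** 2)

def rho_disc(x, y, z, rho_d0=1.0):
    """Double-exponential disc density, eq. (5), up to the normalization rho_d0."""
    R = np.hypot(x, y)
    return rho_d0 * np.exp(-R / RD) * np.exp(-np.abs(z) / ZD)

# Example: evaluate the total (bar + disc) density on a coarse 3D grid.
xy = np.linspace(-15.0, 15.0, 151)                      # kpc
X, Y, Z = np.meshgrid(xy, xy, np.linspace(-3.0, 3.0, 31), indexing="ij")
rho_tot = rho_bar(X, Y, Z) + rho_disc(X, Y, Z)
```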
The spiral arms are mapped following the method of Reid et al. (2019), using young stellar object masers and H ii regions from Urquhart et al. (2014) as tracers. To avoid the shadow produced by dust absorption behind the bar-bulge, we only use the lower half of the map, at Galactocentric coordinates \(X\leq 0\) (see Fig. 1), while the upper half is replaced by a reflected lower half.
The synthetic images are produced in two bands, namely the \(I\)- and the \(B\)-band. The normalization factors \(\rho_{\rm b,0}\) and \(\rho_{\rm d,0}\) are tuned so that the total luminosity of the respective components matches the observational estimates. In the \(I\)-band we match the luminosities given by Flynn et al. (2006), namely \(10^{10}\) L\({}_{\sun}\) for the bar-bulge, and \(3\times 10^{10}\) L\({}_{\sun}\) for the disc including spiral arms. For the \(B\)-band we use instead the \(B-I\) colours of Milky Way analogues from Licquia et al. (2015). We use \(B-I=2.41\) for the bulge-bar and \(B-I=1.62\) for the disc including arms, which give respectively a total luminosity of \(0.11\times 10^{10}\) L\({}_{\sun}\) and \(0.67\times 10^{10}\) L\({}_{\sun}\). The \(I\)-band luminosities are corrected for dust extinction, while the \(B\)-band luminosities are not (for details see Chrimes et al., 2021).
The fraction of disc luminosity arising from the spiral arms alone is assumed to be equal to the arm strength, which is the relative amplitude of the 2nd to 4th order Fourier components of the azimuthal light profile along elliptical isophotes (e.g. Yu et al., 2018). For the \(I\)-band we use an arm strength of 0.15 (e.g. Diaz-Garcia et al., 2019; Yu & Ho, 2020), while for the \(B\)-band we use 0.20 (Yu et al., 2018), both of which are typical values for Milky Way analogues.
#### 2.2.2 Photometry and half-light radius
The method described in the previous Section gives an analytical model of the Milky Way luminosity density in units of e.g. [L\({}_{\sun}\) pc\({}^{-3}\)]. To produce a 2D image, the model is first projected along an arbitrary LoS, then processed to mimic the effects of redshift and instrumental resolution, simulating observations of the galaxy with a given instrument as if it were viewed at an arbitrary redshift.
| | \(V\) (Galcen-iso) | \(V\) (LSR-iso) | \(V^{\rm LSR}\) (Galcen-iso) | \(V^{\rm LSR}\) (LSR-iso) |
|---|---|---|---|---|
| \(\alpha\) | 8.48 | 21.35 | -11.14 | 0.46 |
| \(\beta\) | 276.16 | 226.13 | 192.91 | 57.66 |
| \(\gamma\) | 0.53 | 0.30 | 0.74 | 1.01 |
| \(\mu\) [km s\({}^{-1}\)] | 285 | 245 | 182 | 58 |
| \(1\sigma\) [km s\({}^{-1}\)] | 172-475 | 189-326 | 82-391 | 22-157 |
| \(\mu\) [km s\({}^{-1}\)] | 284 | 247 | 182 | 60 |
| \(1\sigma\) [km s\({}^{-1}\)] | 175-474 | 192-323 | 83-391 | 23-159 |

Table 3: Parameters of the lognormal distributions fitted to the systemic velocities \(V\) and peculiar velocities \(V^{\rm LSR}\). The last four rows report the median values \(\mu\) and the 16th-84th percentiles, using DM distances from either Yao et al. (2017) (two upper rows) or Cordes & Lazio (2002) (two lower rows) for J0453+1559, J1411+2551, and J1518+4904.
We start with a grid of points in Galactocentric coordinates. The grid is rotated by a phase \(\phi\) along the \(Z\)-axis and by an inclination \(i\) along the \(Y\)-axis, in order to be aligned with the chosen LoS. The analytical 3D model is evaluated on the grid and summed along the \(X\)-axis with a Riemann sum, to get a 2D image of the surface brightness \(I\) in units of [L\({}_{\sun}\) pc\({}^{-2}\)]. The number of grid points is chosen such that a double Riemann sum of the image gives a total luminosity differing by less than 1 per cent from the prescribed value.
We then choose the observer redshift \(z\), the point spread function (PSF), and the pixel scale of the image (i.e. the angular resolution). The 2D grid is converted from linear to angular units (e.g. from [pc] to [arcsec]) using the angular diameter distance \(D_{\rm A}(z)\), and the image is first convolved with a Gaussian PSF of given full width at half maximum (FWHM) and then downsampled to the pixel scale. The last step is done using measure.block_reduce from the scikit-image library (van der Walt et al., 2014). For the fiducial models we use \(z=0.5\), a pixel size of 0.05 arcsec px\({}^{-1}\), and a PSF FWHM of 0.1 arcsec. The fiducial redshift is chosen because the median redshift of our SGRB sample is \(z=0.46\), while the other two values are typical of SGRB host observations, as the comparison sample used here is predominantly taken with the _Hubble Space Telescope_ (_HST_, see Table 4 for references). The possible systematics introduced by this choice of parameters are discussed in Section 5.4.
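The sketch below illustrates the PSF-convolution and downsampling step under the fiducial parameters; `image_hires` and its cell size `dx_pc` are placeholders standing in for the projected model image.

```python
import numpy as np
from astropy import units as u
from astropy.cosmology import FlatLambdaCDM
from astropy.convolution import Gaussian2DKernel, convolve
from skimage.measure import block_reduce

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)
z = 0.5
psf_fwhm = 0.1      # arcsec
pixel_scale = 0.05  # arcsec per pixel

# `image_hires` is assumed to be the projected surface-brightness map
# [L_sun / pc^2] on a fine grid with `dx_pc` parsecs per cell (placeholder here).
dx_pc = 50.0
image_hires = np.random.random((600, 600))

# Convert the cell size to arcsec using the proper scale at redshift z.
kpc_per_arcsec = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
dx_arcsec = (dx_pc * u.pc / kpc_per_arcsec).to(u.arcsec).value

# Convolve with a Gaussian PSF (FWHM -> sigma, in high-resolution cells) ...
sigma_pix = psf_fwhm / 2.355 / dx_arcsec
image_psf = convolve(image_hires, Gaussian2DKernel(x_stddev=sigma_pix))

# ... then downsample to the detector pixel scale (mean keeps surface-brightness units).
block = max(1, int(round(pixel_scale / dx_arcsec)))
image_obs = block_reduce(image_psf, block_size=(block, block), func=np.mean)
```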
For our purposes, namely computing half-light radii and fractional fluxes, we need to measure the Galaxy's total flux from the images. Since its value depends on the image depth, we need to simulate the background noise. To do so, we first convert \(I\) from [L\({}_{\sun}\) pc\({}^{-2}\)] to [L\({}_{\sun}\) arcsec\({}^{-2}\)] using the angular distance \(D_{\rm A}\). Then, we convert the surface brightness from [L\({}_{\sun}\) arcsec\({}^{-2}\)] to [mag arcsec\({}^{-2}\)] using
\[\mu=-2.5\log I+5\log\left(\frac{D_{\rm L}}{10\ {\rm pc}}\right)-2.5\log(1+z)+M_{\sun} \tag{6}\]
where \(\mu\) is the surface brightness in magnitude units, \(D_{\rm L}(z)\) is the luminosity distance, and \(M_{\sun}\) is the absolute magnitude of the Sun, i.e. 4.51 in the \(I\)-band and 5.31 in the \(B\)-band (Willmer, 2018). Finally, we choose a limiting magnitude \(\mu_{\rm lim}\) and set to \(I=0\) and \(\mu=\mu_{\rm lim}\) all those pixels with \(\mu>\mu_{\rm lim}\). In our fiducial models we use \(\mu_{\rm lim}=25\) mag arcsec\({}^{-2}\), motivated again by the typical values of the SGRB host observations. The edge-on and face-on images obtained with the fiducial setup are shown in Fig. 1.
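A short sketch of this conversion and depth cut, following eq. (6), is given below; the input image `I_arcsec2` is a placeholder for the projected model in [L\({}_{\sun}\) arcsec\({}^{-2}\)].

```python
import numpy as np
from astropy import units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)
z = 0.5
M_sun_I = 4.51   # absolute solar magnitude in the I-band
mu_lim = 25.0    # limiting surface brightness [mag arcsec^-2]

# Placeholder image in [L_sun arcsec^-2].
I_arcsec2 = np.random.random((200, 200)) * 1e9

D_L = cosmo.luminosity_distance(z).to(u.pc).value

# Eq. (6): surface brightness in [mag arcsec^-2].
with np.errstate(divide="ignore"):
    mu = (-2.5 * np.log10(I_arcsec2)
          + 5.0 * np.log10(D_L / 10.0)
          - 2.5 * np.log10(1.0 + z)
          + M_sun_I)

# Apply the depth cut: pixels fainter than mu_lim carry no flux.
faint = mu > mu_lim
I_arcsec2[faint] = 0.0
mu[faint] = mu_lim
```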
The half-light radius \(r_{\rm e}\) is given as the semi-major axis of the isophote enclosing half of the total flux. We use elliptical isophotes, fitted using isophote.Ellipse.fit_image from the photutils package (Bradley et al., 2023).
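The following sketch shows how such a half-light radius can be measured with photutils elliptical isophotes; the image and the initial isophote geometry are illustrative placeholders.

```python
import numpy as np
from photutils.isophote import Ellipse, EllipseGeometry

# Placeholder image: a smooth elliptical light distribution standing in for
# the noiseless synthetic galaxy image after the depth cut.
yy, xx = np.mgrid[0:200, 0:200]
image_obs = np.exp(-(((xx - 100) / 40.0) ** 2 + ((yy - 100) / 25.0) ** 2))

total_flux = image_obs.sum()

# Initial guess for the elliptical isophotes (centre, semi-major axis, ellipticity, PA).
geometry = EllipseGeometry(x0=100.0, y0=100.0, sma=20.0, eps=0.3, pa=0.0)
isolist = Ellipse(image_obs, geometry=geometry).fit_image()

# Half-light radius: semi-major axis of the isophote enclosing half the total flux.
# tflux_e is the flux summed inside each fitted ellipse.
sma = np.asarray(isolist.sma)
enclosed = np.asarray(isolist.tflux_e)
r_e_pix = np.interp(0.5 * total_flux, enclosed, sma)
```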
## 3 Systemic and peculiar velocities
Before analyzing the merger locations, it is worth taking a closer look at the BNS present-day velocities, as they bear some insight not only about the predicted merger locations but also about the BNS properties themselves. In the following Section we consider 7 of the 8 BNSs with measured proper motions, excluding B2127+11C, which is likely bound to a GC and thus has a proper motion that might be biased by the GC internal dynamics.
Figure 2: Galactocentric systemic velocities \(V\) and peculiar velocities in the BNS LSRs \(V^{\rm LSR}\) of the Galactic BNSs with measured proper motions (except B2127+11C). The upper panels show the total cumulative distributions of \(V\) and \(V^{\rm LSR}\). Dashed lines are distributions from the MC simulations, while solid lines are lognormal distributions fitted to the MC simulations. The middle and bottom panels show respectively the Galactocentric-isotropic and LSR-isotropic velocities for each BNS considered. The boxes extend from the 1st to the 3rd quartiles, with the orange line at the median value.
### Transverse velocity of the BNSs and their LSR
Let us consider the BNS transverse velocities \(\mathbf{V_{\mathrm{t}}}\). When we compare them to the transverse velocities of their respective LSRs \(\mathbf{V_{\mathrm{t,LSR}}}\) (namely the circular velocity at their position), we find that the two have similar magnitudes and directions in all but two cases (see Fig. 3). This suggests that the BNS systemic velocities are (\(i\)) not isotropically oriented in the Galactocentric frame, and (\(ii\)) not much different from the velocity of their LSR, or in other words, that they have small peculiar velocities with respect to their LSR. From Fig. 3, we notice that this is due to the tangential component of the Sun's velocity \(\mathbf{V_{\mathrm{t,\odot}}}\) being aligned to \(\mathbf{V_{\mathrm{t,LSR}}}\), and dominating over the BNS proper motions. We also notice that \(\mathbf{V_{\mathrm{t,\mathrm{LSR}}}}\) are almost completely in the tangential direction, since \(V_{\mathrm{t,\mathrm{LSR}}}\) is \(\sim 200\) km s\({}^{-1}\) in all but two cases.
For the reason discussed in the previous paragraph, we conclude that the BNS systemic velocities inferred under the assumption of Galactocentric-isotropy (eq. 1) might be overestimated. Instead, those inferred under the assumption of LSR-isotropy (eq. 2) likely provide a lower limit, since similar \(\mathbf{V_{\mathrm{t}}}\) and \(\mathbf{V_{\mathrm{t,LSR}}}\) would result in small radial velocities. This becomes clear when comparing the systemic and peculiar velocities \(V\) and \(V^{\mathrm{LSR}}\) from the two assumptions. In the middle and bottom panels of Fig. 2 we see that while we obtain similar \(V\) from the two assumptions, the Galactocentric-isotropic \(V^{\mathrm{LSR}}\) all peak around \(\sim 200\) km s\({}^{-1}\) whereas the LSR-isotropic ones are distributed between \(\sim 10\) and \(\sim 200\) km s\({}^{-1}\). These latter estimates cover the same range of values as the BNS progenitors, namely NS-hosting high-mass X-ray binaries (see Fig. 2 from Fortin et al., 2022), and get as low as the velocity dispersion in the thin disc (Robin et al., 2003).
The cumulative distributions of \(V\) and \(V^{\mathrm{LSR}}\) for all the BNSs with measured proper motions (except the one in the GC, i.e. B2127+11C) are fitted with the lognormal distribution
\[f(x,\alpha,\beta,\gamma)=\frac{1}{(x-\alpha)\,\gamma\sqrt{2\pi}}\exp\left[-\frac{1}{2\gamma^{2}}\log^{2}\left(\frac{x-\alpha}{\beta}\right)\right] \tag{7}\]
using scipy.stats.lognorm.fit(Virtanen et al., 2020). The fits are shown in the upper panels of Fig. 2 and the fitted parameters are reported in Tab. 3. For the whole sample, we find \(V\approx 285^{+190}_{-113}\) km s\({}^{-1}\) and \(V^{\mathrm{LSR}}\approx 182^{+200}_{-100}\) km s\({}^{-1}\) under Galactocentric-isotropy, and \(V\approx 245^{+81}_{-56}\) km s\({}^{-1}\) and \(V^{\mathrm{LSR}}\approx 58^{+99}_{-36}\) km s\({}^{-1}\) under LSR-isotropy. This lower value is also consistent with that inferred by Beniamini & Piran (2016).
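A minimal sketch of this fit is shown below; note that in scipy's parameterization the shape, location, and scale parameters correspond to \(\gamma\), \(\alpha\), and \(\beta\) of eq. (7), and the input samples here are synthetic placeholders rather than the actual Monte Carlo realizations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Placeholder data standing in for the pooled Monte Carlo velocity realizations (km/s).
v_samples = stats.lognorm.rvs(s=0.5, loc=10.0, scale=250.0, size=60_000,
                              random_state=rng)

# Fit eq. (7): scipy returns (shape, loc, scale) = (gamma, alpha, beta).
gamma, alpha, beta = stats.lognorm.fit(v_samples)

fitted = stats.lognorm(gamma, loc=alpha, scale=beta)
median = fitted.median()              # the mu quoted in Table 3
p16, p84 = fitted.ppf([0.16, 0.84])   # the 1-sigma interval quoted in Table 3
```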
Despite the qualitative differences resulting from the two assumptions, we do not expect them to produce quantitative differences in the merger locations. Indeed, both assumptions result in systemic velocities that are close to the circular velocity (\(\sim 230\) km s\({}^{-1}\), Bovy et al., 2012; McMillan, 2017) and well below the escape velocity at the Sun location (\(\sim 500-600\) km s\({}^{-1}\), Piffl et al., 2014; Monari et al., 2018). Therefore, they should lead to similar merger offsets. A quantitative analysis of their effect on the results is provided in Sec. 5.2.
### Implications for birth locations and velocities
As mentioned in the previous Section, almost all the BNSs with proper motions have \(\mathbf{V_{\mathrm{t}}}\) closely aligned to \(\mathbf{V_{\mathrm{t,LSR}}}\), which hints that \(\mathbf{V}\) is likely not isotropic. The deviation is evident when comparing the angles \(\theta\) between \(\mathbf{V_{\mathrm{t}}}\) and \(\mathbf{V_{\mathrm{t,LSR}}}\) to the isotropic distribution, as shown in the left panel of Fig. 4. The \(\mathbf{V_{\mathrm{t}}}-\mathbf{V_{\mathrm{t,LSR}}}\) alignment also implies that the BNSs might have peculiar velocities \(V^{\mathrm{LSR}}\) as low as those of their progenitors. This in turn suggests that our sample might be biased toward systems that received low kicks to the barycenter following the second supernova, and raises the question about how low the kicks should be in order to reproduce the observed \(\theta\).
To this end, we employ a toy model to check the range of suitable kicks. We simulate the trajectories of \(10^{5}\) point masses with circular orbits in the Galactic disc after receiving a kick \(V_{\mathrm{in}}^{\mathrm{LSR}}\). We test 7 different kick magnitudes, namely 50, 100, 200, 300, 400, 500, and 600 km s\({}^{-1}\), and for each of these values we seed the point masses in the plane at \(Z=0\) using eq. 5 as probability distribution. The masses are initialized with a velocity equal to the circular velocity plus a kick \(V_{\mathrm{in}}^{\mathrm{LSR}}\) in a random direction. The trajectories are integrated for 5 Gyr, and we record location and velocity at 300 random times between 0 and 5 Gyr. Of all the locations we record, we select those that fall within 1 kpc from the Sun since the BNS sample is biased to this region, and compute the angle \(\theta_{\mathrm{fin}}\) between \(\mathbf{V_{\mathrm{t}}}\) and \(\mathbf{V_{\mathrm{t,LSR}}}\). The respective starting radial positions \(R_{\mathrm{in}}\) are recorded as well.
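The sketch below reproduces the spirit of this toy model in a simplified, self-contained form: it replaces the full Galactic potential with a flat rotation curve (a logarithmic potential), works in the disc plane only, and measures the angle between the full in-plane velocity and the local circular velocity rather than between on-sky transverse components. It is meant to illustrate the procedure, not to reproduce the quantitative results.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 20_000            # point masses per kick value
V0 = 230.0            # flat circular velocity [km/s]; stand-in for the full potential
RD = 2.6              # disc scale length [kpc]
R_SUN = 8.0           # Sun's Galactocentric radius [kpc]
KPC_PER_GYR = 1.0227  # 1 km/s expressed in kpc/Gyr

def toy_model_angles(v_kick, t_max=5.0, dt=0.002):
    """Kick point masses born on circular orbits, integrate them in a flat-rotation-curve
    potential, and return the angle between velocity and local circular velocity for the
    systems observed within 1 kpc of the Sun (2D, in-plane version)."""
    # Birth radii from the exponential disc, eq. (5): p(R) ~ R exp(-R/RD).
    R0 = rng.gamma(2.0, RD, N)
    phi0 = rng.uniform(0.0, 2.0 * np.pi, N)
    pos = np.column_stack([R0 * np.cos(phi0), R0 * np.sin(phi0)])   # [kpc]
    tang = np.column_stack([-np.sin(phi0), np.cos(phi0)])           # circular direction
    kick_dir = rng.uniform(0.0, 2.0 * np.pi, N)
    vel = V0 * tang + v_kick * np.column_stack([np.cos(kick_dir), np.sin(kick_dir)])

    t_obs = rng.uniform(0.0, t_max, N)   # one random observation time per system
    snap_pos, snap_vel = pos.copy(), vel.copy()
    done = np.zeros(N, dtype=bool)

    t = 0.0
    while t < t_max:
        r2 = (pos ** 2).sum(axis=1) + 1e-2        # small softening near the centre
        acc = -V0 ** 2 * pos / r2[:, None]        # [(km/s)^2 / kpc]
        vel = vel + acc * dt * KPC_PER_GYR        # symplectic Euler step
        pos = pos + vel * dt * KPC_PER_GYR
        t += dt
        newly = (~done) & (t >= t_obs)
        snap_pos[newly], snap_vel[newly] = pos[newly], vel[newly]
        done |= newly

    # Keep only systems observed within 1 kpc of the Sun, placed at (R_SUN, 0).
    near = np.hypot(snap_pos[:, 0] - R_SUN, snap_pos[:, 1]) < 1.0
    p, v = snap_pos[near], snap_vel[near]
    lsr = V0 * np.column_stack([-p[:, 1], p[:, 0]]) / np.hypot(p[:, 0], p[:, 1])[:, None]
    cos_th = (v * lsr).sum(axis=1) / (np.linalg.norm(v, axis=1) * np.linalg.norm(lsr, axis=1))
    return np.degrees(np.arccos(np.clip(cos_th, -1.0, 1.0)))

theta_100 = toy_model_angles(v_kick=100.0)   # e.g. the 100 km/s case
```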
The left panel of Fig. 4 shows a comparison between the measured \(\theta\) distribution and the predicted \(\theta_{\mathrm{fin}}\) distributions for each value of \(V_{\mathrm{in}}^{\mathrm{LSR}}\). We find that lower kicks result in angles skewed toward small values (meaning \(\mathbf{V_{\mathrm{t}}}\) is mostly aligned to \(\mathbf{V_{\mathrm{t,LSR}}}\)), while at higher kicks the angle distribution steepens at both low and high values (meaning \(\mathbf{V_{\mathrm{t}}}\) is either aligned or anti-aligned to \(\mathbf{V_{\mathrm{t,LSR}}}\)).
Figure 3: Transverse velocities of the BNSs in Fig. 2. The axes are arbitrarily oriented so that the LSR transverse velocity vectors \(\mathbf{V_{\mathrm{t,LSR}}}\) lie along the \(x\)-axis. Grey triangles indicate the LSR transverse velocities \(\mathbf{V_{\mathrm{t,LSR}}}\), green arrows indicate the BNS transverse velocities \(\mathbf{V_{\mathrm{t}}}\), and light-green arrows indicate the transverse component of the Sun's velocity \(\mathbf{V_{\mathrm{t,\odot}}}\). The dashed green lines enclose the \(1\sigma\) regions of \(\mathbf{V_{\mathrm{t}}}\), while the grey shaded areas are the \(1\sigma\) regions of \(\mathbf{V_{\mathrm{t,LSR}}}\), both obtained from a Gaussian kernel density estimation of our MC simulations. These \(1\sigma\) regions represent the uncertainties from the on-sky locations and proper motions, and the distance estimates. The dotted black lines of J0453+1559, J1411+2551, and J1518+4904 enclose the \(\mathbf{V_{\mathrm{t}}}\) \(1\sigma\) regions when the DM distance is estimated from Cordes & Lazio (2002) instead of Yao et al. (2017).
The measured \(\theta\) distribution is well reproduced with kick magnitudes up to \(100-200\) km s\({}^{-1}\), which are compatible with the values inferred from the observed SGRB offsets (\(\sim 20-140\) km s\({}^{-1}\), Fong & Berger 2013). We find that strongly-kicked systems probe the inner regions of the Galaxy, while weakly-kicked systems probe only a region close to the Sun, as shown in the middle panel of Fig. 4.
Since each realisation with a different kick in the disc starts with the same number of binaries, the right panel of Fig. 4 shows that the number of binaries within 1 kpc of us in the strong-kick scenarios is \(<40\) per cent of that in the weak-kick scenarios. Therefore a modest population of strongly-kicked binaries would not be recovered in the BNS samples that we have to date. Taking into account that population synthesis predicts an anti-correlation between kicks and merger times (e.g. Fig. C1 in Vigna-Gomez et al. 2018), hence that strongly-kicked BNSs should be even rarer in the Milky Way given its age and low star formation rate, the \(\mathbf{V_{\rm t}}-\mathbf{V_{\rm t,LSR}}\) alignment suggests that our sample probes only the BNS population born with small \(V_{\rm in}^{\rm LSR}\) at around the same Galactocentric radius as the Sun. In other words, our sample is neither representative of the BNSs born in the central regions nor of the BNSs residing in the outer regions of the Milky Way.
## 4 Merger locations
In the following Sections we analyze the merger locations of the 5 Galactic BNSs with measured proper motions and \(\tau_{\rm BW}<14\) Gyr, over the fiducial Milky Way image in the \(I\)-band. We refer to this setup as the fiducial model. The analysis employs three observables: the projected offsets \(r_{\rm h}\), the normalized offset \(r_{\rm n}\) and the fraction of light \(f_{\rm light}\). The first observable is the on-sky projection of the merger offset from the Galactic centre, the second is the projected offset expressed in units of \(r_{\rm e}\), while the third is the fraction of total light contained in pixels dimmer than the one at the transient location. The last two observables are commonly used in the analysis of transients locations on their hosts (e.g. Bloom et al. 2002; Fruchter et al. 2006), including SGRBs (e.g. Berger 2014).
We choose the viewing angles with which to project the 3D galaxy model onto a 2D image, for each merger, from an isotropic distribution. The possible values for \(i\) and \(\phi\) are distributed over a discrete grid with 10 values for \(i\) and 20 for \(\phi\). In particular, we use \(i=\arccos(u)\) where the \(u\) are 10 evenly-spaced values in \([0,1]\), and \(\phi\) are 20 evenly-spaced values in \([0,\pi]\).
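The viewing-angle grid and the three observables can be computed as in the sketch below; `observables` is a hypothetical helper that assumes the image and the merger location have already been projected with the same viewing angles.

```python
import numpy as np

# Viewing-angle grid: inclinations i = arccos(u) with 10 evenly spaced u in [0, 1],
# and 20 evenly spaced position angles phi in [0, pi].
u_grid = np.linspace(0.0, 1.0, 10)
incl_grid = np.arccos(u_grid)
phi_grid = np.linspace(0.0, np.pi, 20)

def observables(image, centre_xy, merger_xy, r_e_pix, kpc_per_pix):
    """Projected offset r_h [kpc], normalized offset r_n [r_e], and f_light for one
    merger location on one projected image (positions and r_e given in pixels)."""
    dx = merger_xy[0] - centre_xy[0]
    dy = merger_xy[1] - centre_xy[1]
    r_h = np.hypot(dx, dy) * kpc_per_pix
    r_n = np.hypot(dx, dy) / r_e_pix
    # Fraction of the total light in pixels fainter than the pixel at the merger site.
    pix_val = image[int(round(merger_xy[1])), int(round(merger_xy[0]))]
    f_light = image[image < pix_val].sum() / image.sum()
    return r_h, r_n, f_light
```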
The predictions are then compared to a sample of observed SGRBs, listed in Table 4. The sample is a subset of the SGRBs analyzed by Fong et al. (2022), selected for having measured \(r_{\rm n}\). The \(f_{\rm light}\) values are collected from various works in the literature (see Table 4 for the references).
### Projected offsets
We produce \(10^{4}\) realizations of the \(r_{\rm h}\) distribution by picking 1 out of the \(10^{4}\) merger locations for each of the 5 Galactic BNSs. The cumulative \(r_{\rm h}\) distributions are shown in the left panels of Fig. 5, together with those of the SGRBs. The latter are produced with a MC simulation, assuming the SGRB offsets in Tab. 4 have Gaussian uncertainties with standard deviation equal to the observational uncertainties. We produce \(10^{4}\) realizations of such distributions, so that we can compare them one-to-one to those of the Galactic BNSs.
The Kolmogorov-Smirnov (KS) test cannot reject the null hypothesis at a significance level below 5 per cent in virtually all realizations. In particular, the null hypothesis is rejected in \(<0.3\) per cent of cases when comparing the BNS mergers to the whole SGRB sample, in \(<0.5\) per cent of cases when comparing to SGRBs with late-type hosts (types Q and T in Tab. 4), and in \(<0.1\) per cent when comparing to SGRBs with early-type hosts (type SF in Tab. 4). Note that the BNS merger realizations are not independent, thus one should not expect to reject the null hypothesis at a 5 per cent significance level in \(\sim 5\) per cent of cases, as would otherwise follow by definition.
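The one-to-one comparison can be implemented as in the sketch below; the realization arrays are synthetic placeholders with roughly the sample sizes used here.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)
n_real = 10_000

# Placeholder realizations: each row is one realization of the 5 BNS merger
# offsets (kpc) and of the SGRB offsets resampled within their uncertainties.
bns_rh  = rng.lognormal(np.log(6.0), 0.6, size=(n_real, 5))
sgrb_rh = rng.lognormal(np.log(8.0), 0.9, size=(n_real, 32))

# Compare the realizations one-to-one and record the fraction of rejections at 5%.
pvals = np.array([ks_2samp(b, s).pvalue for b, s in zip(bns_rh, sgrb_rh)])
rejection_fraction = np.mean(pvals < 0.05)
```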
Figure 4: **Left.** Distribution of angles \(\theta_{\rm in}\) between \(\mathbf{V_{\rm t}}\) and \(\mathbf{V_{\rm t,LSR}}\) for different kicks \(V_{\rm in}^{\rm LSR}\). Green lines labelled “BNS sample” are realizations of the observed \(\theta\) from Galactic BNSs, while all the other solid coloured lines are predictions for binaries in an exponential disc after an isotropic kick of given magnitude, as explained in Sec. 3.2. The color code for the kick magnitude \(V_{\rm in}^{\rm LSR}\) is shown in the right plot. The grey dotted line is the distribution predicted for point masses with isotropic velocities, located at random positions within 1 kpc from the Sun. **Centre.** Initial radial positions \(R_{\rm in}\) of the point masses that end up within 1 kpc of the Sun for different kicks \(V_{\rm in}^{\rm LSR}\). As we can see, high \(V_{\rm in}^{\rm LSR}\) allow us to probe the inner Galactic regions, whereas low \(V_{\rm in}^{\rm LSR}\) bias the sample to systems with \(R_{\rm in}\approx R_{\rm\odot}\). **Right.** Counts of the point masses that end up within 1 kpc of the Sun for different kicks \(V_{\rm in}^{\rm LSR}\), normalized to the highest value. As \(V_{\rm in}^{\rm LSR}\) increases, the systems found around the Sun become less numerous.
The median projected offset of the BNS mergers is \(\langle r_{\rm h}\rangle\approx 6\) kpc (see Fig. 5). This value is similar to the BNS present-day offsets, which due to observational bias are around the Galactocentric radius of the Sun, i.e. \(R_{\sun}\approx 8\) kpc (which is on average 20 per cent smaller upon projection)5, and it is a result of the systemic velocities being close to the circular velocity, and well below the escape velocity at the Sun location. By chance, these values are similar to the median projected offset of the SGRBs, which is \(\langle r_{\rm h}\rangle\approx 8\) kpc (see Fig. 5). When taken together with the small radial displacement in the BNS trajectories, this suggests that the similarity between the \(r_{\rm h}\) distributions of SGRBs and BNS mergers might be simply a coincidence, resulting from the BNS sample being biased toward systems located close to the Sun. Or in other words, if the Sun were located at a significantly larger (or smaller) Galactocentric radius, then we might find the same BNS mergers peaking at a different \(r_{\rm h}\). Nevertheless, it is noteworthy that our models predict merger offsets that are comparable to the upper tail of the SGRB distribution despite this bias.
Footnote 5: Consider a point on a sphere with Cartesian coordinates \((x,y,z)=(\rho\cos\phi\sin i,\rho\sin\phi\sin i,\rho\cos i)\). The projected radius on the \(xy\)-plane is \(R=\rho\sin i\). The expected value of \(R\) for an isotropic random orientation is \(\langle R\rangle=\rho\,\frac{\int_{0}^{2\pi}{\rm d}\phi\int_{0}^{\pi/2}\sin^{2}i\,{\rm d}i}{\int_{0}^{2\pi}{\rm d}\phi\int_{0}^{\pi/2}\sin i\,{\rm d}i}=\frac{\pi}{4}\rho\approx 0.8\rho\).
### Normalized offsets
The cumulative distributions of \(r_{\rm n}\) for the BNS mergers are shown in the middle panels of Fig. 5, together with those for the SGRBs. Similarly to the \(r_{\rm h}\), the KS test cannot reject the null hypothesis at a significance level below 5 per cent in virtually all realizations. In particular, the null hypothesis is rejected in \(<0.5\) per cent of cases when comparing the BNS mergers to the whole SGRB sample, in \(<0.4\) per cent of cases when comparing to SGRBs with late-type hosts, and in \(<0.1\) per cent when comparing to SGRBs with early-type hosts. Overall, we find that the projected offsets of the various samples agree better when normalized to \(r_{\rm e}\), as already noted by Nugent et al. (2022).
The inclusion of the Milky Way's morphological properties in our analysis, e.g. \(r_{\rm e}\), raises the question about how representative our Galaxy is with respect to the SGRB host population. Although a thorough comparison is beyond the scope of this work, a brief discussion is already insightful. The Milky Way is a very bright yet relatively red spiral galaxy, which likely belongs to the green valley in the colour-magnitude diagram (Mutch et al., 2011; Licquia et al., 2015; Boardman et al., 2020). Whereas its total stellar mass (\(M_{\star}\approx 6\times 10^{10}\) M\({}_{\sun}\), Bland-Hawthorn & Gerhard, 2016) is around the median value for transitioning/quiescent (T/Q) SGRB hosts and \(1\sigma\) higher than star-forming (SF) hosts, its star formation rate (\(\dot{M}_{\star}\approx 1.65\) M\({}_{\sun}\) yr\({}^{-1}\)
Figure 5: Projected offsets \(r_{\rm h}\) (left), normalized offsets \(r_{\rm h}\) (centre), and fraction of light \(f_{\rm light}\) (right) for BNS mergers and SGRBs. Solid thick lines represent the median distributions of SGRBs, either for the whole sample (upper panels) or divided by host type (lower panels). Dashed thick lines represent the median distributions of BNSs mergers from our fiducial model. Semi-transparent lines are the single realizations of the corresponding distributions. The insets show the distribution of the \(p\)-values from the KS test when comparing all the realizations one-to-one, with the shaded area indicating the 5% region.
Bland-Hawthorn & Gerhard, 2016) is \(2\sigma\) higher than any T/Q host and below \(\sim 70\) per cent of the SF hosts (see Fig. 5 from Nugent et al., 2022). Thus, we cannot conclusively compare the Milky Way with either Q/T or SF hosts alone, but we can conclude that our Galaxy is at least more massive than half of the Q/T hosts, and in the upper quartile of the whole host population. Despite this, it has been suggested that the scale length \(R_{\rm d}\) of the Milky Way disc is anomalously small (Malhotra et al., 1996; Hammer et al., 2007; Licquia et al., 2016; Boardman et al., 2020). Based on the luminosity-velocity-radius (LVR) scaling relation, Licquia et al. (2016) find that the Milky Way \(R_{\rm d}\) is half of the typical value of similar galaxies, which is \(R_{\rm d}\approx 5\) kpc, and that our Galaxy lies farther from the LVR relation than \(\sim 90\) per cent of other spiral galaxies, in agreement with Hammer et al. (2007).
These remarks suggest that even if the Milky Way were representative of the most massive SGRB hosts, the size of its stellar disc could play a role in shifting the \(r_{\rm n}\) distribution of BNS mergers to lower values, further supporting our claim that the agreement we find between BNS mergers and SGRBs is a cosmic coincidence. Nevertheless, we do not investigate the impact of a different \(R_{\rm d}\) since the stochastic spread in the predictions would still be dominant in the KS test.
### Fraction of light
The cumulative distributions of \(f_{\rm light}\) for the BNS mergers are shown in the right-hand panels of Fig. 5, together with those for the SGRBs. Similarly to the previous cases, the KS test cannot reject the null hypothesis at a significance level below 5 per cent in virtually all realizations. In particular, the null hypothesis is rejected in \(<0.01\) per cent of cases when comparing the BNS mergers to the whole SGRB sample, to SGRBs with late-type hosts, or to SGRBs with early-type hosts. We note that all the \(f_{\rm light}\) distributions are skewed to the left, meaning that both BNS mergers and SGRBs do not trace the stellar light, and that they are more likely found in the dimmer pixels. The remarks we made in the previous Section about the Milky Way \(R_{\rm d}\) also have implications for the \(f_{\rm light}\) distributions of BNS mergers, as a more compact disc would skew the distributions even more to the left. However, we do not test a different \(R_{\rm d}\) as the significant stochastic spread would still dominate in the KS test, as mentioned earlier.
## 5 Systematics
### DM distances
To estimate the distances of J0453+1559, J1411+2551, and J1518+4904, we use their DM together with the model for the free-electron density in the Milky Way from Yao et al. (2017). Different electron-density models, however, can lead to different distance estimates; therefore, we want to test the impact of our specific choice of model. To do this, we compare the results obtained with the model of Yao et al. (2017) to those obtained with the widely-used model of Cordes & Lazio (2002).
First of all, we note that all three BNSs with DM distances have merger times greater than the Hubble time, therefore we do not use them for our fiducial models of \(r_{\rm h}\), \(r_{\rm n}\), and \(f_{\rm light}\). For this reason, the choice of a specific electron-density model does not impact our results on the BNS merger locations. Regarding the BNS velocities instead, we note that the two different models give similar results. The lognormal distributions fitted to both \(V\) and \(V^{\rm LSR}\) have the same median value and \(1\sigma\) interval regardless of the electron-density model, as reported in Tab. 3. The angles \(\theta\) between \({\bf V_{\rm t}}\) and \({\bf V_{\rm t,LSR}}\) are also not affected by a different electron-density model, as shown in Fig. 3. We do not show a comparison between the \(\theta\) distributions from the two electron-density models as they overlap and would not be distinguishable. Therefore, we conclude that our results involving the BNS systemic velocities are also not affected by the choice of electron-density model.
### Isotropy assumptions
The toy model discussed in Sec. 3.2 has implications not only for how representative the BNS sample is of the whole Galactic population, but also for our estimates of the radial velocities \(V_{\rm r}\). The \({\bf V_{\rm t}}-{\bf V_{\rm t,LSR}}\) alignment disfavours the assumption of isotropy under which we obtain \(V_{\rm r}\) for our fiducial model, and also favours kicks with magnitudes less than or equal to the circular velocity, which means that the systemic velocities might still encode the Galactic rotation (although the velocity is not conserved during the orbit).
Table 4: The SGRB comparison sample. The columns list the GRB, the host type (Q: quiescent, T: transitioning, SF: star-forming), the redshift \(z\), the projected offset \(r_{\rm h}\) [kpc], the host-normalized offset \(r_{\rm n}\) [\(r_{\rm e}\)], and the fraction of light \(f_{\rm light}\).

| GRB | Type | \(z\) | \(r_{\rm h}\) [kpc] | \(r_{\rm n}\) [\(r_{\rm e}\)] | \(f_{\rm light}\) |
|---|---|---|---|---|---|
| 050509B | Q | 0.23 | 55.19 ± 1.243 | 2.59 ± 0.58 | - |
| 050709 | SF | 0.16 | 3.76 ± 0.056 | 2.00 ± 0.03 | 0.09 ^a |
| 050724 | Q | 0.25 | 2.74 ± 0.08 | 0.67 ± 0.02 | 0.33 ^a |
| 051210 | SF | 2.58 | 29.08 ± 16.34 | 5.65 ± 3.17 | - |
| 051221A | SF | 0.55 | 2.08 ± 0.19 | 0.89 ± 0.083 | 0.65 ^a |
| 060121 | - | - | 0.97 ± 0.37 | 0.18 ± 0.069 | 0.41 ^a |
| 060313 | - | - | 2.60 ± 0.55 | 1.39 ± 0.3 | 0.00 ^a |
| 060614 | SF | 0.13 | 0.70 ± 0.79 | 0.86 ± 0.97 | - |
| 061006 | SF | 0.46 | 1.39 ± 0.29 | 0.37 ± 0.077 | 0.63 ^a |
| 070429B | SF | 0.90 | 6.00 ± 13.44 | 1.17 ± 2.62 | - |
| 070707 | - | - | 3.25 ± 0.24 | 1.11 ± 0.083 | 0.00 ^a |
| 070714B | SF | 0.92 | 12.33 ± 0.87 | 5.17 ± 0.37 | 0.00 ^a |
| 070724 | SF | 0.46 | 5.52 ± 0.18 | 1.49 ± 0.48 | 0.23 ^a |
| 070809 | T | 0.47 | 34.11 ± 2.75 | 9.34 ± 0.75 | - |
| 071227 | SF | 0.38 | 14.74 ± 0.26 | 3.08 ± 0.055 | 0.00 ^a |
| 080503 | - | - | 7.31 ± 0.24 | 3.46 ± 0.12 | - |
| 090305 | - | - | 3.49 ± 0.24 | 1.19 ± 0.083 | 0.30 ^a |
| 090510 | SF | 0.90 | 10.51 ± 2.92 | 1.66 ± 0.46 | 0.00 ^a |
| 090515 | Q | 0.40 | 76.19 ± 0.16 | 1.39 ± 0.03 | 0.00 ^a |
| 091109 | - | - | 4.22 ± 0.41 | 1.93 ± 0.19 | - |
| 100117A | SF | 0.91 | 1.35 ± 0.32 | 0.61 ± 0.14 | 0.54 ^a |
| 130603B | SF | 0.36 | 5.40 ± 0.20 | 0.71 ± 0.27 | 0.35 ^a |
| 130912A | - | - | 3.90 ± 10.65 | 1.41 ± 0.38 | - |
| 131004A | - | 0.72 | 0.80 ± 0.22 | 0.25 ± 0.068 | - |
| 150101B | Q | 0.13 | 7.36 ± 0.072 | 0.78 ± 0.007 | 0.21 ^b |
| 150424A | - | - | 3.41 ± 0.32 | 1.50 ± 0.14 | - |
| 160303A | SF | 1.01 | 15.31 ± 0.90 | 3.42 ± 0.20 | - |
| 160624A | SF | 0.48 | 9.63 ± 6.24 | 2.37 ± 1.54 | - |
| 160821B | SF | 0.16 | 15.74 ± 0.03 | 4.24 ± 0.008 | - |
| 170817A | Q | 0.01 | 2.125 ± 0.001 | 0.64 ± 0.03 | 0.54 ^c |
| 200522A | SF | 0.55 | 0.93 ± 0.19 | 0.24 ± 0.048 | 0.95 ^d |
| 211106A | - | - | 0.79 ± 0.29 | 0.49 ± 0.18 | |
To check the impact of the isotropy assumption, we perform the analysis in Sec. 4.1-4.3 on models employing LSR-isotropic velocities instead of the Galactocentric-isotropic ones, while keeping all the other parameters unchanged. In this variation we compute \(V_{\rm r}\) by de-projecting the residuals \(\|{\bf V}_{\rm t}-{\bf V}_{\rm t,LSR}\|\) on a random angle, thus simulating the extreme case in which the systemic velocities are the circular velocities at the BNS locations plus some peculiar velocity. When comparing \(r_{\rm h}\), \(r_{\rm n}\), and \(f_{\rm light}\) of BNS mergers to those of SGRBs, the KS test gives the same results as for the fiducial model, namely it cannot reject the null hypothesis below a 5 per cent significance level in virtually all the cases and for all three observables, even though the LSR-isotropic velocities decrease the higher values of \(r_{\rm h}\) and increase those of \(f_{\rm light}\). This result reflects the fact that even if Galactocentric- and LSR-isotropic velocities cover different ranges, they are still both close to the circular velocity and well below the escape velocity at the Sun's location, as discussed in Sec. 3.1. The distributions of observables predicted by the two assumptions are compared in Fig. 7, where we see that the two deviate only for \(r_{\rm h}\) and \(r_{\rm n}\) at high values.
### Initial conditions
Of the 8 confirmed BNSs with measured proper motion, only the 5 with \(\tau_{\rm gw}<14\) Gyr have been used to predict the merger locations. We now employ the remaining 3 to understand how our results depend on the specific initial positions and velocities. To predict the merger locations of these 3, though, we cannot use their true \(\tau_{\rm gw}\), since these are at least an order of magnitude greater than the Hubble time (see Table 1) and might lead to unphysical offsets, e.g. in the case of unbound trajectories. Instead, we use one of the 5 \(\tau_{\rm gw}\) values that are below 14 Gyr, motivated by the fact that the \(\tau_{\rm gw}\) show no correlation with either the BNS positions or their peculiar velocities, as shown in Fig. 6.
To do this, we repeat the analysis from Sec. 4.1-4.3, but this time we swap the initial conditions of each realization with the present-day position and velocity of a BNS randomly drawn among the 8 with measured proper motions, while keeping the \(\tau_{\rm gw}\) unchanged. The KS test results remain the same as for the fiducial model, namely the test cannot reject the null hypothesis below a 5 per cent significance level in virtually all the cases, for all three observables and for both Galactocentric- and LSR-isotropic velocities. As shown in Fig. 7, this variation affects only the \(f_{\rm light}\) distribution, skewing it more toward small values.
### Band and resolution of the synthetic image
Lastly, we test our choice of parameters for the fiducial Milky Way image. Since the vast majority of SGRB observables we collect are obtained from _HST_ observations, we only test the effects of different redshifts together with different bands, without investigating the impact of PSF and pixel size.
In the fiducial model, we modelled the Milky Way image using \(z=0.5\) and the surface brightness in the \(I\)-band \(\mu_{I}\). The choice of redshift is motivated by the median redshift of the SGRB hosts, while the choice of band is motivated by the SGRB hosts being observed mostly in red bands (e.g. _HST_ F814W). This combination, however, is unphysical, since the observer-frame \(I\)-band corresponds to the rest-frame \(V\)-band for a galaxy at \(z=0.5\). To test how it might impact our results, we repeat the analysis from Sec. 4.1-4.3 with two different synthetic images, one with \(z=0.2\) and \(\mu_{I}\) and the other with \(z=0.7\) and \(\mu_{B}\). These two redshifts mark the 16th and 84th percentiles of the SGRB redshifts. For the higher redshift we use the \(B\)-band to mimic the bandshift, since light emitted in the \(B\)-band at \(z=0.7\) would be observed approximately in the \(I\)-band. For the lower redshift we still use the \(I\)-band, which is a better approximation than the fiducial model since the observer-frame \(I\)-band corresponds to the rest-frame \(R\)-band at \(z=0.2\).
When comparing \(r_{\rm h}\), \(r_{\rm n}\), and \(f_{\rm light}\) of BNS mergers to those of SGRBs, the KS test gives the same results as for the fiducial model, namely it cannot reject the null hypothesis below a 5 per cent significance level in virtually all the cases and for all three observables. This holds for both images, and for both Galactocentric- and LSR-isotropic velocities. Fig. 7 shows that these variations do not produce significant differences in the predictions.
## 6 Summary and conclusions
In this work we predicted the merger locations of the Galactic BNSs, and compared them to the locations of SGRBs on their hosts. We compared in particular the projected Galactocentric offsets \(r_{\rm h}\), the host-normalized offsets \(r_{\rm n}\), and the fraction of light \(f_{\rm light}\).
Our fiducial model employs only 5 out of 15 confirmed BNSs, chosen for having measured proper motions and merger times \(\tau_{\rm gw}\) below the Hubble time. Their present-day Galactocentric positions and velocities are computed through a MC simulation that employs the on-sky positions and proper motions, distances estimated from the BNS dispersion measures, and radial velocities obtained by de-projecting the transverse velocity onto an isotropic orientation. The BNS trajectories are evolved in the Galactic potential starting from the present-day conditions up to \(\tau_{\rm gw}\). The merger locations are then analyzed on a synthetic image of the Milky Way, as if they were observed from a cosmological distance. The Galaxy model is composed
Figure 6: Merger times \(\tau_{\rm gw}\) against Galactocentric radii \(R\), heights \(Z\), and peculiar velocities \(V^{\rm LSR}\) for the BNSs in Fig. 2. The error bars extend from the 16th to the 84th percentiles, and are smaller than the marker when not visible.
of a bulge/bar plus a double-exponential disc, and the image is made for isotropic viewing angles in the \(I\)-band, assuming \(z=0.5\), PSF FWHM of 0.1 arcsec, pixel size of 0.05 arcsec px\({}^{-1}\), and limiting surface brightness of 25 mag arcsec\({}^{-2}\).
When converting the present-day BNS proper motions into Galactocentric transverse velocities \(\mathbf{V_{\mathrm{t}}}\), we find that \(\mathbf{V_{\mathrm{t}}}\) are similar in magnitude and direction to the transverse velocity of each BNS Local Standard of Rest \(\mathbf{V_{\mathrm{t,LSR}}}\) in all but two cases. The similar directions suggest that BNSs have systemic velocities \(\mathbf{V}\) which are not isotropically oriented in the Galactocentric frame, but that are rather aligned to the LSR velocity. The similar magnitudes suggest instead that BNSs have small peculiar velocities \(\mathbf{V}^{\mathrm{LSR}}\) with respect to the LSR velocity. Using \(\mathbf{V_{\mathrm{t}}}\), we compute two different estimates for \(\mathbf{V}\), one assuming \(\mathbf{V}\) is isotropic, and the other assuming \(\mathbf{V}^{\mathrm{LSR}}\) is isotropic. We show that both systemic and peculiar velocities predicted for the observed Galactic BNSs fit a lognormal distribution.
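A sketch of the lognormal fit mentioned above, using `scipy.stats.lognorm` with the location fixed at zero; the speeds are synthetic placeholders rather than the Monte Carlo output.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(2)

# Placeholder systemic speeds (km/s) standing in for the Monte Carlo realizations.
v_sys = rng.lognormal(mean=np.log(200.0), sigma=0.3, size=5000)

shape, loc, scale = lognorm.fit(v_sys, floc=0)
mu, sigma = np.log(scale), shape   # parameters of the underlying normal distribution
```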
Upon comparison with SGRBs, we find that our predicted BNS merger locations cover the same ranges of \(r_{\mathrm{h}}\), \(r_{\mathrm{n}}\), and \(f_{\mathrm{light}}\) as the SGRBs. We compare all the observables predicted for BNS mergers to those of the SGRBs with a KS test, which shows statistically non-significant differences in all cases. This could be attributed to the large spread in our predictions rather than the distributions being intrinsically similar.
We test our results against a range of systematics that might be induced by our methodology. We find that the results from the fiducial model are robust against biases induced by the specific initial conditions of the trajectories, our estimates of the radial velocities, and our choice of parameters for the Milky Way synthetic image. However, we find evidence that our BNS sample is not representative of the whole Galactic population, being biased toward systems that lie at around the same Galactocentric distance as the Sun and have small peculiar velocities in their LSR. Thus, our sample is likely representative neither of the BNSs born in the inner regions of the Galaxy, nor of those dwelling in the outer regions.
Although the connection between BNS mergers and SGRBs is supported by almost two decades of literature, we find that the agreement between the two shown by our analysis is non-trivial. The small peculiar velocities of the BNSs in our sample result in small radial displacements between the start and the end of their trajectories. That is to say, being all located at \(R\approx 8\) kpc, they also merge at \(R\approx 8\) kpc. Coincidentally, this is also the median offset of SGRBs. Furthermore, whereas the Milky Way is likely representative of the most massive SGRB hosts, its stellar disc might be more compact than those of similar spiral galaxies, which could result in lower \(r_{\mathrm{n}}\) and \(f_{\mathrm{light}}\). For this reason, we claim that the agreement we find between Galactic BNS mergers and SGRBs is likely a cosmic coincidence. Regardless, our results are noteworthy in that we are still able to reproduce the highest values of \(r_{\mathrm{h}}\) and \(r_{\mathrm{n}}\), and the lowest values of \(f_{\mathrm{light}}\), for BNSs that start, travel, and merge close to the stellar disc. Our results also suggest the need for further investigation into how representative the observed Galactic BNSs are of the whole Galactic population. A follow-up should analyze the observational biases characterizing the observed BNS sample, and could reveal new implications for the physical processes governing their systemic velocities.
## Acknowledgements
We thank the anonymous referee for the constructive comments. NG acknowledges studentship support from the Dutch Research Council (NWO) under the project number 680.92.18.02. AJL was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 725246). This work made use of Astropy6: a community-developed core Python package and an ecosystem of tools and resources for astronomy (Astropy Collaboration et al., 2013, 2018, 2022).
Footnote 6: [http://www.astropy.org](http://www.astropy.org)
Figure 7: Distributions of observables predicted for the Galactic BNS mergers under several assumptions. Besides the predictions from the fiducial model, the other variations are meant to test different systematic effects that might arise from our methodology. Galactocentric- and LSR-isotropic distributions test the assumption of isotropy used to estimate the BNS radial velocities. Distributions from the permuted sample are obtained by permuting the initial conditions among the BNSs that have measured proper motions but no constraint on the merger time, and are meant to test the dependency on the initial conditions within our sample. Distributions for the MW at \(z=0.2\) and \(z=0.7\) are obtained by changing band and angular resolution for the Milky Way image, to test the dependency on the image parameters.
## Data Availability
The data underlying this article are available at [https://gitlab.com/NicolaGsp/galbns_vs_sgrb](https://gitlab.com/NicolaGsp/galbns_vs_sgrb).
|
2310.18149 | Game of arrivals at a two queue network with heterogeneous customer
routes | We consider a queuing network that opens at a specified time, where customers
are non-atomic and belong to different classes. Each class has its own route,
and as is typical in the literature, the costs are a linear function of waiting
and service completion time. We restrict ourselves to a two class, two queue
network: this simplification is well motivated as the diversity in solution
structure as a function of problem parameters is substantial even in this
simple setting (e.g., a specific routing structure involves eight different
regimes), suggesting a combinatorial blow up as the number of queues, routes
and customer classes increase. We identify the unique Nash equilibrium customer
arrival profile when the customer linear cost preferences are different. This
profile is a function of problem parameters including the size of each class,
service rates at each queue, and customer cost preferences. When customer cost
preferences match, under certain parametric settings, the equilibrium arrival
profiles may not be unique and may lie in a convex set. We further make a
surprising observation that in some parametric settings, customers in one class
may arrive in disjoint intervals. Further, the two classes may arrive in
contiguous intervals or in overlapping intervals, and at varying rates within
an interval, depending upon the problem parameters. | Agniv Bandyopadhyay, Sandeep Juneja | 2023-10-27T13:55:14Z | http://arxiv.org/abs/2310.18149v1 | # Game of arrivals at a two queue network with heterogeneous customer routes
###### Abstract
We consider a queuing network that opens at a specified time, where customers are non-atomic and belong to different classes. Each class has its own route, and as is typical in the literature, the costs are a linear function of waiting and service completion time. We restrict ourselves to a two class, two queue network: this simplification is well motivated as the diversity in solution structure as a function of problem parameters is substantial even in this simple setting (e.g., a specific routing structure involves eight different regimes), suggesting a combinatorial blow up as the number of queues, routes and customer classes increase. We identify the unique Nash equilibrium customer arrival profile when the customer linear cost preferences are different. This profile is a function of problem parameters including the size of each class, service rates at each queue, and customer cost preferences. When customer cost preferences match, under certain parametric settings, the equilibrium arrival profiles may not be unique and may lie in a convex set. We further make a surprising observation that in some parametric settings, customers in one class may arrive in disjoint intervals. Further, the two classes may arrive in contiguous intervals or in overlapping intervals, and at varying rates within an interval, depending upon the problem parameters.
strategic arrivals; queuing network games; population games.
## 1 Introduction
Queueing games or games where strategic customers, served by a queue or a queuing network, decide on actions such as which queue to join, whether to join, when to join, what level of priority to select and so on, are well studied in the literature (see [6, 7, 9] for surveys). In this paper we focus on the queuing arrival game where customers decide on when to join a queuing network. This is in contrast to much of the literature that considers arrival games to a single queue (see, e.g., [2, 5, 8, 9, 12, 13, 15]). Applications of such arrival games to multiple queues are many: Customers arriving to a health facility where they queue up to meet a doctor and then may queue to get tests done and/or procure medicines; in banks where different customers may need to go to different and multiple counters; in cafeteria where
customers queue up for different and multiple food items, and so on. In consonance with much of the existing literature, we model customers as non-atomic 'infinitesimal' fluid particles with costs that are linear in waiting time and in time to service; customers are served in a first-come-first-served manner and the service facility opens at time zero (see [11, 12, 15]). In [13], uniqueness of the equilibrium solution was proven in a single queue setting with stochastic service rates and a large number of users. Moreover, [13] showed that the equilibrium solution with a large number of users is well approximated by the corresponding fluid system, thus lending support to the fluid analysis. This fluid setting has resulted in unique and elegant, easily calculable customer equilibrium profiles for single queues as well as certain symmetric queuing networks where customers have homogeneous travel routes (see [11, 12]).
To keep the discussion simple we focus on a two queue, two class customer setting where each class of customers has a distinct route, and customers in each queue are served in a first-come-first-served manner. While this set-up may be practically interesting, our main contributions are theoretical: our key aim is to test whether the elegance and simplicity of customer equilibrium behaviour at a single queue extend to more general queuing networks in the presence of heterogeneous routing.
Even in the simple two queue setting, we see that, unlike for the single queue, here the solution structure and order of arrivals in equilibrium are functions of all the problem parameters, _i.e._, the linear coefficients of the cost function, the queue service rates and the population size of each class of customers. For one set of customer travel routes we observe that, depending upon the problem parameters, there exist eight distinct solution structures. This suggests that as the number of queues increases there may be a rapid blow-up in the number of solution structures. This may make the problem of identifying and learning the correct structure computationally prohibitive. In this paper, we do not address the issue of customers learning the equilibrium profile by repeatedly playing the game. The limiting behaviour of players repeatedly updating their action in a game using a simple rule, often called a 'no-regret' policy, is studied in [1], and we refer the reader to [3] for a comprehensive exposition of this literature.
Our other broad contributions/observations are: **1)** We find that similar to the single queue setting, the equilibrium profile of arriving customers is unique for a wide set of parameters. However, interestingly, when customer cost preferences across classes are identical up to a constant, there may be multiple equilibrium arrival profiles, all lying in a convex set that we identify. Although there are many arrival profiles in equilibrium in this case, they all have identical social cost.
**2)** In [12], the equilibrium profile is determined for the case when multiple classes of customers with linear costs are arriving at a single queue. They find that different classes of customers arrive in non-overlapping and contiguous intervals. In our two queue setting we find that, depending upon the problem parameters, in equilibrium, arrivals may come in non-overlapping and contiguous intervals, in overlapping intervals, or, under certain parametric settings, a class of customers may even arrive in disjoint intervals. Moreover, we show that whether the classes will arrive over overlapping sets or not is independent of the population sizes and decided entirely by the queue service rates and customer preferences.
**Related literature:** The arrival games to queues were first considered by [5]. The concert queueing game in the fluid setting was introduced in [12]. The arrival game in a fluid network of bottleneck queues, including tandem, Trellis, and general feed-forward networks, was considered in [11], where they characterized the equilibrium arrival profile in each of these topologies.
The transportation modelling community has extensively studied arrival games. Vickrey [16] introduced the morning commute problem. Unlike the concert queuing game, in these transportation problems the service facility has no predetermined opening time. Instead, the customers have a preferred time to complete service and a cost for arriving too early or too late (see [10]). This led to a huge literature on arrival games to a bottleneck queue, the impact of tolls, etc. (see [14] for an extensive list of references). Much of the transportation literature considers single queue settings. Lindsey, in an influential work [15], establishes the existence of an equilibrium arrival profile for multiple classes of customers with general non-linear cost functions arriving at a bottleneck queue with a constant service rate, through intricate fixed point arguments. Our work differs from the transportation literature in that we consider a two queue network with heterogeneous arrival routes and linear costs; in this setting we are able to characterize the equilibrium user arrival profiles in closed form and, for a large class of parameters, show that these profiles are unique.
**Outline of the paper:** In Section 2 we provide the background to the arrival queueing game and overview the two-class, two queue, two-route networks that we consider. We emphasize on two heterogeneous routes networks 1) where the departures of the two classes are through different queues (Heterogeneous Departure System or HDS) and 2) where the arrivals enter at different queues (Heterogeneous Arrival System or HAS). In Section 3, we identify the equilibrium arrival profile for all possible parameters for arriving customers for HDS. In particular, we see that these parameters can be partitioned into four distinct regions each having a separate solution structure, when the two customer classes have unequal preferences. In Section 4 we similarly analyze HAS. Here we discover that the parameter space can be partitioned into eight regions based on the solution structure, when the two customer classes have unequal preferences. Moreover, for both HDS and HAS, when the groups have identical preference, we identify a parametric regime where unique equilibrium exists, as well as a parametric regime where the equilibrium is non-unique and the set of equilibrium profiles is convex. We end with a brief conclusion in Section 5. In the main body, we have confined our discussion to the main proof ideas behind our results and have kept the detailed proofs in the appendix.
## 2 Preliminaries
### Fluid Model
We consider a fluid model having two classes of customers or users. The size of each class \(i=1,2\) is given by a positive quantity \(\Lambda_{i}>0\). In every class \(i=1,2\) individual users are infinitesimal and the set of all users in class \(i\) is given by the points in the interval \([0,\Lambda_{i}]\).
We define functions \(F_{i}:\mathbb{R}\rightarrow[0,\Lambda_{i}]\) for \(i=1,2\) such that, \(F_{i}(t)\) denotes the amount of users of class \(i\) that arrive by time \(t\). We call \(F_{i}\) the arrival profile of class \(i\) users. Therefore, each \(F_{i}\) is non-decreasing and satisfies \(F_{i}(-\infty)=0\) and \(F_{i}(+\infty)=\Lambda_{i}\). We consider \(F_{i}\) that are right-continuous and can be expressed as a sum of a non-decreasing absolutely continuous function and a non-decreasing discrete function. We call the pair \(\textbf{F}=\{F_{1},F_{2}\}\) as the joint arrival profile of the two classes.
We consider a network comprising of two queues, both starting service at time \(t=0\). Let \(\mu_{1}\) and \(\mu_{2}\), respectively, denote the deterministic fixed service rates at the two queues after they start service. We consider four routes of the two arriving classes to the two queues. These are displayed in Table 1.
Instance I is equivalent to two groups of users arriving at a two-layer tandem network to travel by the same path. By Theorem 5 of [11], the instance is equivalent to the case where the two groups are arriving at a single queue of capacity \(\min\{\mu_{1},\mu_{2}\}\). Instance II is equivalent to the case where the two queues independently serve the two groups and therefore is equivalent to two independent instances of a single queue with single-class customer arrivals. Hence, the first two instances are reducible to the single-queue instances studied in [12]. In this paper we study the arrival equilibrium behaviour in the other two instances III and IV. We refer to them as Heterogeneous Departure (HDS) and Heterogeneous Arrival Systems (HAS), respectively.
### Waiting and Departure Times
To specify the waiting and departure times in a system, first consider a single queue setting where \(A(t)\) denotes the total mass of users of all classes that have arrived at the queue by time \(t\). Let \(\mu\) denote the service rate. Then at time \(t\), the length of the waiting queue developed in that queue will be (see Theorem 6.5 in [4]):
\[Q(t)=A(t)-\mu\cdot\max\{t,0\}+\sup_{s\in[0,t]}\max\left\{\mu s-A(s),0\right\}. \tag{1}\]
We assume that if there is a jump in the arrival profile \(A\) at time \(t\), the arrivals are positioned in the queue at uniformly random order. As a result, a user arriving at that queue at time \(t\) will suffer an expected waiting time of
\[W(t)=\frac{Q(t+)+Q(t-)}{2\mu}+\max\{0,-t\},\text{ and departs at time }\tau(t)=W(t)+t, \tag{2}\]
where \(Q(t+)\) and \(Q(t-)\) respectively denote the right and left limits of \(Q\) at time \(t\). Note that if the queue length process \(Q(\cdot)\) is continuous (which is the case if \(A(\cdot)\) is absolutely continuous), waiting time as a function of time \(t\) will be \(W(t)=\frac{Q(t)}{\mu}+\max\{0,-t\}\).
If the arrival profile \(A(\cdot)\) is absolutely continuous, by (1) and (2), the departure time as a function of time \(t\) will be: \(\tau(t)=\frac{A(t)}{\mu}+\sup_{s\in[0,t]}\max\left\{0,s-\frac{A(s)}{\mu}\right\}\). Whenever \(Q(t)>0\), the term \(\sup_{s\in[0,u]}\max\{\mu s-A(s),0\}\) is independent of the choice of \(u\in[t-\delta,t+\delta]\) for \(\delta>0\) and sufficiently small, and when \(t<0\), \(\tau(t)=\frac{A(t)}{\mu}\). If \(A(\cdot)\) is absolutely continuous, its derivative \(A^{\prime}(\cdot)\) will exist a.e. As a result,
\[\tau^{\prime}(t)=\frac{A^{\prime}(t)}{\mu}\text{ \ a.e. in the closure of the set of times }\{s\mid s<0\text{ or }Q(s)>0\}. \tag{3}\]
\begin{table}
\begin{tabular}{|c|c|} \hline
**Instance III** (Heterogeneous Departure System or HDS) & **Instance IV** (Heterogeneous Arrival System or HAS) \\ \hline \end{tabular}
\end{table}
Table 1: Various instances with two groups traveling through two queues connected in tandem
The above observation will be useful in our analysis of HDS and HAS in the later sections.
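As a numerical illustration of (1) and (2) (a sketch under the stated assumptions, not part of the model specification), the queue length, waiting time and departure time of a single fluid queue can be evaluated on a discrete time grid for any absolutely continuous arrival profile; the example profile and service rate below are arbitrary.

```python
import numpy as np

def fluid_queue(t, A, mu):
    """Queue length Q, waiting time W and departure time tau on a time grid t,
    for a fluid queue serving at rate mu from time 0, given the cumulative
    arrival profile A(t) evaluated on the same grid (cf. (1) and (2)).
    Assumes A is absolutely continuous, so Q is continuous."""
    served = mu * np.maximum(t, 0.0)
    slack = np.maximum(mu * t - A, 0.0)
    slack[t < 0] = 0.0                    # the sup in (1) runs over s in [0, t]
    Q = A - served + np.maximum.accumulate(slack)
    W = Q / mu + np.maximum(0.0, -t)
    return Q, W, W + t

# Example: total mass 2 arriving uniformly on [-1, 1], service rate mu = 1.5.
t = np.linspace(-1.0, 3.0, 4001)
A = np.clip(t + 1.0, 0.0, 2.0)
Q, W, tau = fluid_queue(t, A, mu=1.5)     # the queue empties at t = 4/3
```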
**Definition 2.1**.: _We say the queue is engaged at time \(t\) if \(t\) lies in the closure of the set \(\{s\mid Q(s)>0\}\)._
By (3), after the queue starts serving, users depart at rate \(\mu\) whenever the queue is engaged. We introduce the following notation:
* \(A_{i}(t)\) be the total mass of customers of both the groups who have arrived at queue \(i=1,2\) till time \(t\) (Note that while \(F_{i}\) denotes arrival profile corresponding to user class \(i\), \(A_{i}\) denotes overall arrival profile to queue \(i\).).
* \(Q_{i}(t)\) and \(W_{i}(t)\) be the length of the waiting queue, and the waiting time that a customer arriving in queue \(i\) at time \(t\) will observe. Let \(\tau_{i}(t)\) denote the time that customer will depart the system.
* \(W_{\mathbf{F}}^{(j)}(t)\) and \(\tau_{\mathbf{F}}^{(j)}(t)\) be the waiting and departure times from the network suffered by a class \(j\) user arriving at time \(t\) for \(j\in\{1,2\}\). Explicit dependence on \(\mathbf{F}\) in this notation is useful to our analysis later.
For both the queues \(i=1,2\) upon defining the arrival profile \(A_{i}(\cdot)\), using (1) and (2), \(Q_{i}(\cdot)\), \(W_{i}(\cdot)\) and \(\tau_{i}(\cdot)\) are well-defined. Now we specify the waiting and departure times of both the queues in HDS and HAS as functions of time, under the assumption that the joint arrival profile \(\mathbf{F}=\{F_{1},F_{2}\}\) is absolutely continuous (we later argue by Lemma 2.1 that considering absolutely continuous joint arrival profiles are sufficient for identifying equilibrium behavior).
* HDS: Arrival profiles at individual queues are \(A_{1}(t)=F_{1}(t)+F_{2}(t)\) and \(A_{2}(t)=F_{2}(\tau_{1}^{-1}(t))\), where \(\tau_{1}^{-1}(t)=\sup\{s\mid\tau_{1}(s)\leq t\}\). Both \(A_{1}(\cdot)\) and \(A_{2}(\cdot)\) are absolutely continuous. With this, \(W_{\mathbf{F}}^{(1)}(t)=W_{1}(t)\), \(\tau_{\mathbf{F}}^{(1)}(t)=\tau_{1}(t)\), \(W_{\mathbf{F}}^{(2)}(t)=W_{1}(t)+W_{2}(\tau_{1}(t))\), and \(\tau_{\mathbf{F}}^{(2)}(t)=\tau_{1}(t)+W_{2}(\tau_{1}(t))\).
* HAS: Arrival profiles at individual queues are \(A_{1}(t)=F_{1}(t)\) and \(A_{2}(t)=F_{1}(\tau_{1}^{-1}(t))+F_{2}(t)\), where \(\tau_{1}^{-1}(t)=\sup\{s\mid\tau_{1}(s)\leq t\}\). Both \(A_{1}(\cdot)\) and \(A_{2}(\cdot)\) are absolutely continuous. With this, \(W_{\mathbf{F}}^{(1)}(t)=W_{1}(t)+W_{2}(\tau_{1}(t))\), \(\tau_{\mathbf{F}}^{(1)}(t)=\tau_{1}(t)+W_{2}(\tau_{1}(t))\), \(W_{\mathbf{F}}^{(2)}(t)=W_{2}(t)\), and \(\tau_{\mathbf{F}}^{(2)}(t)=\tau_{2}(t)\).
### Solution Concept
We assume that every user in group \(i\) (\(i\in\{1,2\}\)) has a cost function linear in her waiting and departure times from the network given by: \(C_{\mathbf{F}}^{(i)}(t)=\alpha_{i}\cdot W_{\mathbf{F}}^{(i)}(t)+\beta_{i}\cdot \tau_{\mathbf{F}}^{(i)}(t)\), where \(\alpha_{i}\) and \(\beta_{i}\) are positive constants quantifying the cost suffered by a class \(i\) user for unit waiting time and delay in departure.
**Definition 2.2** (Support of arrival profile).: _Given an arrival profile \(t\mapsto B(t)\) such that \(B(+\infty)<\infty\), the support of \(B\), denoted by \(\mathcal{S}(B)\), is defined as the smallest closed set having a \(B\)-measure equal to \(B(+\infty)\)._
**Definition 2.3** (Equilibrium Arrival Profile (EAP)).: _The joint arrival profile \(\mathbf{F}^{\star}=\{F_{1}^{\star},F_{2}^{\star}\}\) is an Equilibrium Arrival Profile of this game if for both the groups \(i\in\{1,2\}\): \(t\in\mathcal{S}(F_{i}^{\star})\) and \(\tilde{t}\in\mathbb{R}\) implies \(C_{\mathbf{F}^{\star}}^{(i)}(t)\leq C_{\mathbf{F}^{\star}}^{(i)}(\tilde{t})\). In particular, the arrival profile is iso-cost along its support._
Note that the EAP does not change upon normalizing the cost function of each class \(i=1,2\) by multiplying it by \(1/(\alpha_{i}+\beta_{i})\). For simplicity, and without loss of generality, we assume both the classes \(i=1,2\) have their normalized cost functions, which are: \(C_{\mathbf{F}}^{(i)}(t)=\gamma_{i}W_{\mathbf{F}}^{(i)}(t)+(1-\gamma_{i})\tau_{\mathbf{F}}^{(i)}(t)\) where \(\gamma_{i}=\frac{\alpha_{i}}{\alpha_{i}+\beta_{i}}\) quantifies the preference of every class \(i\) user. A value of \(\gamma_{i}\) close to \(1\) indicates that users in group \(i\) prefer a late departure to waiting a long time in the network, and \(\gamma_{i}\) close to \(0\) implies the opposite. So, we use \(\gamma_{i}\) to quantify the _cost preference_ of every group \(i\) user.
**Remark 1**.: EAP captures the aggregate equilibrium behavior of the group. We can equivalently define Nash equilibrium at individual level where under it no individual has unilateral incentive to deviate. As is well known and discussed in more detail in [12], the two concepts are equivalent.
**Lemma 2.1**.: _In every EAP, \(\mathbf{F}=\{F_{1},F_{2}\}\) of the HDS and HAS, the arrival profiles \(F_{1}\) and \(F_{2}\) are absolutely continuous._
Proof of the above lemma is similar to the proof of statement (ii) of Lemma 1 in [12]. Arguing by contradiction, if any of the arrival profiles has a jump, any user arriving in that jump will be strictly better off arriving slightly earlier, and as a result the arrival profile cannot be an EAP.
We argued before that Instances I and II in Table 1 are reducible to instances where one or more groups of users having distinct preferences are arriving at a single queue. [12] show that when two classes of customers having cost preferences \(\gamma_{1}\) and \(\gamma_{2}\) arrive at a single queue with service rate \(\mu\), the EAP has a simple structure. The class with the smaller \(\gamma_{i}\) comes first at arrival rate \(\mu\cdot\min\{\gamma_{1},\gamma_{2}\}\) over an interval, while the next class arrives over a contiguous but non-overlapping interval, at rate \(\mu\cdot\max\{\gamma_{1},\gamma_{2}\}\). Fig. 1 illustrates this EAP and the resulting queue length with the assumption \(\gamma_{1}<\gamma_{2}\) and is useful to contrast with the various EAP structures that we find for HDS and HAS in Sections 3 and 4 below. The queue length process is constructed assuming that in the EAP, class 2 users start arriving from a positive time, which is equivalent to saying that the masses of the two classes satisfy \(\Lambda_{1}>\left(\frac{1}{\gamma_{2}}-1\right)\Lambda_{2}\).
## 3 Heterogeneous Departure Systems (HDS)
In this section, we consider the situation where the two classes arrive at the first queue and depart from different queues, as illustrated in Table 1. If \(\mu_{1}\leq\mu_{2}\), class 2 users arrive at queue 2 at a maximum rate of \(\mu_{1}\) and as a result, queue 2 remains empty and the cost of class 2 is unaffected by the second queue. Thus, if \(\mu_{1}\leq\mu_{2}\), the instance becomes equivalent to both the groups arriving at a queue of capacity \(\mu_{1}\). The problem is identical to the two-class, single queue case studied in [12]. Therefore, in subsequent discussion, we restrict ourselves to HDS with \(\mu_{1}>\mu_{2}\). We further consider the case \(\gamma_{1}\neq\gamma_{2}\) separately from \(\gamma_{1}=\gamma_{2}\) since the latter displays different behaviour. |
2307.03818 | Efficient Correlation Clustering Methods for Large Consensus Clustering
Instances | Consensus clustering (or clustering aggregation) inputs $k$ partitions of a
given ground set $V$, and seeks to create a single partition that minimizes
disagreement with all input partitions. State-of-the-art algorithms for
consensus clustering are based on correlation clustering methods like the
popular Pivot algorithm. Unfortunately these methods have not proved to be
practical for consensus clustering instances where either $k$ or $V$ gets
large.
In this paper we provide practical run time improvements for correlation
clustering solvers when $V$ is large. We reduce the time complexity of Pivot
from $O(|V|^2 k)$ to $O(|V| k)$, and its space complexity from $O(|V|^2)$ to
$O(|V| k)$ -- a significant savings since in practice $k$ is much less than
$|V|$. We also analyze a sampling method for these algorithms when $k$ is
large, bridging the gap between running Pivot on the full set of input
partitions (an expected 1.57-approximation) and choosing a single input
partition at random (an expected 2-approximation). We show experimentally that
algorithms like Pivot do obtain quality clustering results in practice even on
small samples of input partitions. | Nathan Cordner, George Kollios | 2023-07-07T20:14:50Z | http://arxiv.org/abs/2307.03818v1 | # Efficient Correlation Clustering Methods for Large Consensus Clustering Instances
###### Abstract
Consensus clustering (or clustering aggregation) inputs \(k\) partitions of a given ground set \(V\), and seeks to create a single partition that minimizes disagreement with all input partitions. State-of-the-art algorithms for consensus clustering are based on correlation clustering methods like the popular Pivot algorithm. Unfortunately these methods have not proved to be practical for consensus clustering instances where either \(k\) or \(V\) gets large.
In this paper we provide practical run time improvements for correlation clustering solvers when \(V\) is large. We reduce the time complexity of Pivot from \(O(|V|^{2}k)\) to \(O(|V|k)\), and its space complexity from \(O(|V|^{2})\) to \(O(|V|k)\)--a significant savings since in practice \(k\) is much less than \(|V|\). We also analyze a sampling method for these algorithms when \(k\) is large, bridging the gap between running Pivot on the full set of input partitions (an expected 1.57-approximation) and choosing a single input partition at random (an expected 2-approximation). We show experimentally that algorithms like Pivot do obtain quality clustering results in practice even on small samples of input partitions.
## 1 Introduction
In this paper we examine the consensus clustering problem (also known as clustering aggregation). Given a set of input clusterings over a single ground set, consensus clustering seeks to create a single clustering that is most "similar" to the inputs provided. For example, a number of organizers for a dinner party may have come up with separate seating arrangements for guests; consensus clustering would find a "consensus" seating arrangement that minimizes "disagreements" (pairs of guests who are seated at the same table in one arrangement, but at different tables in the other) between the consensus arrangement and each of the arrangements from party organizers.
Consensus clustering is closely related to the correlation clustering problem. As originally defined by Bansal et al. [4], "min disagreement" correlation clustering inputs a complete graph \(G=(V,E)\) where every pair of nodes is assigned a positive (+) or negative (-) relationship. The objective is to cluster together positively related nodes and separate negatively related ones, minimizing the total number of mistakes along the way. This clustering paradigm has been used in many applications, such as its original motivation of classification [4], database deduplication [15], and community detection in social networks [27; 24]. This formulation of graph clustering has been especially useful, since a specific number of clusters does not need to be specified beforehand and the only information needed as input concerns the relationship between objects--not the objects themselves.
Consensus clustering can be easily reduced to a generalized version of correlation clustering where graph edges are no longer just positive or negative, but are assigned weights (or probabilities) between 0 and 1 [3]. The state-of-the-art consensus clustering algorithm is based on the Pivot method for correlation clustering by Ailon et al. [3], which yields a 1.57-approximation result for consensus clustering. One bottleneck that previous authors have run into when applying the Pivot algorithm to consensus clustering involves computing and storing edge weights [14], making it prohibitive to perform consensus clustering when the number of nodes in the graph is large. In Section 3 we show a practical run time improvement that enables Pivot and other correlation clustering algorithms to run on these instances. In particular we reduce the running time of a single run of Pivot from \(O(|V|^{2}k)\) to \(O(|V|k)\), where \(V\) is the ground set and \(k\) is the number of input clusterings. We also reduce the amount of memory needed from \(O(|V|^{2})\) to \(O(|V|k)\).
Another bottleneck experienced when running correlation clustering algorithms for consensus clustering is when the number of input clusterings is large. In Section 4 we analyze the effect of computing a consensus on small samples of input clusterings, and show that correlation clustering algorithms like
Pivot still produce quality results. In particular we develop a function that computes the expected approximation bound guaranteed by the Pivot algorithm when performing consensus clustering on a sample of input clusterings, filling the gap between using the full set (an expected 1.57-approximation) and choosing a single input clustering at random (known to be an expected 2-approximation).
We conclude this paper with several experimental results that demonstrate the practicality of our new methods for consensus clustering (see Section 5).
### Related Work
#### 1.1.1 Correlation Clustering
The NP-hard correlation clustering problem was introduced by Bansal et al. [4], who also provided its first constant approximation algorithm in the min disagreement setting. The best known approximation factor is \(1.994+\epsilon\), from a linear program rounding method due to Cohen et al. [9]. Correlation clustering remains an active area of research, and many variations of the problem have arisen over time; a general introduction to the correlation clustering problem and some of its early variants is given by Bonchi et al. [6].
#### 1.1.2 Weighted Correlation Clustering
Weighted correlation clustering was first considered by Bansal et al. [4]. Ailon et al. adapted the Pivot algorithm and a LP rounding method for general weighted graphs; for the special probability weights case they showed that Pivot yields a 5-approximation and their LP rounding method yields a 2.5-approximation [3]. Correlation clustering with probability weights has been adapted for probabilistic [17] and uncertain [21] graphs. On graphs with generalized weights, the best approximation ratio for correlation clustering is \(O(\log n)\)[7; 10]. Puleo and Milenkovic also studied a partial generalization of graph weights [23].
#### 1.1.3 The Pivot Algorithm
The Pivot algorithm was first introduced by Ailon et al. [3]. Its efficient run time and ease of implementation have made it very popular, and it has been applied to many variants of correlation clustering that have arisen since. Recently, it has been used for uncertain graphs [21], query-constrained correlation clustering [12], online correlation clustering [18], chromatic correlation clustering [16], and fair correlation clustering [1]. It has also been shown how to run the Pivot algorithm in parallel in various settings [8; 22]. Zuylen and Williamson [25] developed a deterministic version of Pivot that picks a best pivot at each round, though at the cost of an increased running time complexity. The most efficient non-parallel implementation of Pivot uses a neighborhood oracle, where a hash table stores lists of neighbors for each node [2].
#### 1.1.4 Consensus Clustering
Though a problem of interest in its own right, consensus clustering has often been studied as a special case of correlation clustering [13; 3; 14]. An overview of consensus clustering methods is given by Vega-Pons and Ruiz-Shulcloper [26].
## 2 Weighted Correlation Clustering
Given a graph \(G=(V,E)\), Ailon et al. [3] considered a generalization of the correlation clustering problem by allowing every edge \((u,v)\) in \(E\) to have a positive weight \(w^{+}_{uv}\geq 0\) and negative weight \(w^{-}_{uv}\geq 0\). A weighted correlation clustering instance \(G\) has a corresponding unweighted _majority_ instance \(G_{w}\); this is formed by adding edge \((i,j)\) to \(E^{+}_{w}\) if \(w^{+}_{ij}>w^{-}_{ij}\) and \((i,j)\) to \(E^{-}_{w}\) if \(w^{-}_{ij}>w^{+}_{ij}\) (breaking ties arbitrarily).
In this section we will assume the input graph satisfies the probability constraints \(w^{+}_{uv}+w^{-}_{uv}=1\). For edge \((u,v)\), we set \(s(u,v)=w^{+}_{uv}\) and note that \(1-s(u,v)=w^{-}_{uv}\). The correlation clustering objective function is again given by
\[\text{Cost}(\mathcal{C},V)=\sum_{\begin{subarray}{c}u,v\in V,\,u\neq v\\ (u,v)\text{ is intra-cluster}\end{subarray}}\left(1-s(u,v)\right)+\sum_{ \begin{subarray}{c}u,v\in V,\,u\neq v\\ (u,v)\text{ is inter-cluster}\end{subarray}}s(u,v).\]
The standard correlation clustering problem arises when each \(s(u,v)\) is equal to 0 or 1.
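A minimal Python sketch of the objective above, counting each unordered pair once; the dictionary-based storage of the weights \(s(u,v)\) is an illustrative choice, not prescribed by the problem.

```python
from itertools import combinations

def cc_cost(clusters, s):
    """Weighted correlation clustering cost with probability weights.

    `clusters` is a list of disjoint sets partitioning the node set, and
    s[(u, v)] in [0, 1] is the positive weight of the pair (u, v).
    Intra-cluster pairs pay 1 - s(u, v); inter-cluster pairs pay s(u, v)."""
    label = {v: i for i, c in enumerate(clusters) for v in c}
    cost = 0.0
    for u, v in combinations(sorted(label), 2):
        w = s.get((u, v), s.get((v, u)))
        cost += (1.0 - w) if label[u] == label[v] else w
    return cost

s = {(0, 1): 0.9, (0, 2): 0.2, (1, 2): 0.4}
print(cc_cost([{0, 1}, {2}], s))   # 0.1 + 0.2 + 0.4 = 0.7
```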
### Correlation Clustering Algorithms
The Pivot algorithm runs directly on weighted graphs that satisfy the probability constraints [17]. The algorithm chooses a node \(u\) at random, starts cluster \(C=\{u\}\), and adds all other unclustered nodes \(v\) to \(C\) with weight \(s(u,v)\geq 1/2\). It repeats on \(V\setminus C\) until all nodes are clustered. The time complexity is \(O(|V|+|E|)\), where \(E\) is the complete set of edges between pairs of nodes in \(V\). This can be improved to \(O(|V|+|E^{+}|)\) where \(E^{+}\) represents edges of weight \(\geq 1/2\) if neighbor sets \(N(v)=\{u\in V\ |\ s(u,v)\geq 1/2\}\) are known for all \(v\in V\).
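An illustrative Python sketch of this procedure (the experiments in Section 5 use a Java implementation; all names here are ours), reusing the toy weights from the cost sketch above.

```python
import random

def sim(s, u, v):
    """Probability weight of the pair (u, v); each unordered pair is stored once."""
    return s.get((u, v), s.get((v, u), 0.0))

def pivot(nodes, s, seed=None):
    """Pivot on a probability-weighted graph: repeatedly pick a random
    unclustered node as the pivot and absorb every other unclustered node v
    with s(pivot, v) >= 1/2."""
    rng = random.Random(seed)
    order = list(nodes)
    rng.shuffle(order)            # a uniformly random order of pivots
    clusters, clustered = [], set()
    for u in order:
        if u in clustered:
            continue
        cluster = {u} | {v for v in order
                         if v not in clustered and v != u and sim(s, u, v) >= 0.5}
        clustered |= cluster
        clusters.append(cluster)
    return clusters

s = {(0, 1): 0.9, (0, 2): 0.2, (1, 2): 0.4}
print(pivot([0, 1, 2], s, seed=7))   # e.g. [{0, 1}, {2}]
```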
Ailon et al. showed that the Pivot algorithm yields a 5-approximation for the probability weights case. In the special case where the "complements" of these weights also satisfy a version of the triangle inequality (i.e. \(1-s(u,v)\leq(1-s(u,w))+(1-s(v,w))\) for \(u,v,w\in V\)), Ailon et al. further showed that the Pivot algorithm yields a 2-approximation.
Ailon et al. also presented a LP rounding method for weighted correlation clustering. We introduce a variable \(x_{uv}\) for every pair of distinct nodes \(u,v\in V\). We interpret \(x_{uv}=0\) to mean that \(u\) and \(v\) lie in the same cluster; \(x_{uv}=1\) means that \(u\) and \(v\) lie in different clusters. We assume \(x_{uu}=0\) always.
\[\begin{array}{ll}\min&\sum\limits_{u,v\in V}[s(u,v)x_{uv}+(1-s(u,v))(1-x_{uv })]\\ \mbox{s. t.}&x_{uv}+x_{vw}\geq x_{uw}&\forall u,v,w\in V,\\ &0\leq x_{uv}\leq 1&\forall u,v\in V.\end{array} \tag{1}\]
Given a fractional solution to LP 1, the rounding method from Ailon et al. yields a 2.5-approximation for the probability weights case, and a 2-approximation for probability weights that also satisfy the complement triangle inequality.
### Application to Consensus Clustering
The input for consensus clustering is a set of clusterings \(\mathcal{C}_{1}\ldots,\mathcal{C}_{k}\) of a fixed set \(V\). The goal is to output a new clustering \(\mathcal{C}\) of \(V\) that minimizes the _disagreement distance_\(\sum_{i=1}^{k}\mbox{Disagree}(\mathcal{C},\mathcal{C}_{i})\), where \(\mbox{Disagree}(\mathcal{C},\mathcal{C}_{i})\) is the number of node pairs \((i,j)\) that are clustered together in one clustering but not in the other.
Consensus clustering is closely related to weighted correlation clustering. We construct a graph by adding one node for every object in \(V\), and then for every pair of objects \(u,v\in V\) we add an edge \((u,v)\) with weight equal to the average number of clusterings where \(u\) and \(v\) appear in the same cluster. Constructed this way, the edge weights satisfy the complement triangle inequality [3].
The consensus clustering result is then given by the clustering formed by a correlation clustering algorithm on this weighted graph. The approximation bound of the correlation clustering algorithm equals the approximation bound for consensus clustering. Thus using either the Pivot or the LP rounding method presented in Section 2.1, we get a 2-approximation algorithm for consensus clustering [3]. From now on we will focus on the Pivot algorithm, since there are no theoretical gains from using the more expensive LP rounding method. In fact, the state-of-the-art method from Ailon el al. yields a 1.57-approximation by choosing the better clustering between the one produced by Pivot or choosing one of the input clusterings at random.
## 3 Runtime Improvements
Previous implementations of the Pivot algoritm for consensus clustering have relied on precomputing the weighted graph used for these algorithms [14]. For \(k\) input clusterings on set \(V\), precomputing all weights in the edge similarity graph requires \(O(|V|^{2}k)\) time and \(O(|V|^{2})\) space. Though correlation clustering algorithms like Pivot algorithm run fastest when these similarities are precomputed, the memory required to store edge weights quickly becomes unmanageable for larger graphs. Furthermore, the Pivot algorithm rarely uses all \(O(|V|^{2})\) edges during a single clustering run. Even if Pivot is run a handful of times, the extra cost of precomputing all possible edges still may not be justified.
To achieve a run time improvement and to reduce overall memory usage, we can just compute and store cluster labels for each node for the \(k\) input clusterings. A _cluster label_ is a \(k\)-tuple \((v_{1},\ldots,v_{k})\) for a given node \(v\), where \(v_{i}\) is an integer denoting which cluster node \(v\) is a member of in clustering \(i\). This step runs in \(O(|V|k)\) time and uses \(O(|V|k)\) memory--the same amount of memory used by storing the input clusterings. Then computing the similarity between node \(u\) and node \(v\) is an \(O(k)\) operation from counting the number of matching labels \(v_{i}=u_{i}\). In practice the number of input clusterings is much
smaller than \(|V|\). By computing similarities "on-the-fly", the Pivot algorithm only incurs an extra factor of \(k\) in its running time.
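A sketch of the label-based representation just described, in Python rather than the Java used for the experiments; input clusterings are given as lists of disjoint node sets, and all function names are ours.

```python
import numpy as np

def make_labels(input_clusterings, V):
    """|V| x k integer matrix: labels[row, i] is the id of the cluster containing
    node V[row] in input clustering i (O(|V| k) time and space)."""
    labels = np.empty((len(V), len(input_clusterings)), dtype=np.int64)
    row_of = {v: r for r, v in enumerate(V)}
    for i, clustering in enumerate(input_clusterings):
        for cid, cluster in enumerate(clustering):
            for v in cluster:
                labels[row_of[v], i] = cid
    return labels, row_of

def similarity(labels, row_of, u, v):
    """On-the-fly edge probability s(u, v): fraction of inputs that agree on (u, v)."""
    return float(np.mean(labels[row_of[u]] == labels[row_of[v]]))

V = ["a", "b", "c"]
inputs = [[{"a", "b"}, {"c"}], [{"a"}, {"b", "c"}]]
labels, row_of = make_labels(inputs, V)
print(similarity(labels, row_of, "a", "b"))   # 0.5
```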
We can use a similar approach for implementing other correlation clustering algorithms like LocalSearch [13] and Vote [11] by computing edge probabilities only as needed. This allows these algorithms to run on larger instances without requiring a great deal of extra memory to store edges. However, the run time gains of the "on-the-fly" approach is diminished for these algorithms if they are run multiple times, since both LocalSearch and Vote examine all graph edges during a single run. On smaller instances it is better to precompute edges in order to reuse them on multiple runs.
## 4 Sampling Methods for Consensus Clustering
Different sampling methods have been considered for correlation clustering [5] and consensus clustering [13], focusing on reducing the number of edge comparisons performed during clustering. Here we propose a method for consensus clustering to reduce the number of attributes used to calculate edge probabilities.
### Pivot: Sampling Input Clusters
For a large number \(k\) of input clusterings, an additional factor of \(O(k)\) in running time may not be practical. Let \(R\in\{1,\ldots,k\}\). We will analyze the effect of sampling only \(R\) input clusters to compute edge probabilities.
For a given pair of nodes \(i,j\), we can produce a \(k\)-bit string \(s\) to represent which clusters they agree on. We set bit \(l=1\) if \(i\) and \(j\) are clustered together in clustering \(l\), and \(0\) otherwise. Sampling a smaller number of clusterings is akin to drawing a smaller number of bits from this string. We will assume that \(k\) is large so we can model these draws as sampling with replacement.
Let \(p\) denote the true average number of clusters that \(i\) and \(j\) are clustered together in, and assume that \(p\leq 1/2\) (the opposite case will follow by symmetry). Let \(X\) be the sum of \(R\) randomly chosen bits of \(s\). We model \(X\) as a Binomial random variable with \(R\) samples and success probability \(p\). Since the Pivot algorithm will "make a mistake" when the sampled probability is \(>1/2\), we will first find the probability that \(X>R/2\).
To do this, we use the standard deviation \(\sqrt{Rp(1-p)}\) for \(X\). Let \(Z=(X-pR)/\sqrt{Rp(1-p)}\). We estimate \(\mathbb{P}(X>R/2)\) using
\[\mathbb{P}(Z>(R/2-pR)/\sqrt{Rp(1-p)})=\mathbb{P}(Z>\sqrt{R}(1/2-p)/\sqrt{p(1- p)}).\]
Let \(f(R,p)=\sqrt{R}(1/2-p)/\sqrt{p(1-p)}\). We find \(\mathbb{P}(Z>f(R,p))\) by evaluating \(\operatorname{Err}(R,p):=1-\Phi(f(R,p))\), where \(\Phi\) is the normal CDF.
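A short numerical sketch of this quantity (our own helper, using the normal approximation stated above); `scipy.stats.norm` provides \(\Phi\).

```python
from math import sqrt
from scipy.stats import norm

def err(R, p):
    """Normal-approximation probability that a sample of R inputs reads an edge
    with true agreement probability p < 1/2 as >= 1/2:
    Err(R, p) = 1 - Phi(sqrt(R) * (1/2 - p) / sqrt(p * (1 - p)))."""
    f = sqrt(R) * (0.5 - p) / sqrt(p * (1.0 - p))
    return 1.0 - norm.cdf(f)

print(err(10, 0.3))   # chance that 10 samples misread this edge as a majority edge
```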
For \(p<1/2\), the cost of a correctly clustered edge is \(p\). We are interested in how much this cost might increase due to an error in reading this probability from the sample. We will analyze the "Node-at-a-Time" approach to the Pivot algorithm.
As observed by Bonchi et al. [5], the Pivot algorithm can be adapted to run in \(|V|\) rounds where a single node is clustered after each round. A permutation \(\pi\) of the vertex set \(V\) is fixed beforehand, and then nodes are processed in that order. A pivot node set \(P\) is maintained. If an incoming node \(v\) matches with pivot \(i\) first, then \(v\) is assigned cluster label \(i\). If \(v\) does not match with any node in \(P\), then \(v\) begins a new cluster and \(v\) is added to \(P\).
Given a permutation \(\pi\) of \(V\), we will break down what happens when Pivot "makes a mistake" into three cases:
1. Node \(v\) connects to the wrong pivot (but wasn't going to become a new pivot itself). In the worst case, the cost of all edges associated with \(v\) will "flip" (becoming 1 - the original cost). This event happens with probability \(\leq\operatorname{Err}(R,p)\), where \(p\) is the true probability of the incorrectly sampled edge.
2. Node \(v\) connects to a pivot when it was supposed to become a new pivot. In this case we can create a new permutation \(\pi^{\prime}\) where node \(v\) is sent to the back and all other nodes move up by one place. The cost increase is given in Case 1, but now relative to the new permutation \(\pi^{\prime}\) which also determines a Pivot clustering.
3. Node \(v\) becomes a new pivot when it was supposed to connect to an existing pivot. The cost incurred by node \(v\) to already clustered nodes is given in Case 1. The cost incurred by subsequent
nodes that connect to \(v\) is given by Cases 1 and 2, depending on whether they should've connected to a different pivot or become a new pivot.
In particular, we see that the expected cost due to error can be calculated based solely on edge probabilities. For a given \(R\) and \(p<1/2\), we calculate the expected cost due to error as
\[p\cdot(1-\operatorname{Err}(R,p))+(1-p)\cdot\operatorname{Err}(R,p).\]
(If \(p\geq 1/2\) for a given edge, we replace \(p\) with \(1-p\) in the formula).
Fix a value for \(R\). We want to find the maximum multiple of the original cost incurred from the sample. Define \(g(R)\) to be
\[g(R)=\max_{p\in[0,1/2]}\{[p\cdot(1-\operatorname{Err}(R,p))+(1-p)\cdot \operatorname{Err}(R,p)]/p\}.\]
Since the correlation clustering objective sums the cost of edges, and \(g(R)\) is the multiple of the maximum expected increase of cost due to an error in sampling, we have the following
**Lemma 4.1**.: _For \(R<k\), construct the consensus clustering similarity graph with \(R\) randomly sampled input clusterings. The Pivot algorithm yields an expected \(g(R)\cdot 2\)-approximation when compared with the true similarity graph._
### Pivot Sampling Approximation Results
The simplest algorithm for consensus clustering just picks an input clustering at random and returns it. This is noted to be a \(2\)-approximation algorithm [3], so doing the extra work of computing edge probabilities and running the Pivot algorithm does not provide any theoretical benefit. Ailon et al. instead analyzed the following approach: run both of these algorithms, and return the result with lower cost.
Let \(t=(i,j,k)\) be a triangle of nodes. Let \(w(t)\) be the worst-case clustering cost of a bad triangle \(t\) from the Pivot algorithm, \(z(t)\) be the cost of \(t\) in the randomly chosen input clustering, and \(c^{*}(t)\) be the optimal clustering cost of \(t\). Ailon et al. proved the following
**Theorem 4.2**.: _If there exist constants \(\beta\in[0,1]\) and \(\gamma\geq 1\) such that \(\beta w(t)+(1-\beta)z(t)\leq\gamma c^{*}(t)\) for all \(t\in T\), then the best of Pivot and the random input clustering yields a \(\gamma\)-approximation for consensus clustering._
To find the particular value of \(\gamma=11/7\) using the Pivot algorithm, Ailon et al. show the following
**Lemma 4.3**.: _For all \(t\in T\), \((3/7)w(t)+(4/7)z(t)\leq(11/7)c^{*}(t)\)._
Proof.: To prove this, they show that
\[f(t)=(3/7)w(t)+(4/7)z(t)-(11/7)c^{*}(t)\leq 0\]
where \(t=(w_{1},w_{2},w_{3})\) and
\[w(t) =w_{1}+w_{2}+w_{3}\] \[z(t) =2w_{1}(1-w_{1})+2w_{2}(1-w_{2})+2w_{3}(1-w_{3})\] \[c^{*}(t) =w_{1}+1-w_{2}+1-w_{3}\] \[1/2 \leq w_{1}\leq w_{j}\leq 1\text{ for }j=2,3\] \[w_{1}+w_{2}+w_{3}\leq 2\]
They find a global maximum of \(f(t)\) within the constrained area at \((w_{1},w_{2},w_{3})=(1/2,3/4,3/4)\). Here \(w(t)=2\), \(z(t)=5/4\), and \(c^{*}(t)=1\), which yields \(f(t)=0\). So this bound is tight.
Using Lemma 4.1, we replace \(w(t)\) with \(g(R)\cdot w(t)\) for Pivot on \(R\) inputs. Since \(w(t)\) and \(c^{*}(t)\) are linear functions, we can adjust their scalar multiples in \(f(t)\) without affecting the location of the maximum value. Evaluating at \(t^{*}=(w_{1},w_{2},w_{3})=(1/2,3/4,3/4)\), we get
\[(3/7) g(R)w(t^{*})+(4/7)z(t^{*})-\gamma c^{*}(t^{*})\] \[=(3/7)(2)g(R)+(4/7)(5/4)-\gamma=(6/7)g(R)+5/7-\gamma.\]
This is \(\leq 0\) when \(\gamma\geq(6/7)g(R)+5/7\). This yields the following
**Theorem 4.4**.: _The best of Pivot on \(R\) input clusterings and picking a random input clustering yields a \([(6/7)g(R)+5/7]\)-approximation algorithm for consensus clustering._
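The quantities plotted in Figure 1 can be tabulated with a short grid-search sketch of our own (the maximization over \(p\) is approximated on a grid, so values may differ slightly from an exact optimization).

```python
import numpy as np
from scipy.stats import norm

def err(R, p):
    # normal-approximation misread probability, as defined in Section 4.1
    f = np.sqrt(R) * (0.5 - p) / np.sqrt(p * (1.0 - p))
    return 1.0 - norm.cdf(f)

def g(R, grid=np.linspace(1e-3, 0.5, 2000)):
    # grid-search estimate of the worst-case expected cost inflation over p in (0, 1/2]
    ratio = (grid * (1.0 - err(R, grid)) + (1.0 - grid) * err(R, grid)) / grid
    return float(ratio.max())

for R in (5, 10, 25, 50, 100):
    gR = g(R)
    print(f"R = {R:3d}:  g(R) ~ {gR:.3f},  best-of-two bound ~ {6 * gR / 7 + 5 / 7:.3f}")
```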
The following graphs (Figure 1) plot the consensus clustering approximation results for the Pivot algorithm. The graph on the left shows the value of the function \(g(R)\) where \(R\) is the number of samples used from the set of input clusterings; the baseline "full" is the constant value 1. The graph on the right shows the approximation bound based on \(R\) samples, compared against the \(11/7\) baseline for sampling all inputs.
## 5 Experiments
All algorithms in this section were implemented in Java1 and tested on a Linux server running Rocky Linux 8.7 with a 2.9 GHz processor and 16.2 GB of RAM. Plots show the mean result of 10 runs for each algorithm. Error bars show one standard deviation of the mean.
Footnote 1: code available at github.com/nathan-corder/cc-local-search
We compare the results of Pivot, Pivot with InnerLocalSearch (ILS--a LocalSearch method that only runs _inside_ individual Pivot clusters), Pivot with LocalSearch (LS), and Vote algorithms on each data set. Each algorithm computes similarities as needed using the approach outlined in Section 3. We restrict LocalSearch and InnerLocalSearch to just one iteration through the node set, since multiple iterations would require computing previously seen edge probabilities over again.
### Mushrooms Data
The Mushrooms2 dataset contains 8214 rows with 22 categorical attributes each (the original data contains 23 columns; the first column, which contains a classification into "poisonous" or "edible", is removed). We interpret each attribute as an input clustering: for a given attribute, all items with the same attribute value are put into one cluster. The Mushrooms data set has often been used for comparison in correlation [12] and consensus [14] clustering experiments.
Figure 1: Consensus Clustering Theoretical Analysis
Figure 2: Mushroom Consensus Clustering
We first compare the difference in running times of running Pivot on the fly versus precomputing all edge weights. Goder and Filkov [14] used the precomputed version of Pivot for consensus clustering. Their experiments on the Mushrooms dataset yielded an average runtime of 1222 seconds, which they noted was "dominated by the preprocessing." Our own time for preprocessing of the Mushrooms graph clocked in at 57.59 seconds, with an average run of Pivot being 0.0082 seconds afterward (out of 10 runs). However, running Pivot-on-the-Fly we computed the cluster labels in 0.029 seconds and had an average Pivot runtime of 0.0129 seconds afterward. Though each run of Pivot was marginally slower, we significantly reduce overall running time by not precomputing the entire similarity graph. For larger examples, storing precomputed edges becomes entirely infeasible.
In Figure 2 we show the results of attribute sampling, where each algorithm computes edge probabilities as needed. The Pivot algorithm was run 50 times for each \(R\in\{2,4,\ldots,22\}\), where \(R\) determines the number of randomly chosen input clusterings to use in the algorithm (with final disagreements computed against the full set of input clusterings).
We first note the theoretical claims of Pivot sampling. For example, using only \(R=2\) inputs, our bound suggests that the Pivot algorithm should return a result less than 1.434 times the disagreement on the full set of inputs. Here we see that the mean Pivot clustering disagreement at \(R=2\) is only 1.131 times the disagreement for the full set--well below the theoretical bound.
For the LocalSearch disagreement improvements, we note that both LS and ILS give similar results at \(R=2\) and begin to diverge as \(R\) increases. The effectiveness of ILS diminishes as \(R\) increases, whereas the full LS maintains its improvement levels. At \(R=22\) we see the largest gap between the two methods: the LS disagreement is 92.5% of the Pivot level, whereas ILS is only 98.2% of Pivot.
However, the running time graph is able to balance out the picture. For this graph, the Pivot running time is nearly instantaneous even as \(R\) increases. The running time of the full LS increases the most dramatically, starting around 5 seconds for \(R=2\) and ending at nearly 50 seconds for a single pass over the data set when \(R=22\); note that the time used by LS for the full set of attributes is comparable to computing the full similarity set. The ILS method is much more efficient, clocking in at about 15 seconds for \(R=22\). LS takes over three times the amount of time as ILS, but for only a small improvement over the results of ILS.
We note that the Mushrooms data set is not ideal for ILS. The Pivot algorithm runs quickly since it forms a small number of clusters (about 10 for \(R=22\)), which leads to larger cluster sizes (average max cluster size of about 3500 for \(R=22\)). Thus ILS is not able to reduce the running time over LS as effectively, though we still see some improvements in disagreement cost and running time on this example.
The Vote algorithm performs quite well on the Mushrooms data set. Vote does not need to compute as many similarities as the full LocalSearch, so it runs much more efficiently (on the same level as ILS for this example). Vote also yields the greatest level of improvement over Pivot on all attribute samplings, edging out the improvements yielded by LS.
### Facebook Data
The Facebook Government3 graph is one of several different social networks between similarly themed Facebook pages. Nodes in the graph represent individual pages, and edges between nodes represent mutual likes between pages. The Government graph has 7,057 nodes and 89,455 edges. To perform consensus clustering, we first generate 100 Pivot clusterings and then run the consensus clustering algorithms on the results.
Figure 3: Facebook Government Consensus Clustering
Figure 3 shows the disagreement cost and running time results of consensus clustering from our four algorithms. The Pivot algorithm was run 10 times for each of \(R\in\{10,20,\ldots,100\}\).
Our theoretical approximation bound on \(R=10\) inputs suggests that the Pivot algorithm should return a result less than 1.139 times the disagreement on the full set. Here we see that the mean Pivot clustering disagreement at \(R=10\) is only 1.038 times the disagreement for the full set. For the LocalSearch disagreement improvements, we note that both LS and ILS give similar results at \(R=10\) and begin to diverge slightly as \(R\) increases. At \(R=100\) however, the LS disagreement is only \(99.45\%\) of Pivot with ILS being \(99.8\%\) of Pivot.
For running times, we see that the Pivot algorithm's time increases more dramatically as \(R\) increases--starting with iterations under 5 seconds when \(R=10\), up to iterations of over 2 minutes when \(R=100\). This happens since the max cluster size \(d\) of the Pivot algorithm stays small (under 50), and also means that the ILS algorithm runs extremely quickly with iterations lasting about a tenth of a second even for \(R=100\). The LS algorithm, on the other hand, begins to perform quite poorly even on a small number of input clusterings. By \(R=50\) the LS iterations clock in at about 4 minutes, and are over 10 minutes at \(R=100\). This graph is small enough to precompute the entire similarity graph, which only takes about 5 minutes to complete. Since LS runs much longer than ILS, and does not give much added value over ILS in terms of disagreements reduced, it makes sense to favor ILS over LS for consensus clustering improvement.
Once again, the Vote algorithm performs relatively well on this graph. Since Pivot forms a large number of clusters, it has to compute most of the similarities between pairs of nodes in the graph. Thus the Vote algorithm runs in time comparable to Pivot and ILS (taking only slightly longer). We also see that Vote performs nearly as well as LS, giving the most significant improvement in disagreement costs with the least additional time required beyond Pivot.
### Correlated Binary Data
The approximation bound analysis in Section 4.1 for the Pivot sampling algorithm relied on the independence of attributes in the data set. Here we show experimentally that sampling still provides good results even if attributes are correlated.
We generated binary data sets with \(|V|=10000\) and \(k=1000\) using bindata in R [20; 19]. All attributes are drawn using a fixed marginal probability and a fixed pairwise correlation, and 5 data sets are generated for each probability-correlation pair. For each data set we ran just the Pivot algorithm 10 times for each of \(R\in\{50,\ldots,1000\}\) (increasing by 50 each time), tracking the disagreement distance from the resulting clustering to the input clusterings.
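One simple way to draw such data (not necessarily the construction used by bindata; the following is an assumption made for illustration) is a common-component mixture: each attribute copies a shared Bernoulli draw with probability \(\sqrt{\rho}\), which yields marginal probability \(p\) and pairwise correlation \(\rho\) between attributes.

```python
import numpy as np

def correlated_binary(n_objects, n_attrs, p, rho, seed=0):
    """Binary data with marginal P(X=1)=p and pairwise correlation rho
    between attributes (common-component construction)."""
    rng = np.random.default_rng(seed)
    c = np.sqrt(rho)                                  # copy probability
    shared = rng.random(n_objects) < p                # one shared draw per object
    fresh = rng.random((n_objects, n_attrs)) < p      # independent draws
    copy = rng.random((n_objects, n_attrs)) < c
    return np.where(copy, shared[:, None], fresh).astype(int)
```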
For each data set, outlier disagreements that lie more than two times the median distance from the median are rescaled to that boundary value. We plot the averages of the disagreements across the 5 data sets for each probability-correlation pair. We do not report the running times since even the longest runs finish in about 5 seconds. The exact disagreements are displayed in Figure 4. For comparison, we fix the \(y\)-axis size of the graphs for each mean. The ratios to Pivot on the full set of attributes are given in Figure 5.
As expected, we see that the lines get flatter as the correlation increases. For example, the data sets with mean 0.3 start with a ratio of 1.059 at \(R=50\) for correlation 0.1, which then decreases to 1.032 (correlation 0.3), 1.029 (correlation 0.5), 1.019 (correlation 0.7), and 1.017 (correlation 0.9). The more correlated the attributes become, the more information we gain from smaller sample sets.
In general the multiplicative ratios reported in the experiments stay within the theoretical bounds. There are a few minor exceptions, due to the averaging and rescaling of results from different data sets. For example, the value of \(g(R)\) is 1.054 for \(R=50\) and 1.037 for \(R=100\). The result for 0.3 mean, 0.1 correlation reports a multiplicative ratio of 1.059 for \(R=50\) and 1.039 for \(R=100\) between the mean disagreement of the sample and the mean disagreement of the full set of clusterings. The 0.5 mean, 0.3 correlation and the 0.5 mean, 0.5 correlation graphs also have two bound violations each. The rest of the bounds on these data sets and the rest of the binary data sets fit within the predicted theoretical bounds.
## 6 Conclusion
In this paper we examined the consensus clustering problem from the standpoint of correlation clustering. We overcame some significant memory roadblocks to running the Pivot algorithm on larger data sets, and showed the viability of running other correlation clustering algorithms like LocalSearch, InnerLocalSearch, and Vote for consensus clustering too. We also examined the possibility of performing consensus clustering on limited samples of input clusterings, and showed that we can still obtain quality clustering results even while using relatively small sample sets.

Figure 4: Correlated Binary Data Consensus Clustering

Figure 5: Correlated Binary Data Ratios
|
2310.00706 | Evaluating Speech Synthesis by Training Recognizers on Synthetic Speech | Modern speech synthesis systems have improved significantly, with synthetic
speech being indistinguishable from real speech. However, efficient and
holistic evaluation of synthetic speech still remains a significant challenge.
Human evaluation using Mean Opinion Score (MOS) is ideal, but inefficient due
to high costs. Therefore, researchers have developed auxiliary automatic
metrics like Word Error Rate (WER) to measure intelligibility. Prior works
focus on evaluating synthetic speech based on pre-trained speech recognition
models, however, this can be limiting since this approach primarily measures
speech intelligibility. In this paper, we propose an evaluation technique
involving the training of an ASR model on synthetic speech and assessing its
performance on real speech. Our main assumption is that by training the ASR
model on the synthetic speech, the WER on real speech reflects the similarity
between distributions, a broader assessment of synthetic speech quality beyond
intelligibility. Our proposed metric demonstrates a strong correlation with
both MOS naturalness and MOS intelligibility when compared to SpeechLMScore and
MOSNet on three recent Text-to-Speech (TTS) systems: MQTTS, StyleTTS, and
YourTTS. | Dareen Alharthi, Roshan Sharma, Hira Dhamyal, Soumi Maiti, Bhiksha Raj, Rita Singh | 2023-10-01T15:52:48Z | http://arxiv.org/abs/2310.00706v1 | # Evaluating Speech Synthesis by Training Recognizers on Synthetic Speech
###### Abstract
Modern speech synthesis systems have improved significantly, with synthetic speech being indistinguishable from real speech. However, efficient and holistic evaluation of synthetic speech still remains a significant challenge. Human evaluation using Mean Opinion Score (MOS) is ideal, but inefficient due to high costs. Therefore, researchers have developed auxiliary automatic metrics like Word Error Rate (WER) to measure intelligibility. Prior works focus on evaluating synthetic speech based on pre-trained speech recognition models, however, this can be limiting since this approach primarily measures speech intelligibility. In this paper, we propose an evaluation technique involving the training of an ASR model on synthetic speech and assessing its performance on real speech. Our main assumption is that by training the ASR model on the synthetic speech, the WER on real speech reflects the similarity between distributions, a broader assessment of synthetic speech quality beyond intelligibility. Our proposed metric demonstrates a strong correlation with both MOS naturalness and MOS intelligibility when compared to SpeechLMScore and MOSNet on three recent Text-to-Speech (TTS) systems: MQTTS, StyleTTS, and YourTTS.
Dareen Alharthi\({}^{1}\), Roshan Sharma\({}^{1}\), Hira Dhamyal\({}^{1}\), Soumi Maiti\({}^{1}\), Bhiksha Raj\({}^{1,2}\), Rita Singh\({}^{1}\)\({}^{1}\)Carnegie Mellon University, \({}^{2}\)Mohammed bin Zayed University of AI
automatic speech quality assessment, speech synthesis, speech recognition, metric
## 1 Introduction
Speech synthesis systems, aka Text-to-Speech (TTS) systems, are steadily improving. TTS systems are generally judged using the following two criteria: intelligibility and naturalness of the synthesized speech to human listeners. These metrics are traditionally measured by calculating the Mean Opinion Score (MOS) (or intelligibility score) of a panel of listeners, who annotate the synthesized speech with their subjective evaluation. However, as is generally the norm for any annotation process involving human evaluators, computing MOS is time- and resource-expensive. As an alternative, there are other proposed efficient, algorithmically computable metrics [1, 2, 3, 4, 5] which measure _proxies_ of the intelligibility, quality and naturalness of the synthesized speech, for example using an ASR model trained on real speech to evaluate the Word Error Rate (WER) of the synthesized speech. However, we argue that these metrics fall short of capturing the real quality of the synthesized speech. In this paper, we propose a better approximation of these measures of synthetic speech, which we show to be highly correlated with MOS.
The top line of synthetic speech in intelligibility and naturalness is real speech, i.e. synthetic speech when closest to real speech would have the highest measure in these metrics. Therefore the question we pose in evaluating a TTS system is "How close is the quality and intelligibility of synthetic speech generated by the system to that of real speech and how can we evaluate this?". We hypothesize that quality and intelligibility differences between the synthetic and real speech are attributable to the distributional shift between the two, and any metric which attempts to quantify these differences must capture this shift. However explicit knowledge of the true distributions of the two is infeasible, and measurements must be made through mechanisms that invoke them implicitly.
Traditionally this is done by evaluating the synthetic speech on an ASR model trained on real speech. However, since speech synthesis is effectively a _maximum likelihood_ generating process that attempts to produce the most likely speech signal for any text, this can result in unrealistically high recognition accuracies biased in favor of the synthetic speech and, consequently, anomalous measurements of the speech quality. We argue that on the other hand, an ASR model trained on the synthetic speech and evaluated on real speech better captures the statistical difference between the two, and would be a better approximation of the closeness of the real and synthetic speech. Since the ASR models the distribution of the synthetic speech, its ability to recognize the real speech exhibits how closely the distributions of the synthetic training data matches with that of real testing data.
This paper makes the following contributions:
1. We propose a new evaluation method for TTS that captures distributional similarity between real and synthetic speech as a proxy for perceptual speech quality tests.
2. We compare the proposed metric to multiple automatic metrics and Mean Opinion Score (MOS), and show that our metric correlates well with human-provided MOS.
## 2 Background: Speech Synthesis and Evaluation
Recent advancements in speech synthesis systems have reached a point where they are often indistinguishable from human speech [6]. However, evaluating these systems has become increasingly complex. The most dependable method for evaluating speech synthesis systems from various perspectives is the Mean Opinion Score (MOS), in which human raters listen to synthesized speech and assess its naturalness, quality, and intelligibility using a 5-point Likert scale. However, this process is time-consuming, expensive, and subject to subjective judgments. To address these challenges, researchers have developed automatic metrics aimed at reducing evaluation costs. However, each metric is typically limited to evaluating a specific aspect of speech synthesis system performance, necessitating the use of multiple metrics to comprehensively assess these systems. Recent studies have tackled this challenge through the training of regression models on pairs of speech samples and MOS scores [7] or by utilizing semi-supervised learning methods to acquire MOS scores. An important constraint associated with this approach is the need for labeled datasets in the same domain, making it less generalizable [8] to any text-to-speech (TTS) system.
Unsupervised metrics have also been employed to assess various aspects of speech synthesis, such as the Equal Error Rate for measuring speaker similarity in synthesized speech and metrics like Frechet DeepSpeech Distance [5] (FDSD) and Frechet Speech Distance (FSD) [4] to measure the quality and diversity of synthetic speech. However, it's important to note that each of these metrics focuses on a single factor and cannot serve as standalone measures. Recently, the utilization of speech-language models to assess speech quality has revealed a correlation with MOS scores. The SpeechLM-Score [2] calculates the perplexity of synthetic speech by employing pretrained autoregressive unit speech language models (uLM) [9]. Another avenue of exploration involves Automatic Speech Recognition (ASR)-based metrics. One approach involves measuring the distance between synthetic and real speech [10] by computing various distance metrics to assess speaker, prosody, and environmental similarity within real distributions. A commonly used ASR evaluation method is the computation of Word Error Rate (WER) [1] for synthetic speech using pre-trained ASR models to measure intelligibility. Our proposed ASR evaluation approach seeks to evaluate both the naturalness and intelligibility of synthetic speech by quantifying the distribution shift between synthetic and real distributions.
## 3 Proposed Method
### Divergence metric for Distributional Shift
Given a text \(T\), let \(X_{r}\) be a random variable that represents real speech signals produced by humans to convey text \(T\). Let \(X_{s}\) be a random variable that represents synthesized speech from a TTS model for text \(T\). \(P(X_{r},T)\) and \(P(X_{s},T)\) are the joint distributions of the speech and text.
To evaluate the TTS Model, we want to compare these joint distributions - if the distributions are similar, synthetic speech has relatively high quality. Therefore, we want to compute a divergence \(\text{div}(P(X_{r},T),P(X_{s},T))\) between the probability distributions, that measures the distance between the two distributions, i.e. distributional shift.
Distributional shifts are typically computed using divergence metrics such as the Kullback-Leibler (KL) divergence [11], the Jensen-Shannon divergence [12], the Earth-mover's distance [13], etc. However, these metrics require explicit knowledge of the distributions, or at least the ability to compute the probability of a given instance if sampling-based approaches are to be used, which is infeasible, since it requires explicit modelling of \(P(X_{s},T)\) (or \(P(X_{r},T)\)), whereas neural models only approximate the conditional probability of \(T\). Furthermore, even with explicit sampling, given the high dimensionality and time-series nature of the data, it would require sampling an infeasibly large number of pairs \((X,t)\), where \(X\) is either \(X_{r}\) or \(X_{s}\), for reliable estimates.
To address the limitations of existing distributional similarity metrics, we propose an alternate metric that uses classification performances as a proxy to get distributional shifts. This classification-based pseudo-divergence uses probability distributions to get accuracy metrics. Given the two data distributions, input and labels, below we present the general case of the divergence metric.
Let \(P_{1}\) and \(P_{2}\) be two data distributions of random variables \(X\) and \(y\), where \(X\) is the input signal and \(y\) is the label. The predicted label for the given input \(x\) can be written as:
\[\hat{y_{1}}(x)=\text{argmax}_{y}P_{1}(y|x)\]
When the classification boundaries are learned from \(P_{1}\) and used to classify data coming from the same distribution, the accuracy of this classification can be written as:
\[\mathbb{E}_{P_{1}(x)}[P_{1}(\hat{y}_{1}(x)|x)]\]
Figure 1: Left: Model trained on real data and tested on synthetic data.

When the classification boundaries learned on the distribution \(P_{1}\) are used to classify data coming from the distribution \(P_{2}\), the accuracy can be written as:
\[\mathbb{E}_{P_{2}(x)}[P_{2}(\hat{y}_{1}(x)|x)]\]
The difference between the two classification accuracies captures the distributional shift between the \(P_{1}\) and \(P_{2}\). This can be written as:
\[d(P_{1},P_{2})=|\mathbb{E}_{P_{1}(x)}[P_{1}(\hat{y}_{1}(x)|x)]-\mathbb{E}_{P_{ 2}(x)}[P_{2}(\hat{y}_{1}(x)|x)]|\]
The absolute value is needed since this difference could be negative. Note that \(d(P_{1},P_{2})\) is a pseudo-divergence that goes to zero when \(P_{1}=P_{2}\) and is non-negative. It is also asymmetric, so \(d(P_{1},P_{2})\neq d(P_{2},P_{1})\).
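In practice the two expectations are replaced by empirical accuracies. The following sketch (a toy illustration with a generic classifier; in our setting the classifier is an ASR model and accuracy is replaced by \(1-\text{WER}\)) makes the computation concrete.

```python
from sklearn.linear_model import LogisticRegression

def pseudo_divergence(X1, y1, X2, y2, model=None):
    """Estimate d(P1, P2): fit on samples from P1, then compare accuracy
    on held-out P1 data with accuracy on P2 data."""
    model = model or LogisticRegression(max_iter=1000)
    n = len(X1) // 2
    model.fit(X1[:n], y1[:n])                 # boundaries learned from P1
    acc_p1 = model.score(X1[n:], y1[n:])      # proxy for E_{P1}[P1(yhat_1(x)|x)]
    acc_p2 = model.score(X2, y2)              # proxy for E_{P2}[P2(yhat_1(x)|x)]
    return abs(acc_p1 - acc_p2)
```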
We can use the above formulation to calculate the pseudo-divergence of the real and synthetic speech. The distributions \(P_{1}\) and \(P_{2}\) can be estimated using ASR models trained on data samples taken from the real and synthetic speech, respectively. Since the measure is asymmetric, it matters which divergence we calculate, \(d(P_{1},P_{2})\) or \(d(P_{2},P_{1})\): either the ASR model is trained on real speech and tested on synthetic speech, or vice versa. Empirically we show that the model trained on synthetic data and tested on real data is a more accurate metric for the distributional shift between the two distributions than the reverse. We explain the intuition behind this in the following section.
### Real vs Synthetic data distribution
Figure 1 shows the joint distributions of \(X\) and \(y\), where the red curve shows class \(y=1\) and the blue curve shows class \(y=0\). Note that the real data has more variance than the synthetic data (as is the case for real and synthetic speech). When the classification boundary for the two classes is learned on the real data, there is some natural Bayes error associated with the class overlap present in the real data. When this classification boundary is used to classify the synthetic data, the error is zero, since the two class distributions are far apart and do not overlap. In fact, there are multiple decision boundaries associated with different errors on the real data that would all ensure zero error on the synthetic data. This zero error therefore says nothing about how different the real and synthetic joint distributions are. The synthetic data distribution could be far off the chart, be highly unlikely compared to the real data, and still yield zero error.
On the other hand, let's consider the case where the lower variance data, i.e. the synthetic data, is used for learning the classification boundary. The right part of Figure 1 shows this scenario. The dotted line shows the range where the decision boundary can lie such that the error rate on the synthetic data would be zero. However, this range of boundary would always be associated with greater than zero error on the real data. The higher the difference in the joint distributions in the real and synthetic distributions, the greater the range of errors in the real data.
Therefore, the second scenario is better representative of the distributional differences in the real and synthetic data distributions. We believe that this would hold for the real and synthetic speech distributions. An ASR model trained on synthetic speech and evaluated on real speech would be a better metric of the quality of the synthetic speech than doing it the other way around.
## 4 Experimental Setup
### Text-to-Speech-Synthesis
We evaluate the proposed method using three state-of-the-art open-source TTS systems: StyleTTS [14], MQTTS [15], and YourTTS [16]. These models utilize different techniques for synthesis, but all use a reference encoder to extract both speaker and style information from the input speech. For our assessments, we made use of the publicly released pre-trained models. The StyleTTS, MQTTS, and YourTTS models we used were trained on LibriTTS [17], Gigaspeech [18] without audiobooks, and VCTK [19], respectively. A Deep-Phonemizer [20] was used to extract phonemes from the text for synthesis.
### Automatic Speech Recognition
In order to make evaluations robust and meaningful, we need to select strong End-to-End models. In this paper, we therefore elect to fine-tune Whisper rather than train from scratch using 10h of real/synthetic speech. We use the Whisper-medium multilingual model [21] as the initialization. We then fine-tune it within ESPNet [22, 23] using CTC loss [24]. ASR Inference was performed using beam search with a beam size of 5.
### Datasets
To generate synthetic speech for our evaluation, we utilized the LibriTTS [17] dataset, which is based on Librispeech [25]. From this dataset, we sample one subset of 10 hours containing speech data from all available speakers. All three TTS models used a speaker encoder to clone the identity of a given speech reference. It's worth noting that we excluded speech samples that were less than 4 seconds in duration and those exceeding 30 seconds in length. This exclusion was necessary as MQTTS and StyleTTS do not support short samples as references.
### Evaluation Metrics
**MOS-Naturalness (MOS-N)** : We conducted a crowdsourced Mean Opinion Score (MOS) evaluation to assess the naturalness of synthetic speech generated by each system, in comparison to real speech. We obtained 50 sentences from
the LibriTTS test-clean dataset and another 50 from the LibriTTS test-other dataset, resulting in a total of 100 samples each for real speech, MQTTS, YourTTS and StyleTTS. Each sample was evaluated by 10 raters, who were instructed to rate the naturalness of the speech on a scale of 1 to 5, with 1 indicating poor and 5 indicating excellent quality.
**MOS-Intelligibility (MOS-I)**: We assessed the intelligibility of spoken words by using nonsense sentences [26], effectively eliminating sentence structure and grammar from the evaluation. This absence of structure allowed listeners to focus only on the quality of the synthesized speech and not be distracted by the grammar. Participants were presented with a choice between the original sentence and a transcription generated by the Whisper-medium model. We specifically selected 60 sentences with relatively high Word Error Rate (WER) from a pool of 200 random sentences generated by ChatGPT [27]. Among these, 30 sentences were short (less than 10 words), while the other 30 were long. This allowed us to evaluate the impact of sentence length variation on intelligibility. We generated synthetic speech with the three TTS systems for the 60 sentences, using the test-clean set as the reference for each model's speaker and style encoder. We used webMUSHRA [28] to create a test form, along with Prolific for crowd-sourcing.
**Intelligibility of Synthetic Speech using WER from Pre-trained ASR**: We computed the WER for synthetic speech generated by three different systems using the Whisper medium multilingual. This model is pre-trained on real speech and evaluated on synthetic speech. This setting of training / testing demonstrates the traditional way that speech synthesis evaluation is performed. This evaluation was performed on both the test-clean and test-other datasets from LibriTTS.
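For completeness, WER itself is a word-level Levenshtein distance normalized by the reference length; a minimal reference implementation is sketched below (standard dynamic programming, independent of any particular ASR toolkit).

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```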
## 5 Experimental Results
Table 1 reports the results of our experiments on LibriTTS with the proposed evaluation method. We consider multiple metrics and report the raw score of each metric in the rows, with the relative ranking in brackets next to the raw score. The first row, named WER, shows the case where the model is trained on real data and evaluated on synthetic data. The last row shows our setting, where the model is trained on synthetic data and evaluated on real data. Based on the absolute raw numbers of each metric, we rank the TTS systems from 1 (best) to 3 (worst). For example, in the row MOS-N, StyleTTS has the highest score and therefore has rank 1, followed by MQTTS and then YourTTS. In order to assess whether our metric is a good representation of the quality of synthetic speech, we compare its relative ranking with those of the other metrics. A matched relative ranking between two metrics means that they evaluate the quality of speech similarly and agree with each other.
First, we see that the Mean Opinion Score tests on naturalness (MOS-N) and intelligibility (MOS-I) agree on relative rankings between the synthetic speech models. Further, we observe that the traditionally used WER metric shown in the first row does not actually correlate completely with the MOS results. We observe similar issues with other popular metrics including SpeechLMScore and MOSNet.
From the last row, we observe that our metric's evaluation of synthetic speech follows the same trend as the reported MOS scores, matching both MOS-N and MOS-I. The contrast between the inconsistent result in the first row and the consistent result from our metric demonstrates the importance of the proposed evaluation method.
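The ranking comparison can also be expressed as a rank correlation over the three systems; the snippet below (an illustrative check using values reported in Table 1, with our metric negated since lower is better) returns a correlation of 1.0 when the rankings match exactly.

```python
from scipy.stats import spearmanr

mos_n = {"StyleTTS": 3.68, "MQTTS": 3.66, "YourTTS": 3.59}   # higher is better
ours  = {"StyleTTS": 3.3,  "MQTTS": 3.9,  "YourTTS": 4.5}    # lower is better
systems = list(mos_n)

rho, _ = spearmanr([mos_n[s] for s in systems], [-ours[s] for s in systems])
print(f"Spearman rank correlation with MOS-N: {rho:.2f}")
```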
## 6 Conclusion
In this paper, we address the challenge of automatic evaluation for synthetic speech by modeling the similarity/dissimilarity between the distributions of synthetic and real speech. Existing divergence metrics require a large number of samples to capture the joint distribution and hence it is infeasible to employ them to calculate the distributional shift. In this paper, we introduce a new divergence measure that can be computed without knowledge of the joint distribution. The metric uses an ASR model as an approximation for the data distributions and the WER as a proxy for the quality of the synthesized speech. The metric is asymmetric, and it matters what the speech recognition models are trained and tested on. We show that in practice it is more accurate to train the model on synthetic speech and assess the resulting model's performance on real speech than doing it vice versa. Experiments using 3 public open-source speech synthesis systems show that our metric correlates positively with subjective human Mean Opinion Scores for naturalness and intelligibility, while the previously used approach of training the ASR model on real speech and evaluating it on synthetic speech does not. Further, we show that only a small amount of synthetic speech is needed to train the ASR model to make reliable judgments on the quality of the synthesized speech.
\begin{table}
\begin{tabular}{l|c c c c}
Metric & Ground Truth & StyleTTS & MQTTS & YourTTS \\ \hline
WER \(\downarrow\) [1] & 20.57 & **18.7** (1) & 29.35 (3) & 22.1 (2) \\
SpeechLMScore \(\uparrow\) [2] & 3.98 & 3.62 (3) & **4.13** (1) & 3.96 (2) \\
MOSNet \(\uparrow\) [7] & 4.30 & **4.49** (1) & 3.57 (3) & 4.01 (2) \\
MOS-N \(\uparrow\) & **3.69** & 3.68 (1) & 3.66 (2) & 3.59 (3) \\
MOS-I \(\uparrow\) & - & **0.698** (1) & 0.618 (2) & 0.566 (3) \\ \hline
Ours 10h \(\downarrow\) & **3.1** & 3.3 (1) & 3.9 (2) & 4.5 (3) \\
\end{tabular}
\end{table}
Table 1: This table shows the scores for real and synthetic speech on multiple metrics for LibriTTS test-clean. MOSNet and SpeechLMScore scores are computed on the same 50 samples used for MOS-N. Relative rankings among synthetic speech systems are shown in red inside the brackets. |
2310.07069 | Analyzing Distribution System Load Flow Through Linearization of
Non-Holomorphic Functions | This letter presents a novel non-iterative power flow solution for radial
distribution systems. In the pursuit of a linear power flow solution that
seamlessly integrates into other power system operations, an approximate
solution via complex linearization of non-holomorphic functions, making no
assumptions about the network's parameters was developed. This approach can be
readily adapted to different load models, and its accuracy is comparable to
other established conventional radial load flow analysis tools. | Ibrahim Habiballah, Wasiu Sulaimon, Fahad Al-Ismail | 2023-10-10T23:32:36Z | http://arxiv.org/abs/2310.07069v1 | # Analyzing Distribution System Load Flow Through Linearization of Non-Holomorphic Functions
###### Abstract
This letter presents a novel non-iterative power flow solution for radial distribution systems. In the pursuit of a linear power flow solution that seamlessly integrates into other power system operations, an approximate solution via complex linearization of non-holomorphic functions, making no assumptions about the network's parameters was developed. This approach can be readily adapted to different load models, and its accuracy is comparable to other established conventional radial load flow analysis tools.
Load flow analysis, unbalanced distribution system, complex linearization, holomorphic function.
## Nomenclature
\(n,m\) Number of nodes and branches.
\(V\in\mathbb{R}^{n}\) Node voltages.
\(V_{S}\) Specified voltage at the slack node.
\(V_{M}\in\mathbb{R}^{n-1}\) Node voltages except the slack node.
\(I_{M}\in\mathbb{R}^{n-1}\) Bus injections except the slack node.
\(A\in\mathbb{R}^{m\times n}\) Incidence matrix.
\(1_{M}\in\mathbb{R}^{n-1}\) Unity column vector.
\(\left(\cdot\right)^{*}\) Complex conjugate.
\(\odot\) Hadamard product.
## I Introduction
The objective of the load flow program is to compute the steady state of nodal voltages and angles, as well as the active and reactive power flow across all lines, based on the provided power injections at specific buses, generator outputs, and network conditions, as discussed in [1]. To accomplish this, numerical techniques are frequently utilized to identify suitable operating points essential for both offline and real-time operations. Nevertheless, it's essential to note that, in general, addressing the problem in the context of both transmission and distribution system analysis is regarded as NP-hard, as highlighted in [2].
Specifically within the context of distribution networks, dedicated iterative load flow techniques have been successfully developed for radial network configurations. One foundational approach for circuit analysis, which relies on the principles of Kirchhoff's Current Law (KCL) and Kirchhoff's Voltage Law (KVL), was introduced in [3]. Similarly, the recursive method based on the bi-quadratic equation describing the relationship between bus voltage magnitude and line flows was presented in [4]. This latter approach has served as a benchmark for several other iterative methods. An alternative, faster, and more efficient direct approach, rooted in network topology, eliminates the necessity for both an admittance matrix and a Jacobian matrix, as outlined in [5]. This approach entirely bypasses the requirement for backward-forward sweep (BFS) methods. A comprehensive overview of all these iterative techniques can be found in [6].
The key contribution of this letter lies in the development of an innovative, rapid, and linear load flow technique achieved through linearization of non-holomorphic functions in the complex plane. Moreover, this approach is adaptable to the ZIP load model and can be seamlessly integrated into the optimization of power system operations.
## II Methodology
### _Problem Formulation_
Consider a generic radial feeder in Fig.1 represented as directed graph \(\mathcal{G}\left(N,L\right)\) where \(N\) and \(L\) are the number of nodes and branches respectively with the property that
\[N=L+1. \tag{1}\]
The goal of the load flow problem is to explicitly express the nodal voltages as a function of the bus injections. In this case, we rewrite the nodal voltages, nodal current injections, and the incidence matrix as \(V=\begin{bmatrix}V_{S}&V_{M}\end{bmatrix}^{T}\), \(I=\begin{bmatrix}I_{S}&I_{M}\end{bmatrix}^{T}\), and \(A=\begin{bmatrix}A_{S}&A_{M}\end{bmatrix}\) respectively, where \(A_{S}\in\mathbb{R}^{m}\) and \(A_{M}\in\mathbb{R}^{m\times n-1}\).
\[e=AV \tag{2}\]
\[e=ZI_{F} \tag{3}\]
\[I=A^{T}I_{F}. \tag{4}\]
where \(I_{F}\) and \(e\) are the vectors of the current flows and voltage drops in the lines, respectively. Exploiting (2)-(4) by eliminating \(I_{F}\) and \(e\), and assuming that the slack node voltage is known and carries no current injection, we have
\[A_{M}^{-1}A_{S}V_{S}+V_{M}=A_{M}^{-1}Z\left(A_{M}^{T}\right)^{-1}I_{M} \tag{5}\]
Fig. 1: A generic distribution system radial feeder network.
where \(Z\in\mathbb{C}^{m\times m}\) is a diagonal matrix with entries as the impedance of each line. We can further simplify it by having the following result
\[V_{M}-1_{M}\cdot V_{S}=D\cdot I_{M} \tag{6}\]
where the matrix \(D\) is \(A_{M}^{-1}Z\left(A_{M}^{T}\right)^{-1}\), and \(A_{M}^{-1}A_{S}V_{S}\) simplifies to \(-1_{M}\cdot V_{S}\). Consequently, the \(Y_{bus}\) matrix of the whole network in terms of the incidence matrix is \(A^{T}CA\), which can be fully expressed as
\[Y_{bus}=\begin{pmatrix}A_{S}^{T}Z^{-1}A_{S}&A_{S}^{T}Z^{-1}A_{M}\\ A_{M}^{T}Z^{-1}A_{S}&A_{M}^{T}Z^{-1}A_{M}\end{pmatrix} \tag{7}\]
where \(C=Z^{-1}\) is a diagonal matrix of all the line admittances.
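As an illustration of how \(D\) and \(Y_{bus}\) follow from the incidence matrix, the following sketch builds them for a small assumed 4-node radial feeder (the topology and impedance values are arbitrary examples, not data from the test systems below).

```python
import numpy as np

# Branch-by-node incidence matrix for a toy feeder: slack node 0,
# branches 0-1, 1-2, 1-3 (rows are branches, columns are nodes).
A = np.array([[-1, 1, 0, 0],
              [0, -1, 1, 0],
              [0, -1, 0, 1]], dtype=complex)
z = np.array([0.01 + 0.02j, 0.015 + 0.03j, 0.02 + 0.04j])   # assumed line impedances (p.u.)
Z = np.diag(z)

A_S, A_M = A[:, :1], A[:, 1:]                      # slack column vs. remaining nodes
D = np.linalg.inv(A_M) @ Z @ np.linalg.inv(A_M.T)  # matrix D used in (6)
Y_bus = A.T @ np.linalg.inv(Z) @ A                 # Y_bus = A^T C A, cf. (7)
```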
### _ZIP Load Formulation_
Since the current injections are not usually specified, (6) can be rewritten in terms of complex power injections. For ZIP load models, the injection at bus \(k\) is
\[I_{k}=h^{2}\cdot S_{Zk}^{*}\cdot V_{k}+h\cdot S_{Ik}^{*}+\frac{S_{Pk}^{*}}{V_ {k}^{*}} \tag{8}\]
where \(h=1/V_{base}\), which is unity for per-unit analysis. Moreover, (8) indicates that the only source of non-linearity is introduced by the constant power load injections. Therefore, a very accurate linearization approach is needed for a non-iterative solution.
#### Ii-B1 Constant Power Load
For the case of constant power load the expression in (6) can be reformulated for each node \(k\) as
\[V_{k}\cdot V_{k}^{*}-V_{S}\cdot V_{k}^{*}=\sum_{j=2}^{n}D_{kj}\cdot S_{Pj}^{*} \tag{9}\]
The power flow problem in (9) has the non-linear term \(f(V)=\left|V_{k}\right|^{2}\) with respect to the nodal voltages. Moreover, these terms are non-holomorphic and cannot be linearized using conventional linearization because they do not fulfil the Cauchy-Riemann condition for differentiability, i.e., \(f_{\bar{V}}^{\prime}\neq 0\). Using Wirtinger derivatives as discussed in [7], the linearization of a non-holomorphic function \(f\left(V\right)\) around \(V_{0}\) is given below as
\[f\left(V\right)\approx f\left(V_{0}\right)+\left(V-V_{0}\right)f_{V}^{\prime}\left(V_{0}\right)+\left(\bar{V}-\bar{V}_{0}\right)f_{\bar{V}}^{\prime}\left(V_{0}\right) \tag{10}\]
Then from (9), the non-linear terms can be linearized around a voltage of \(1+0j\) according to (10) as
\[V_{k}\cdot V_{k}^{*}\approx V_{k}+V_{k}^{*}-1,\quad\forall k\in\mathcal{N} \tag{11}\]
where \(\mathcal{N}\) is the set of all nodes except the slack bus.
From (9) and (11) the final form for the linearized load flow can be expressed as
\[V_{M}-\alpha V_{M}^{*}=D\cdot S_{PM}^{*}+1_{M}. \tag{12}\]
where \(\alpha\) is \(V_{S}-1\). Since \(V_{S}\) is typically close to 1.0 p.u., the linearized form of (9) can be expressed as
\[V_{M}=D\cdot S_{PM}^{*}+1_{M}. \tag{13}\]
#### Ii-B2 Constant Impedance Load
In the case of constant impedance load, from (8), the expression for the node voltage at each bus and the compact form for all buses are shown below
\[V_{k}-V_{S}=h^{2}\cdot\sum_{j=2}^{n}D_{kj}\cdot S_{Zj}^{*}\cdot V_{k} \tag{14}\]
\[\left(1_{M}-h^{2}\cdot D\cdot S_{ZM}^{*}\right)\odot V_{M}=1_{M}\cdot V_{S} \tag{15}\]
#### Ii-B3 Constant Current Load
Similarly, for the constant current load injections, the resulting formulation is shown below
\[V_{k}-V_{S}=h\cdot\sum_{j=2}^{n}D_{kj}\cdot S_{Ij}^{*} \tag{16}\]
\[V_{M}=h\cdot D\cdot S_{IM}^{*}+1_{M}\cdot V_{S} \tag{17}\]
### _ZIP Linearized Formulation_
Since the load flow formulation is rendered as a linear system, it is possible to superimpose the three formulations to form a generalized load flow equation. The compact form can be expressed below as
\[V_{M}=A^{-1}B. \tag{18}\]
where
\[A =diag\left(1_{M}-h^{2}\cdot D\cdot S_{ZM}^{*}\right) \tag{19}\] \[B =D\cdot S_{PM}^{*}+h\cdot D\cdot S_{IM}^{*}+1_{M}\cdot V_{S} \tag{20}\]
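The resulting solution is a single linear solve. A minimal sketch is given below (per-unit quantities, with an assumed matrix \(D\) and assumed injections; note that \(A\) in (18)-(20) denotes the diagonal matrix of (19), not the incidence matrix).

```python
import numpy as np

def linearized_zip_flow(D, S_P, S_I, S_Z, V_S=1.0 + 0j, h=1.0):
    """Non-iterative node voltages from (18)-(20)."""
    ones = np.ones(D.shape[0], dtype=complex)
    A_diag = np.diag(ones - h**2 * (D @ np.conj(S_Z)))            # (19)
    B = D @ np.conj(S_P) + h * (D @ np.conj(S_I)) + ones * V_S    # (20)
    return np.linalg.solve(A_diag, B)                             # (18)

# Example with an assumed 2-node (plus slack) feeder and constant-power loads:
D = np.array([[0.02 + 0.04j, 0.02 + 0.04j],
              [0.02 + 0.04j, 0.05 + 0.09j]])
S_P = np.array([-0.03 - 0.01j, -0.02 - 0.01j])    # loads entered as negative injections
V_M = linearized_zip_flow(D, S_P, np.zeros(2), np.zeros(2))
```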
## III Results
The development above is easily extensible to a three-phase unbalanced system. A simple modification is to convert phase voltages to line voltages in the case of delta-connected loads, similar to the transformation in [8]. The evaluation metric adopted is the percentage line voltage unbalance rate as stated in [9], which can be expressed for a three-phase node as
\[\%LUVR=\frac{\left|V_{max}-V_{avg}\right|}{V_{avg}} \tag{21}\]
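A direct computation of this metric is shown below (the factor of 100 converts the ratio to a percentage; phase voltage magnitudes are assumed as inputs).

```python
def luvr_percent(v_a, v_b, v_c):
    """Percentage voltage unbalance rate for a three-phase node, following (21)."""
    mags = [abs(v_a), abs(v_b), abs(v_c)]
    v_avg = sum(mags) / 3.0
    return 100.0 * abs(max(mags) - v_avg) / v_avg
```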
### _Case 1_
Initially, the accuracy of the proposed method is demonstrated through a comparison with established linear distribution system power flow results documented in [4, 10]. The network under examination is a balanced 30-bus radial feeder, with the source voltage set at \(1.05\) p.u. To illustrate how the load flow results are influenced by the chosen linearization point in the model, the proposed method is formulated at two different linearization points: \(1.05\) p.u. and \(1.0\) p.u. Table I shows a decrease in accuracy in some basic metrics when the linearization point is established at \(1.0\) p.u., which diverges significantly from the source voltage of \(1.05\) p.u. The results provide additional insight into the sensitivity of the proposed technique relative to other methods, underscoring the significance of selecting a linearization point in proximity to the specified source voltage.
### _Case 2_
In this scenario, the proposed method is demonstrated within the context of the ZIP model for both balanced and unbalanced cases. The data used for the load flow analysis in the balanced case is provided in [8], and all algorithms are implemented in a Python environment. The evaluation metric is \(\epsilon_{k}\), the absolute difference between the BFS algorithm and the linearized load flow at node \(k\). Figure 2 shows that the average error is very small, which indicates that the proposed algorithm closely matches the iterative approach. Moreover, Table II further shows a comparison with the iterative approach and a linear method proposed in [8] to demonstrate the accuracy of the proposed method.
For the unbalanced scenario, IEEE 37-bus test feeder data from [11] were utilized. The system is very unbalanced and has all of its loads delta-connected. Figure 3 shows the percentage LUVR for both BFS and the proposed method. The number of nodes with LUVR exceeding \(1\%\) is the same for both algorithms. The \(1\%\) voltage unbalance benchmark is used for derating motor loads due to system unbalance.
## IV Conclusion
An innovative, simple, and effective linear power flow model for radial distribution systems has been presented. The non-linearity associated with constant power loads was reformulated as a function of complex non-holomorphic terms which is then linearized with specialized complex derivatives. The accuracy of this method aligns satisfactorily with iterative techniques for both balanced and unbalanced distribution systems.
|
2301.09705 | An Optimal Control Strategy for Execution of Large Stock Orders Using
LSTMs | In this paper, we simulate the execution of a large stock order with real
data and general power law in the Almgren and Chriss model. The example that we
consider is the liquidation of a large position executed over the course of a
single trading day in a limit order book. Transaction costs are incurred
because large orders walk the order book, that is, they consume order book
liquidity beyond the best bid/ask. We model the order book with a power law
that is proportional to trading volume, and thus transaction costs are
inversely proportional to a power of trading volume. We obtain a policy
approximation by training a long short term memory (LSTM) neural network to
minimize transaction costs accumulated when execution is carried out as a
sequence of smaller suborders. Using historical S&P100 price and volume data,
we evaluate our LSTM strategy relative to strategies based on time-weighted
average price (TWAP) and volume-weighted average price (VWAP). For execution of
a single stock, the input to the LSTM is the cross section of data on all 100
stocks, including prices, volumes, TWAPs and VWAPs. By using this data cross
section, the LSTM should be able to exploit inter-stock co-dependence in volume
and price movements, thereby reducing transaction costs for the day. Our tests
on S&P100 data demonstrate that in fact this is so, as our LSTM strategy
consistently outperforms TWAP and VWAP-based strategies. | A. Papanicolaou, H. Fu, P. Krishnamurthy, B. Healy, F. Khorrami | 2023-01-23T20:24:20Z | http://arxiv.org/abs/2301.09705v4 | # An Optimal Control Strategy for Execution of Large Stock Orders Using LSTMs+
###### Abstract
In this paper, we simulate the execution of a large stock order with real data and general power law in the Almgren and Chriss model. The example that we consider is the liquidation of a large position executed over the course of a single trading day in a limit order book. Transaction costs are incurred because large orders walk the order book, that is, they consume order-book liquidity beyond the best bid/ask. We model these transaction costs with a power law that is inversely proportional to trading volume. We obtain a policy approximation by training a long short term memory (LSTM) neural network to minimize transaction costs accumulated when execution is carried out as a sequence of smaller sub orders. Using historical S&P100 price and volume data, we evaluate our LSTM strategy relative to strategies based on time-weighted average price (TWAP) and volume-weighted average price (VWAP). For execution of a single stock, the input to the LSTM includes the entire cross section of data on all 100 stocks, including prices, volume, TWAPs and VWAPs. By using the entire data cross section, the LSTM should be able to exploit any inter-stock co-dependence in volume and price movements, thereby reducing overall transaction costs. Our tests on the S&P100 data demonstrate that in fact this is so, as our LSTM strategy consistently outperforms TWAP and VWAP-based strategies.
**Keywords:** Price Impact, Order Books, Optimal Execution, LSTM Networks.
**JEL Codes:** C45, G10
Introduction
Institutional investors must consider transaction costs when trading large amounts of stock. For example, each month a multi-billion dollar mutual fund may execute several large stock orders when they rebalance their stock holdings. In the U.S. stock market, a large order would be to sell 1,000,000 shares of a stock with a typical daily trading volume of 25 million. A naive strategy is to place a single, very large market sell order on the exchange. This single order will consume most of the liquidity available in the limit order book and will result in an average price per share that is considerably lower than the best bid - if it even gets filled. A better strategy is to divide the trade into smaller sub-orders, which then get executed over the course of a fixed time period. In this paper, we cast this sub-dividing of large trades as an optimal control problem. We train a long short term memory (LSTM)-based neural network [17] to return an optimal policy for execution.
Liquidity is made available by market makers who submit limit orders at the different price ticks in the order book. Liquidity is consumed when a trader submits a market order to buy or sell. A market order that is not too big will get filled by limit orders at the best available prices. A very large market buy (sell) order will _walk the order book_, that is, it will consume all liquidity at multiple ticks. Walking the order book results in an average price per share that is equal to the initial best offer (bid) plus (minus) a transaction cost. Statistical studies of order book data have shown that the depth to which a large order walks the order book is approximately a concave power law of the number of shares [5, 29]. Simple calculation will show that sub-dividing a large order into a sequence of sub-orders will reduce these transaction costs, but to optimally sub-divide is more complicated because there are several (stochastic) variables to consider when designing a policy.
It is common practice to evaluate a policy in terms of the average price over the entire trade. Average prices to consider are the time-weighted average price (TWAP) and the volume-weighted average price (VWAP). In general, an optimal sub-order policy will minimize the expected value of transaction costs. Strategies that aim to achieve TWAP or VWAP can be optimal for executing large orders [19].
Our approach to the design of an optimal execution policy is to consider the problem amidst the uncertainty of real market data. Machine learning and deep neural networks are very good for learning policies directly from data without assistance of models. For large-cap U.S. stocks (i.e., the S&P100), there is plenty of data available upon which to train a network and conduct backtests. In this paper we perform these backtests, the conclusions from which indicate that neural networks are indeed effective for improvement of large-order execution strategies.
LSTM networks are a good candidate for constructing a policy function because they (a) do not require a model for the distribution of the market; we can learn directly from the data, (b) can handle Markov or non-Markov states, (c) are well-suited to learn in the episodic environment of single-day execution, and (d) can provide a policy with the required temporal dynamic.
Of particular relevance in this paper is the possible presence of inter-stock co-dependence in price and volume movements. The execution problem is posed for a single stock, but the input to the network includes all data from the entire market, which allows the LSTM network to learn inter-stock dependencies that may provide better prediction of prices and volumes, thereby improving the execution prices achieved by the policy. Inputting such a large amount of data might be a problem for more parsimonious policy functions, but the LSTM is capable of handling this high-dimensionality, and indeed, from our results on S&P100 stocks it appears that LSTM does learn useful patterns in this dataset.
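As a rough illustration of this setup (the exact architecture, feature set, and training procedure used in our experiments are not shown here; the sketch below makes simplifying assumptions about both), an LSTM policy can map the cross-section of market features at each minute to the fraction of remaining inventory to submit as the next sub-order, with the final step forced to complete the liquidation.

```python
import torch
import torch.nn as nn

class ExecutionPolicyLSTM(nn.Module):
    """Illustrative LSTM policy: cross-sectional market features in,
    per-minute trade fractions out (not the exact architecture used here)."""

    def __init__(self, n_features, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, market_features):                   # (batch, T, n_features)
        h, _ = self.lstm(market_features)
        return torch.sigmoid(self.head(h)).squeeze(-1)    # fractions in (0, 1)

def schedule_from_fractions(frac, x0):
    """Convert per-step fractions into sub-orders that liquidate x0 shares,
    forcing full execution at the final step so that X_T = 0."""
    remaining = torch.as_tensor(float(x0))
    orders = []
    T = frac.shape[-1]
    for t in range(T):
        a = remaining if t == T - 1 else frac[..., t] * remaining
        orders.append(-a)
        remaining = remaining - a
    return torch.stack(orders, dim=-1)
```

One natural way to train such a policy is to minimize the accumulated transaction costs over historical price and volume paths by backpropagation.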
### Background and Review
A prototypical model for optimal execution was introduced in [4]. A simplification of their paper is to assume that liquidity consumed by a market order refills quickly in the time between scheduled sub orders, and therefore a simple formulation of the problem will consider only the temporary impact on price. The speed of impacted-price reversion is studied in [18] with discussion on how faster reversion rates affect the aggressiveness of trade execution. There are also models that consider price impact with slower reversion of impacted price, for example [1, 6, 7] using Hawkes processes, and [24] wherein it is shown that an optimal execution policy should start with a large order to disrupt the order book's supply and demand balance, and then begin trading continuously as the order book refills.
Power laws have been observed in stock markets in the UK, China and the United States, as reported and discussed in [13], [10], [26], [32], [21], [14]. [5] estimated the power-law exponent to be around 0.67, which they obtained from a large dataset from Citibank. Closely related to this paper is [15], where a reinforcement learning approach is used to approximate the solution to [4], and also the Deep Q-Learning approach to optimal execution that was studied in [23].
Recently, there has been some progress on the development of machine learning algorithms for order book modeling and price execution [20, 22, 31, 30], and also [28] where reinforcement learning is implemented for optimal limit order placement in crypto markets.
### Results in this Paper
The main result in this paper is the improved execution strategies that we find by using LSTM. We assume that limit-order depth at each tick is proportional to volume and increases as a power law for ticks farther from the best bid/ask. A major advantage of our LSTM approach is that the network's input includes the entire cross section of market data, thereby utilizing any inter-stock co-dependence that may be present in volume or price changes. To evaluate the efficacy of the proposed approach, we implement LSTM execution on historical minute-by-minute stock market data from January 2020 - July 2022. Our results indicate that, compared with TWAP and VWAP strategies, execution with a trained LSTM network can save 1-2 basis points (bps) per
stock on a given day when executing a block trade of S&P100 stock.
## 2 Order-Book Model and Optimal Policies
Let \(S_{t}\) and \(V_{t}\) denote the mid-price and volume, respectively, of a stock at time \(t\). Following the model described in [25, 27], the order book has limit order distribution \(\rho(t,s)\geq 0\), where the units of \(s\) are ticks relative to \(S_{t}\). Ticks with limit sell orders correspond to \(s>1\), ticks with \(s\in(0,1)\) correspond to limit buy orders, and the mid price corresponds to \(s=1\). An order of \(a\) shares consumes liquidity up to a relative price \(r_{t}(a)\) such that
\[a=\int_{1}^{r_{t}(a)}\rho(t,s)ds. \tag{1}\]
A simple form for \(\rho\) has limit orders distributed continuously in \(s\), proportionally to volume with the following power law,
\[\rho(t,s)=\frac{V_{t}}{\epsilon}|s-1|^{\beta} \tag{2}\]
where \(\beta\in[0,\infty)\), \(\epsilon>0\) is a scaling parameter, and \(V_{t}\) is the trading volume at time \(t\). In this paper, we seek to optimize execution of a large order over the course of a single trading day, in which case each \(V_{t}\) will be the total volume of trades that occurred in the \(t^{th}\) minute. The impact function in (2) is like the power law considered in [2, 3].
When relative price \(r_{t}(a)\) in (1) is computed with distribution \(\rho(t,s)\) in (2), we see a price that is a concave function of order size divided by volume,
\[r_{t}(a)=1+\mbox{sign}(a)\left(\frac{\epsilon(\beta+1)}{V_{t}}|a|\right)^{ \frac{1}{\beta+1}}. \tag{3}\]
The price \(S_{t}r_{t}(a)\) can also be thought of as the _impacted price_. Impacted price as a linear function of \(a\) implies \(\beta=0\), so that the order book has equal liquidity at all ticks. The prevailing conclusion in many empirical studies is that impacted price is a sub-linear concave function [5, 8, 9, 12], such as the square-root function, which corresponds to \(\beta=1\) in this model. Cases where \(\beta\) is less than zero are not considered because this would imply decreasing liquidity in successive ticks beyond the best bid/offer, which is rarely the case in practice.
The transaction costs incurred by walking the order book, as described by (1), (2) and (3), will be a convex function of trade size. The dollar amount of trading loss due to the price impact is computed as follows:
\[\mbox{loss}(t,a)=S_{t}\left|\int_{1}^{r_{t}(a)}s\rho(t,s)ds-a\right|=C_{ \epsilon,\beta}S_{t}(V_{t})^{-\frac{1}{\beta+1}}|a|^{\frac{\beta+2}{\beta+1}} \tag{4}\]
where \(C_{\epsilon,\beta}=\frac{1}{\beta+2}(\epsilon(\beta+1))^{\frac{\beta+2}{\beta +1}}\). From the convexity of (4) with respect to \(|a|\) it is clear that very large orders should be divided into sub-orders to reduce transaction costs.
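The impact and cost formulas (3)-(4) are straightforward to evaluate; a short sketch is given below (parameter values such as \(\epsilon\) are placeholders), and the convexity in \(|a|\) is what makes splitting an order into sub-orders cheaper than executing it at once.

```python
import numpy as np

def impacted_relative_price(a, V_t, beta=1.0, eps=1e-6):
    """Relative depth r_t(a) reached by an order of a shares, per (3)."""
    return 1.0 + np.sign(a) * (eps * (beta + 1) * np.abs(a) / V_t) ** (1.0 / (beta + 1))

def execution_loss(S_t, V_t, a, beta=1.0, eps=1e-6):
    """Dollar transaction cost of a sub-order of a shares, per (4)."""
    C = (eps * (beta + 1)) ** ((beta + 2) / (beta + 1)) / (beta + 2)
    return C * S_t * V_t ** (-1.0 / (beta + 1)) * np.abs(a) ** ((beta + 2) / (beta + 1))

# Convexity in |a|: one order of 200,000 shares costs more than two orders
# of 100,000 shares executed in minutes with the same volume.
one_shot = execution_loss(S_t=100.0, V_t=1e5, a=-200_000)
split = 2 * execution_loss(S_t=100.0, V_t=1e5, a=-100_000)
assert one_shot > split
```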
**Remark 1** (Cost of Paying the Spread).: _This order book model ignores the bid-ask spread. For liquid stocks the bid-ask spread is usually 1 tick, or equivalently 1¢. For such stocks the transaction costs for market orders filled at the best bid/ask can be proxied by \(S_{t}\pm.005\). This amounts to a flat fee for execution of an order and will remain constant for all strategies tested in this paper. Therefore, we omit the cost of the spread._
**Remark 2** (Exchange Fees).: _Typically there are exchange fees that may be proportional to the dollar amount traded. We do not consider fees because the problem we are considering is from the perspective of a large institutional investor for whom these types of fees are negligible for an order of this size, and usually decrease as the trade size increases._
**Remark 3** (Permanent Impact).: _We do not consider permanent impact in this paper. The assumption is that we are trading in highly liquid stocks for which the order-book is replenished very quickly after a sub-order. It would certainly be interesting to consider execution with permanent price impact, but in this paper, we focus our effort on finding policies that are able to optimize amidst the stochasticity and uncertainty in real-life historical price and volume data._
### Optimal Execution Policy
Let us work on a probability space \((\Omega,\mathcal{F},(\mathcal{F}_{t})_{t=0,1,2,\dots},\mathbb{P})\) where \(\mathcal{F}_{t}\) denotes the \(\sigma\)-algebra representing all information known to us at time \(t\). Assume that the times \(t=0,1,2,\dots\) are equally spaced. Let the initial inventory be the number of shares \(x\), and consider the situation where this inventory needs to be completely liquidated by terminal time \(T\) (in the example we'll present, the time \(T\) is the final minute of the trading day). Let \(X_{t}\) denote the number of remaining unexecuted shares at time \(t\), for \(t=1,2,3,\dots,T\). Initially we have \(X_{0}=x\). An execution policy is a sequence of \(\mathcal{F}_{t}\)-adapted sub-orders \(a_{t}\) such that \(X_{t}=X_{t-1}+a_{t-1}\), that is, \(a_{t-1}\) is this execution policy's sub-order placed at time \(t-1\) and executed at time \(t\); these sub-orders are chosen so that \(X_{T}=0\). Using the loss function given in (4), an optimal execution policy is the minimizer of the expected loss,
\[\min_{a}\ \mathbb{E}\sum_{t=1}^{T}\text{loss}(t,a_{t-1}) \tag{5}\]
\[\text{s.t.}\quad X_{t}=X_{t-1}+a_{t-1},\qquad X_{T}=0,\qquad X_{0}=x.\]
where the minimization is carried out over the family of \(\mathcal{F}_{t}\)-measurable policies \(a_{t}\). In (5) we are allowing for broad generality of the processes \((S_{t},V_{t})\), aside from them being well-defined on the probability space \((\Omega,\mathcal{F},(\mathcal{F}_{t})_{t=0,1,2,\dots},\mathbb{P})\), and also assuming that our
trading does not affect them. Later on we will narrow the family of policies to \(a_{t}\) given by an LSTM network, but still this narrowing of policies will permit broad generality in the distribution of \((S_{t},V_{t})\) such as non-Markovianity and nonlinear dependence in volume.
The loss function in (5) is a risk-neutral optimization in the sense that there is no penalty on variance or risk. The optimization in (5) results in a policy that is on the frontier constructed in [4]; their frontier is formed by minimizing implementation shortfall with a penalty on its variance. The objective in (5) is similar to the objective in [19], which is risk neutral and also volume dependent.
### TWAP and VWAP Strategies
Before we approach solving (5) with full generality in \((S_{t},V_{t})\), we first discuss the two industry-standard benchmarks for large-order execution, namely TWAP and VWAP, and how they are related to (5).
**Definition 2.1** (Twap).: _The time weighted average price (TWAP) is \(\overline{S_{T}}=\frac{1}{T}\sum_{t=1}^{T}S_{t}\)._
The TWAP is a target for some execution policies because average execution prices for large orders are often benchmarked against TWAP. A common execution strategy is the so-called _TWAP strategy_, wherein the policy is to sub-divide the order into equally-sized deterministic sub-orders,
\[a_{t}=-\frac{x}{T}\qquad\text{for }t=0,1,\ldots,T-1. \tag{6}\]
The TWAP strategy is easy to implement, as it is assured to satisfy the terminal condition \(X_{T}=0\) and doesn't require any parameter estimation. However, the TWAP strategy ignores volume and any other pertinent information acquired during the trading period. Indeed, volume is often a concern when it comes to evaluating trades, which is why execution policies often target the VWAP [11].
**Definition 2.2** (Vwap).: _The volume weighted average price (VWAP) is \(\overline{S_{V}}=\frac{\sum_{t=1}^{T}V_{t}S_{t}}{\sum_{t=1}^{T}V_{t}}\)._
A VWAP strategy is to sub-divide the order in proportion to moments of the volume,
\[a_{t}=-\frac{x\overline{V}_{t+1}}{\sum_{t=1}^{T}\overline{V}_{t}}\qquad\text{ for }t=0,1,2\ldots,T-1\, \tag{7}\]
where \(\overline{V_{t}}=\left(\mathbb{E}(V_{t})^{-\frac{1}{\beta+1}}\right)^{-(\beta+1)}\) (see [19]). This VWAP strategy can be effective for single-day execution because volume follows a somewhat predictable "U" shape (see Figure 1). In practice, building a VWAP strategy requires some prior data to determine the typical evolution of volume over a trading period. For example, if we observe a history of volumes from past trading days then we can use historically estimated \(\overline{V_{t}}\)'s in (7).
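To make (6) and (7) concrete, the following sketch builds both schedules from a NumPy array of historical per-minute volumes. The array name `hist_volumes`, the synthetic sample data, and the example values of \(x\) and \(\beta\) are illustrative assumptions, not taken from the paper's dataset.

```python
import numpy as np

def twap_vwap_schedules(hist_volumes: np.ndarray, x: float, beta: float = 0.67):
    """Build TWAP and VWAP sub-order schedules for liquidating x shares over T periods.

    hist_volumes has shape (num_days, T); the expectation in (7) is replaced by a
    sample mean over the historical days.  Negative sub-orders correspond to selling.
    """
    num_days, T = hist_volumes.shape
    # Estimate V̄_t = ( E[V_t^{-1/(β+1)}] )^{-(β+1)} from the historical sample.
    v_bar = np.mean(hist_volumes ** (-1.0 / (beta + 1.0)), axis=0) ** (-(beta + 1.0))
    a_twap = -np.full(T, x / T)           # equation (6)
    a_vwap = -x * v_bar / v_bar.sum()     # equation (7)
    return a_twap, a_vwap

# Example: liquidate 1e6 shares over a 390-minute day using 30 days of synthetic history.
rng = np.random.default_rng(0)
hist = rng.lognormal(mean=10.0, sigma=0.5, size=(30, 390))
a_twap, a_vwap = twap_vwap_schedules(hist, x=1e6)
assert np.isclose(a_twap.sum(), -1e6) and np.isclose(a_vwap.sum(), -1e6)
```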
Neither the TWAP strategy of (6) nor the VWAP strategy of (7) minimizes the loss in (5). However, with some simplification of the volume process and minimal assumptions on \(S_{t}\), we can show that VWAP is optimal if \(V_{t}\) is a deterministic function of \(t\), and that TWAP is optimal if \(V_{t}\) is deterministic and constant in \(t\).
**Proposition 2.1** (Deterministic Volume).: _Suppose \(V_{t}=\mathbb{E}V_{t}=\overline{V_{t}}\) for all \(t\), that is, volume is a deterministic function of \(t\). Also assume that \(S_{t}\) is a martingale with respect to the filtration \((\mathcal{F}_{t})_{t=0,1,2,\ldots}\). Then the VWAP strategy (7) is optimal, and if \(\overline{V_{t}}\) is constant then the TWAP strategy of (6) is optimal._
Figure 1: The "U"-shaped pattern of daily volume in Apple.

Proof.: For deterministic volume and omitting the constant \(C_{\epsilon,\beta}\), the optimization in (5) for stock \(i\) can be posed as
\[\min_{a}\ \mathbb{E} \Bigg{[}\sum_{t=1}^{T}S_{t}(\overline{V_{t}})^{-\frac{1}{\beta+1}}| a_{t-1}|^{\frac{\beta+2}{\beta+1}}\Bigg{]}\] s.t. \[X_{t} =X_{t-1}+a_{t-1}\] \[X_{T} =0\] \[X_{0} =x\.\]
This optimization can be written as a Lagrangian,
\[\mathbb{E}\sum_{t=0}^{T-1}\left(S_{t+1}(\overline{V}_{t+1})^{- \frac{1}{\beta+1}}|a_{t}|^{\frac{\beta+2}{\beta+1}}+\lambda_{t}(X_{t+1}-X_{t}- a_{t})\right)\,\] \[=\mathbb{E}\left[\sum_{t=0}^{T-1}\left(S_{t+1}(\overline{V}_{t+1} )^{-\frac{1}{\beta+1}}|a_{t}|^{\frac{\beta+2}{\beta+1}}-(\lambda_{t+1}- \lambda_{t})X_{t+1}-\lambda_{t}a_{t}\right)+\lambda_{T}X_{T}-\lambda_{0}X_{0} \right]\,\]
with terminal condition \(X_{T}=0\), with initial condition \(X_{0}=x\), and where \(\lambda_{t}\) is an \(\mathcal{F}_{t}\)-adapted Lagrange multiplier process. First-order conditions in \(a_{t}\) and \(X_{t+1}\) yield the following co-state equations,
\[\frac{\beta+2}{\beta+1}\mathbb{E}_{t}\left[\operatorname{sign}(a _{t})S_{t+1}\left(\frac{|a_{t}|}{\overline{V}_{t+1}}\right)^{\frac{1}{\beta+1} }\right]-\lambda_{t} =0\quad\text{for }t=0,1,2\ldots,T-1\] \[\lambda_{t}-\mathbb{E}_{t}\lambda_{t+1} =0\quad\text{for }t=0,1,2\ldots,T-2\,\]
with \(\mathbb{E}_{t}\) denoting expectation conditional on \(\mathcal{F}_{t}\). For \(x>0\) the optimal policy is the VWAP strategy,
\[a_{t} =-\frac{x\overline{V}_{t+1}}{\sum_{t=1}^{T}\overline{V_{t}}}\] \[\lambda_{t} =-\frac{\beta+2}{\beta+1}\left(\frac{x}{\sum_{t=1}^{T}\overline{ V_{t}}}\right)^{\frac{1}{\beta+1}}\mathbb{E}_{t}S_{t+1}\,\]
where the martingale property \(S_{t}=\mathbb{E}_{t}S_{t+1}\) ensures that \(\lambda_{t}=\mathbb{E}_{t}\lambda_{t+1}\). Furthermore, this optimal policy is the TWAP strategy if \(\overline{V_{t}}\) is constant in \(t\).
In [19] there are results similar to Proposition 2.1, as well as a theorem suggesting that, under certain Markovian assumptions, a deterministic VWAP is the optimal \(\mathcal{F}_{t}\)-adapted policy for stochastic volume. The following proposition shows how the VWAP strategy in (7) can be optimal under certain assumptions on \(S_{t}\) and \(V_{t}\).
**Proposition 2.2** (Stochastic Volume).: _Define_
\[M_{t}=\frac{\left(\mathbb{E}V_{t}^{-\frac{1}{\beta+1}}\right)^{-1}}{V_{t}^{1/( \beta+1)}}\,\]
_and assume \(M_{t}\) is a martingale with respect to the filtration \((\mathcal{F}_{t})_{t=0,1,2,\ldots}\). Also assume that \(S_{t}\) is a martingale with respect to the filtration \((\mathcal{F}_{t})_{t=0,1,2,\ldots}\), independent of \(M_{t}\). Then the VWAP strategy (7) is optimal._
Proof.: Taking the Lagrangian approach as we did in the proof of Proposition 2.1, we arrive at the following co-state equations,
\[\frac{\beta+2}{\beta+1}\mathbb{E}_{t}\left[\operatorname{sign}( a_{t})S_{t+1}\left(\frac{|a_{t}|}{V_{t+1}}\right)^{\frac{1}{\beta+1}}\right]- \lambda_{t} =0\] \[\lambda_{t}-\mathbb{E}_{t}\lambda_{t+1} =0\.\]
For \(x>0\), we have
\[\lambda_{t}=-\frac{\beta+2}{\beta+1}\mathbb{E}_{t}\left[S_{t+1}\left(\frac{|a _{t}|}{V_{t+1}}\right)^{\frac{1}{\beta+1}}\right]\,\]
and if we insert the VWAP strategy of (7) we see that
\[\mathbb{E}_{t}\left[S_{t+1}\left(\frac{|a_{t}|}{V_{t+1}}\right)^ {\frac{1}{\beta+1}}\right]\] \[=S_{t}\mathbb{E}_{t}\left[\left(\frac{|a_{t}|}{V_{t+1}}\right)^{ \frac{1}{\beta+1}}\right]\] \[=\left(\frac{x}{\mathcal{K}}\right)^{\frac{1}{\beta+1}}S_{t} \mathbb{E}_{t}\left[\left(\frac{\left(\mathbb{E}(V_{t+1})^{-\frac{1}{\beta+1} }\right)^{-(\beta+1)}}{V_{t+1}}\right)^{\frac{1}{\beta+1}}\right]\] \[=\left(\frac{x}{\mathcal{K}}\right)^{\frac{1}{\beta+1}}S_{t} \mathbb{E}_{t}M_{t+1}\] \[=\left(\frac{x}{\mathcal{K}}\right)^{\frac{1}{\beta+1}}S_{t}M_{t}\,\]
where \(\mathcal{K}\) is the denominator of \(a_{t}\) in (7). Thus, when VWAP strategy (7) is used we have \(\lambda_{t}=-\left(\frac{x}{\mathcal{K}}\right)^{\frac{1}{\beta+1}}\frac{ \beta+2}{\beta+1}S_{t}M_{t}=-\left(\frac{x}{\mathcal{K}}\right)^{\frac{1}{ \beta+1}}\frac{\beta+2}{\beta+1}\mathbb{E}_{t}S_{t+1}M_{t+1}=\mathbb{E}_{t} \lambda_{t+1}\), thereby confirming that it is an optimal policy.
An example of a martingale \(M_{t}\) in Proposition 2.2 is log-normal volume (the same example as given in [19]), \(\log(V_{t+1}/V_{t})=\mu_{t}+\sigma_{t}Z_{t}\), where \((Z_{t})_{t=1,2,...}\) is a sequence of standard normals independent of the past, and where \(\mu_{t}\) and \(\sigma_{t}\) are deterministic functions of \(t\) calibrated so that \(V_{t}\) adheres to a U-shape over the course of a trading day; it is straightforward to check that this yields an \(M_{t}\) that is a martingale. At this point, however, rather than pursue a specification of the underlying stochastic processes, we instead choose to leave the data distribution unspecified and train an LSTM to trade optimally based on historical observations of real-life financial markets.
## 3 LSTM Execution Policy - Experimental Setup
A policy approximation to the optimal solution of (5) can be obtained by training a neural network on historical market data. We train an LSTM neural network to minimize the objective in (5), and then compare it with the TWAP strategy of (6) and the VWAP strategy of (7). The choice of an LSTM rather than a convolutional neural network (CNN) or a plain recurrent neural network (RNN) is based on two considerations. Firstly, the problem in (5) is time-dependent, requiring the network to memorize prior information. A CNN is static and thus cannot memorize prior information, whereas an RNN can but suffers from the vanishing gradient problem [16]. The LSTM handles both of these considerations.
The backtests we conduct have a fixed number of sub-orders and fixed submission times. We submit sub-orders every 5 minutes, for a total of 78 sub-orders over the 390 minutes of the trading day. Each of the strategies we test, namely the TWAP, VWAP and LSTM strategies, executes in the same 5-minute intervals, thus ensuring a fair comparison. To be as realistic as possible, we also assume that the LSTM lags one minute behind the real-time market (i.e., the input of the LSTM network only includes data up to and including the prior minute) to prevent the LSTM from having any foresight bias.
Fig. 2 (a) shows the structure of a single LSTM unit. We refer to the internal parameters of the LSTM unit as the state. At a particular time \(t\), the LSTM unit updates its state to \(state_{t}\) using the old state, \(state_{t-1}\), and the new input, \(input_{t}\). The LSTM unit also generates an output, \(output_{t}\). While \(state_{t}\) is then used to update \(state_{t+1}\), \(output_{t}\) is used for other calculations. Fig. 2 (b) shows the structure of the LSTM network used: it has two LSTM layers with 50 LSTM units in each. With fewer or smaller LSTM layers the network tends to underfit, whereas larger, more complex LSTMs have performance comparable to that of our architecture but with increased computational complexity. The input has length 401, which comprises the current minute, the S&P 100 stocks' prices, their volumes, and the remaining inventories in each stock under both the TWAP and VWAP strategies. By including in the input the current inventories remaining under the TWAP and VWAP strategies, the LSTM strategy can perform no worse than TWAP or VWAP, as the LSTM can simply replicate them.
Figure 2: (a) The structure of an LSTM unit. (b) The structure of the LSTM network used.
The LSTM state also contains the current level of inventory in stock \(i\), and after passing the results through a sigmoid activation function, the network outputs the updated inventory remaining in each stock \(i\) for the current time. The total number of network parameters is 116,100.
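A minimal PyTorch sketch of an architecture consistent with this description is given below. The output head (a 50-to-100 linear layer followed by a sigmoid, scaled by the initial inventories) is our reading of the text rather than a published specification, although with it the parameter count matches the reported 116,100; all names are illustrative.

```python
import torch
import torch.nn as nn

class ExecutionLSTM(nn.Module):
    """Two stacked LSTM layers with 50 units each; the 401-dimensional input holds the
    current minute, 100 prices, 100 volumes, and the TWAP/VWAP remaining inventories."""

    def __init__(self, input_size: int = 401, hidden_size: int = 50, num_stocks: int = 100):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden_size, num_stocks)  # one remaining-inventory fraction per stock

    def forward(self, inputs, x0, state=None):
        # inputs: (batch, minutes, 401); x0: (batch, num_stocks) initial inventories
        out, state = self.lstm(inputs, state)
        frac = torch.sigmoid(self.head(out))        # fractions in (0, 1)
        return frac * x0.unsqueeze(1), state        # remaining inventory X_t for every stock

model = ExecutionLSTM()
print(sum(p.numel() for p in model.parameters()))   # 116100
```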
### Data and Parameter Estimation
Our data was obtained from FirstRate Data and consists of minute-by-minute prices and volumes from January 2, 2020, through July 1, 2022, for the S&P100 stocks as listed on March 21, 2022. The data was split into nine groups for training and testing the LSTM networks, as shown in Table 1. The S&P100 contains the largest 100 companies in the U.S. stock market by market capitalization. The limit order book for each of these stocks is extremely deep at all times, meaning that there is plenty of liquidity and it is very unlikely that a sub-order will not get filled. For these liquid stocks, the simulated performance of the LSTM policy will be a realistic characterization of how it will perform in real-life trading.
For the power law in (2), we take \(\beta=.67\) so that the impact in (3) has a power of .6, as suggested by [5]. However, we will conduct backtests both with constant \(\beta\) and with stochastically fluctuating \(\beta\), the latter being a more realistic description of real-life order books. We set \(\epsilon\) so that the transaction cost equals 0.01%-0.02% (i.e., 1-2 bps) of the value traded, which is realistic for S&P100 stocks. For example, when trading 1 million shares, an appropriate \(\epsilon\) would be 0.003. Then, \(C_{\epsilon,\beta}\) is calculated by:
\[C_{\epsilon,\beta}=\tfrac{1}{\beta+2}(\epsilon(\beta+1))^{\tfrac{\beta+2}{ \beta+1}}\approx 7.87\times 10^{-5}. \tag{8}\]
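As a quick numerical sanity check of (8), the constant can be reproduced directly (with \(\epsilon\) and \(\beta\) as in the text):

```python
beta, eps = 0.67, 0.003
C = (eps * (beta + 1.0)) ** ((beta + 2.0) / (beta + 1.0)) / (beta + 2.0)
print(f"{C:.3e}")  # ~7.87e-05, matching (8)
```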
For \(i=1,2,\ldots,100\) let \(S_{t}^{i}\) and \(V_{t}^{i}\) denote the price and volume for the \(i^{th}\) stock in the dataset. The \(\sigma\)-algebra \(\mathcal{F}_{t}\) is generated by \(\bigcup_{i}\{(S_{u}^{i},V_{u}^{i})_{u=0,1,\ldots,t}\}\). We train the neural network separately to optimally execute for each individual stock, but the inputs to the network include the vectors of all prices and volumes at time \(t\), which we denote as \(\mathbf{S}_{t}=(S_{t}^{1},S_{t}^{2},\ldots,S_{t}^{100})\) and \(\mathbf{V}_{t}=(V_{t}^{1},V_{t}^{2},\ldots,V_{t}^{100})\), respectively.
\begin{table}
\begin{tabular}{|c|c c|c c|} \hline Fold & Training Data & Days & Testing Data & Days \\ \hline
1 & From 2020-01-02 To 2020-03-27 & 60 & From 2020-03-30 To 2020-06-03 & 45 \\
2 & From 2020-03-30 To 2020-06-23 & 60 & From 2020-06-24 To 2020-08-27 & 45 \\
3 & From 2020-06-24 To 2020-09-17 & 60 & From 2020-09-18 To 2020-11-20 & 45 \\
4 & From 2020-09-18 To 2020-12-11 & 60 & From 2020-12-14 To 2021-02-19 & 45 \\
5 & From 2021-01-04 To 2021-03-30 & 60 & From 2021-03-31 To 2021-06-04 & 45 \\
6 & From 2021-03-31 To 2021-06-24 & 60 & From 2021-06-25 To 2021-08-30 & 45 \\
7 & From 2021-06-25 To 2021-09-20 & 60 & From 2021-09-21 To 2021-11-23 & 45 \\
8 & From 2021-09-21 To 2021-12-14 & 60 & From 2021-12-15 To 2022-02-18 & 45 \\
9 & From 2022-01-03 To 2022-03-29 & 60 & From 2022-03-30 To 2022-06-03 & 45 \\ \hline \end{tabular}
\end{table}
Table 1: The datasets and their sizes.
### Algorithm & LSTM Training
Because sub-orders are executed every five minutes, the total number of executions during the trading day is \(390/5=78\). The output of the LSTM is \(X_{t}^{i}\) which represents the remaining inventory for stock \(i\) at time \(t\). We train the LSTM network on each of the nine folds for each of the S&P100 stocks. The total number of trained LSTM networks for all nine folds is \(100\times 9=900\). The loss function used to train the LSTM network for stock \(i\) is the following empirical approximation of (5),
\[L^{i}=C_{\epsilon,\beta}\sum_{\ell=1}^{78}S_{5\ell}^{i}(V_{5\ell}^{i})^{-\frac{ 1}{\beta+1}}|a_{5\ell-1}^{i}|^{\frac{\beta+2}{\beta+1}}=C_{\epsilon,\beta} \sum_{\ell=1}^{78}S_{5\ell}^{i}(V_{5\ell}^{i})^{-\frac{1}{\beta+1}}|X_{5\ell} ^{i}-X_{5\ell-5}^{i}|^{\frac{\beta+2}{\beta+1}}. \tag{9}\]
Define the LSTM network weights as \(w\), then the training problem becomes
\[w^{*}=\operatorname*{arg\,min}_{w}L^{i}(w)=\operatorname*{arg\,min}_{w}C_{ \epsilon,\beta}\sum_{\ell=1}^{78}S_{5\ell}^{i}(V_{5\ell}^{i})^{-\frac{1}{ \beta+1}}|X_{5\ell}^{i}(w)-X_{5\ell-5}^{i}(w)|^{\frac{\beta+2}{\beta+1}}.\]
```
Initialize parameters of LSTM units \(w\)
for k = 1 to NUM_EPOCH do
    \(X_{0}^{i}=x_{0}^{i}\), \(L^{i}=0\), \(t=1\), \(\beta=0.67\), \(lr=0.001\), \(h_{0}^{i}=\) None
    while \(t<390\) do
        \(X_{t}^{i},h_{t}^{i}=LSTM(h_{t-1}^{i},t,\mathbf{S}_{t},\mathbf{V}_{t},\mathbf{X}_{t}^{V},\mathbf{X}_{t}^{T})\)
        if mod(t, 5) = 0 then
            \(L^{i}\mathrel{+}=C_{\epsilon,\beta}S_{t}^{i}(V_{t}^{i})^{-\frac{1}{\beta+1}}|X_{t}^{i}-X_{t-5}^{i}|^{\frac{\beta+2}{\beta+1}}\)
        end if
        \(t\mathrel{+}=1\)
    end while
    #### Close all the positions ####
    \(X_{390}^{i}=0\)
    \(L^{i}\mathrel{+}=C_{\epsilon,\beta}S_{390}^{i}(V_{390}^{i})^{-\frac{1}{\beta+1}}|X_{390}^{i}-X_{385}^{i}|^{\frac{\beta+2}{\beta+1}}\)
    #### Update the LSTM weights ####
    \(w=\) Adam(\(L^{i}\), \(lr\), \(w\))
end for
```
**Algorithm 1** Training of LSTM Architecture for Optimal Execution Every 5 Minutes In A 390-Minute Trading Day for Stock \(i\)
Algorithm 1 shows the training procedure for stock \(i\)'s LSTM network. For each training run we loop for 10,000 epochs, which allows us to train a single LSTM in less than 20 minutes using GPUs; using the CPU, the runtime is around 10 hours. We initialize the LSTM state \(h^{i}\) as **None**. We also tried initializing \(h^{i}\) with random numbers but observed little difference. Adam is used as the optimizer with a learning rate (\(lr\)) of 0.001. The initial inventory is 5% of the stock's average daily volume, that is \(x_{0}^{i}=.05\times A^{i}\) where
\[A^{i}=\text{sample mean of daily volume for stock $i$}.\]
We also train for a fixed number of shares for each stock, i.e., \(x_{0}^{i}\equiv 10^{6}\). Note that although the trade increments are 5 minutes apart, the data in between trade times is still seen by the LSTM, which means the strategy makes full use of the available information. The trading day has exactly 390 minutes, and so the first trade occurs at time \(t=5\), and the final trade occurs at minute 390 when the market closes. The LSTM networks are trained on nine folds, each comprising 60 days of one-minute data, as described in Table 1 for the S&P100 stocks. We consider the days to be independent of each other; therefore, the shape of the LSTM training input is (60, 390, 401). The ratio of training data size (\(60\times 390\times 401=9,383,400\)) to trained parameter size (116,100) is over 80, so over-fitting is unlikely to occur.
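To complement Algorithm 1, the sketch below condenses one training epoch for a single stock into PyTorch, reusing the hypothetical ExecutionLSTM module sketched earlier; the tensor names, shapes, and the optimizer usage in the trailing comments are illustrative assumptions.

```python
import torch

def epoch_loss(model, inputs, prices_i, volumes_i, x0, stock_idx,
               beta: float = 0.67, C: float = 7.87e-5) -> torch.Tensor:
    """Empirical loss (9) for stock i accumulated over the trading day.

    inputs:    (days, 390, 401) LSTM inputs for one fold
    prices_i:  (days, 390) minute prices S_t^i;  volumes_i: (days, 390) minute volumes V_t^i
    x0:        (days, 100) initial inventories
    """
    X, _ = model(inputs, x0)                  # (days, 390, 100) remaining inventories
    X_i = X[:, :, stock_idx]
    loss = torch.zeros(())
    prev = x0[:, stock_idx]
    for t in range(4, 390, 5):                # 0-based indices of minutes 5, 10, ..., 385
        cur = X_i[:, t]
        loss = loss + (C * prices_i[:, t] * volumes_i[:, t] ** (-1 / (beta + 1))
                       * (cur - prev).abs() ** ((beta + 2) / (beta + 1))).sum()
        prev = cur
    # Close all the positions at minute 390 (X_390 = 0).
    loss = loss + (C * prices_i[:, -1] * volumes_i[:, -1] ** (-1 / (beta + 1))
                   * prev.abs() ** ((beta + 2) / (beta + 1))).sum()
    return loss

# One Adam step per epoch, as in Algorithm 1:
# opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# loss = epoch_loss(model, inputs, prices_i, volumes_i, x0, stock_idx=0)
# opt.zero_grad(); loss.backward(); opt.step()
```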
Figure 3: The first row shows \(-a_{t}\) and the second row shows \(X_{t}\) for selected stocks.

The \(\mathbf{X}_{t}^{V}\) and \(\mathbf{X}_{t}^{T}\) in Alg. 1 represent the remaining inventories for the S&P100 stocks under the VWAP and TWAP strategies at minute \(t\), i.e.,
\[\mathbf{X}_{t}^{T} =\left(x_{0}^{1}-\frac{t}{T}x_{0}^{1},\ldots,x_{0}^{100}-\frac{t}{T }x_{0}^{100}\right)\] \[\mathbf{X}_{t}^{V} =\left(x_{0}^{1}-\frac{\sum_{j=1}^{t}\overline{V}_{j}^{1}}{\sum_ {t=1}^{T}\overline{V}_{t}^{1}}x_{0}^{1},\ldots,x_{0}^{100}-\frac{\sum_{j=1}^{t }\overline{V}_{j}^{100}}{\sum_{t=1}^{T}\overline{V}_{t}^{100}}x_{0}^{100} \right). \tag{10}\]
The TWAP strategy does not require the estimation of any parameters. As for the VWAP strategy, we utilized the 60 days' training volume data to estimate \(\overline{V}_{t}^{i}\) for each fold and stock \(i\) as described in (7), which is \(\overline{V_{t}^{i}}=\left(\mathbb{E}(V_{t}^{i})^{-\frac{1}{\beta+1}}\right)^ {-(\beta+1)}\). As an example, Fig. 3 shows the evolution of the actions and remaining inventories for the three strategies for three different stocks. During testing, the LSTM strategy is compared with the same TWAP and VWAP strategies in (10).
## 4 LSTM Execution Policy - Experimental Results
### Evaluation Metrics
The metric used to compare our LSTM strategy with VWAP and TWAP strategies is the transaction cost \(L^{i}\) as in (9). A smaller \(L^{i}\) means a better performance of the selected execution strategy for stock \(i\). We consider the following scenarios: the noiseless order-book case with \(x_{0}^{i}=.05\times A^{i}\) and the fixed amount of shares \(x_{0}^{i}=10^{6}\), and the noisy order-book case (i.e., \(\beta\) is stochastic) with \(x_{0}^{i}=.05\times A^{i}\) and \(x_{0}^{i}=10^{6}\). **From here onward, we drop the super-script \(i\) for notational simplicity except in places where it is necessary to show our results.**
### Noiseless Order-Book Case
We test the trained LSTM models on the datasets shown in Table 1. The parameter \(\epsilon\) in (8) is set to 0.006 for \(x_{0}=.05\times A\) and to 0.003 for \(x_{0}=10^{6}\), so that the transaction cost is approximately 1-2 bps of the traded equity value. The first row in Fig. 4 shows the average daily transaction cost for each fold. On average over the nine folds, the daily transaction cost is $811,433 for the LSTM strategy, $843,248 for the VWAP strategy, and $939,695 for the TWAP strategy in the case \(x_{0}=.05\times A\). For the fixed initial shares case, the daily transaction cost is $5,748,601 for the LSTM, $6,135,958 for the VWAP, and $6,760,903 for the TWAP. Fig. 5 shows how the daily transaction cost evolves with execution time for the case \(x_{0}=.05\times A\). The LSTM strategy tends to incur large transaction costs towards the end of the trading session, whereas the VWAP strategy tends to incur large transaction costs in the morning, and the TWAP strategy incurs transaction costs relatively consistently throughout the day. However, the LSTM has an overall lower transaction cost. Similar execution behavior for the three strategies is observed in the fixed initial shares case.
Figure 4: Performance of LSTM in noiseless order-book case.
Figure 5: Transaction cost with execution time for noiseless order-book case with \(x_{0}=.05\times A\). The unit for \(y\)-axis is \(10^{5}\) dollars.
The second row in Fig. 4 shows the transaction costs saved by using the LSTM strategy compared to the VWAP strategy for each stock over the nine testing folds. The LSTM strategy outperforms the VWAP by having a smaller transaction cost for all the stocks in the \(x_{0}=.05\times A\) case, and the average saving by the LSTM is $260 for each stock. As for the fixed initial shares case, the average savings become $4,030. The third row shows the transaction costs saved by using the LSTM strategy compared to the TWAP strategy for each stock over the nine testing folds. Similarly, the LSTM is able to save $1,280 in the percentage-of-daily-volume case and $10,280 in the fixed initial shares case.
Tables 2 and 3 list the ten stocks for which the LSTM saves the most and the ten for which it saves the least, compared to the VWAP and TWAP strategies respectively, for \(x_{0}=.05\times A\). Most of the stocks for which the LSTM saves the most are large-capitalization tech companies. The median savings are approximately $170 if the LSTM is used rather than the VWAP strategy, and $710 if the LSTM is used rather than the TWAP strategy. Tables 4 and 5 list the top and bottom ten stocks for which the LSTM saves the most/least compared to the VWAP and TWAP strategies, respectively, for the \(x_{0}=10^{6}\) case. A median savings of approximately $550 is achieved by using the LSTM rather than the VWAP strategy, and $2,360 by using the LSTM rather than the TWAP strategy.
### Noisy Order-Book Shape
The \(\beta=.67\) estimated in [5] is an average. In any given minute the limit order book may have a power law near to, but not equal to, .67. To model this variation, we consider \(\beta\) to be stochastic, i.e.,
\[\beta_{t}=0.67+\eta_{t}, \tag{11}\]
where \(\eta_{t}\) is a random variable with uniform distribution on \((-0.3,0.3)\). The width of the noise interval relative to the base value is \(0.6/0.67\approx 0.9\). The trading loss then becomes
\[L=\sum_{\ell=1}^{78}C_{\epsilon,\beta_{5\ell}}S_{5\ell}(V_{5\ell})^{-\frac{1}{ \beta_{5\ell}+1}}|X_{5\ell}-X_{5\ell-5}|^{\frac{\beta_{5\ell}+2}{\beta_{5\ell }+1}}, \tag{12}\]
where \(C_{\epsilon,\beta_{5\ell}}\) is the stochastic version of \(C_{\epsilon,\beta}\) obtained by substituting \(\beta_{5\ell}\) into (8).
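A small sketch of how the noisy loss (12) can be evaluated, drawing a fresh \(\beta_{t}\) per trade as in (11), is shown below; the array names and the value of \(\epsilon\) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_cost(S, V, X, eps=0.003, base_beta=0.67, noise=0.3):
    """Transaction cost (12) over the 78 five-minute trades with stochastic beta.

    S, V: prices and volumes at the trade minutes (length 78)
    X:    inventory path X_0, X_5, ..., X_390 (length 79, ending at 0)
    """
    S, V, X = map(np.asarray, (S, V, X))
    betas = base_beta + rng.uniform(-noise, noise, size=S.shape)
    C = (eps * (betas + 1)) ** ((betas + 2) / (betas + 1)) / (betas + 2)
    trades = np.abs(np.diff(X))
    return np.sum(C * S * V ** (-1 / (betas + 1)) * trades ** ((betas + 2) / (betas + 1)))
```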
The first row in Fig. 6 shows the average daily transaction cost \(L\) for each fold in the noisy order-book case. On average over the nine folds, the daily transaction cost is $847,328 for the LSTM strategy, $871,606 for the VWAP strategy, and $973,718 for the TWAP strategy for \(x_{0}=.05\times A\). As for \(x_{0}=10^{6}\), the daily transaction cost is $5,975,930 for the LSTM, $6,344,318 for the VWAP, and $6,946,579 for the TWAP. Compared with the noiseless order-book shape case, the transaction costs increase for all the strategies. However, the LSTM strategy still consistently has a smaller transaction cost than the VWAP and TWAP strategies. Note that the LSTM networks used are the same as in the noiseless order-book case.
\begin{table}
\begin{tabular}{|c|c c c c c|} \hline Ticker & Daily & Equity Value & VWAP (bps) & LSTM (bps) & Savings (bps) \\ \(\downarrow\) & Volume & of Traded (\$) & (unit: \(\$10^{3}\)) & (unit: \(\$10^{3}\)) & (unit: \(\$10^{3}\)) \\ \hline AMZN & 42.65 M & 329.26 M & 55.98 (1.70) & 53.81 (1.63) & 2.18 (0.07) \\ TSLA & 28.20 M & 909.89 M & 162.77 (1.79) & 161.49 (1.77) & 1.29 (0.01) \\ MSFT & 20.61 M & 256.82 M & 35.02 (1.36) & 33.87 (1.32) & 1.16 (0.04) \\ AAPL & 87.88 M & 570.61 M & 77.39 (1.36) & 76.24 (1.34) & 1.14 (0.02) \\ GOOG & 0.62 M & 66.42 M & 12.20 (1.84) & 11.11 (1.67) & 1.08 (0.16) \\ NFLX & 3.85 M & 93.59 M & 18.52 (1.98) & 17.73 (1.89) & 0.78 (0.08) \\ UNH & 2.07 M & 39.53 M & 6.82 (1.73) & 6.17 (1.56) & 0.65 (0.16) \\ TMO & 0.93 M & 22.87 M & 4.33 (1.89) & 3.75 (1.64) & 0.57 (0.25) \\ NVDA & 32.57 M & 274.19 M & 39.69 (1.45) & 39.12 (1.43) & 0.56 (0.02) \\ META & 15.87 M & 219.53 M & 31.12 (1.42) & 30.59 (1.39) & 0.54 (0.02) \\ \hline \(\vdots\) & & & & & \\ PM & 3.24 M & 13.50 M & 2.09 (1.55) & 1.93 (1.43) & 0.17 (0.12) \\ \(\vdots\) & & & & & \\ \hline AIG & 4.07 M & 8.88 M & 1.43 (1.61) & 1.37 (1.54) & 0.06 (0.07) \\ DUK & 1.93 M & 8.90 M & 1.33 (1.50) & 1.27 (1.43) & 0.06 (0.07) \\ COP & 6.52 M & 18.26 M & 2.61 (1.43) & 2.55 (1.40) & 0.06 (0.03) \\ SO & 3.12 M & 9.28 M & 1.32 (1.42) & 1.27 (1.37) & 0.05 (0.06) \\ CL & 2.94 M & 11.28 M & 1.61 (1.43) & 1.56 (1.38) & 0.05 (0.05) \\ EXC & 5.72 M & 9.33 M & 1.35 (1.45) & 1.30 (1.40) & 0.04 (0.05) \\ WBA & 4.62 M & 10.07 M & 1.49 (1.48) & 1.45 (1.44) & 0.04 (0.04) \\ PG & 4.69 M & 31.84 M & 4.29 (1.35) & 4.25 (1.34) & 0.04 (0.01) \\ DD & 3.24 M & 10.72 M & 1.90 (1.77) & 1.87 (1.75) & 0.03 (0.02) \\ BK & 3.88 M & 8.69 M & 1.29 (1.48) & 1.27 (1.46) & 0.01 (0.02) \\ \hline \end{tabular}
\end{table}
Table 2: Ten most and least saving stocks by LSTM compared with VWAP strategy for noiseless order-book case with \(x_{0}=.05\times A\) and the median saving stock.
\begin{table}
\begin{tabular}{|c|c c c c c|} \hline Ticker & Daily & Equity Value & TWAP (bps) & LSTM (bps) & Savings (bps) \\ \(\downarrow\) & Volume & Traded (\$) & (unit: \(\$10^{3}\)) & (unit: \(\$10^{3}\)) & (unit: \(\$10^{3}\)) \\ \hline TSLA & 28.20 M & 909.89 M & 180.43 (1.98) & 161.49 (1.77) & 18.94 (0.21) \\ AMZN & 42.65 M & 329.26 M & 64.19 (1.95) & 53.81 (1.63) & 10.38 (0.32) \\ AAPL & 87.88 M & 570.61 M & 84.61 (1.48) & 76.24 (1.34) & 8.37 (0.15) \\ NVDA & 32.57 M & 274.19 M & 44.06 (1.61) & 39.12 (1.43) & 4.94 (0.18) \\ MSFT & 20.61 M & 256.82 M & 38.40 (1.50) & 33.87 (1.32) & 4.53 (0.18) \\ NFLX & 3.85 M & 93.59 M & 21.47 (2.29) & 17.73 (1.89) & 3.73 (0.40) \\ META & 15.87 M & 219.53 M & 34.09 (1.55) & 30.59 (1.39) & 3.50 (0.16) \\ BA & 14.42 M & 139.82 M & 26.20 (1.87) & 23.01 (1.65) & 3.19 (0.23) \\ GOOG & 0.62 M & 66.42 M & 13.14 (1.98) & 11.11 (1.67) & 2.02 (0.30) \\ PYPL & 7.05 M & 70.38 M & 13.17 (1.87) & 11.34 (1.61) & 1.83 (0.26) \\ \hline \(\vdots\) & & & & & \\ ORCL & 8.61 M & 30.58 M & 5.21 (1.70) & 4.50 (1.47) & 0.71 (0.23) \\ \(\vdots\) & & & & & \\ \hline AIG & 4.07 M & 8.88 M & 1.71 (1.92) & 1.37 (1.54) & 0.34 (0.38) \\ EMR & 2.12 M & 8.54 M & 1.61 (1.89) & 1.28 (1.50) & 0.33 (0.39) \\ MET & 3.72 M & 9.63 M & 1.70 (1.76) & 1.38 (1.44) & 0.31 (0.32) \\ DUK & 1.93 M & 8.90 M & 1.58 (1.78) & 1.27 (1.43) & 0.31 (0.35) \\ SO & 3.12 M & 9.28 M & 1.57 (1.70) & 1.27 (1.37) & 0.31 (0.33) \\ CL & 2.94 M & 11.28 M & 1.86 (1.65) & 1.56 (1.38) & 0.29 (0.26) \\ DOW & 3.72 M & 9.69 M & 1.62 (1.67) & 1.36 (1.41) & 0.25 (0.26) \\ BKNG & 0.22 M & 22.54 M & 3.96 (1.75) & 3.70 (1.64) & 0.25 (0.11) \\ WBA & 4.62 M & 10.07 M & 1.70 (1.68) & 1.45 (1.44) & 0.25 (0.25) \\ BK & 3.88 M & 8.69 M & 1.51 (1.73) & 1.27 (1.46) & 0.23 (0.27) \\ \hline \end{tabular}
\end{table}
Table 3: Ten most and least saving stocks by LSTM compared with TWAP strategy for noiseless order-book case with \(x_{0}=.05\times A\) and the median saving stock.
\begin{table}
\begin{tabular}{|c|c c c c c|} \hline \hline Ticker & Daily & Equity Value & VWAP (bps) & LSTM (bps) & Savings (bps) \\ \(\downarrow\) & Volume & of Traded (\$) & (unit: \(\$10^{3}\)) & (unit: \(\$10^{3}\)) & (unit: \(\$10^{3}\)) \\ \hline GOOG & 0.62 M & 2150.54 M & 1046.09 (4.86) & 947.77 (4.41) & 98.32 (0.46) \\ BKNG & 0.22 M & 2076.48 M & 1741.56 (8.39) & 1671.94 (8.05) & 69.62 (0.34) \\ BLK & 0.46 M & 707.64 M & 439.79 (6.21) & 394.23 (5.57) & 45.57 (0.64) \\ TMO & 2.98 M & 493.96 M & 194.35 (3.93) & 171.52 (3.47) & 22.82 (0.46) \\ AVGO & 1.08 M & 435.77 M & 149.75 (3.44) & 133.44 (3.06) & 16.32 (0.37) \\ CHTR & 0.61 M & 616.99 M & 289.98 (4.70) & 278.15 (4.51) & 11.83 (0.19) \\ ADBE & 1.43 M & 494.80 M & 140.58 (2.84) & 129.16 (2.61) & 11.42 (0.23) \\ LMT & 0.96 M & 365.99 M & 126.62 (3.46) & 117.34 (3.21) & 9.28 (0.25) \\ UNH & 2.89 M & 381.23 M & 84.43 (2.21) & 75.98 (1.99) & 8.45 (0.22) \\ ACN & 1.17 M & 276.23 M & 81.30 (2.94) & 72.95 (2.64) & 8.35 (0.30) \\ \hline \hline \(\vdots\) & & & & \\ ABT & 4.05 M & 111.71 M & 14.13 (1.27) & 13.58 (1.22) & 0.55 (0.05) \\ \(\vdots\) & & & & \\ \hline BA & 14.42 M & 193.90 M & 12.90 (0.67) & 12.82 (0.66) & 0.08 (0.00) \\ BK & 3.88 M & 44.78 M & 5.83 (1.30) & 5.75 (1.28) & 0.08 (0.02) \\ GM & 13.78 M & 44.10 M & 2.48 (0.56) & 2.43 (0.55) & 0.05 (0.01) \\ C & 18.34 M & 56.97 M & 2.74 (0.48) & 2.69 (0.47) & 0.05 (0.01) \\ AAPL & 87.88 M & 129.86 M & 2.40 (0.18) & 2.36 (0.18) & 0.04 (0.00) \\ WFC & 14.04 M & 37.81 M & 1.46 (0.39) & 1.43 (0.38) & 0.03 (0.01) \\ XOM & 27.13 M & 53.12 M & 2.25 (0.42) & 2.23 (0.42) & 0.03 (0.01) \\ PFE & 25.04 M & 40.06 M & 1.59 (0.40) & 1.57 (0.39) & 0.02 (0.01) \\ BAC & 47.20 M & 34.29 M & 0.95 (0.28) & 0.93 (0.27) & 0.02 (0.01) \\ F & 70.81 M & 11.81 M & 0.26 (0.22) & 0.26 (0.22) & 0.00 (0.00) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ten most and least saving stocks by LSTM compared with VWAP strategy for noiseless order-book case with \(x_{0}=10^{6}\) and the median saving stock.
\begin{table}
\begin{tabular}{|c|c c c c c|} \hline Ticker & Daily & Equity Value & TWAP (bps) & LSTM (bps) & Savings (bps) \\ \(\downarrow\) & Volume & Traded (\$) & (unit: \(\$10^{3}\)) & (unit: \(\$10^{3}\)) & (unit: \(\$10^{3}\)) \\ \hline GOOG & 0.62 M & 2150.54 M & 1126.70 (5.24) & 947.77 (4.41) & 178.93 (0.83) \\ BKNG & 0.22 M & 2076.48 M & 1805.40 (8.69) & 1671.94 (8.05) & 133.46 (0.64) \\ BLK & 0.46 M & 707.64 M & 476.08 (6.73) & 394.23 (5.57) & 81.85 (1.16) \\ CHTR & 0.61 M & 616.99 M & 337.32 (5.47) & 278.15 (4.51) & 59.17 (0.96) \\ AVGO & 1.08 M & 435.77 M & 177.81 (4.08) & 133.44 (3.06) & 44.37 (1.02) \\ TMO & 0.93 M & 493.96 M & 214.06 (4.33) & 171.52 (3.47) & 42.53 (0.86) \\ ADBE & 1.43 M & 494.80 M & 161.14 (3.26) & 129.16 (2.61) & 31.97 (0.65) \\ LIN & 1.07 M & 270.08 M & 109.40 (4.05) & 79.80 (2.95) & 29.59 (1.10) \\ LMT & 0.96 M & 365.99 M & 144.76 (3.96) & 117.34 (3.21) & 27.41 (0.75) \\ COST & 1.45 M & 404.42 M & 124.43 (3.08) & 102.18 (2.53) & 22.24 (0.55) \\ \hline \(\vdots\) & & & & \\ JNJ & 4.49 M & 156.07 M & 19.59 (1.26) & 17.23 (1.10) & 2.36 (0.15) \\ \(\vdots\) & & & & \\ \hline C & 18.34 M & 56.97 M & 3.08 (0.54) & 2.69 (0.47) & 0.39 (0.07) \\ INTC & 22.63 M & 51.50 M & 2.50 (0.48) & 2.11 (0.41) & 0.38 (0.07) \\ GM & 13.78 M & 44.10 M & 2.78 (0.63) & 2.43 (0.55) & 0.35 (0.08) \\ XOM & 5.87 M & 53.12 M & 2.52 (0.47) & 2.23 (0.42) & 0.29 (0.05) \\ AAPL & 87.88 M & 129.86 M & 2.62 (0.20) & 2.36 (0.18) & 0.26 (0.02) \\ PFE & 25.04 M & 40.06 M & 1.82 (0.46) & 1.57 (0.39) & 0.25 (0.06) \\ WFC & 4.62 M & 37.81 M & 1.65 (0.44) & 1.43 (0.38) & 0.22 (0.06) \\ BAC & 47.20 M & 34.29 M & 1.08 (0.32) & 0.93 (0.27) & 0.15 (0.04) \\ T & 45.78 M & 19.94 M & 0.61 (0.30) & 0.55 (0.27) & 0.06 (0.03) \\ F & 70.81 M & 11.81 M & 0.30 (0.26) & 0.26 (0.22) & 0.04 (0.03) \\ \hline \end{tabular}
\end{table}
Table 5: Ten most and least saving stocks by LSTM compared with TWAP strategy for noiseless order-book case with \(x_{0}=10^{6}\) and the median saving stock.
Figure 6: Performance of LSTM in noisy order-book case.
Therefore, the LSTM strategy is robust to noise perturbations in \(\beta\). As expected, we empirically observe that the difference between the LSTM and VWAP daily transaction costs decreases as the noise intensity increases. The second row in Fig. 6 shows the transaction costs saved by using the LSTM strategy rather than the VWAP and TWAP strategies for each stock. Compared with the noiseless order-book case, the average savings become smaller. Similarly, Tables 6 and 7 show the top/bottom ten stocks for which the LSTM saves the most/least transaction costs compared to the VWAP and TWAP strategies, respectively, for \(x_{0}=.05\times A\). The median savings are around $160 for the VWAP case and $690 for the TWAP case. Tables 8 and 9 show the corresponding stocks for \(x_{0}=10^{6}\). The median savings are around $400 for the VWAP case and $2,070 for the TWAP case.
## 5 Conclusion
We have shown how an LSTM can be used for the optimal execution of large stock orders in a limit order book. Our backtests demonstrate that the LSTM can outperform TWAP and VWAP-based strategies in order books with both noiseless and noisy power-law parameters. It is possible that the improved performance of the LSTM is due to its ability to aggregate information across multiple stocks and to detect effects such as heteroskedasticity in both the price and the volume time series. There are a variety of avenues for future work. One direction is to include permanent price impact and to see how the LSTM adjusts to early sub-orders adversely affecting the price. Another direction would be to optimize the length of the trading period and the frequency of trading, both of which were static hyperparameters in this paper.
## Declaration of Interest
This work was partially supported by NSF grant DMS-1907518.
|
2306.13922 | Unsupervised Mapping of Arguments of Deverbal Nouns to Their
Corresponding Verbal Labels | Deverbal nouns are nominal forms of verbs commonly used in written English
texts to describe events or actions, as well as their arguments. However, many
NLP systems, and in particular pattern-based ones, neglect to handle such
nominalized constructions. The solutions that do exist for handling arguments
of nominalized constructions are based on semantic annotation and require
semantic ontologies, making their applications restricted to a small set of
nouns. We propose to adopt instead a more syntactic approach, which maps the
arguments of deverbal nouns to the universal-dependency relations of the
corresponding verbal construction. We present an unsupervised mechanism --
based on contextualized word representations -- which allows to enrich
universal-dependency trees with dependency arcs denoting arguments of deverbal
nouns, using the same labels as the corresponding verbal cases. By sharing the
same label set as in the verbal case, patterns that were developed for verbs
can be applied without modification but with high accuracy also to the nominal
constructions. | Aviv Weinstein, Yoav Goldberg | 2023-06-24T10:07:01Z | http://arxiv.org/abs/2306.13922v1 | # Unsupervised Mapping of Arguments of Deverbal Nouns
###### Abstract
Deverbal nouns are nominal forms of verbs commonly used in written English texts to describe events or actions, as well as their arguments. However, many NLP systems, and in particular pattern-based ones, neglect to handle such nominalized constructions. The solutions that do exist for handling arguments of nominalized constructions are based on semantic annotation and require semantic ontologies, making their applications restricted to a small set of nouns. We propose to adopt instead a more syntactic approach, which maps the arguments of deverbal nouns to the universal-dependency relations of the corresponding verbal construction. We present an unsupervised mechanism--based on contextualized word representations--which allows to enrich universal-dependency trees with dependency arcs denoting arguments of deverbal nouns, using the same labels as the corresponding verbal cases. By sharing the same label set as in the verbal case, patterns that were developed for verbs can be applied without modification but with high accuracy also to the nominal constructions.
## 1 Introduction
Systems that aim to extract and summarize information from large text collections often revolve around the concept of predicates and their arguments. Such predicates are often realized as verbs (_the performers interpret the music_), but the same predicative concepts can also be realized as nouns (_musical interpretation by the performers_). This process of realizing verbal predicates as nouns is called _nominalization_, and it involves changing the syntactic structures around the content words participating in the construction, while keeping its semantics the same. In this work, we are interested in mapping arguments of nominal constructions that appear in text, to the corresponding ones in verbal structures (i.e., to identify the syntactic object role of _music_ and syntactic subject role of _performers_, in _music interpretation by the performers_).
Nominalizations, also known as nominal predicates, are nouns derived from words of a different part of speech, such as verbs or adjectives. For example, in English1, the nominalization _interpretation_ is derived from the verb _interpret_, and the nominalization _precision_ is related to the adjective _precise_. The usage of nominalizations is widespread in English text, and according to Gurevich et al. (2007), about half of all sentences in written texts contain at least one nominalization. In our work, we observed a ratio of 120k nominalizations to 180k verbs in a random collection of 100k Wikipedia sentences. Thus, the interpretation of nominalizations is central to many language understanding tasks. In the current work, we focus on nominalizations which are derived solely from verbs, commonly called deverbal nouns.
Footnote 1: While this work focuses on English nominalizations, the phenomenon itself is not English-specific.
Figure 1: Example of our task. Top: verbal argument structure. Middle: nominal argument structure. Bottom: nominal structure enriched with corresponding verbal argument labels (thick blue edges).

**Existing attempts** around identifying arguments of nominalizations either rely on a predefined semantic roles ontology (e.g., SRL-based roles such as those in VerbNet (Schuler, 2005) or FrameNet (Baker et al., 1998)), as suggested by Pradhan et al. (2004), Pado et al. (2008) and Zhao and Titov (2020), or consider a limited subset of nominalized structures (Lapata (2000) and Gurevich and Waterman (2009)). Early works approached the task in a fully supervised manner (Lapata (2000), Pradhan et al. (2004)), hence suffering from insufficient annotated nominal data. To overcome this, Pado et al. (2008) and more recently Zhao and Titov (2020) considered a transfer scenario from verbal arguments to nominal arguments while assuming supervised data only for verbs. Nevertheless, their methods were limited to specific predicates, even with extensive annotated verbal data. Moreover, the previous works each considered a different set of argument types due to supervision constraints.
**Our Proposed Task** Rather than relying on a predefined semantic roles ontology, in this work we propose to map the arguments of deverbal nouns to the _syntactic_ arguments of the corresponding active verbal form. This allows us to define a task with a consistent and restricted label set (syntactic subject, syntactic object, syntactic prepositional modifier with preposition X), while still maintaining expressivity: if one knows how to extract the verbal arguments from the active verbal form, they will be able to also extract the nominal ones.
A natural formulation is to ask "How will this verb's arguments be realized in a deverbal noun construction?". However, this approach is problematic, as the same verbal structure, e.g. _IBM appointed Sam as manager_, can be realized in many different ways around the same nominalization, including: _IBM's appointment of Sam as manager_, _Sam's appointment as manager by IBM_ and _Sam's IBM appointment as manager_.
One solution would be to ask for all the possible nominal realizations. This is the approach taken by nominalization lexicons such as NomLex (Macleod et al., 1998). However, this is also problematic in practice, as the different possible syntactic structures may conflict when encountering a nominalization within a sentence (_IBM's appointment_ vs. _Sam's appointment_).
We resolve this by asking the opposite question: "given a nominalized instance within a sentence and its set of arguments, how will these arguments map to those of an active verb construction?". That is, rather than asking "how will this verbal construction be realized as a nominal one" we ask "how will this nominal case be realized as an active verb construction". Using this formulation, we define a corpus enrichment task, in which we take in a corpus of syntactic trees, and annotate each deverbal noun case with its nominal arguments, using the corresponding verbal argument labels. An example of the trees enrichment is provided in Figure 1.
**Potential Utility** Our motivation follows that of Tiktinsky et al. (2020): we imagine the use of the enhanced trees in systems that integrate universal dependency trees (Nivre et al., 2016) as part of their logic, using machine-learned or pattern-based techniques. Our proposed enrichment will allow users to search for a verb construction, and to also retrieve nominal realizations of the same relation.
One proposed use case regards the task of Open Information Extraction (OpenIE; Etzioni et al., 2008), which refers to the extraction of relation tuples from plain text without demanding a predefined schema. These tuples can be extracted from both verbal and nominal phrases, e.g., the tuple (Steve Jobs; founded; Apple) from the phrase _Steve Jobs founded Apple_ and the tuple (IBM; research) from the phrase _IBM's research_. Some OpenIE systems, such as Renoun (Yahya et al., 2014) and Angeli et al.'s (2015) system, integrate rule-based patterns to extract such relations from nominal phrases, e.g., (X; Y) from phrases of the structure "X's Y". However, these patterns can be misleading, as _IBM's research_ is interpreted differently from _Rome's destruction_ (IBM researched vs. Rome was destroyed), leading to contradicting relations. To overcome this, we suggest using verb-based patterns to extract relations from nominal phrases, upon integrating our enhanced trees. Concretely, based on our enhanced trees, an OpenIE system can use a pattern that detects the nsubj-phrase and dobj-phrase for both verbs and nouns, to construct the relation tuple (nsubj; verb/noun; dobj). With this approach, different nominal phrases with the same syntactic structure would properly map to different ordered relations, such as (destruction; Rome) for the phrase _Rome's destruction_.
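As an illustration of this usage, the sketch below extracts (nsubj; predicate; dobj) tuples from a dependency graph represented as a plain list of labeled edges; the data structure and example arcs are our own illustrative simplification, not the actual interface of the enhanced trees.

```python
from typing import Dict, List, Tuple

def extract_tuples(tokens: List[str],
                   edges: List[Tuple[int, str, int]]) -> List[Tuple[str, str, str]]:
    """Collect (subject; predicate; object) tuples from (head, relation, dependent) edges.

    Because the enhanced trees label nominal arguments with the same nsubj/dobj
    relations as verbs, a single pattern covers both verbal and nominal predicates.
    """
    args: Dict[int, Dict[str, int]] = {}
    for head, rel, dep in edges:
        if rel in ("nsubj", "dobj"):
            args.setdefault(head, {})[rel] = dep
    return [(tokens[a["nsubj"]], tokens[pred], tokens[a["dobj"]])
            for pred, a in sorted(args.items()) if "nsubj" in a and "dobj" in a]

# "Rome's destruction of the city": the enriched arcs mark Rome as nsubj and city as dobj.
tokens = ["Rome", "'s", "destruction", "of", "the", "city"]
edges = [(2, "nsubj", 0), (2, "dobj", 5)]
print(extract_tuples(tokens, edges))  # [('Rome', 'destruction', 'city')]
```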
**An Unsupervised Approach** We take an unsupervised approach to this nominal-to-verbal argument mapping, relying on pre-trained contextualized word representations. The intuition behind our approach is that in order to resolve nominal arguments to verbal ones, there are two prominent signals: the semantic types of the arguments, and
their syntactic configuration with respect to their predicate. We hypothesize that pre-trained contextualized word embeddings capture both of these signals (as shown in Section 7.2), and also capture the similarities between the verbal and nominal cases (as demonstrated in Appendix A). Briefly, our approach works by identifying the candidate arguments of each deverbal noun instance, retrieving a set of sentences containing the corresponding active verb form, encoding both the deverbal noun instance and the active verb sentences using a masked language model, and searching for a mapping that maximizes some similarity metric between the nominal argument candidates and the verbal instances.
**Our contributions in this work** are thus two-fold: (1) we formulate the task of aligning nominal arguments to the arguments of their corresponding active verbal form; and (2) we propose an unsupervised method for tackling this task. We also provide code2 for enriching universal dependency trees (Nivre et al., 2016) with nominal arguments.
Footnote 2: Our code is available at [https://github.com/AviVWn/NounVerbUDTransfer](https://github.com/AviVWn/NounVerbUDTransfer)
## 2 Deverbal Nouns
Deverbal nouns are one type of nominalizations which are derived specifically from verbs, e.g., the deverbal noun _treatment_ is derived from the verb _treat_. The events represented by deverbal nouns are described using phrases in the sentence that complement the nouns. The arguments of the deverbal noun correspond to the arguments of the matching verb; each matches a different question about the action taken. For instance, in the phrase _professional treatment of illness_, _professional_ refers to the actor/subject of the verb _treat_ (professionals), and _illness_ refers to the object of the action _treat_.
The deverbal nouns, as typical nouns, are most often complemented by other noun phrases (_treatment of illness_, _his treatment_ and _health treatment_) and adjectives (_professional treatment_). Implicit and other types of complementing arguments are not considered part of this work's scope. Each deverbal noun defines a unique structure of these arguments, assigning different roles for the same typed arguments. For instance, consider the phrases _time preference of the individual_ and _individual waste of time_, which match the same syntactic structure ("noun-compound _of_ noun"). However, the first sentence matches the structure "Obj Noun of Subj" ("individuals\({}_{2}\) prefer time\({}_{1}\)"), and the second sentence refers to the structure "Subj Noun of Obj" ("individual\({}_{1}\) waste time\({}_{2}\)"). Furthermore, even the same deverbal noun may demand different labels for similar arguments in different contexts. For example, in the phrase "_Rome's destruction_", _Rome_ was destroyed, whereas in the phrase "_Rome's destruction of the city_", _Rome_ is the destroyer. Therefore, the argument roles are not determined solely by syntactic structure, and incorporate a mix of syntactic configuration, argument semantics, and predicate-specific information.
## 3 Related Works
Arguments of nominalizations have long been investigated in the field of NLP. One early line of research explored the syntactic structure of the arguments and modeled the structure of many nominalizations, resulting in a detailed lexicon called NomLex (Macleod et al., 1998). The lexicon seeks to describe the allowed complement structures for a nominalization and to relate the nominal complements to the arguments of the corresponding verb. Following the publication of NomLex, Meyers et al. (1998) described how an Information Extraction (IE) system could exploit the linguistic information in the NomLex lexicon. Yet, the suggested approach was rarely utilized by further research, as many works only exploited the verb-noun pairs specified by the lexicon.
Regarding identifying and labeling the arguments of nominalizations, a supervised approach has been suggested under various task settings. One early paper by Lapata (2000) presented a probabilistic procedure to infer whether the modifier of a nominalization (the head noun) stands in subject or object relation with it. For instance, the algorithm should predict that the modifier's role in the phrase _child behavior_ is subject, since the phrase refers to the _child_ as the agent of the action described by the verb _behave_. Stated differently, this procedure focuses on extracting only one specific argument of nominalizations in a noun phrase. Another notable paper by Pradhan et al. (2004) considered FrameNet-based (Baker et al., 1998) semantic arguments of nominalizations and applied a machine learning framework for eventive nominalizations in English and Chinese, aiming to identify and label their arguments. Finally, Kilicoglu et al. (2010) published a similar approach for nominalizations used in biomedical text.
Some related works acknowledge the shortage of nominalizations with labeled arguments and suggest unsupervised methods for data expansion based on verbs with labeled arguments. Similarly to ours, these works exploited the similarity and alignment of the noun-verb arguments. For example, Pado et al. (2008) and Zhao and Titov (2020) considered the argument labeling task for nominalizations in a setup where the verbal sentences are human-labeled, and with regard to semantic role labeling (SRL) arguments. Pado et al. (2008) exploited the similarities between the argument structure of event nominalizations and corresponding verbs while utilizing common syntactic features and distributional-semantic similarities. More recently, Zhao and Titov (2020) suggested a variational auto-encoder method, in which the labeler serves as an encoder, whereas the decoder generates the selectional preferences of the arguments for the predicted roles.
A different approach was taken by Gurevich and Waterman (2009), who worked in a fully unsupervised manner, automatically extracting and labeling verbal arguments from a large parsed corpus of Wikipedia. This approach resembles an intermediate stage of ours, yet differs in that it considers a reduced set of argument types (subject and object) and a reduced set of possible argument syntax for the nominalizations (possessive and 'of' arguments). Lately, Lee et al. (2021) engaged with a different task with similar applications: they suggested an unsupervised method for paraphrasing clauses with nominalizations into active verbal clauses.
## 4 Task Definition
As discussed in the introduction, we define a task of labeling the arguments of deverbal nouns within a sentence, with labels of the arguments in the corresponding active verb constructions. Here we provide a more complete and formal definition. While our aim is to label all of the deverbal nouns in a given corpus, here we focus on describing the task with relation to a single instance of a sentence and a deverbal noun within it.
We consider the syntactic arguments of active verbal forms to belong to the set \(L\) consisting of the universal dependency relations _nsubj_, _dobj_ and _nmod:X_, where \(X\) is a preposition (e.g., _nmod:in_, _nmod:on_, _nmod:with_). In words, the syntactic subject, syntactic object, and arguments attached as prepositional phrases where the identity of the preposition is part of the relation. While these prepositions may correspond to many different semantic roles, for a given verb they usually indicate a concrete and unique role.
Formally, given a sentence with words \(w_{1},\ldots,w_{n}\), and a marked deverbal noun within the sentence (say in position \(w_{i}\)), we seek to find \(K\) pairs of the form \((rel_{k},w_{j_{k}})\), \(1\leq k\leq K\), where \(rel_{k}\in\{nsubj,dobj,nmod:X\}\) and \(w_{j_{k}}\) is a word in the sentence (\(j_{k}\) is an index of a sentence word). For simplicity, we also demand that every relation type cannot be repeated more than once in the identified set of pairs. These pairs indicate arguments of the deverbal noun and their relations to it, expressed using an active-verb label set.
In Figure 1, the blue edges of the bottom tree indicate the output _(nsubj, 1)_, _(dobj, 6)_. Note that the task includes both the _identification_ of the arguments and their _label assignment_.
## 5 Methodology
While we intend to handle all deverbal nouns in a given collection of sentences, here we focus on how to resolve a single deverbal noun. We identify deverbal nouns and their corresponding verbal forms based on a given lexicon of verb-noun pairs, which we consider as input. In this work, we use the NomLex lexicon (Macleod et al., 1998), where future work can also replace this with a learned model.
Given a deverbal noun within a sentence, we first identify its potential arguments. This is realized by searching a set of syntactic relations in the corresponding universal dependency tree (we use the UDv1 parser trained by Tiktinsky et al. (2020) via the spaCy toolkit3). We then label the arguments by comparing their contextualized word embeddings to those of the corresponding verb arguments, in a set of sentences containing this verb (we further motivate this comparison in Appendix A). Finally, based upon the labeled arguments, we construct the final output as pairs of the arguments' label (i.e. verbal UD relation) and the arguments' head word.
Footnote 3: [https://spacy.io](https://spacy.io)
### Argument Identification
Given a sentence and a specific deverbal noun within it, we first identify the phrases which could correspond to the desired arguments of the matching verb. The identified set of phrases is referred to as "argument candidates". Naively, every phrase in the sentence can complement the deverbal noun and be considered as an argument, thus resulting in a relatively large set of candidates.
\[\ell_{n}=\arg\max_{\ell}\,sim(\mathbf{a_{n}},avg(\{\mathbf{\tilde{a}} \mid\ell(\tilde{a})=\ell,\tilde{a}\in\tilde{A}\})) \tag{1a}\] \[\ell_{n}=\arg\max_{\ell}\,sum(\{sim(\mathbf{a_{n}},\mathbf{\tilde{a}} )\mid\ell(\tilde{a})=\ell,\mathbf{\tilde{a}}\in knn(\mathbf{a_{n}},\mathbf{ \tilde{A}},k)\}) \tag{1b}\]
To reduce this set, we consider the syntactic dependency tree of the sentence, searching for words that stand in a direct dependency relation with the deverbal noun. Then, for every identified word we construct the argument candidate as the phrase corresponding to the subtree headed by this word according to the dependency tree. More specifically, we observed that arguments of deverbal nouns are realized using words that stand with the deverbal nouns in a small set of possible syntactic relations: _nmod:poss_, _compound_, _amod_, and _nmod:X_. Table 1 provides an example of these syntactic relations, using argument candidates for the deverbal noun _analysis_. In Section 7.1 we compare this approach with other approaches we considered for identifying the arguments.
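For concreteness, a minimal sketch of this candidate extraction over a CoNLL-U-like parse is given below; the token/head/deprel layout and the example parse are illustrative assumptions rather than the actual parser output format.

```python
from typing import List, Tuple

CANDIDATE_RELS = ("nmod:poss", "compound", "amod")  # plus any nmod:X relation

def argument_candidates(tokens: List[str], heads: List[int], deprels: List[str],
                        noun_idx: int) -> List[Tuple[str, str]]:
    """Return (relation, phrase) pairs for the deverbal noun at position noun_idx.

    heads are 0-based token indices (-1 for the root); each candidate phrase is the
    subtree headed by a direct dependent of the noun attaching with a relation in Table 1.
    """
    def subtree(i: int) -> List[int]:
        nodes = [i]
        for j, h in enumerate(heads):
            if h == i:
                nodes.extend(subtree(j))
        return sorted(nodes)

    candidates = []
    for j, (h, rel) in enumerate(zip(heads, deprels)):
        if h == noun_idx and (rel in CANDIDATE_RELS or rel.startswith("nmod:")):
            candidates.append((rel, " ".join(tokens[k] for k in subtree(j))))
    return candidates

# "his analysis of the data", with "analysis" at index 1
tokens = ["his", "analysis", "of", "the", "data"]
heads = [1, -1, 4, 4, 1]
deprels = ["nmod:poss", "root", "case", "det", "nmod:of"]
print(argument_candidates(tokens, heads, deprels, noun_idx=1))
# [('nmod:poss', 'his'), ('nmod:of', 'of the data')]
```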
### Argument Labeling
Upon argument identification, we aim to label the identified argument candidates of the deverbal nouns, with the desired argument types (_nsubj_, _dobj_, _nmod:X_ or \(\emptyset\)), such that the labels align to the labels of the corresponding arguments in the active verbal form (the label \(\emptyset\) indicates that this argument candidate is not in fact an argument of the noun, such as _primary_ in the phrase _the primary influence_). For instance, in the sentence _The emperor's destruction of Paris_, we wish to label _the emperor_ as _nsubj_ and _Paris_ as _dobj_, since the sentence can only be understood as the verbal sentence _The emperor destroyed Paris_.
Concretely, denote the argument candidates as \(a_{1},\dots,a_{N}\). We need to assign them with labels \(\ell_{1},\dots,\ell_{N}\), where \(\ell_{i}\in\{\emptyset,nsubj,dobj,nmod:X\}\), under the constraint that every two arguments \(a_{i}\), \(a_{j}\), can share labels if and only if they match the label \(\emptyset\) (as emphasized in the defined task).
We start by obtaining a set of verbal reference sentences \(S\), containing \(M\) sentences \(s_{1},\dots,s_{M}\), where each sentence \(s_{m}\) contains the verbal form of the deverbal noun (these are obtained using a simple keyword search). In each of these instances \(s_{m}\), we use simple active and passive verbal dependency patterns to identify the \(A_{m}\) verbal arguments \(\tilde{a}_{1}^{m},...,\tilde{a}_{A_{m}}^{m}\), labeled as \(\tilde{\ell}_{1}^{m},\dots,\tilde{\ell}_{A_{m}}^{m}\). Intuitively, we now seek to find for each of our nominal arguments \(a_{n}\) the most similar verbal argument \(\tilde{a}_{j}^{m}\), and match their labels. In our experiments, we obtained a set \(S\) containing about 1,500 reference sentences4 for every verb required by the evaluation datasets.
Footnote 4: We considered \(\ll\)1,500 reference sentences for less frequent verbs.
We encode both the input sentence and the reference sentences using a contextualized encoder (we use BERT-large-uncased [5] in this work), resulting in vectors \(\mathbf{a_{1}},\dots,\mathbf{a_{N}}\) for the input sentence and vectors \(\mathbf{\tilde{a}_{1}^{m}},...,\mathbf{\tilde{a}_{A_{m}}^{m}}\) for each verb reference sentence \(s_{m}\). We denote the entire set of verbal arguments as \(\tilde{A}\) and the corresponding set of vectors as \(\mathbf{\tilde{A}}\). We use a metric function \(sim(\mathbf{a},\mathbf{\tilde{a}})\) over the pair of vectors to quantify their similarity (we use _cosine_ similarity in this work). We then choose the label of each nominal argument \(a_{n}\) independently5 based on its closest neighbours in \(\mathbf{\tilde{A}}\). We consider two variants: in the first one (1a, nearest-avg-argument), we select the label \(\ell_{n}\) by averaging the reference vectors for each verbal argument label, and then choosing the label whose corresponding average vector is the most similar to the nominal argument's vector. In the second variant (1b, k-nearest-argument), we take the k-nearest verbal argument vectors (we use k=5) to the nominal argument vector. We compute the sum of similarities between \(\mathbf{a_{n}}\) and each of the k-nearest vector \(\mathbf{\tilde{a}}\) corresponding to each label, and choose
\begin{table}
\begin{tabular}{c c} \hline Phrase & UD Relation \\ \hline \hline _his analysis_ & nmod:poss \\ _data analysis_ & compound \\ _linguistic analysis_ & amod \\ _analysis of **the data**_ & nmod:of \\ \hline \end{tabular}
\end{table}
Table 1: The types of UD relations we used to identify candidate arguments, and their example with the deverbal noun _analysis_.
the label with the highest sum.
For both labeling variants, we assign the label \(\emptyset\) for arguments whose similarity with any other reference argument does not pass a chosen threshold.
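A condensed sketch of this labeling step is shown below, using Hugging Face transformers to obtain contextualized vectors for argument head words and cosine similarity for the decision rules (1a) and (1b); the threshold value and all helper names are illustrative choices rather than the exact settings used in our experiments.

```python
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")
encoder = AutoModel.from_pretrained("bert-large-uncased")

def head_word_vector(words, head_idx):
    """Contextualized vector of one word: mean of its word-piece vectors."""
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**enc).last_hidden_state[0]          # (num_pieces, dim)
    piece_ids = [i for i, w in enumerate(enc.word_ids()) if w == head_idx]
    return hidden[piece_ids].mean(dim=0).numpy()

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def label_argument(arg_vec, ref_vecs, ref_labels, variant="knn", k=5, threshold=0.4):
    """Decision rules (1a) nearest-avg-argument and (1b) k-nearest-argument.

    Returns None (the empty label) when no reference argument is similar enough."""
    sims = [cosine(arg_vec, v) for v in ref_vecs]
    if max(sims) < threshold:
        return None
    labels = sorted(set(ref_labels))
    if variant == "avg":   # (1a): compare against the per-label average reference vector
        scores = {l: cosine(arg_vec, np.mean([v for v, rl in zip(ref_vecs, ref_labels) if rl == l], axis=0))
                  for l in labels}
    else:                  # (1b): sum similarities among the k nearest reference arguments
        top = sorted(zip(sims, ref_labels), reverse=True)[:k]
        scores = {l: sum(s for s, rl in top if rl == l) for l in labels}
    return max(scores, key=scores.get)
```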
## 6 Evaluation Data
Our task is to identify arguments of deverbal nouns and assign each one of them a label from the set \(L=\{nsubj,dobj,\textit{nmod:X}\}\). For evaluation, we need sentences with deverbal nouns whose arguments are labeled with these relations. For example, the deverbal noun _relocation_ in the phrase _Family relocation to Manchester_ should be labeled with the pairs _(nsubj, 1)_ and _(nmod:to, 4)_, as specified in Section 4.
We create three such evaluation datasets, the first based on a nominalization paraphrasing dataset, and the other two are based on the NomLex lexicon, while they differ by the coverage of deverbal nouns that they consider, as we further explain. Moreover, to compare our method's performance to earlier works, we consider the CoNLL-2009 dataset Hajic et al. (2009) for evaluation, as we discuss in 7.3.
**The paraphrasing-derived evaluation set** is derived from a manually annotated dataset for the task of paraphrasing sentences from nominal to verbal form (Lee et al., 2021). The original dataset includes a collection of 449 samples from 369 unique sentences representing 142 different verbs. Each sample represents a paraphrasing between the original nominalization phrase (from a given sentence) and a verbal clausal phrase, for instance _genetic analysis from a sample_ which is paraphrased as _analyze genes from a sample_. For every paraphrasing sample, the dataset specifies the components of the nominal phrase within the structure "_adj/noun_ nominalization _prep pobj_", and the components of the active verbal phrase ("_arg0_ verb _arg1 pp_").
To construct our evaluation set based on this data, we first match each of the nominal components adj/noun and pobj with a verbal component from the set of arg0, arg1 and pp, choosing the one with the closest orthography to the nominal one. From this, we derive the verbal argument labeling for the components of the nominal phrase. Then, we replace each verbal label with its matching UD relation.6 Finally, for every nominal component we determine its head word position in the given context. The word positions, paired with the matching verbal relations, constitute a sample in our new paraphrasing-derived evaluation set.
Footnote 6: _arg0 \(\mapsto\) nsubj, arg1 \(\mapsto\) dobj, pp \(\mapsto\) nmod:X_, where X is determined by the leading preposition.
In the course of dataset construction, we filter out some data samples. First, data samples that specify two nominal components matching the same verbal component are removed from our dataset, as they do not fit the constraints of the defined task. For example, in the phrase _environmental assessment for the project_, the combined components of the noun can be understood together as the object of the matching verb (_assess the environmental impact of the project_), resulting in two nominal arguments labeled with the same verbal relation. Second, we keep only the first data sample for every repeated nominal phrase, to ensure a single gold labeling for each nominal phrase. After filtering, we are left with 309 samples covering 122 different verbs.
The NomLex evaluation sets are constructed using the NomLex lexicon.7 The NomLex lexicon contains a list of about 4k deverbal nouns, and for each of them specifies the various ways in which their arguments can be realized syntactically, and how they map to the corresponding verbal arguments. For example, an adapted NomLex entry for a deverbal noun like _destruction_ would specify the related forms of the noun (i.e., the verb and other related deverbal nouns) and, most significantly, a set of dependency-tree patterns corresponding to several different realizations of the noun. Each dependency-tree pattern represents a set of labeled arguments in a specific dependency tree. For instance, the entry of _destruction_ would contain a pattern that corresponds to the dependency structure shown in the middle of Figure 1 and demands the labeling of _Rome_ as subject and _city_ as object. Hence, using a parsed dependency tree of a sentence with a deverbal noun, we can extract the labeled arguments in the sentence for any specified pattern that fulfills the sentence's dependency structure. However, this method does not allow for a definitive decision in many cases, as the lexicon often contains multiple contradicting labeled patterns. In Section 7 we show that relying solely on NomLex results in significantly lower precision.
Footnote 7: We converted the NomLex lexicon from its original LISP-based formatting and phrase-structure trees, to a more modern form encoded in JSON and using UD syntactic relations. The code for this conversion is accessible at [https://github.com/AvivWn/NounVerbUDTransfer](https://github.com/AvivWn/NounVerbUDTransfer).
We collect English Wikipedia sentences from
Guo et al. (2020) that contain a deverbal noun, and for each sentence, we identify the deverbal noun's arguments and labels based on the adapted NomLex entry as described above. We discard sentences for which the entry suggests two or more different assignments, when matching two or more dependency patterns. We then map NomLex's labels into the corresponding dependency relations of the active verbal form. To match the examples in the paraphrasing dataset, we consider only data samples with two labeled arguments each. We divide the collected samples into two evaluation sets based on the verbal form of the represented deverbal nouns. \(\mathbf{NomLex}_{paraphrasing}\) considers only samples which refer to verbs that appear in the paraphrasing-derived corpus, whereas \(\mathbf{NomLex}_{other}\) considers samples that match 315 other verbs. In each evaluation set, we keep 25 labeled sentences for each verb.
Tune/Test Split. Our method is unsupervised but still requires tuning of hyperparameters. We keep a tuning subset for each source of evaluation data (paraphrasing-derived and NomLex), which is also used for evaluation during development. In the paraphrasing dataset, we sample 20% of the data to construct the tuning set, keeping aside the remaining 80% for evaluation. Out of the 122 verbs in the paraphrasing-derived evaluation set, 12 appear only in the tuning set, 83 only in the test set, and 27 appear in both sets. The split aims to ensure that the results are not verb-specific and to prevent overfitting, as we perform hyperparameter optimization on the tuning set, which does not contain all the verbs that appear in the test set. To tune the method for NomLex-based data, we perform a similar tune-test split on \(\mathbf{NomLex}_{paraphrasing}\), based on the same tune-test verb division made for the paraphrasing evaluation set. Concretely, NomLex instances of the 12 tuning-only verbs and the 83 test-only verbs are included only in the NomLex tuning set and test set, respectively; instances of the 27 common verbs are divided into the tuning and test sets in a 20%-80% ratio. Moreover, we preserve the entire \(\mathbf{NomLex}_{other}\) corpus for testing.
Evaluation Metrics. We use two evaluation metrics: **Relation-F1** is the F1 score of all the predicted word-relation pairs compared to the gold labeled pairs (without distinguishing argument labels, for comparability with Zhao and Titov (2020), which uses the CoNLL-2009 evaluation scorer Hajic et al. (2009)). **Exact-Match** scores how many noun instances had all their relations identified and labeled correctly. A predicted relation is considered correct if it matches both the same argument head word and the same label as the gold relation.
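A minimal sketch of the two metrics follows, assuming predictions and gold annotations are given as sets of (head word position, label) pairs per deverbal-noun instance; micro-averaging over all pairs and requiring exact set equality for Exact-Match are illustrative simplifications of the definitions above.

```python
def relation_f1_and_exact_match(pred_by_instance, gold_by_instance):
    # Each entry is a set of (head_word_position, relation_label) pairs
    # for one deverbal-noun instance.
    tp = fp = fn = exact = 0
    for pred, gold in zip(pred_by_instance, gold_by_instance):
        tp += len(pred & gold)
        fp += len(pred - gold)
        fn += len(gold - pred)
        exact += int(pred == gold)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return f1, exact / max(len(gold_by_instance), 1)
```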
## 7 Experiments and Results
In this section, we present the results of our method on the evaluation sets, together with the experiments we conducted concerning the two stages of our method. The setup that produced the best results is discussed in Section 7.2, including the chosen hyperparameters, which were tuned on the tuning sets.
Baseline. As a baseline for our approach, we consider the same process we used for generating the NomLex evaluation sets. More specifically, for a given parsed sentence with a given deverbal noun, the baseline method attempts to match the deverbal noun instance against all dependency patterns in the appropriate entry of the adapted NomLex lexicon. Every fulfilled pattern yields a set of labeled arguments. The combined set of non-colliding arguments, i.e., arguments that match a single argument type, is then mapped into pairs of head words and UD relations, which form the output of the baseline method.
### Argument Identification
Using the set of relation labels in Section 5.2 and considering each one of them as an argument candidate, we cover 94.6% of all the relations in our paraphrasing-derived test-set, while producing 76 candidates (16.2% of all proposed candidates) that are not arguments. We find this to be of sufficient coverage and accuracy for the paraphrasing dataset. Regarding the NomLex evaluation sets, all arguments are identified using that relations set (100% coverage), while producing 24.8% and 23.1% non-argument candidates for \(\mathbf{NomLex}_{paraphrasing}\) and \(\mathbf{NomLex}_{other}\), respectively. As NomLex does not consider adjectival arguments, we choose to consider a reduced set of dependency relations without the _amod_ relation, keeping the same coverage and producing only 8.8% and 8.7% non-argument candidates, respectively.
For the paraphrasing-derived dataset we also consider two other alternatives: relying on the information in the NomLex lexicon for each noun, resulting in coverage of 58.5% and producing 6.9% non-argument candidates, and relying on NomLex
lexicon while also considering _amod_ relations, resulting in increased coverage (85.3%) and more non-argument candidates (13.9%). These low coverage results are expected, as the NomLex lexicon lacks representations of some nominal structures; hence we chose the label-set approach, which was the most effective one.
We examined the resulting argument candidates and identified three main reasons for the non-argument candidates. First, some correspond to arguments missing from the evaluation set. In the paraphrasing set, this is due to the focus on a two-argument structure for each deverbal noun; in the NomLex evaluation sets, it is primarily due to the discarding of undetermined arguments and the lack of representation for prepositional adjuncts (which are captured by the dependency relations). Second, some non-argument candidates are misaligned with the correct arguments, not sharing the same head word, as can happen with a human-annotated evaluation set (such as the paraphrasing-derived one). Finally, the remaining candidates are indeed not arguments of the noun.
### Argument Labeling
Main Results. We experiment with the two labeling methods discussed in Section 5.2: the nearest average of reference argument representations per label (nearest-avg-argument), and the k-nearest reference arguments (k-nearest-argument). Table 2 shows the results of both labeling methods, using the most suitable identification method for each evaluation set as determined by the argument identification comparison. We report results on the three test sets and compare them to the baseline method and to the naive 'all-subject' and 'all-object' methods (which label all argument relations with _nsubj_ and _dobj_, respectively). Both labeling methods perform better than the baseline on the paraphrasing evaluation set, and k-nearest-argument outperforms nearest-avg-argument on all metrics of all evaluation sets. The best results were attained by calibrating the methods on the matching tuning sets, e.g., selecting a specific threshold for labeling \(\emptyset\)-typed arguments (0.56 for the paraphrasing tuning set and 0.48 for the NomLex tuning set). We observed similar performance tendencies on the tuning sets and the test sets (see Appendix B), suggesting that our method generalizes to unseen examples. We further validated generalization to arbitrary verbs by obtaining comparable results on NomLex\({}_{other}\) and NomLex\({}_{paraphrasing}\) without additional tuning, even though each considers nouns that match a different set of verbs. The extended results in Appendix B also include the Relation-F1 scores of our best method for the most common relations in the test sets.
Importance of Contextualization. Arguments of verbs and deverbal nouns share semantics, as both commonly paraphrase the same entity in different contexts. For instance, the subject of the verb _acquire_ usually matches the semantic role of a 'HUMAN' (_John acquired the ingredients_) or a 'COMPANY' (_Apple acquired another startup company_). The same subjects can be realized in a deverbal noun context, as in _The ingredients acquisition of John_ and _Apple's acquisition of the startup company_, respectively. The semantic role of words can be captured by vector representations, both contextualized representations such as BERT and uncontextualized representations such as Word2Vec Mikolov et al. (2013). We compared our main results, which use pre-trained BERT-based representations, to uncontextualized representations, using the pre-trained fastText Word2Vec model of Bojanowski et al. (2017). The results of our method with the two representations are shown in Table 3.
| Method | Paraphrasing-derived F1 | Paraphrasing-derived Exact | \(\text{NomLex}_{paraphrasing}\) F1 | \(\text{NomLex}_{paraphrasing}\) Exact | \(\text{NomLex}_{other}\) F1 | \(\text{NomLex}_{other}\) Exact |
| --- | --- | --- | --- | --- | --- | --- |
| baseline (NomLex-based) | 43.42 | 7.66 | - | - | - | - |
| all-subject | 27.67 | 0.00 | 37.04 | 0.00 | 41.52 | 0.00 |
| all-object | 36.50 | 0.00 | 40.24 | 0.00 | 38.19 | 0.00 |
| nearest-avg-argument | 44.08 | 17.74 | 39.81 | 18.38 | 40.10 | 19.49 |
| k-nearest-argument | **62.93** | **36.29** | **53.74** | **34.98** | **53.67** | **35.06** |

Table 2: The best results of the two suggested labelers on the three test sets, compared to the baseline process and the naive methods. Regarding metrics, 'F1' refers to Relation-F1 and 'Exact' refers to Exact-Match.
Using Word2Vec, we see a decrease of about 25% in Relation-F1 and about 40% in Exact-Match compared to the BERT results with our best method, from which we conclude that the context of the argument also affects the performance of our method.
Syntax vs. Semantics. The previous experiment demonstrated that contextualized vectors outperform static ones, suggesting that more than word semantics is needed. In the following experiment, we further quantify the contribution of syntactic position vs. argument semantics to the final predictions. We manipulate the paraphrasing evaluation set by switching the sentence positions of the two specified arguments in each tagging sample. Note that the resulting sentence is usually neither grammatically nor semantically correct. Then, we apply our labeling stage using the BERT vectors of the arguments in their new positions. Compared to the labels the same arguments received in the original positions, almost 70% of the labels differ. Thus, the syntactic position has a non-negligible effect on the verb-noun alignment that our method aims to resolve.
### Comparison to Earlier Work
Existing unsupervised attempts that approach the nominal argument labeling task as a transfer scenario from verbal arguments to nominal arguments (as our work does) rely on a predefined semantic role ontology. For instance, Zhao and Titov (2020) use verbal SRL roles to label nouns with the same set of roles, as in the CoNLL-2009 dataset Hajic et al. (2009). Our task definition and proposed methods do not require a predefined semantic role ontology, yet they can be tested on one for comparability with such existing work. Thus, we apply our labeling methods to the CoNLL-2009 nominal test data after verbalizing the nominal predicates in the dataset, using the CoNLL-2009 verbal training data as verbal references. For comparability with Zhao and Titov (2020), we skip the argument identification stage and assume the identified arguments are given. Finally, we calculate the F1 performance of our methods (as discussed for "Relation-F1" in Section 6) and compare it to the corresponding results reported by Zhao and Titov (2020). As shown in Table 4, our best method ('k-nearest-argument') outperforms their baselines ('Most-frequent', 'Factorization', and 'Direct-transfer'). However, their 'Full-system' approach surpasses our method by exploiting a supervised verbal SRL system and data augmentations, which we do not use in our work.
## 8 Conclusions
In this work, we formulate the task of aligning arguments of deverbal nouns to the arguments of their corresponding active verbal form. We formulate the task as a UD enrichment task, aiming to enrich deverbal nouns in text with verbal UD relations for the matching nominal arguments. Our formulation, compared to the ones suggested in previous works, does not rely on a predefined roles ontology.
We suggest an unsupervised approach to this nominal-to-verbal argument mapping based on pretrained contextualized word representations. Our method matches identified nominal arguments with automatically extracted arguments of the corresponding verb. The suggested method outperforms the NomLex-based baseline, which relies on an expertly constructed comprehensive lexicon. We also show the importance of contextualization, observing a 25% decrease in performance when using uncontextualized vectors. Moreover, with a dedicated experiment we further validate our hypothesis that both semantics and syntactic structure are captured in the word representations we use.
We provide a standalone code for enriching universal dependency trees with nominal arguments for a given parsed corpus, which can be integrated into NLP systems that use universal dependency patterns as part of their design or features.
| Method | F1 |
| --- | --- |
| Most-frequent | 56.51 |
| Factorization | 44.48 |
| Direct-transfer | 55.85 |
| Full-system | **63.09** |
| k-nearest-argument (Ours) | 58.82 |

Table 4: F1 results reported by Zhao and Titov (2020) on CoNLL-2009 nominal test data, compared to the result of our best labeler applied on the same dataset.
| Method | BERT | Word2Vec |
| --- | --- | --- |
| nearest-avg-arg | **44.08 (17.74)** | 20.78 (4.44) |
| k-nearest-arg | **62.93 (36.29)** | 46.53 (21.37) |

Table 3: The best results of the suggested labelers using BERT and Word2Vec representations, on the paraphrasing test set, specified as "Relation-F1 (Exact-Match)".
## Limitations
The main drawback of the work is in its evaluation, which was performed on datasets that were not manually annotated for the task but adapted to it by various means. While we believe these evaluation sets provide a strong indication of task performance, evaluating on bespoke data explicitly annotated for the task is usually preferable. Another limitation is language specificity: the work currently focuses on English; extending it to other languages is left for future work.
## Ethics Statement
Like all works that depend on embeddings, the resulting models may be biased in various ways. Users should take this into consideration when deploying them in products.
## Acknowledgements
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT).
|
2302.07278 | The structure of magnetic fields in spiral galaxies: a radio and
far-infrared polarimetric analysis | We propose and apply a method to quantify the morphology of the large-scale
ordered magnetic fields (B-fields) in galaxies. This method is adapted from the
analysis of Event Horizon Telescope polarization data. We compute a linear
decomposition of the azimuthal modes of the polarization field in radial
galactocentric bins. We apply this approach to five low-inclination spiral
galaxies with both far-infrared (FIR: 154 $\mu$m) dust polarimetric
observations taken from the Survey of ExtragALactic magnetiSm with SOFIA
(SALSA) and radio (6 cm) synchrotron polarization observations. We find that
the main contribution to the B-field structure of these spiral galaxies comes
from the $m=2$ and $m=0$ modes at FIR wavelengths and the $m=2$ mode at radio
wavelengths. The $m=2$ mode has a spiral structure and is directly related to
the magnetic pitch angle, while $m=0$ has a constant B-field orientation. The
FIR data tend to have a higher relative contribution from other modes than the
radio data. The extreme case is NGC 6946: all modes contribute similarly in the
FIR, while $m=2$ still dominates in the radio. The average magnetic pitch angle
in the FIR data is smaller and has greater angular dispersion than in the
radio, indicating that the B-fields in the disk midplane traced by FIR dust
polarization are more tightly wound and more chaotic than the B-field structure
in the radio, which probes a larger volume. We argue that our approach is more
flexible and model-independent than standard techniques, while still producing
consistent results where directly comparable. | William Jeffrey Surgent, Enrique Lopez-Rodriguez, Susan E. Clark | 2023-02-14T19:00:01Z | http://arxiv.org/abs/2302.07278v2 | # The structure of magnetic fields in spiral galaxies: a radio and far-infrared polarimetric analysis
###### Abstract
We propose and apply a method to quantify the morphology of the large-scale ordered magnetic fields (B-fields) in galaxies. This method is adapted from the analysis of Event Horizon Telescope polarization data. We compute a linear decomposition of the azimuthal modes of the polarization field in radial galactocentric bins. We apply this approach to five low-inclination spiral galaxies with both far-infrared (FIR: \(154~{}\mu\)m) dust polarimetric observations taken from the Survey of ExtraGLactic magnetiSm with SOFIA (SALSA) and radio (\(6~{}\)cm) synchrotron polarization observations. We find that the main contribution to the B-field structure of these spiral galaxies comes from the \(m=2\) and \(m=0\) modes at FIR wavelengths and the \(m=2\) mode at radio wavelengths. The FIR data tend to have a higher relative contribution from other modes than the radio data. The extreme case is NGC 6946: all modes contribute similarly in the FIR, while \(m=2\) still dominates in the radio. The \(m=2\) mode has a spiral structure and is directly related to the magnetic pitch angle, while \(m=0\) has a constant B-field orientation. The average magnetic pitch angle in the FIR data is smaller and has greater angular dispersion than in the radio, indicating that the B-fields in the disk midplane traced by FIR dust polarization are more tightly wound and more chaotic than the B-field structure in the radio, which probes a larger volume. We argue that our approach is more flexible and model-independent than standard techniques, while still producing consistent results where directly comparable.
## 1 Introduction
Large-scale spiral magnetic field (B-field) structures are frequently observed in spiral galaxies (e.g., Beck et al., 2019; Lopez-Rodriguez et al., 2022). These B-fields are thought to be generated via a mean-field dynamo driven by differential rotation of the galactic disk and turbulent helical motions (Shukurov and Subramanian, 2021). The three-dimensional structure of galactic B-fields can be decomposed into radial (\(B_{\rm r}\)), azimuthal (\(B_{\phi}\)), and vertical (\(B_{\rm z}\)) components, where the coordinate system is typically defined relative to the core of the galaxy. The structure of the disk magnetic field (e.g., at the midplane \(z=0\)) is often summarized using the pitch angle \(\Psi_{\rm B}=\arctan(B_{\rm r}/B_{\phi})\)(Krasheninnikova et al., 1989). In this formalism, a perfectly toroidal B-field has \(\Psi_{\rm B}=0^{\circ}\), and a perfectly radial B-field has \(\Psi_{\rm B}=90^{\circ}\).
The full three-dimensional structure of galactic B-fields is not directly measurable, but \(\Psi_{\rm B}\) can be estimated from polarimetric measurements of a galactic disk. For any point in the galaxy, \(\Psi_{\rm B}\) is the angle between the local magnetic field orientation and the tangent to a circle with origin at the galaxy's center that passes through that point. The latest compilation of \(\Psi_{\rm B}\) from radio polarimetric observations of \(19\) nearby galaxies shows that \(\Psi_{\rm B}\) is mostly constant within the central \(5-10\) kpc, with values in the range of \(20-35^{\circ}\) (Beck et al., 2019). The \(\Psi_{\rm B}\) values were found to be systematically offset by \(5\)-\(10^{\circ}\) when compared with the molecular (CO) spiral arms (Van Eck et al., 2015; Frick et al., 2016)--i.e., the magnetic pitch angles are more open than the molecular gas arms.
Far-infrared (FIR) polarimetric observations have been shown to reveal different components of the large-scale B-fields in the disks of galaxies (e.g., Borlaff et al., 2021; Lopez-Rodriguez et al., 2021, 2022). A detailed study of the spiral galaxy M51 showed that the radio and FIR magnetic pitch angles are similar within the central \(6~{}\)kpc, but at larger radii the FIR \(\Psi_{\rm B}\) is more tightly wound than at radio wavelengths (Borlaff et al., 2021). This difference may be caused by the interaction of M51 with M51b and/or by the injection of kinetic energy driven by the strong star-forming regions in the outskirts of the spiral arms of M51. The difference in the observed B-field structure arises from the different nature of the interstellar medium (ISM) traced at FIR and radio wavelengths. The FIR polarization arises from thermal emission of magnetically aligned dust grains, tracing a density-weighted medium along the line-of-sight (LOS) and within the beam (i.e., full-width-at-half-maximum, FWHM) of the observations, corresponding to a dense (\(\log_{10}(N_{\rm HI+H2}[\rm cm^{-2}])=[19.96,22.91]\)) and cold (\(T_{\rm d}=[19,48]\) K) component of the ISM (SALSA IV, Lopez-Rodriguez et al., 2022).
The pitch angles have typically been characterized by assuming an a priori functional form for the spiral arms--i.e., logarithmic spirals. Specifically, the pitch angles have been derived using a logarithmic spiral
function fitted to the spiral arms of the gas tracers and/or a multi-mode logarithmic spiral B-field fitted to the magnetic arms (Fletcher et al., 2011; Van Eck et al., 2015). Then, a mean pitch angle value of the entire galaxy is estimated and compared between tracers. A wavelet-based approach was used in the spiral galaxy M83 (Frick et al., 2016) with the shape and width of the kernel as user-defined parameters. Borlaff et al. (2021) performed the same wavelet analysis to the morphological spiral arms and estimated the mean pitch angle per annulus as a function of galactocentric radius for the magnetic pitch angles. A model-independent systematic study of the magnetic pitch angles as a function of galactocentric radii and tracers is required.
Our goal is to characterize the B-field morphology of spiral galaxies using a model-independent approach. We make use of the linear polarimetric decomposition approach (Palumbo et al., 2020) applied to analyze the B-field structure around the supermassive black hole of M87 with the Event Horizon Telescope (EHT) (Event Horizon Telescope Collaboration et al., 2021, 2020). This method is a specific case (\(m=2\)) of the general E/B decomposition widely used to analyze the polarization from the cosmic microwave background (CMB; e.g., Zaldarriaga, 2001). Lopez-Rodriguez et al. (2021, SALSA II) applied this method to the B-fields in the starburst ring of NGC 1097, showing that the radio B-field is dominated by a spiral B-field (with an azimuthal mode \(m=2\)), while a constant (\(m=0\)) B-field dominates at FIR wavelengths. The \(m=2\) B-field was attributed to a magnetohydrodynamic (MHD) dynamo, and the \(m=0\) B-field was associated with galactic shocks between the bar and the starburst ring. These results showed the potential of this method to analyze the B-fields in the multiphase ISM of galaxies. Here, we apply the linear polarimetric decomposition to analyze the B-field orientation in a sample of five spiral galaxies with resolved radio and FIR polarimetric observations. We describe the methodology of the linear polarimetric decomposition in Section 2. The results of the decomposition of the B-field morphology using FIR and radio observations are shown in Section 3. Our discussion is presented in Section 4, and our main conclusions are summarized in Section 5.
## 2 Methods
We adapt the method (Palumbo et al., 2020) proposed for the analysis of EHT measurements of the polarized emission around the supermassive black hole in M87 (Event Horizon Telescope Collaboration et al., 2021, 2020). Here, we summarize the method and describe how this linear polarimetric decomposition can be used to estimate the underlying B-field structure of a spiral galaxy.
### Decomposition into azimuthal B-Field modes
We start by describing the linear polarization via the complex polarized intensity
\[P_{\rm B}(\rho,\phi)\equiv-Q(\rho,\phi)-iU(\rho,\phi), \tag{1}\]
where \(Q\) and \(U\) are the Stokes parameters of linear polarization, and \(\rho\) and \(\phi\) are radial and azimuthal coordinates, respectively. The sign convention in the definition of \(P_{\rm B}\) represents our interest in the B-field orientation, which is rotated by \(90^{\circ}\) from the electric vector position angle measured in radio and FIR polarimetric observations. The measured polarization field is decomposed into azimuthal modes, \(m\), with amplitudes of \(\beta_{m}\) via the decomposition definition
\[\beta_{m}=\frac{1}{I_{\rm ann}}\int_{\rho_{\rm min}}^{\rho_{\rm max}}\int_{0} ^{2\pi}P(\rho,\phi)e^{im\phi}\rho d\phi d\rho \tag{2}\]
\(\beta_{m}\) is a dimensionless complex number. Its absolute value, \(|\beta_{m}|\), corresponds to the amount of coherent power in the \(m^{\rm th}\) mode, and the argument, \(\angle\beta_{\rm m}\), corresponds to the average pointwise rotation of the B-field orientation within an annulus of radius \([\rho_{\rm min},\rho_{\rm max}]\). We define \(\phi=0^{\circ}\) as the B-field orientation in the north direction with positive values increasing along the counterclockwise direction (East of North). This decomposition can be thought of as a radially averaged azimuthal Fourier transform of the complex polarization field, where the \(\beta_{m}\) coefficients are Fourier coefficients corresponding to the internal Fourier modes. We provide a collection of examples of ring-valued linear polarization fields corresponding to \(0\leq m\leq 3\) periodic modes with different values of the \(\beta_{m}\) coefficient in Figure 1 (see also figure 1 of Palumbo et al., 2020). The B-field orientations along its respective ring-valued linear polarization field are offset by an angle \(\theta_{m}\), given by half of the complex phase of \(\beta_{m}=\Re(\beta_{m})+i\,\Im(\beta_{m})\) and lying within \([-90,90]^{\circ}\):
\[\theta_{m}=\frac{1}{2}\arctan\left(\frac{\Im(\beta_{m})}{\Re(\beta_{m})}\right) \tag{3}\]
Figure 1: Examples of ring-valued linear polarization fields. The morphology of the linear polarization field corresponding to \(0\leq m\leq 3\) periodic modes with different values of the \(\beta_{m}\) coefficient are presented.
The \(m=0\) mode corresponds to a constant B-field orientation, \(m=1\) mode corresponds to a half dipole field structure, and \(m=2\) corresponds to a radial and toroidal distribution in the real space and a spiral structure in the complex space. Note that the \(m=2\) mode is analogous to the \(E\) and \(B\) mode decomposition commonly used in studies of CMB polarization (e.g., Kamionkowski et al., 1997; Seljak & Zaldarriaga, 1997; Zaldarriaga, 2001), where the real part of \(\beta_{2}\) is the \(E\)-mode and the imaginary part is the \(B\)-mode. We show the reconstruction of a non-trivial B-field orientation with a combination of \(m=0\) and \(m=2\) modes in Figure 2.
### Implementation of the algorithm
We estimate the B-field orientation over a spiral galaxy as follows.
1. We construct a two-dimensional map of azimuthal angles such that \(\phi=0^{\circ}\) corresponds to North (up), and \(\phi\) increases in the counterclockwise direction (East from North).
2. We define a set of radial masks centered at the peak of the galaxy's central emission at a given wavelength. The first step is to define the grid of radial distances, i.e., the projected distance of each pixel from the galaxy's center in the \(x,y\) plane (with the line of sight along \(z\)). If every galaxy were perfectly face-on, the radial masks would be simple circular annuli of this grid. In practice, we rotate the radial distance grid by the galaxy's inclination, \(i\), and tilt, \(\theta\), angles. We calculate \(r^{\prime}=R_{x}[i]R_{z}[\theta]r\), where \(r\) is the original radius, \(r^{\prime}\) is the new radial distance, and \(R_{x}[i]\), \(R_{z}[\theta]\) are the rotation matrices for the inclination and tilt, respectively. We can thus define the projected annulus for any given inner and outer radius.
3. We compute \(P_{\rm B}(\rho,\phi)\equiv-Q(\rho,\phi)-iU(\rho,\phi)\), the complex-valued polarized intensity (Equation 1), where \(Q\) and \(U\) are the measured Stokes parameters at the galactocentric radius of \(\rho=\sqrt{x^{2}+y^{2}}\) and azimuthal angle \(\phi\).
4. Using Equation 2, we calculate the amplitude \(|\beta_{m}|\) and angle \(\angle\beta_{m}\) for every annulus.
5. For \(m=2\) only, we take the product of the basis function \(e^{im\phi}\) and the complex-valued polarization field \(P(\rho,\phi)=Q(\rho,\phi)+iU(\rho,\phi)\). Note that this definition of the polarized intensity is the opposite sign to Equation 1. We define this for \(m=2\) to facilitate comparison with other measurements of galaxy magnetic pitch angles: this quantity represents the tangent to the local circumference at a given distance from the galaxy's center, which is equivalent to the pitch angle of the B-field, \(\Psi_{2}\). The angles \(\angle\beta_{2}\) and \(\Psi_{2}\) are thus complementary. Figure 2 illustrates the definition of each.
We estimate the uncertainty on our decomposition parameters via a Monte Carlo technique. We generate \(5000\) realizations of the Stokes \(I\), \(Q\), and \(U\) fields by randomly sampling each pixel from a Gaussian distribution centered on the measured value, and with a standard deviation equal to the measurement uncertainty. From each realization we compute \(|\beta_{m}|\), \(\angle\beta_{m}\), and \(\Psi_{2}\) for each annulus defined by radial range \([\rho_{\rm min},\rho_{\rm max}]\). We compute the mean and standard deviation of each quantity over the \(5000\) samples. This method was implemented in python, and the code is available in the Appendix.
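For reference, a minimal Python/NumPy sketch of this procedure is shown below; it is not the code released in the Appendix. The azimuth zero-point and handedness, the tilt-plus-inclination deprojection, and the interpretation of \(I_{\rm ann}\) as the total Stokes \(I\) summed over the annulus are illustrative assumptions that must be adapted to the data.

```python
import numpy as np

def azimuthal_mode_decomposition(I, Q, U, x0, y0, incl_deg, tilt_deg,
                                 rho_min, rho_max, modes=range(-3, 4)):
    # Minimal sketch of Equations 1-2 for a single annulus.
    # I, Q, U: Stokes maps (2D arrays); (x0, y0): galaxy centre in pixels;
    # incl_deg, tilt_deg: inclination and tilt in degrees;
    # rho_min, rho_max: annulus bounds in (deprojected) pixels.
    ny, nx = Q.shape
    y, x = np.indices((ny, nx))
    dx, dy = x - x0, y - y0

    # Sky-plane azimuth: phi = 0 toward North (+y), increasing East of North.
    # The handedness assumed here (East toward -x) must be adapted to the WCS.
    phi = np.arctan2(-dx, dy)

    # Deprojected radius: rotate by the tilt, stretch the minor axis by cos(i).
    # This is one common deprojection; the exact convention may differ.
    t, i = np.deg2rad(tilt_deg), np.deg2rad(incl_deg)
    xr = dx * np.cos(t) + dy * np.sin(t)
    yr = (-dx * np.sin(t) + dy * np.cos(t)) / np.cos(i)
    rho = np.hypot(xr, yr)

    ann = (rho >= rho_min) & (rho < rho_max) & np.isfinite(Q) & np.isfinite(U)
    P_B = -Q - 1j * U                  # Equation 1
    I_ann = np.nansum(I[ann])          # taken here as the total Stokes I in the annulus
    return {m: np.sum(P_B[ann] * np.exp(1j * m * phi[ann])) / I_ann
            for m in modes}

def mode_phase_deg(beta_m):
    # theta_m = half the complex phase of beta_m (Equation 3); for m = 2 the
    # magnetic pitch angle Psi_2 is the complementary angle (Section 2.1).
    return 0.5 * np.degrees(np.angle(beta_m))

# Monte Carlo uncertainties (sketch): perturb Q and U within their errors and
# recompute the decomposition, e.g.
#   samples = [azimuthal_mode_decomposition(I, np.random.normal(Q, dQ),
#              np.random.normal(U, dU), x0, y0, incl, tilt, r0, r1)
#              for _ in range(5000)]
```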
## 3 Application
We describe the data used in this work and show the results of this method to quantify the B-field structure of spiral galaxies.
### Archival data
We apply the method presented in Section 2 to a sample of five spiral galaxies. These spiral galaxies are the only publicly available objects with combined FIR and radio polarimetric observations. The application to both wavelengths allows us to characterize the B-field morphology in two different phases of the ISM. Table 1 lists the properties of the galaxy sample used in this work.
The FIR data were taken from the Survey of ExtragALactic magnetiSm with SOFIA (SALSA)\({}^{1}\) published by Lopez-Rodriguez et al. (2022b, SALSA IV). All FIR polarimetric observations were performed using SOFIA/HAWC+ at \(154\)\(\mu\)m with a beam size (FWHM) of \(13.6^{\prime\prime}\) and a pixel scale of \(6.90^{\prime\prime}\) (i.e., Nyquist sampling). For a detailed description of the data reduction see Lopez-Rodriguez et al. (2022c, SALSA III), and for an analysis of the polarization fraction see Lopez-Rodriguez et al. (2022b, SALSA IV).
Footnote 1: Data from SALSA can be found at [http://galmagfields.com/](http://galmagfields.com/)
The radio polarimetric observations were obtained with the Very Large Array (VLA) and the Effelsberg 100-m radio telescope at \(6\) cm with a typical angular resolution of \(8^{\prime\prime}\).
Figure 2: Example of the B-field decomposition of a galaxy. The composition of the \(m=2\) mode with \(\beta_{2}=-i\) and the \(m=0\) mode with \(\beta_{0}=-i\) is shown. Additionally, the relationship between the averaged pointwise rotation of the B-field orientation, \(\angle\beta_{2}\), and the magnetic pitch angle, \(\Psi_{2}\), is illustrated at the top of the \(m=2\) mode on the left.
For a detailed description of the data reduction of each galaxy we refer to Fletcher et al. (2011, M51), Frick et al. (2016, M83), Soida et al. (2001, NGC 3627), Chyzy & Buta (2008, NGC 4736), and Beck (1991, 2007, NGC 6946). The \(6\) cm polarimetric observations were selected because they are the common radio wavelength of all galaxies and have higher signal-to-noise ratios than the \(3\) cm observations. At longer radio wavelengths (\(18,\,20\) cm), the observations can be strongly affected by Faraday rotation (Beck et al., 2019). For all radio observations, the Stokes \(I\), \(Q\), and \(U\) maps were convolved with a 2D Gaussian kernel to match the beam size of the HAWC+ observations. Each galaxy was then reprojected to match the footprint and pixelization of the HAWC+ observations. The smoothed and reprojected radio polarimetric observations are publicly available on the SALSA website. Figure 3 shows the measured B-field orientation at \(154\)\(\mu\)m and \(6\) cm for each of the spiral galaxies used in our study. For visualization purposes, we display one B-field orientation per beam and only polarization measurements with \(PI/\sigma_{PI}\geq 3\), where \(PI\) is the polarized intensity and \(\sigma_{PI}\) is the associated uncertainty.
### Results of the B-field orientation decomposition
We apply the steps presented in Section 2.2 to the five galaxies shown in Figure 3. For each galaxy, we select data with \(I/\sigma_{I}\geq 10\), where \(I\) and \(\sigma_{I}\) are the Stokes \(I\) and its uncertainty, respectively. Table 1 lists each galaxy's inclination, \(i\), and tilt, \(\theta\), angles used to define the projected annuli. We calculate the radial profiles by setting the width of each annulus equal to the beam size of the HAWC+ observations, i.e., \(13\farcs 6\) (\(2\) pixels). The core (\(2\) beams = \(27\farcs 2\) = \(4\) pixels) of each galaxy is masked because of the limited number of independent measurements in that innermost region. Each decomposition is centered at the location of the peak total intensity of the radio emission. All of these galaxies have an unresolved core at radio wavelengths, while the FIR emission shows an extended core (e.g., M83) or a dearth of central emission (e.g., M51). We test the robustness of the central coordinate selection by shifting the central coordinates by \(\pm 1\) pixel in all directions, which changes the final pitch angles, \(\Psi_{2}\), across the entire disk by a mean of \(<10^{\circ}\) at FIR wavelengths and \(<4^{\circ}\) at radio wavelengths. We show the results (Figures 4 and 5) out to the largest radius at which the uncertainty on \(\Psi_{2}\) is \(\leq 30^{\circ}\).
#### 3.2.1 Amplitudes
We compute \(\beta_{m}\) for the \(-3\leq m\leq 3\) modes and show their relative amplitudes as a function of the radii. The fractional amplitude per annulus, per galaxy is
\[|\tilde{\beta_{m}}|=\frac{|\beta_{m}|}{\sum_{m=-3}^{m=3}|\beta_{m}|}, \tag{4}\]
where the uncertainties are estimated using the Monte Carlo approach described in Section 2.2. We can further average over all annuli and all galaxies to estimate the mean relative mode amplitude of our spiral galaxies (Figure 6 and Table 2).
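A one-function sketch of Equation 4, reusing the dictionary of \(\beta_{m}\) values produced by the sketch in Section 2.2 (illustrative only):

```python
def relative_amplitudes(betas):
    # Fractional amplitude of each azimuthal mode in one annulus (Equation 4).
    total = sum(abs(b) for b in betas.values())
    return {m: abs(b) / total for m, b in betas.items()}
```

Averaging these fractions over all annuli (and over the galaxies) gives the mean relative mode amplitudes discussed below.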
At radio (\(6\) cm) wavelengths, we find that \(m=2\) is the most dominant mode for our spiral galaxies. We estimate that the dominant \(m=2\) mode at \(6\) cm has a mean relative contribution of \(|\tilde{\beta_{2}}|=0.39\pm 0.04\), averaged over annuli. The B-field modes with \(m=0\), \(m=3\), and \(m=1\) have similar relative contributions to the radio polarization of \(|\tilde{\beta_{0}}|=0.14\pm 0.03\), \(|\tilde{\beta_{3}}|=0.12\pm 0.02\), and \(|\tilde{\beta_{1}}|=0.11\pm 0.02\), respectively. All negative modes have relative amplitudes \(<0.1\) in the radio, but together they account for \(|\tilde{\beta_{m<0}}|=0.23\pm 0.04\) of the total average mode amplitude.
At FIR (\(154\)\(\mu\)m) wavelengths, \(m=2\) and \(m=0\) have similar relative contributions of \(|\tilde{\beta_{2}}|=0.18\pm 0.04\) and \(|\tilde{\beta_{0}}|=0.18\pm 0.03\) averaged over the full disk. The modes \(m=1\) and \(m=3\) have the same relative amplitude of \(0.13\). The negative modes have relative amplitudes in the range of \(0.10-0.12\), and they sum to \(|\tilde{\beta_{m<0}}|=0.33\pm 0.05\) of the total average FIR polarization mode amplitude.
| Galaxy | Distance\({}^{1}\) (Mpc) | Scale (pc/\({}^{\prime\prime}\)) | Type\({}^{*}\) | Inclination (i)\({}^{2}\) (\({}^{\circ}\)) | Tilt (PA)\({}^{2}\) (\({}^{\circ}\)) | References |
| --- | --- | --- | --- | --- | --- | --- |
| M51 | \(8.58\) | \(41.21\) | Sa | \(22.5\pm 5\) | \(-7\pm 3\) | \({}^{1}\)McQuinn et al. (2017); \({}^{2}\)Colombo et al. (2014) |
| M83 | \(4.66\) | \(22.38\) | SAB(s)c | \(25\pm 5\) | \(226\pm 5\) | \({}^{1}\)Tully et al. (2013); \({}^{2}\)Crosthwaite et al. (2002) |
| NGC 3627 | \(8.90\) | \(42.75\) | SAB(s)b | \(52\pm 1\) | \(176\pm 1\) | \({}^{1}\)Kennicutt et al. (2003); \({}^{2}\)Kuno et al. (2007) |
| NGC 4736 | \(5.30\) | \(25.46\) | SA(r)ab | \(36\pm 7\) | \(292\pm 2\) | \({}^{1}\)Kennicutt et al. (2003); \({}^{2}\)Dicaire et al. (2008) |
| NGC 6946 | \(6.80\) | \(32.66\) | Sc | \(38.4\pm 3.0\) | \(239\pm 1\) | \({}^{1}\)Karachentsev et al. (2000); \({}^{2}\)Daigle et al. (2006) |

\({}^{*}\)Galaxy type from NASA/IPAC Extragalactic Database (NED; [https://ned.ipac.caltech.edu/](https://ned.ipac.caltech.edu/))

Table 1: Galaxy Sample. _Columns, from left to right:_ (a) Galaxy name. (b) Galaxy distance in Mpc. (c) Physical scale in pc per arcsec. (d) Galaxy type. (e) Inclination of the galaxy in degrees. (f) Position angle of the long axis of the galaxy in the plane of the sky. (g) References for the distance, inclination, and tilt angles.
Figure 3: B-field orientation of our sample of spiral galaxies. Polarization measurements are shown with constant length to illustrate the inferred B-field orientations at \(154~{}\mu\)m (black lines) and \(6\) cm (white lines) overlaid on the \(154~{}\mu\)m total intensity (color scale). All figures share the same colorscale as shown in the colorbar of NGC 6946. Polarization measurements per beam (red circle) with \(PI/\sigma_{PI}\geq 3.0\) are shown, where \(PI\) and \(\sigma_{PI}\) are the polarized intensity and its uncertainty, respectively.
Figure 4: Linear decomposition results of the FIR and radio B-fields for M51 and M83. The titles of each galaxy show the intensity map at FIR wavelengths to guide the reader with the morphological structure of the galaxy. We show the FIR and radio magnetic pitch angles, \(\Psi_{2}\), as a function of the galactocentric radius with the middle solid line corresponding to the mean values and the width of the line corresponding to \(\pm 1\sigma\) (Left). Note that for M51, we have included \(\angle\beta_{0}\) which is the angle associated with the \(m=0\) mode. The center panel shows the relative amplitude of each mode as a function of distance from the galactic center at FIR wavelengths with the width of each band corresponding to the value of the mode’s relative amplitude. Additionally, on the top of each band for each mode are error bars, showing \(\pm 1\sigma\). The right panel shows the same but at radio wavelengths.
The FIR polarization data thus show significantly weaker mode preference compared to the radio polarization data in general. This is evident in the individual galaxy mode decompositions (Figures 4 and 5), as well as the mean relative mode contribution for each data set (Figure 6).
NGC 6946 displays the most extreme difference between the mode decomposition in the radio and the FIR data (Figure 5, last row). In the radio, the \(m=2\) mode is strongly dominant, with \(|\tilde{\beta_{2}}|=0.42\pm 0.07\). By contrast, all modes contribute roughly equally to the FIR polarization (\(|\tilde{\beta_{m}}|=[0.12-0.17]\)). This is perhaps not surprising because, of the galaxies in our sample, NGC 6946 has the most irregular B-field morphology in the FIR. We recalculate the mean relative mode amplitudes for the four galaxies excluding NGC 6946, and we find that the Table 2 values do not change within the stated uncertainties.
#### 3.2.2 Magnetic pitch angles
The magnetic pitch angle, \(\Psi_{2}\), is the complementary angle to the \(\angle\beta_{2}\) angle estimated by the B-field mode decomposition method (Figure 2). We show radial profiles of the pitch angles for each galaxy in Figures 4 and 5. We also estimate the mean pitch angles, \(\langle\Psi_{2}\rangle\), per galaxy at FIR and radio wavelengths (Table 3).
We estimate that the mean pitch angle at FIR wavelengths is smaller than at radio wavelengths (Table 3), i.e., \(\langle|\Psi_{2}^{\rm FIR}|\rangle<\langle|\Psi_{2}^{\rm Radio}|\rangle\). This result indicates that radio spiral B-fields are more open than FIR spiral B-fields in our sample. In addition, the FIR wavelengths have pitch angles with a larger angular dispersion, \(\pm 24^{\circ}\), than at radio wavelengths, \(\pm 8^{\circ}\). This result shows that radio spiral B-fields are more ordered than FIR spiral B-fields.
At radio (6 cm) wavelengths, we estimate that \(\Psi_{2}^{\rm Radio}\) increases as the galaxy radius increases. This result is consistent with the literature (Beck et al., 2019) and implies that \(\Psi_{2}^{\rm Radio}\) opens up toward the outskirts of the disk. It is interesting to note the drastic change of the pitch angle from \(-30^{\circ}\) to \(-10^{\circ}\) at a projected radial distance of \(\sim 2.5\) kpc in M83 (Figure 4). At \(\sim 2.5\) kpc, the pitch angle varies in the interface between the bar region and the spiral arms as determined from the velocity fields of gas tracers (i.e., HII, CO) and stellar dynamics (Kenney and Lord, 1991).
At FIR (\(154~{}\mu\)m) wavelengths, \(\Psi_{2}^{\rm FIR}\) shows many variations as a function of the galactocentric radius, although in all cases except NGC 6946 there is a large-scale spiral ordered B-field evident in the FIR polarization. We find that at certain radii (e.g., \(2.5-3.5\) kpc) in NGC 6946 the FIR B-field has similar pitch angles to those measured at radio wavelengths (Fig. 5). However, the angular dispersion of NGC 6946 is large, \(\pm 41^{\circ}\), across the disk when compared to the radio B-fields, \(\pm 5^{\circ}\) (Table 3).
## 4 Discussion
### Galactic dynamo
We quantified the large-scale ordered B-fields in the disk of galaxies as a linear combination of axisymmetric modes. We found that B-fields with \(m=2\) (spiral pattern) modes dominate at radio wavelengths, while \(m=2\) and \(m=0\) (constant) modes have similar contributions at FIR wavelengths. The FIR data show a larger relative contribution from higher modes than the radio wavelengths. We discuss these measurements in the context of galactic dynamo theory.
Turbulent dynamo theory explains the measured B-fields as a combination of fluctuation (or small-scale) dynamos and mean-field (or large-scale) dynamos (Subramanian, 1998; Brandenburg and Subramanian, 2005; Shukurov and Subramanian, 2021). In this picture, the large-scale B-fields are generated by differential rotation of the galaxy disk and turbulent helical motions. The turbulent B-fields are generated by turbulent gas motions on the \(\lesssim 50-100\) pc scales of energy injection by supernova explosions and stellar feedback (Ruzmaikin et al., 1988; Brandenburg and Subramanian, 2005; Haverkorn et al., 2008).
| Mode | \(\langle|\beta_{m}^{\rm FIR}|\rangle\) | \(\langle|\beta_{m}^{\rm Radio}|\rangle\) |
| --- | --- | --- |
| 3 | \(0.13\pm 0.03\) | \(0.12\pm 0.02\) |
| 2 | \(0.18\pm 0.04\) | \(0.39\pm 0.04\) |
| 1 | \(0.13\pm 0.02\) | \(0.11\pm 0.02\) |
| 0 | \(0.18\pm 0.03\) | \(0.14\pm 0.03\) |
| -1 | \(0.11\pm 0.02\) | \(0.09\pm 0.02\) |
| -2 | \(0.12\pm 0.02\) | \(0.08\pm 0.02\) |
| -3 | \(0.10\pm 0.03\) | \(0.06\pm 0.02\) |

Table 2: Mean relative amplitudes of a spiral galaxy. _Columns, from left to right:_ (a) Mode. (b) FIR relative amplitude. (c) Radio relative amplitude. The errors represent the standard deviation of the pitch angle profile, not the uncertainties of the average value.
Figure 6: Mean relative amplitudes of the B-field modes of a composited spiral galaxy. FIR (red) and radio (blue) relative amplitudes for modes \(-3\leq m\leq 3\) are shown. The B-field pattern associated with each mode is shown at the top.
Present-day FIR and radio polarimetric observations (Beck et al., 2019; Lopez-Rodriguez et al., 2022) with spatial resolutions of \(\geq 100\) pc cannot resolve the turbulent B-fields in galaxies. The measured B-fields are dominated by the large-scale B-fields, although angular fluctuations of polarimetric properties across the disk can be estimated and compared to expectations for sub-beam-scale physics like star formation, shear, and shocks.
The total B-field can be described as the sum of a regular (or coherent) component and a random component (e.g., Haverkorn, 2015; Beck et al., 2019). A well-defined B-field direction within the beam size of the observations is described as a regular B-field. The random B-field component may have spatial reversals within the beam of the observations, which can be isotropic or anisotropic. The directions of the isotropic random B-fields have the same dispersion in all spatial dimensions. An anisotropic random B-field has a well-defined average orientation in addition to sub-beam-scale reversals. Observationally, the combination of anisotropic and regular B-fields is known as ordered B-fields. Polarized radio synchrotron emission traces the ordered (regular and anisotropic random) B-fields in the plane of the sky, which depends on the strength and geometry of the B-fields, and the cosmic ray electron density. Regular B-fields can only be traced using Faraday rotation measures, which are sensitive to the direction of the B-field along the LOS. Polarized FIR dust emission is sensitive to the density-weighted line-of-sight average of the plane-of-sky B-field orientation, in addition to dust properties like column density and temperature. In this work, we have measured and quantified the large-scale ordered B-fields in spiral galaxies traced by both FIR and radio polarimetry.
Other works have analyzed the regular B-field structure of galaxies using linear models of the mean-field galactic dynamo (e.g., Krause and Wielebinski, 1991; Berkhuijsen et al., 1997; Fletcher et al., 2004, 2011). These models assume an expanded B-field pattern in Fourier series in the azimuthal angle. Each mode is a logarithmic spiral with a constant magnetic pitch angle with the sum of all modes providing a non-axisymmetric B-field. The radio B-field orientations, corrected for Faraday rotation, are fitted using a linear superposition of logarithmic spiral B-fields in three dimensions. The azimuthal wave number \(m_{\rm d}=0\) is an axisymmetric B-field with constant B-field direction, \(m_{\rm d}=1\) is a bisymmetric B-field with two opposite spiral B-field directions, and \(m_{\rm d}=2\) is a quadrisymmetric B-field with alternating B-field directions. Most of the studied galaxies in radio polarimetric observations are dominated by \(m_{\rm d}=0\)(Beck et al., 2019), although higher modes are sometimes required (e.g., Ehle and Beck, 1993; Rohde et al., 1999). \(m_{\rm d}>2\) cannot be studied with the spatial resolutions provided by current radio polarimetric observations because they are not sensitive to small spatial variations of the B-field direction. For some galaxies, any combination of modes provides a good fit for the B-field orientations (Beck et al., 2019, table 6). For the galaxies and wavelengths in our sample, this approach finds that the disk of M51 is dominated by \(m_{\rm d}=0\) and \(m_{\rm d}=2\) with a relative amplitude of \(0.72\pm 0.06\)(Fletcher et al., 2011). The halo is dominated by \(m_{\rm d}=1\) and also has \(m_{\rm d}=2\) with a relative amplitude of \(0.30\pm 0.09\). M83 is dominated by \(m_{\rm d}=1\) and has \(m_{\rm d}=0\) with a relative amplitude of \(0.43\pm 0.3\)(Beck et al., 2019). NGC 6946 has similar contributions of \(m_{\rm d}=0\) and \(m_{\rm d}=2\)(Ehle and Beck, 1993; Rohde et al., 1999). The rotation measures of NGC 3627 did not show distinguishable patterns, which was attributed to Faraday rotation from an extended hot, low-density ionized magnetized halo (Soida et al., 2001).
Table 3 shows a comparison of the magnetic pitch angles between our measurements, \(\Psi_{2}\), and the literature, \(p_{\rm o}\). We show the mean and ranges of the ordered B-field pitch angles estimated using the B-field orientations from radio polarimetric observations obtained from the literature.
| Galaxy | \(\langle\Psi_{2}^{\rm FIR}\rangle\) (\({}^{\circ}\)) | \(\langle\Psi_{2}^{\rm Radio}\rangle\) (\({}^{\circ}\)) | \(\Psi_{2}^{\rm Radio}\) (\({}^{\circ}\)) | \(\langle p_{\rm o}^{\rm Radio}\rangle\) (\({}^{\circ}\)) | \(p_{\rm o}^{\rm Radio}\) (\({}^{\circ}\)) | References |
| --- | --- | --- | --- | --- | --- | --- |
| M51 | \(25\pm 17\) | \(28\pm 5\) | \([16-34]\) | \(22\pm 2\) | \([19-27]\) | Fletcher et al. (2011) |
| M83 | \(-41\pm 17\) | \(-29\pm 8\) | \(-[12-36]\) | \(-30\pm 3\) | \(-[23-35]\) | Beck et al. (2019) |
| NGC 3627 | \(4\pm 17\) | \(49\pm 8\) | \([41-51]\) | \(37\pm 8\) | \([16-68]\) | Soida et al. (2001) |
| NGC 4736 | \(-21\pm 6\) | \(-30\pm 3\) | \(-[24-32]\) | \(-35\pm 5\) | - | Chyzy and Buta (2008) |
| NGC 6946 | \(6\pm 41\) | \(-17\pm 5\) | \(-[8-25]\) | \(-27\pm 2\) | \([30-32]\) | Ehle and Beck (1993); Beck et al. (2019) |
| \(\langle|\Psi_{2}|\rangle\) | \(21\pm 24\) | \(29\pm 8\) | - | \(30\pm 7\) | - | |

Table 3: Mean magnetic pitch angle per galaxy and wavelength from this work vs. literature. _Columns, from left to right:_ (a) Galaxy name. (b) FIR magnetic pitch angle. (c) Radio magnetic pitch angle. (d) Range of radio magnetic pitch angle. (e) Radio magnetic pitch angles of the ordered B-fields from the literature. (f) Range of radio magnetic pitch angles of the ordered B-fields from the literature. (g) References of (d, e). In this table, the errors represent the standard deviation of the pitch angle profile, not the uncertainties of the average value.
The range of \(p_{\rm o}\) for some of the galaxies shows the minimum and maximum values of the literature magnetic pitch angles within the galactocentric radii used in our analysis. We see that \(p_{\rm o}\) is similar to our measured \(\langle\Psi_{2}^{\rm Radio}\rangle\). They are not equal because a) \(p_{\rm o}\) is affected by several B-field modes, and b) we only used the LOS that have high-SNR measurements in both the FIR and the radio data. Even though the LOS associated with the FIR measurements do not sample the inter-arm regions well (Figure 3), the mean magnetic pitch angles across the disk at radio wavelengths are similar using both methods, \(\langle\Psi_{2}^{\rm Radio}\rangle=29\pm 8^{\circ}\) vs. \(\langle p_{\rm o}\rangle=30\pm 7^{\circ}\). This result implies that the radio B-fields in the arms are very similar to the B-fields dominating the inter-arm regions, although the arms have larger contributions from star formation activity.
Galactic dynamo models provide the azimuthal wave number, \(m_{\rm d}\), and their associated pitch angles for the disk B-field and the helical B-field. Because the large-scale regular B-field is modeled as a linear combination of logarithmic spiral modes, all dynamo modes are equal to our \(m=2\) (spiral B-field). Although \(m=2\) is dominant, our method shows that the measured B-field is a combination of several B-field patterns (Figure 6). These modes (and the combination of them) can be interpreted as non-axisymmetric ordered B-fields showing deviations from the large-scale spiral ordered B-fields, perhaps due to particular physics (e.g., star-forming regions, shearing, compression, and/or shocks) across the disk. We note that the rotation measure distribution across a galaxy can also be used to measure pitch angles and B-field direction, providing complementary information (Beck et al., 2019).
### Comparison with geometrical models
The B-field morphologies in galaxies have also been quantified using pure geometrical models. We describe these methods and discuss their advantages and caveats.
_Axisymmetric B-fields_: This approach estimates the pixel-by-pixel pitch angles across the galaxy disk. The measured B-field orientations are reprojected and tilted to obtain a face-on view of the galaxy, and an azimuthal template is subtracted from the data. The radial pitch angles are estimated as the mean of the pitch angles at a given annulus. This method was developed by Borlaff et al. (2021) and applied to the same M51 observations presented here. The advantages of this method are that a) the pitch angles on a pixel-to-pixel basis can be estimated, and b) the pitch angles are estimated without prior assumptions about the morphology of the B-field pattern. The pixel-by-pixel map can be used to estimate the means of the magnetic pitch angles from specific areas of the disk, like spiral arms or inter-arm regions, by applying user-defined masks (Borlaff et al., 2021). The disadvantage is that the angular offsets between the measured B-field orientations and the azimuthal profiles are assumed to be the magnetic pitch angles. The B-field modes cannot be estimated.
_Three-dimensional axisymmetric spiral B-fields_: This approach uses a three-dimensional regular B-field model with an axisymmetric spiral B-field configuration. This B-field model is a mode of a galactic dynamo with a symmetric spiral pattern in the galactic midplane with a helical component. This method has been used in the FIR polarimetric observations of Centaurus A (Lopez-Rodriguez, 2021), radio polarimetric observations of other galaxies (Braun et al., 2010), and in our Galaxy (Ruiz-Granados et al., 2010). The advantages of this method are that the three-dimensional B-field component, and the pixel-by-pixel pitch angles across the disk can be obtained. The disadvantages are that a parametric B-field model has to be assumed, and the best-fit B-field is not unique due to the large number of free parameters and the ambiguity of the three-dimensional information of the B-fields from observations (e.g., Braun et al., 2010; Ruiz-Granados et al., 2010). In addition, the B-field modes cannot be estimated.
The aforementioned models provide the magnetic pitch angle from the measured B-field orientations. Note that our estimated \(\Psi_{2}\) is the intrinsic magnetic pitch angle associated with a purely spiral axisymmetric B-field. Figure 7 shows a comparison of the measured magnetic pitch angles using mohawc by Borlaff et al. (2021) and our pitch angle, \(\Psi_{2}\), for M51. Borlaff et al. (2021) showed that the radio and FIR polarimetric observations do not necessarily trace the same B-field component. This result is clearly detected at galactocentric distances of \(r\geq 5\) kpc in Borlaff et al. (2021), but only if the spiral arms are analyzed separately from the inter-arm regions. Those authors used a mask to separate the arm and interarm regions based on the total integrated emission (i.e., moment 0) of HI. The measured FIR and radio magnetic pitch angles are identical when the full disk was analyzed at once (Borlaff et al., 2021, Fig. 7). Our new method obtains the same result - a difference in the FIR and radio pitch angles at large radius - but without the need to mask the data to separate physical components of the disk. Figure 7 emphasizes the potential of our method to characterize B-field morphologies in the multi-phase ISM using a model-independent
Figure 7: Comparison of magnetic pitch angles measurements, \(\Psi_{2}\), between methods at FIR and radio wavelengths. Our method and the pitch angles by Borlaff et al. (2021) are shown.
approach that does not require masking to separate different galactic components.
### Broader applications
The method presented here is adapted from Palumbo et al. (2020). The polarimetric linear decomposition has been applied to the B-field orientation generated by magnetohydrodynamic accretion disk simulations and proposed as a model-independent approach to measuring the accretion state of the M87 black hole observed with the EHT (see also Event Horizon Telescope Collaboration et al., 2021, 2021, 2022). While we adapted this decomposition and applied to the B-fields of spiral galaxies, our method can also be applied to any vector field where a circle or ellipse is a geometry of particular interest. Apart from galaxies, this method could also be adapted to other ISM morphologies, such as supernova remnants or wind-blown bubbles in star-forming regions (e.g., Tahani et al., 2022), or radio synchrotron loops (e.g., Vidal et al., 2015).
One straightforward extension of the method presented here would be to quantify the morphology of galaxy structure observed via the total intensity distribution at different wavelengths. One very simple approach to perform this analysis would be to compute the spatial gradient of the emission in order to encode morphological information as a vector field. One could then apply this method directly and compare the results to the magnetic field structure. Similarly, one could quantify the intensity morphology using the Hessian, which measures local curvature in the image plane and thus has been widely used for measuring the orientations of filamentary structures in astrophysical observations (e.g., Planck Collaboration et al., 2016).
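As a minimal illustration of the gradient-based variant of this idea, the sketch below encodes an intensity map as an orientation field; the function name is an illustrative choice, and a Hessian-based version would replace the first derivatives with second derivatives.

```python
import numpy as np

def gradient_orientation_field(intensity):
    """Turn an intensity image into an orientation field via its spatial gradient.

    Returns, per pixel, the orientation (radians) of the local iso-intensity
    direction, i.e. the direction perpendicular to the gradient, wrapped to
    [-pi/2, pi/2) since orientations are headless (180-degree ambiguous).
    """
    gy, gx = np.gradient(intensity.astype(float))  # dI/dy, dI/dx
    theta = np.arctan2(gy, gx) + np.pi / 2.0       # rotate the gradient by 90 degrees
    return (theta + np.pi / 2.0) % np.pi - np.pi / 2.0
```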
Using this approach to compare galaxy structure to galaxy magnetic field structure opens up intriguing possibilities for morphological insights beyond pitch angle comparisons. Here we can draw further upon analogies to the \(E/B\) decomposition of the polarization field that is frequently used to characterize CMB polarization and has recently been widely applied to Galactic emission of diverse physical origins (e.g. Clark et al., 2015; Krachmalnicoff et al., 2018; Planck Collaboration et al., 2016, 2020). The correlation between the total intensity and \(E\)- or \(B\)-mode polarization field in Galactic dust emission is related to the degree of alignment or misalignment of filamentary density structures and the magnetic field (Huffenberger et al., 2020; Clark et al., 2021; Cukierman et al., 2022). The quantification of these correlations in Galactic dust polarization is clearly extended to the formalism presented here, in order to compute the correlation between various modes of magnetic structure and various tracers of galactic emission structure.
## 5 Conclusions
We have adapted and successfully applied a new model-independent B-field decomposition approach to measure the large-scale ordered B-field orientations associated with five spiral galaxies using FIR and radio polarimetric observations. With radio (\(6\) cm) measurements, we found that the B-fields of spiral galaxies were mainly composed of \(m=2\) with additional but subdominant contributions from \(m=0,m=3\), and \(m=1\). With FIR (\(154\)\(\mu\)m) measurements, the most dominant modes were \(m=2\) and \(m=0\) with smaller relative contributions from \(m=1\) and \(m=3\). At both radio and FIR wavelengths, the overall contribution of \(|\beta_{m<0}|\) was less than \(|\beta_{m\geq 0}|\). NGC 6946 is the extreme case in our sample. In this galaxy, radio measurements still showed \(m=2\) to be dominant, with the rest of the modes contributing roughly the same to the overall B-field orientation. By contrast, in the FIR, no particular mode dominates the B-field structure.
We also found that the mean pitch angle of these galaxies is smaller in the FIR data than in the radio, i.e. \(\langle|\Psi_{2}^{\rm FIR}|\rangle<\langle|\Psi_{2}^{\rm Radio}|\rangle\). If this trend holds, the implication is that radio spiral B-fields are more open than FIR spiral B-fields. Overall, we find that \(\Psi_{2}\) increases with increasing radius at radio wavelengths, meaning that the magnetic field structure opens out toward the outskirts of the galaxy. With FIR wavelengths we found greater angular dispersion than with radio wavelengths, indicating that FIR spiral B-fields are less ordered than radio spiral B-fields.
Based on observations made with the NASA/DLR Stratospheric Observatory for Infrared Astronomy (SOFIA) under the 08_0012 Program. SOFIA is jointly operated by the Universities Space Research Association, Inc. (USRA), under NASA contract NNA17BF53C, and the Deutsches SOFIA Institut (DSI) under DLR contract 50 OK 0901 to the University of Stuttgart. SOFIA (HAWC+), VLA
aplpy(Robitaille & Bressert, 2012; Robitaille, 2019), astropy(Astropy Collaboration et al., 2022), pandas(pandas development team, 2020), matplotlib(Hunter, 2007), scipy(Virtanen et al., 2020)
|
2301.10254 | Neutrino Electromagnetic Properties and the Weak Mixing Angle at the LHC
Forward Physics Facility | The LHC produces an intense beam of highly energetic neutrinos of all three
flavors in the forward direction, and the Forward Physics Facility (FPF) has
been proposed to house a suite of experiments taking advantage of this
opportunity. In this study, we investigate the FPF's potential to probe the
neutrino electromagnetic properties, including neutrino millicharge, magnetic
moment, and charge radius. We find that, due to the large flux of tau neutrinos
at the LHC, the FPF detectors will be able to provide the strongest
laboratory-based sensitivity to the tau neutrino magnetic moment and
millicharge by searching for excess in low recoil energy electron scattering
events. We also find that, by precisely measuring the rate of neutral current
deep inelastic scattering events, the FPF detectors have the potential to
obtain the strongest experimental bounds on the neutrino charge radius for the
electron neutrino, and one of the leading bounds for the muon neutrino flavor.
The same signature could also be used to measure the weak mixing angle, and we
estimate that $\sin^2 \theta_W$ could be measured to about $3\%$ precision at a
scale $Q \sim 10$ GeV, shedding new light on the long-standing NuTeV anomaly. | Roshan Mammen Abraham, Saeid Foroughi-Abari, Felix Kling, Yu-Dai Tsai | 2023-01-24T19:00:01Z | http://arxiv.org/abs/2301.10254v1 | # Neutrino Electromagnetic Properties and the Weak Mixing Angle at the LHC Forward Physics Facility
###### Abstract
The LHC produces an intense beam of highly energetic neutrinos of all three flavors in the forward direction, and the Forward Physics Facility (FPF) has been proposed to house a suite of experiments taking advantage of this opportunity. In this study, we investigate the FPF's potential to probe the neutrino electromagnetic properties, including neutrino millicharge, magnetic moment, and charge radius. We find that, due to the large flux of tau neutrinos at the LHC, the FPF detectors will be able to provide the strongest laboratory-based sensitivity to the tau neutrino magnetic moment and millicharge by searching for excess in low recoil energy electron scattering events. We also find that, by precisely measuring the rate of neutral current deep inelastic scattering events, the FPF detectors have the potential to obtain the strongest experimental bounds on the neutrino charge radius for the electron neutrino, and one of the leading bounds for the muon neutrino flavor. The same signature could also be used to measure the weak mixing angle, and we estimate that \(\sin^{2}\theta_{W}\) could be measured to about 3% precision at a scale \(Q\sim 10\) GeV, shedding new light on the long-standing NuTeV anomaly.
+
Footnote †: preprint: DESY-22-196, UCI-HEP-TR-2022-06
## I Introduction
Neutrino properties are crucial to understanding our Universe and have been prime targets of particle physics experiments. The electromagnetic (EM) properties of neutrinos, in particular, can be tested in existing and future experiments. These measurements include the mass-dimension 4 neutrino millicharge, the mass-dimension 5 neutrino dipole moments, and the mass-dimension 6 neutrino charge radius. These properties can, for example, be used to determine whether neutrinos have a Dirac or Majorana nature [1; 2] and to probe new physics beyond the Standard Model (SM) [3]. These neutrino properties could be linked to intriguing experimental anomalies, including the NuTeV anomaly [4] and the Xenon 1T excess [5] (although the latter was determined most likely to be from an SM background [6]). Large neutrino dipole moments, for example, can also affect the mass gap of black holes [7; 8]. Interesting models were proposed to generate neutrino EM couplings much larger than the SM predictions [9; 10; 11; 12; 13; 14] and to connect the anomalies to the neutrino properties [15]. Currently, the SM predictions of these properties are several orders of magnitude smaller than the present upper bounds, obtained from reactor neutrinos [16; 17], accelerator neutrinos [18; 19; 20], and solar neutrinos [21; 22; 23; 24; 25], to name a few. For a connection between neutrino electromagnetic properties and CP phases, see Ref. [26].
The LHC provides one of the most exciting opportunities for studying high-energy neutrinos and tau neutrinos, given its high center-of-mass energy. The forward region at the LHC, in particular, provides a large flux of neutrinos coming from meson decays [27]. The Forward Physics Facility (FPF) [28] at the LHC is ideally placed to study these TeV-energy neutrinos. Previously, interesting signatures from the neutrino dipole portal [29; 30] were studied at the FPF [31], but a proper analysis of the FPF's future capability to probe neutrino EM properties is still lacking.
In this letter, we utilize the FPF to study interesting properties of neutrinos: the neutrino millicharge, magnetic moment, and charge radius. By looking at low recoil energy electron scattering and neutral current deep inelastic scattering (DIS) events, we show that we can reach competitive sensitivity for these properties. Most excitingly, we can set the world's leading limit on the neutrino charge radius for the electron neutrino, while for the muon neutrino, we come within a factor of a few of the SM prediction. For the tau neutrino, the FPF's limits on the magnetic moment are an order of magnitude better than the DONUT results [19], and its bounds on millicharge and charge radius constitute some of the few existing measurements for this flavor.
The neutrino interactions with the target material investigated in this study also depend sensitively on electroweak parameters. In this context, the precise measurement of the neutral current neutrino DIS rate can also be translated into a precise measurement of the weak mixing angle. This would allow one to test the anomalous result obtained by NuTeV [4].
The paper is organized as follows. We briefly review neutrino EM properties in Sec. II and introduce the detectors under consideration at the FPF in Sec. III. In Sec. IV, we discuss our signal characteristics. We present our results on the neutrino EM properties in Sec. V and discuss the measurement of the weak mixing angle in |
2307.12915 | Consensus-based Participatory Budgeting for Legitimacy: Decision Support
via Multi-agent Reinforcement Learning | The legitimacy of bottom-up democratic processes for the distribution of
public funds by policy-makers is challenging and complex. Participatory
budgeting is such a process, where voting outcomes may not always be fair or
inclusive. Deliberation for which project ideas to put for voting and choose
for implementation lack systematization and do not scale. This paper addresses
these grand challenges by introducing a novel and legitimate iterative
consensus-based participatory budgeting process. Consensus is designed to be a
result of decision support via an innovative multi-agent reinforcement learning
approach. Voters are assisted to interact with each other to make viable
compromises. Extensive experimental evaluation with real-world participatory
budgeting data from Poland reveal striking findings: Consensus is reachable,
efficient and robust. Compromise is required, which is though comparable to the
one of existing voting aggregation methods that promote fairness and inclusion
without though attaining consensus. | Srijoni Majumdar, Evangelos Pournaras | 2023-07-24T16:16:23Z | http://arxiv.org/abs/2307.12915v1 | Consensus-based Participatory Budgeting for Legitimacy: Decision Support via Multi-agent Reinforcement Learning
###### Abstract
The legitimacy of bottom-up democratic processes for the distribution of public funds by policy-makers is challenging and complex. Participatory budgeting is such a process, where voting outcomes may not always be fair or inclusive. Deliberation over which project ideas to put to a vote and choose for implementation lacks systematization and does not scale. This paper addresses these grand challenges by introducing a novel and legitimate iterative consensus-based participatory budgeting process. Consensus is designed to be a result of decision support via an innovative multi-agent reinforcement learning approach. Voters are assisted to interact with each other to make viable compromises. Extensive experimental evaluation with real-world participatory budgeting data from Poland reveals striking findings: consensus is reachable, efficient and robust. Compromise is required, but it is comparable to that of existing voting aggregation methods that promote fairness and inclusion without attaining consensus.
Keywords:participatory budgeting reinforcement learning consensus legitimacy social choice decision support collective decision making digital democracy
## 1 Introduction
Participatory budgeting (PB) is a bottom-up collective decision-making process with which citizens decide how to spend a budget of the local municipality [16, 4]. Citizens initially submit proposals for implementation of various project ideas, i.e. public welfare amenities. These are evaluated by the city officials and finally, a subset is put for voting. Citizens then express their preferences using different input voting methods such as approval or score voting [9]. Finally, voting aggregation methods are applied to select the winner projects [4].
The selection of the winner projects depends on both the input and the aggregation methods [2]. As preferences via approvals or scores are based on self-interest, voting outcomes may yield varying satisfaction levels, under-representation, and poor legitimacy. What is missing for a more stable, conclusive, shared and legitimate voting outcome is a form of systematic and scalable deliberation among citizens, through which individual preferences can be exchanged, debated and compromised in a viable way to reach consensus [6]. This challenge is addressed in this paper.
A new multi-agent reinforcement learning approach (MARL-PB) is introduced to model a novel iterative consensus-based PB process. In the proposed approach, consensus emerges as a result of (i) reward-based learning based on project ideas proposed and selected in the past and (ii) decentralized voter communication that supports information exchange and deliberation.
MARL-PB is implemented as a decision-support system that finds applicability in three use cases by three beneficiaries as shown in Figure 1: (i) _Citizens_: digital assistance to communicate, deliberate and reach a common ground for which projects to implement. This is expected to increase the participation, satisfaction and legitimacy in participatory budgeting. (ii) _Policy-makers_: digital assistance to filter out projects during the project ideation phase with the aim to put for voting a reasonable and legitimate number of projects that results in informed and expressive choices during voting without informational overload. (iii) _Researcher_: digital assistance for the assessment of fair and inclusive voting aggregation methods (e.g. equal shares, Phragmen) via comparisons with a fine-grained consensus-based model such as the one of MARL-PB.
MARL-PB is extensively assessed using state-of-the-art real-world participatory budgeting datasets [15] from Poland. The following three research questions are addressed: (i) _How effective multi-agent reinforcement learning is to assist voters reach consensus in participatory budgeting?_ (ii) _What level of flexibility is required by voters to compromise and reach consensus in participatory budgeting?_ (iii) _How efficient and robust a consensus-based participatory budgeting is by using multi-agent reinforcement learning?_. The quality of consensus, its efficiency and robustness are studied, along with how they are influenced by factors such as the following: (i) number of possible consensus bundles, (ii) in-degree of the communication network, (iii) number of voters, (iv) districts, (v) voting aggregation methods and (vi) project attributes.
Figure 1: Consensus-based participatory budgeting using a multi-agent reinforcement learning (MARL-PB). A decision-support framework is designed for three different use cases and beneficiaries: citizens, policy-makers and researchers.
The contributions of this paper are summarized as follows: (i) The multi-agent reinforcement learning approach of MARL-PB to model and implement an iterative consensus-based PB process. (ii) A decision-support framework based on MARL-PB to digitally assist three use cases by three beneficiaries. (iii) An extension of the reward-based learning strategy with a gossip-based agent communication protocol for decentralized information exchange and consensus building. (iv) A compilation of metrics that characterize and assess the legitimacy of the consensus-based PB process. (v) Practical and revealing insights about the nature of the achieved consensus: it requires compromises comparable to those of the voting aggregation methods that promote fairness and inclusion. (vi) An open-source software artifact of MARL-PB for reproducibility and for encouraging further research in this niche research area.1
Footnote 1: [https://github.com/DISC-Systems-Lab/MARL-PB](https://github.com/DISC-Systems-Lab/MARL-PB) (last accessed: July 2023).
This paper is outlined as follows: Section 2 reviews related work. Section 3 introduces the consensus-based approach. Section 4 illustrates the empirical results and findings. Section 5 concludes this paper and outlines future work.
## 2 Related Literature Review
This section provides an overview of related literature, with a focus on iterative reward-based learning for collective decision-making processes.
Social dilemma games such as the Prisoner's Dilemma have been studied in the context of reward-based learning agents [12] with two agents and discrete rewards, i.e., punishment (0) or no punishment (1), to explore the compromises that two agents make to reach consensus. With multiple agents, this experiment provides insights into how learning can stabilize with a limited number of voters and deterministic rewards. This provides a relevant direction for dealing with voting for social choice and collective preferences.
Airiau et al. [1] model an iterative single-winner voting process in a reinforcement learning setup to analyze the learning capabilities of voting agents to obtain more legitimate collective decisions. The proposed framework provides a new variant of iterative voting that allows agents to change their choices at the same time if they wish. The rank of the winner at every stage in the preference order of voters is used as a reward for the agents to learn and re-select. The proposed work by Liekah et al. [11] additionally calculates the average satisfaction among voters in every iteration based on the winner and individual preferences.
\begin{table}
\begin{tabular}{l l l l l} \hline Aspect & Macy et al. [12] & Airiau et al. [1] & Liekah et al. [11] & Proposed Approach (MARL-PB) \\ \hline
**Outcome** & Single Winner & Single Winner & Single Winner & Multiple Winners \\
**Rewards** & Deterministic & Stochastic\(*\) & Stochastic\(*\) & Deterministic (project attributes) \\ & 4 discrete values & & & Stochastic (from communication) \\
**Execution** & Centralized & Centralized & Centralized & Shared aggregate rewards, decentralized \\
**Action Space** & 4 & 5 & 5 & up to 100 \\ \hline \end{tabular}
*Rank of winner in the preference order of the voter (within an iteration).
\end{table}
Table 1: Comparison of this work with earlier multi-agent reinforcement learning approaches for collective decision making.
Prediction of complete PB ballots using machine learning classification has recently been studied as a way to decrease the information overload of voters using partial ballots [10]. This approach could complement MARL-PB to speed up the consensus process.
Existing approaches (see Table 1) do not incorporate inter-agent communication for large-scale information exchange in multi-winner voting systems. The rewards are fixed in centralized settings and do not model the preferences of the voters. Moreover, the feasibility of reaching a consensus via communication with other voters has not been studied. This is relevant to the problem of scaling up and automating deliberation in collective decision-making to reach more legitimate voting outcomes. These are some of the gaps addressed in this paper.
## 3 Consensus-based Iterative Participatory Budgeting
In this section, an iterative participatory budgeting process is introduced modeled by a multi-agent reinforcement learning approach. The voters (agents) maximize their self-interest but also compromise to reach a consensus in a multi-agent system, where the choices of others are initially only partially known.
### Multi-armed Bandit Formulation
In a participatory budgeting process, voters collectively choose multiple projects subject to a constraint that the total cost of the projects is within the total budget. To incorporate this knapsack constraint, a combinatorial model [2] is designed to formulate bundles from the available list of projects. So for three projects, there are seven possible bundles, out of which a subset fulfills the budget constraints. These are referred to as _valid knapsack bundles_ and they constitute the possible actions in a multi-arm bandit formulation (see Figure 2).
The bundles encode all possible multi-winner preferences the voters can collectively have. Learning valid knapsack bundles instead of individual project selections prevents early terminations by budget violations [3].
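A minimal sketch of this bundle construction is given below. The exhaustive enumeration and the data layout (a dictionary of project costs) are illustrative choices that match the combinatorial model described above; the toy costs are not the ones from Figure 2, but are chosen so that 5 of the 7 subsets are valid, as in that example.

```python
from itertools import combinations

def valid_knapsack_bundles(project_costs, budget):
    """Enumerate every non-empty subset of projects whose total cost fits the budget."""
    projects = sorted(project_costs)
    bundles = []
    for k in range(1, len(projects) + 1):
        for combo in combinations(projects, k):
            if sum(project_costs[p] for p in combo) <= budget:
                bundles.append(combo)
    return bundles

# Toy numbers in the spirit of Figure 2 (three projects, budget 700).
costs = {"P1": 250, "P2": 300, "P3": 450}
print(valid_knapsack_bundles(costs, budget=700))
# [('P1',), ('P2',), ('P3',), ('P1', 'P2'), ('P1', 'P3')]
```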
The iterative version of the PB process introduces a partial voters' communications at every iteration to exchange preferences with the aim they converge (compromise) to the same bundle (same preferences). This process models a
Figure 2: **Calculation of bundles**: They represent all possible combinations from the listed projects where the total cost of all the projects in the bundle is within the budget. In the example, three projects with their corresponding costs are listed for voting in a participatory budgeting process. The total budget is 700. Five out of the seven possible bundles are valid and satisfy the budget constraint.
large-scale automated deliberation process. The selection of an action by a voter depends only on the rewards associated with the bundles, hence, the problem is modeled as a multi-armed bandit reinforcement learning approach [14].
The multi-arm bandits are defined in the form of a tuple \(<A;R>\), where
\(A\) represents actions that are the possible bundles and \(R_{A}=P[r|A]\) is the probability distribution of rewards over the bundles (actions).
_Actions:_ For a participatory budgeting process with a set of projects and a total budget, the actions comprise of the _valid knapsack bundles_ formed from the projects and their associated costs.
_Rewards_: The preference modelling in the form of rewards plays an important role in reaching consensus in a large action space with multiple agents. The aggregate preferences of the voters over the past and current years are encoded for a region in the form of rewards that signify how the needs for public amenities evolve over the years and thus can help to predict the collective preference for the current participatory budgeting process. These are modeled and calculated as deterministic rewards for each bundle.
To reach a consensus, voters explore the action space of each other via information exchange. This exchange models a large-scale and automated deliberation process, which voters use to learn, compromise and adjust their choices.
_Deterministic Rewards_: A project is related to a type of public welfare amenity2 such as urban greenery, sports, culture, education, environmental protection, etc., or to a population group that benefits, such as the elderly or families with children. The preferences of citizens are mostly associated with these attributes and can be used to estimate collective preferences for the population of a region.
Footnote 2: It is assumed that preferences for such projects persist over the passage of time, in contrast to infrastructure projects that once they are implemented, they may not be preferred anymore.
The number of occurrences of such project attributes among the projects put for voting and selected in the past years of a region is used as the reward utility:
\[R^{a}\ =\ \Sigma_{y\in Y}(\mathbb{C}(a)+\mathcal{C}(a)),\]
where \(a\) is a specific project attribute, \(\mathbb{C}\) and \(\mathcal{C}\) signify the normalized total count of occurrence of the project attribute across listed and selected projects respectively over \(Y\) years of participatory budgeting processes in a region.
The reward for a project is determined as follows:
\[R_{p}=\sigma(\Sigma_{i=1}^{\mathcal{A}}(R_{i}^{a}))+\tanh(\frac{c_{p}}{ \mathcal{B}}),\]
where \(\mathcal{A}\) is the total number of attributes associated with a project, \(c_{p}\) is the individual project cost and \(\mathcal{B}\) is the total budget of the PB process. The reward for a bundle is the sum of the rewards of its projects.
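The reward computation above can be written compactly as follows; the data structures (per-year dictionaries of normalized attribute counts) are illustrative assumptions, while the arithmetic follows \(R^{a}\), \(R_{p}\), and the per-bundle sum defined above.

```python
import math

def attribute_rewards(history):
    """R^a: sum of normalized listed/selected counts of each attribute over past years.

    history: one dict per year, mapping attribute -> (normalized count among
             listed projects, normalized count among selected projects).
    """
    rewards = {}
    for year in history:
        for attr, (listed, selected) in year.items():
            rewards[attr] = rewards.get(attr, 0.0) + listed + selected
    return rewards

def project_reward(attrs, cost, total_budget, attr_rewards):
    """R_p = sigmoid(sum of attribute rewards) + tanh(cost / budget)."""
    s = sum(attr_rewards.get(a, 0.0) for a in attrs)
    return 1.0 / (1.0 + math.exp(-s)) + math.tanh(cost / total_budget)

def bundle_reward(bundle, project_attrs, project_costs, total_budget, attr_rewards):
    """Reward of a bundle: sum of the rewards of its projects."""
    return sum(project_reward(project_attrs[p], project_costs[p],
                              total_budget, attr_rewards) for p in bundle)
```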
_Rewards from inter-agent communication:_ At every iteration, we update a dynamic random bidirectional graph using a decentralized process such as the gossip-based peer sampling [8] for peer-to-peer communication. At each iteration, the connected agents send the bundle that has received the highest rewards
(other randomized schemes are supported by the code), together with the reward itself. As the neighbors are randomly decided, the accumulated rewards from information exchange are stochastic. The stateless variant of the Q learning approach is augmented to incorporate rewards obtained from information exchange:
\[Q(b_{t})\gets Q(b_{t})+\alpha\left(r+\delta\max_{b_{t}^{c}}Q(b_{t}^{c})-Q(b_{t})\right),\]
where \(Q(b_{t}^{c})\) is the reward obtained via agent communication for a bundle \(b\) at time \(t\). The learning rate introduced for rewards from information exchange, \(\delta\), is set empirically to 0.1. The discount factor \(\gamma\) of Q-learning is set to zero as future rewards are not considered. Algorithm 1 outlines the learning process.
```
1:Populate project list and cost for the current participatory budgeting process
2:Initialize the fixed rewards for projects
3:Calculate rewards for all valid bundles
4:for each iteration \(i\geq\) 1 do
5:for each voter \(v\in\) V do
6:if\(i==\) 1 then
7: Assign the bundle with highest overlap to original individual preferences
8: Update random graph via the peer sampling service
9: Aggregate rewards of bundles from neighbors
10: Update total rewards for a bundle in the Q table
11: Select action (bundle) according to \(\epsilon\)-greedy policy, \(\epsilon\)\(\in\)[0,1]
```
**Algorithm 1** Augmented Q-Learning for consensus in participatory budgeting.
Initially (in the first iteration), the agents select a bundle according to their own preference, and then they start communication with other agents, during which they adjust their preferred bundle. The selection of the bundle at each iteration is based on the cumulative sum of both rewards, which the agents maximize using an \(\epsilon\)-greedy exploration strategy. For a low number of projects and voters, the initial preferences from the multi-winner approvals of the voters may result in a reduced action space for exploration.
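A compact sketch of this learning loop is given below. It is a simplification of Algorithm 1 rather than a faithful re-implementation: the peer-sampling service is replaced by uniformly sampled peers, the initial preference assignment is passed in as an argument, no convergence check is included, and all names are illustrative.

```python
import random

def marl_pb_sketch(voters, bundles, det_reward, initial_choice,
                   iters=200, alpha=0.1, delta=0.1, eps=0.1, in_degree=4, seed=0):
    """Stateless Q-learning with gossip, following the update rule above.

    voters:         list of voter ids
    bundles:        list of valid knapsack bundle ids
    det_reward:     dict bundle -> deterministic reward
    initial_choice: dict voter -> bundle closest to that voter's own ballot
    """
    rng = random.Random(seed)
    Q = {v: {b: 0.0 for b in bundles} for v in voters}
    choice = dict(initial_choice)

    for _ in range(iters):
        # Gossip step: every voter hears the currently best-valued bundle of a
        # few randomly sampled peers (a stand-in for the peer-sampling graph).
        heard = {v: [] for v in voters}
        for v in voters:
            peers = rng.sample([u for u in voters if u != v],
                               min(in_degree, len(voters) - 1))
            for u in peers:
                best = max(Q[u], key=Q[u].get)
                heard[v].append(Q[u][best])

        for v in voters:
            b = choice[v]
            comm_max = max(heard[v], default=0.0)
            # Augmented stateless update: Q(b) += alpha * (r + delta * max_c Q_c - Q(b))
            Q[v][b] += alpha * (det_reward[b] + delta * comm_max - Q[v][b])
            # Epsilon-greedy re-selection of the bundle for the next iteration.
            choice[v] = (rng.choice(bundles) if rng.random() < eps
                         else max(Q[v], key=Q[v].get))
    return Q, choice
```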
### Assessment Model for Consensus
The following metrics are designed to assess the quality of the consensus (legitimacy) based on popularity, representation and budget utilization that can increase the satisfaction of the citizens, increase participation and improve the quality of the overall PB process [2, 13]. The level of compromise made by voters is assessed. These metrics also characterize how difficult it is to reach a consensus in an iterative voting process for participatory budgeting. The metrics that model the legitimacy are outlined as follows:
* _Compromise cost_: one minus the mean overlap of projects between the preferred bundle of the voters and the consensus bundle, calculated using the Jaccard Index [5] (see the sketch after this list).
* _Unfairness_: The coefficient of variation of the _compromise cost_ over all agents.
* _Popularity_ (fitness of consensus): The normalized ranking score of the projects in the consensus bundle, calculated using the number of votes of each project from the original voters' preferences.
* _Budget Utilization_: The cost of the projects in the consensus bundle divided by the total available budget in the participatory budgeting process.
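A minimal sketch of these metrics is shown below; the popularity score is written here as a simple vote-count normalization, which is a simplification of the normalized ranking score described above.

```python
import statistics

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

def compromise_costs(preferred_bundles, consensus):
    """One value per voter: how far the consensus sits from that voter's bundle."""
    return [1.0 - jaccard(bundle, consensus) for bundle in preferred_bundles]

def unfairness(costs):
    """Coefficient of variation of the per-voter compromise cost."""
    mean = statistics.mean(costs)
    return statistics.pstdev(costs) / mean if mean else 0.0

def popularity(consensus, votes):
    """Vote-based score of the consensus projects, normalized by all votes cast."""
    total = sum(votes.values())
    return sum(votes.get(p, 0) for p in consensus) / total if total else 0.0

def budget_utilization(consensus, project_costs, total_budget):
    return sum(project_costs[p] for p in consensus) / total_budget
```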
## 4 Experimental Evaluation
This section illustrates the results obtained from the evaluation of the consensus-based participatory budgeting process, using real-world data. These results shed light on the efficiency of the reward models, the communication protocol and the exploration strategy in reaching consensus.
_Dataset:_ The _pabulib_ PB dataset ([http://pabulib.org/](http://pabulib.org/)) is used for the evaluation. It contains the metadata related to projects and voters along with the voting records for multiple participatory budgeting instances, for various districts and cities of Poland. Each project is associated with multiple attributes such as urban greenery, education, relevance to children etc., along with information about the project costs. There are multiple participatory budgeting instances for every district or city for multiple years and different ballot designs such as k-approval, cumulative and score voting. Furthermore, the winners are calculated using various aggregation methods such as the method of equal shares, Phragmen, and utilitarian greedy [4] to assess the quality of the consensus bundles compared to the ones calculated by methods that promote fairness and inclusion. Three districts are selected - Ruda, Ursynow and Rembertow, whose valid bundles vary from a smaller set (12 for Ruda) to a larger one (90 for Rembertow).
_Design_: The framework is tested using various settings such as the numbers of combinations (bundles), number of agents, learning rate, decay rate, and the in-degree of the random graph updated at every iteration (see Table 2). For each of these settings, the projects selected in the consensus are analyzed and compared with winners selected using other aggregation methods such as the method of equal shares and greedy [4].
\begin{table}
\begin{tabular}{l l l l l} \hline **Dataset** & **\# of Projects** & **\# of Bundles** & **In-degree** & **\# of Agents** \\ \hline \hline Rembertow & 20 & 5 to 90 & 2 to 26 & 50 to 100 \\ Ursynow & 18 & 5 to 75 & 2 to 26 & 50 to 100 \\ Ruda & 10 & 3 to 12 & 2 to 10 & 50 to 100 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Parameters for experimentation with each dataset**. The ranges signify that experiments are performed incrementally, for instance, 5, 6, 7,....90 for the # of bundles selected randomly in _Rembertow_. The maximum number of valid bundles extracted from 20 projects is 90. The available data is for 4 years for each district. The experiments are performed for the latest year and the aggregate preference (rewards) are calculated using all years. The decay rate and learning rates are set to 0.1 after empirical investigation.
### RQ1: Effectiveness to Reach Consensus
_Quality of Consensus_: The convergence to the consensus bundle depends on the in-degree of the random graph and the number of bundles (action space). Figure 3 shows the budget utilization as a function of the number of bundles and in-degree. Budget utilization of the consensus bundles (Figure 3) for Rembertow is low (0.40 to 0.45) in most cases, which could be attributed to the considerably high costs of popular projects and the smaller number of projects in the consensus bundle.
Figure 4 shows the popularity index (fitness of the consensus) for different number of bundles and in-degrees. In case of Ursynow, the consensus bundles have more projects and a higher percentage of popular projects (0.65 to 0.70), that also have medium costs, which results in higher overall budget utilization. The percentage of popular projects is low for Rembertow (0.40 to 0.45) and selected popular projects have considerably high costs too, as the overall budget utilization is also low (see Figure 3).
_Comparison with other aggregation methods_: The overlap of projects between the consensus bundle (using the maximum number of bundles for the action space) and the winners from the aggregation methods is calculated (see Table 3). When a larger number of projects is listed, e.g., in Rembertow, the overlap with the consensus bundle is higher with equal shares (0.62) and Phragmen (0.61). Hence, these methods maximize fairness and representation and also produce more legitimate winners. For a lower number of projects, e.g., in Ruda, greedy has a higher overlap (0.72) with the consensus bundle. The reward-based learning
Figure 4: Popularity index (fitness of consensus) for different number of valid bundle combinations and in-degrees.
Figure 3: Total budget utilization of the consensus bundles for different number of valid bundle combinations and in-degree. The budget utilization for the consensus bundles for Ursynow is the highest.
with communication can reach a consensus that has a higher overlap with the aggregation methods that promote fairness when a larger number of projects is listed.
_Analysis of the reward modelling_: The top-3 amenities associated with all projects (listed and selected) over all years in Ursynow are public space (22%), education(17%), environmental protection (12%), impact on children (22%) adults (21.1%) and seniors (19%). Similarly, for Rembertow and Ruda the most popular ones are public space (24%) and education (22.7 %). These project attributes affect a large proportion of the population. Figure 5 shows the amenities selected via MARL-PB and the aggregation methods, as well as how they compare with the original aggregate preferences based on the past data. The projects selected using equal shares and MARL-PB correspond to similar public amenities (e.g. for Rembertow, projects related to public space, education and culture are selected in higher proportion). This also signifies that consensus projects in MARL-PB prioritize fairness and better representation. The public amenities selected in the greedy method do not correlate with the ones selected in the consensus for Rembertow and Ruda. It can be observed for Ursynow that the selected projects for any aggregation method and with consensus mostly conform. Collective preferences for these projects remain stable over time in this region.
Figure 5: The attributes of the project selections with the different methods.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline Dataset & \multicolumn{4}{c}{Size of Consensus Bundle} & \multicolumn{3}{c}{MARL-PB Overlap} \\ & MARL-PB & ES\({}^{*}\) & PG & G & ES\({}^{*}\) & PG & G \\ \hline Rembertow & 8 & 9 & 9 & 6 & 0.62 & 0.61 & 0.49 \\ Ursynow & 8 & 8 & 8 & 6 & 0.72 & 0.75 & 0.73 \\ Ruda & 7 & 8 & 8 & 5 & 0.62 & 0.66 & 0.72 \\ \hline \multicolumn{8}{l}{* phragmen completion method used for the method of equal shares.} \\ \end{tabular}
\end{table}
Table 3: The overlap between the projects in the consensus bundle and three aggregation methods: greedy, equal shares and phragmen. The highest possible number of valid knapsack bundles are used. _G: Greedy, PG: Phragmen, ES\({}^{*}\): Equal Shares*, MARL-PB: Proposed approach_
### RQ2: Level of Flexibility to Compromise and Reach Consensus
Figure 6 compares the mean voters' compromise of MARL-PB with that of different aggregation methods. The mean cost of compromise across the three districts for greedy is 0.56, while for MARL-PB it is 0.68, which is close to that of equal shares and Phragmen with 0.66 and 0.62, respectively. These results show the following: consensus requires a compromise that is not observed in the standard greedy voting aggregation method; however, this compromise is attainable and comparable to the one observed with consensus-oriented voting aggregation methods.
### RQ3: Efficiency and Robustness
_Convergence_: An increase in the number of agents increases the convergence time for any combination of in-degree or action (bundle) space (see Figure 7). The number of iterations increases by 5% on average for an increase of 50 voters. Although this estimate is only based on the data from the three districts, the system appears to scale, converging within finite time as the number of voters increases.
Figure 8 shows, for each district, the convergence time as a function of in-degree and bundle size. Smaller action (bundle) spaces with higher in-degrees improve the speed.
_Robustness_: The influence of the randomness in the dynamic communication network on the stability of convergence is assessed by repeating the learning process multiple times, with different size of bundles. Figure 9 shows the required number of repetitions for a stable convergence speed. More repetitions are required for lower in-degrees due to limited information exchange for deliberation. A higher action space (number of bundles) results in a larger number of alternatives to explore and thus stability requires a higher number of repetitions.
Figure 6: Mean voters’ compromise cost of MARL-PB and the different voting aggregation methods. The coefficient of variation (COV) measures unfairness, which is how compromises spread within the voters’ population.
## 5 Conclusion and Future Work
This paper concludes that a consensus-based participatory budgeting process, with three use cases introduced, is feasible via a novel multi-agent reinforcement learning approach. The consensus process actually models a more systematic,
Figure 8: Convergence time decreases for smaller action (bundle) spaces and higher in-degrees: In-degree vs. iterations (significance values (p) from t-test): Remember-tow: p = 0.001, Ursynow: p = 0.002, Ruda: p = 0.03). Bundle size vs. iterations (significance values (p) from t-test): Rembertow: p = 0.01, Ursynow: p = 0.001, Ruda: p = 0.04.
Figure 7: Convergence time increases as the number of agents increases. Significance values (p) from t-test (iterations, agents): Remembertow: p = 0.04, Ursynow: p = 0.04, Ruda: p = 0.04. The values here are averaged over the three districts.
large-scale and automated deliberation process, which has so far remained decoupled from the collective choice of voting. The experimental evaluation with real-world data confirms that the studied consensus is reachable, efficient and robust. The results also demonstrate that the consensus in MARL-PB requires compromises from voters, which are, however, comparable to those of existing voting aggregation methods that promote fairness and inclusion.
This is a key result with significant implications: in the future, voters may no longer need to rely on a top-down, arbitrary selection of the aggregation method. Instead, communities will be empowered to institutionalize and independently apply their own consensus-based decision-making processes. Moreover, city authorities may use the proposed method to filter out projects during the project ideation phase, which usually relies on subjective criteria with risks to legitimacy.
As part of future work, the agent communication may expand to different dynamic topologies that represent more closely social networks and proximity. The expansion of the multi-agent reinforcement learning approach with other preferential elicitation methods [7], beyond approval voting, is expected to further strengthen the accuracy and legitimacy of consensus-based participatory budgeting. A more advanced design of the rewards scheme will further expand the applicability of this ambitious approach.
## Acknowledgements
This work is supported by a UKRI Future Leaders Fellowship (MR/W009560-/1): '_Digitally Assisted Collective Governance of Smart City Commons-ARTIO_', and the SNF NRP77 'Digital Transformation' project "Digital Democracy: Innovations in Decision-making Processes", #407740_187249.
Figure 9: Number of repetitions (simulations) required to reach the same consensus for a certain set of parameters. In-degree vs. simulations (significance values (p) from t-test): Rembertow: p = 0.03, Ursynow: p = 0.03, Ruda: p = 0.04. Action (bundle) space vs. simulations (significance values (p) from t-test): Rembertow: p = 0.01, Ursynow: p = 0.02, Ruda: p = 0.04. The number of projects in a bundle does not have significant impact on the simulations (Rembertow: p = 0.09, Ursynow: p = 0.06, Ruda: p = 0.13). |
2310.19257 | A High-Resolution Dataset for Instance Detection with Multi-View
Instance Capture | Instance detection (InsDet) is a long-lasting problem in robotics and
computer vision, aiming to detect object instances (predefined by some visual
examples) in a cluttered scene. Despite its practical significance, its
advancement is overshadowed by Object Detection, which aims to detect objects
belonging to some predefined classes. One major reason is that current InsDet
datasets are too small in scale by today's standards. For example, the popular
InsDet dataset GMU (published in 2016) has only 23 instances, far less than
COCO (80 classes), a well-known object detection dataset published in 2014. We
are motivated to introduce a new InsDet dataset and protocol. First, we define
a realistic setup for InsDet: training data consists of multi-view instance
captures, along with diverse scene images allowing synthesizing training images
by pasting instance images on them with free box annotations. Second, we
release a real-world database, which contains multi-view capture of 100 object
instances, and high-resolution (6k x 8k) testing images. Third, we extensively
study baseline methods for InsDet on our dataset, analyze their performance and
suggest future work. Somewhat surprisingly, using the off-the-shelf
class-agnostic segmentation model (Segment Anything Model, SAM) and the
self-supervised feature representation DINOv2 performs the best, achieving >10
AP better than end-to-end trained InsDet models that repurpose object detectors
(e.g., FasterRCNN and RetinaNet). | Qianqian Shen, Yunhan Zhao, Nahyun Kwon, Jeeeun Kim, Yanan Li, Shu Kong | 2023-10-30T03:58:41Z | http://arxiv.org/abs/2310.19257v1 | # A High-Resolution Dataset for Instance Detection with Multi-View Instance Capture
###### Abstract
Instance detection (InsDet) is a long-lasting problem in robotics and computer vision, aiming to detect object instances (predefined by some visual examples) in a cluttered scene. Despite its practical significance, its advancement is overshadowed by Object Detection, which aims to detect objects belonging to some predefined classes. One major reason is that current InsDet datasets are too small in scale by today's standards. For example, the popular InsDet dataset GMU (published in 2016) has only 23 instances, far less than COCO (80 classes), a well-known object detection dataset published in 2014. We are motivated to introduce a new InsDet dataset and protocol. First, we define a realistic setup for InsDet: training data consists of multi-view instance captures, along with diverse scene images allowing synthesizing training images by pasting instance images on them with free box annotations. Second, we release a real-world database, which contains multi-view capture of 100 object instances, and high-resolution (6k\(\times\)8k) testing images. Third, we extensively study baseline methods for InsDet on our dataset, analyze their performance and suggest future work. Somewhat surprisingly, using the off-the-shelf class-agnostic segmentation model (Segment Anything Model, SAM) and the self-supervised feature representation DINOv2 performs the best, achieving \(>\)10 AP better than end-to-end trained InsDet models that repurpose object detectors (e.g., FasterRCNN and RetinaNet).
Dataset and open-source code
## 1 Introduction
Instance detection (InsDet) requires detecting specific object instances (defined by some visual examples) from a scene image [12]. It is practically important in robotics, e.g., elderly-assistant robots need to fetch specific items (_my_-cup vs. _your_-cup) from a cluttered kitchen [42], micro-fulfillment robots for the retail need to pick items from mixed boxes or shelves [4].
**Motivation**. InsDet receives much less attention than the related problem of Object Detection (ObjDet), which aims to detect all objects belonging to some predefined classes [30; 39; 31; 50]. Fig. 1 compares the two problems. _One major reason is that there are not large-enough InsDet datasets by today's standards._ For example, the popular InsDet dataset GMU (published in 2016) [16] has only 23 object instances while the popular ObjDet dataset COCO has 80 object classes (published in 2014) [30]. Moreover, _there are no unified protocols in the literature of InsDet._ The current InsDet literature mixes multiple datasets to simulate training images and testing scenarios [12]. Note that the training protocol of InsDet does not follow that of ObjDet, which has training images annotated with bounding boxes. Differently, for InsDet,2 its setup should have profile images of instances (cf. right
in Fig. 1) and optionally diverse background images not containing such instances [12]. We release a new dataset and present a unified protocol to foster the InsDet research.
**Overview of our dataset** is presented in Fig. 2. In our dataset, profile images (3072x3072) of object instances and testing images (6144x8192) are captured at high resolution by a Leica camera (commonly used in today's cellphones). This inexpensive camera is deployable in current or future robot devices. Hence, our dataset simulates real-world scenarios, e.g., robotic navigation in indoor scenes. Even with high-resolution images, objects in testing images appear small, taking up only a tiny region of the high-res images. This demonstrates a clear challenge of InsDet in our dataset. Therefore, our dataset allows studying InsDet methods towards real-time operation on high-res images (as future work).
**Preview of technical insights**. On our dataset, we revisit existing InsDet methods [28; 12; 18]. Perhaps the only InsDet framework is cut-paste-learn [12], which cuts instances from their profile images, pastes them on random background images (so being able to derive "free" bounding boxes annotations), and trains InsDet detectors on such data by following that of ObjDet (e.g., FasterRCNN [39]). We study this framework, train different detectors, and confirm that the state-of-the-art transformer-based detector DINO [50] performs the best, achieving 27.99 AP, significantly better than CNN-based detector FasterRCNN (19.52 AP). Further, we present a non-learned method that runs off-the-shelf proposal detectors (SAM [25] in our work) to generate object proposals and use self-supervised learned features (DINO\({}_{f}\)[8]3 and DINOv2\({}_{f}\)[35]) to find matched proposals to instances' profile images. Surprisingly, this non-learned method resoundingly outperforms end-to-end learning methods, i.e., SAM+DINOv2\({}_{f}\) achieves 41.61 AP, much better than DINO (27.99 AP) [50].
Footnote 3: We add subscript \({}_{f}\) to indicate that DINO\({}_{f}\)[8] is the self-supervised learned feature extractor; distinguishing it from a well-known object detector DINO [50].
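To make the non-learned pipeline concrete, the sketch below shows only the matching step, assuming the proposal crops (e.g., boxes produced by SAM on a testing image) and the profile views of each instance have already been embedded with a frozen self-supervised backbone such as DINOv2\({}_{f}\). The function name, the cosine-similarity scoring with a max over profile views, and the threshold are illustrative assumptions, not the exact procedure benchmarked in the experiments.

```python
import numpy as np

def match_proposals_to_instances(proposal_feats, profile_feats, sim_thresh=0.5):
    """Label each proposal with the best-matching instance by cosine similarity.

    proposal_feats: (P, D) array, one embedding per proposal crop
    profile_feats:  dict instance_id -> (V, D) array, one embedding per profile view
    Returns a list of (proposal_index, instance_id, score) above the threshold.
    """
    def l2norm(x):
        return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)

    props = l2norm(np.asarray(proposal_feats, dtype=float))
    profiles = {k: l2norm(np.asarray(v, dtype=float)) for k, v in profile_feats.items()}
    detections = []
    for i, p in enumerate(props):
        scored = [(inst, float((views @ p).max())) for inst, views in profiles.items()]
        inst, score = max(scored, key=lambda t: t[1])  # best view of the best instance
        if score >= sim_thresh:
            detections.append((i, inst, score))
    return detections
```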
**Contributions**. We make three major contributions.
1. We formulate the InsDet problem with a unified protocol and release a challenging dataset consisting of both high-resolution profile images and high-res testing images.
2. We conduct extensive experiments on our dataset and benchmark representative methods following the cut-paste-learn framework [12], showing that stronger detectors perform better.
3. We present a non-learned method that uses an off-the-shelf proposal detector (i.e., SAM [25]) to produce proposals, and self-supervised learned features (e.g., DINOv2\({}_{f}\)[35]) to find instances (which are well matched to their profile images). This simple method significantly outperforms the end-to-end InsDet models.
## 2 Related Work
**Instance Detection (InsDet)** is a long-lasting problem in computer vision and robotics [51; 12; 34; 3; 17; 23; 4], referring to detecting specific object instances in a scene image. Traditional InsDet methods use keypoint matching [36] or template matching [21]; more recent ones train deep neural networks to approach InsDet [34]. Some others focus on obtaining more training samples by rendering realistic instance examples [24; 23], data augmentation [12], and synthesizing training images by cutting instances as foregrounds and pasting them to background images [28; 12; 18]. Speaking of InsDet datasets, [16] collects scene images from 9 kitchen scenes with RGB-D cameras and defines 23 instances of interest to annotate with 2D boxes on scene images; [23] creates 3D models of 29 instances from 6 indoor scenes, and uses them to synthesize training and testing data; [4] creates 3D
Figure 1: **Object detection (ObjDet) vs. instance detection (InsDet). ObjDet aims to detect all objects belonging to some predefined classes, whereas InsDet requires detecting specific object instances defined by some visual examples. Loosely speaking, InsDet treats a single object instance as a class compared to ObjDet. Please refer to Fig. 2-right for the challenge of InsDet, which is the focus of our work.**
mesh models of 100 grocery store objects, renders 80 views of images for each instance, and uses them to synthesize training data.
As for benchmarking protocol of InsDet, [12] synthesizes training data from BigBird [45] and UW Scenes [27] and tests on the GMU dataset [16]; [23] trains on their in-house data and test on LM-O [5] and Rutgers APC [40] datasets. Moreover, some works require hardware-demanding setups [4], some synthesize both training and testing data [23; 28], while others mix existing datasets for benchmarking [12]. Given that the modern literature on InsDet lacks a unified benchmarking protocol (till now!), we introduce a more realistic unified protocol along with our InsDet dataset, allowing fairly benchmarking methods and fostering research of InsDet.
**Object Detection (ObjDet)** is a fundamental computer vision problem [13; 30; 39], requiring detecting all objects belonging to some predefined categories. The prevalent ObjDet detectors adopt convolutional neural networks (CNNs) as a backbone and a detector-head for proposal detection and classification, typically using bounding box regression and a softmax-classifier. Approaches can be grouped into two categories: one-stage detectors [38; 32; 37; 48] and two-stage detectors [19; 6]. One-stage detectors predict candidate detection proposals using bounding boxes and labels at regular spatial positions over feature maps; two-stage detectors first produce detection proposals, then perform classification and bounding box regression for each proposal. Recently, the transformer-based detectors transcend CNN-based detectors [7; 53; 50], yielding much better performance on various ObjDet benchmarks. Different from ObjDet, InsDet requires distinguishing individual object instances within a class. Nevertheless, to approach InsDet, the common practice is to repurpose ObjDet detectors by treating unique instances as individual classes. We follow this practice and benchmark various ObjDet methods on our InsDet dataset.
**Pretrained Models**. Pretraining is an effective way to learn features from diverse data. For example, training on the large-scale ImageNet dataset for image classification [10], a neural network can serve as a powerful feature extractor for various vision tasks [11; 44]. Object detectors trained on the COCO dataset [30] can serve as a backbone allowing finetuning on a target domain to improve detection performance [29]. Such pretraining requires human annotations which can be costly. Therefore, self-supervised pretraining has attracted increasing attention and achieved remarkable progress [9; 20; 8; 35]. Moreover, the recent literature shows that pretraining on much larger-scale data can serve as a foundation model for being able to perform well across domains and tasks. For example, the Segment Anything Model (SAM) pretrains a class-agnostic proposal detector on web-scale data and shows an impressive ability to detect and segment diverse objects in the wild [25]. In this work, with our high-res InsDet dataset, we explore a non-learned method by using publicly available pretrained models. We show that such a simple method significantly outperforms end-to-end learned InsDet detectors.
## 3 Instance Detection: Protocol and Dataset
In this section, we formulate a realistic unified InsDet protocol and introduce the new dataset. We release our dataset under the MIT License, hoping to contribute to the broader research community.
Figure 2: **Overview of our instance detection dataset. Left: It contains 100 distinct object instances. For each of them, we capture 24 profile photos from multiple views. We paste QR code images beneath objects to allow relative camera estimation (e.g., by COLMAP [43]), just like other existing datasets [22; 5]. Middle: We take photos in random scenes (which do not contain any of the 100 instances) as background images. The background images can be optionally used to synthesize training data, e.g., pasting the foreground instances on them towards box-annotated training images [28; 12; 18] as used in the object detection literature [30]. Right: high-resolution (6k\(\times\)8k) testing images of clutter scenes contain diverse instances, including some of the 100 predefined instances and other uninterested ones. The goal of InsDet is to detect the predefined instances in these testing images. From the zoom-in regions, we see the scene clutters make InsDet a rather challenging problem.**
### The Protocol
Our InsDet protocol is motivated by real-world indoor robotic applications. In particular, we consider the scenario in which assistive robots must locate and recognize instances to fetch them in a cluttered indoor scene [42], where InsDet is a crucial component. Realistically, for a given object instance, the robots should see it only from a few views (_at the training stage_), and then accurately detect it _at a distance_ in _any_ scene (_at the testing stage_). Therefore, we define the protocol by specifying the training and testing setups below. We refer the readers to Fig. 2 for an illustration of this protocol.
* **Training**. There are profile images of each instance captured at different views and diverse background images. The background images can be used to synthesize training images with free 2D-box annotations, as done by the cut-paste-learn methods [28; 12; 18].
* **Testing**. InsDet algorithms are required to precisely detect all predefined instances from real-world images of cluttered scenes.
**Evaluation metrics**. The InsDet literature commonly uses average precision (AP) at IoU=0.5 [12; 2; 34]; others use different metrics, e.g., AP at IoU=0.75 [23], mean AP [3; 17], and F1 score [4]. As a single metric appears to be insufficient to benchmark methods, we follow the literature of ObjDet that uses multiple metrics altogether [30].
* **AP** averages the precision at IoU thresholds from 0.5 to 0.95 with the step size 0.05. It is the _primary metric_ in the most well-known COCO Object Detection dataset [30].
* **AP\({}_{50}\)** and **AP\({}_{75}\)** are the precision averaged over all instances with IoU thresholds of 0.5 and 0.75, respectively. In particular, **AP\({}_{50}\)** is the most widely used metric in the InsDet literature.
* **AR** (average recall) averages the proposal recall at IoU threshold from 0.5 to 1.0 with the step size 0.05, regardless of the classification accuracy. AR measures the localization performance (excluding classification accuracy) of an InsDet model.
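To make these metric definitions concrete, the sketch below shows how AP, AP\({}_{50}\), AP\({}_{75}\), and AR can be read off the COCO-style evaluator in pycocotools, assuming ground truth and detections have been exported to COCO-format JSON files (the file names are placeholders, not part of our released toolkit).

```python
# Minimal sketch of COCO-style evaluation, assuming ground truth and detection
# results are stored in COCO JSON format (file names are illustrative).
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("insdet_test_annotations.json")        # ground-truth boxes; one category per instance
coco_dt = coco_gt.loadRes("insdet_detections.json")   # detections: image_id, category_id, bbox, score

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints AP (IoU 0.5:0.95), AP50, AP75, AR, and small/medium/large breakdowns

ap, ap50, ap75 = evaluator.stats[0], evaluator.stats[1], evaluator.stats[2]
print(f"AP={ap:.4f}  AP50={ap50:.4f}  AP75={ap75:.4f}")
```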
Moreover, we tag _hard_ and _easy_ scenes in the testing images based on the level of clutter and occlusion, as shown by the right panel of Fig. 2. Following the COCO dataset [30], we further tag testing object instances as _small_, _medium_, and _large_ according to their bounding box area (cf. details in the supplement). These tags allow a breakdown analysis to better analyze methods.
### The Dataset
We introduce a challenging real-world dataset of indoor scenes (motivated by indoor assistive robots), including high-resolution profile photos of 100 distinct object instances and high-resolution testing images captured in 14 indoor scenes that contain (a subset of) these 100 instances defined for InsDet. Table 1 summarizes the statistics compared with existing datasets, showing that our dataset is larger in scale and more challenging than existing InsDet datasets. Importantly, object instances are located far from the camera in cluttered scenes; this is realistic because robots must detect objects at a distance before approaching them [1]. Perhaps surprisingly, only a few InsDet datasets exist in the literature. Among them, Grocery [4], which is the most recent and, like our dataset, has the most instances, is not publicly available.
Our InsDet dataset contains 100 object instances. When capturing photos for each instance, inspired by prior work [45; 22; 5], we paste a QR code on the tabletop, which enables pose estimation, e.g., using COLMAP [43]. Yet, we note that a more realistic capture setup would be hand-holding instances during capture [26], which we leave as future work. Each instance photo is of 3072\(\times\)3072 pixel resolution. For each instance, we capture 24 photos from multiple views. The left panel of Fig. 2 shows random photos of several instances. For the testing set, we capture high-resolution images (6144\(\times\)8192) in cluttered scenes, where some instances are placed in reasonable locations, as shown in the right panel of Fig. 2. We tag these images as _easy_ or _hard_ based on scene clutter and object occlusion levels. When objects are placed sparsely, we tag the testing images as _easy_; otherwise, we tag them as _hard_. Our InsDet dataset also contains 200 high-res background images of indoor scenes (cf. Fig. 2-middle). These indoor scenes are not included in testing images. They allow using
Figure 3: Imbalanced distribution of instances in test-set. Yet, instances have the same number of profile images in training and the metrics average over all instances. So, the evaluation is unbiased.
the cut-paste-learn framework to synthesize training images [28; 12; 18]. Following this framework, we segment foreground instances using GrabCut [41] to paste them on background images. It is worth noting that the recent vision foundation model SAM [25] makes interactive segmentation much more efficient. Yet, SAM was released only after we had collected our dataset. In Fig. 3, we plot the per-instance frequency in the testing set.
## 4 Methodology
### The Strong Baseline: Cut-Paste-Learn
**Cut-Paste-Learn** serves as a strong baseline that synthesizes training images with 2D-box annotations [12]. This allows one to train InsDet detectors in the same way as training normal ObjDet detectors, by simply treating the \(K\) unique instances as \(K\) distinct classes. It cuts and pastes foreground instances at various aspect ratios and scales on diverse background images, yielding synthetic training images, as shown in Fig. 4. Cut-paste-learn is model-agnostic, allowing one to adopt any state-of-the-art detector architecture. In this work, we study five popular detectors, covering the two-stage detector FasterRCNN [39], the one-stage anchor-based detector RetinaNet [31], the one-stage anchor-free detectors CenterNet [51] and FCOS [47], and the transformer-based detector DINO [50]. There are multiple factors in the cut-paste-learn framework, such as the number of inserted objects in each background image, their relative size, the number of generated training images, and the blending methods. We conduct comprehensive ablation studies and report results using the best-tuned choices. We refer interested readers to the supplement for the ablation studies.
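To illustrate the core cut-paste operation, the snippet below is a simplified sketch (not our exact synthesis pipeline or tuned hyperparameters) that pastes RGBA foreground cut-outs onto a background image at random scales and positions and records the resulting 2D boxes; blending methods such as Gaussian or motion blurring would be applied on top of this basic procedure.

```python
# Hypothetical sketch of cut-paste synthesis: paste RGBA foreground cut-outs onto a
# background image at random scales and positions, and record 2D boxes as labels.
import random
from PIL import Image

def synthesize(background: Image.Image, foregrounds, num_objects=30, scale_range=(0.15, 0.5)):
    """foregrounds: list of (instance_id, RGBA cut-out with background removed)."""
    canvas = background.copy()
    boxes = []  # (instance_id, x1, y1, x2, y2)
    for _ in range(num_objects):
        inst_id, fg = random.choice(foregrounds)
        scale = random.uniform(*scale_range)
        w, h = max(1, int(fg.width * scale)), max(1, int(fg.height * scale))
        fg_resized = fg.resize((w, h))
        x = random.randint(0, canvas.width - w)
        y = random.randint(0, canvas.height - h)
        canvas.paste(fg_resized, (x, y), mask=fg_resized)  # alpha channel acts as the paste mask
        boxes.append((inst_id, x, y, x + w, y + h))
    return canvas, boxes
```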
### The Simple, Non-Learned Method
We introduce a simple, non-learned InsDet method by exploiting publicly available pretrained models. This method consists of three main steps: (1) proposal generation on testing images, (2) matching proposals and profile images, (3) selecting the best-matched proposals as the detected instances.
\begin{table}
\begin{tabular}{l l c c c c c} \hline \hline & for what task & publicly available & \#instances & \#scenes & published year & resolution \\ \hline BigBird [45] & recognition & ✓ & 100 & N/A & 2014 & 1280x1024 \\ RGBD [28] & scene label. & ✓ & 300 & 14 & 2017 & N/A \\ LM [22] & 6D pose est. & ✓ & 15 & 1 & 2012 & 480x640 \\ LM-O [5] & 6D pose est. & ✓ & 20 & 1 & 2017 & 480x640 \\ RU-APC [40] & 3D pose est. & ✓ & 14 & 1 & 2016 & 480x640 \\ \hline GMU [16] & InsDet & ✓ & 23 & 9 & 2016 & 1080x1920 \\ AVD [1] & InsDet & ✓ & 33 & 9 & 2017 & 1080x1920 \\ Grocery [4] & InsDet & ✗ & 100 & 10 & 2021 & unknown \\ \hline Ours & InsDet & ✓ & 100 & 14 & 2023 & 6144x8192 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Comparison of our dataset to existing ones**. Several datasets are used in the InsDet literature although they are designed for different tasks. For example, BigBird and LM are designed to study algorithms of object recognition and object pose estimation, hence they contain instances that are close to the camera. Naively repurposing them for InsDet leads to saturated performance, leaving little room for further exploration of InsDet. Instead, ours is more challenging as instances are placed far from the camera, simulating realistic scenarios where robots must detect instances at a distance. Importantly, our dataset contains far more instances than other publicly available InsDet datasets.
Figure 4: Synthetic training images for cut-paste-learn methods. We use different blending methods to paste object instances on the same background. We recommend that interested readers refer to the supplement for an ablation study using different blending methods.
**Proposal generation**. We use the recently released Segment Anything Model (SAM) [25] to generate proposals. For a proposal, we define a minimum bounding square box encapsulating the masked instance, and then crop the region from the high-resolution testing image. SAM achieves high recall (Table 3) on our InsDet dataset, but it also detects many objects that do not belong to the instances of interest. So the next step is to identify the instances of interest among the proposals.
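A minimal sketch of this step is shown below, assuming the official segment-anything package and a downloaded ViT-H checkpoint; the checkpoint path, image path, and cropping details are illustrative rather than our exact settings.

```python
# Sketch of class-agnostic proposal generation with SAM, followed by square cropping.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth").to("cuda")
mask_generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("test_scene.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of dicts with "segmentation" and "bbox" (XYWH)

proposals = []
H, W = image.shape[:2]
for m in masks:
    x, y, w, h = map(int, m["bbox"])
    side = max(w, h)                              # minimum bounding *square* box
    cx, cy = x + w // 2, y + h // 2
    x1, y1 = max(cx - side // 2, 0), max(cy - side // 2, 0)
    x2, y2 = min(x1 + side, W), min(y1 + side, H)
    proposals.append(image[y1:y2, x1:x2])         # square crop around the masked instance
```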
**Feature representation of proposals and profile images**. Intuitively, among the pool of proposals, we are interested in those that are well-matched to any profile images of any instance. The well-matched ones are more likely to be predefined instances. To match proposals and profile images, we use off-the-shelf features to represent them. In this work, we study two self-supervised learned models as feature extractors, i.e. DINO\({}_{f}\)[8], and DINOv2\({}_{f}\)[35]. We feed a square crop (of a proposal) or a profile image to the feature extractor to obtain its feature representation. We use cosine similarity over the features as the similarity measure between a proposal and a profile image.
**Proposal matching and selection**. As each instance has multiple profile images, we need to define the similarity between a proposal and an instance. For a proposal, we compute the cosine similarities of its feature to all the profile images of an instance and use the maximum as its final similarity to this instance. We then filter out proposals and instances if they have similarities lower than a threshold, indicating that they are not matched to any instances or proposals. Finally, we obtain a similarity matrix between all remaining proposals and all remaining instances. Over this matrix, we study two matching algorithms to find the best match (hence the final InsDet results), i.e. Rank \(\&\) Select, and Stable Matching [14; 33]. The former is a greedy algorithm that iteratively selects the best match (highest cosine similarity) between a proposal and an instance and removes the corresponding proposal until no proposal/instance is left. The latter produces an optimal list of matched proposals and instances, such that there exists no pair of instances and proposals which both prefer each other to their current correspondence under the matching.
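The sketch below illustrates the matching logic (maximum cosine similarity over profile views, thresholding, and greedy Rank & Select); loading DINOv2 through torch.hub is shown only as an assumed way to obtain features, and the threshold value is a placeholder rather than our tuned setting.

```python
# Sketch of proposal-instance matching with greedy Rank & Select.
# prop_feats: (P, D) L2-normalized proposal features; prof_feats: (K, V, D) L2-normalized
# features of V profile views for each of K instances (e.g., from a DINOv2 backbone).
import torch

# feature_extractor = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14")  # assumed

def greedy_rank_and_select(prop_feats, prof_feats, threshold=0.5):
    # similarity of a proposal to an instance = max cosine similarity over its profile views
    sim = torch.einsum("pd,kvd->pkv", prop_feats, prof_feats).amax(dim=-1)  # (P, K)
    sim = sim.clone()
    detections = []
    while True:
        best = sim.max()
        if best < threshold:                 # nothing left above the similarity threshold
            break
        p, k = divmod(int(sim.argmax()), sim.shape[1])
        detections.append((p, k, float(best)))   # proposal p is detected as instance k
        sim[p, :] = -1.0                     # remove this proposal from further matching
    return detections
```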
## 5 Experiments
**Synthesizing training images for cut-paste-learn baselines**. Our baseline method trains state-of-the-art ObjDet detectors on data synthesized using the cut-paste-learn strategy [12]. For evaluating on our InsDet dataset, we generate 19k training examples and 6k validation examples. For each example, various numbers of foreground objects ranging from 25 to 35 are pasted to a randomly selected background image. The objects are randomly resized with a scale from 0.15 to 0.5. We use four blending options [12], including Gaussian blurring, motion blurring, box blurring, and naive pasting. Fig. 4 shows some random synthetic images. The above factors have a notable impact on the final performance of trained models, and we have conducted a comprehensive ablation study. We refer interested readers to the supplement for the study.
**Implementation details.** We conduct all the experiments based on open-source implementations, such as Detectron2 [49] (for FasterRCNN and RetinaNet), CenterNet [52], FCOS [46] and DINO [50]. The CNN-based end-to-end detectors are initialized with pretrained weights on COCO [30]. We fine-tune CNN-based models using SGD and the transformer-based model using AdamW with a learning rate of 1e-3 and a batch size of 16. We fine-tune all the models for 5 epochs (which are enough for training to converge) and evaluate checkpoints after each epoch for model selection. The models are trained on a single Tesla V100 GPU with 32G memory.
For the non-learned method, we preprocess object instance profile images and proposals. Specifically, for a profile image, we remove the background pixels (e.g., pixels of QR code) using foreground segmentation (i.e., GrabCut). For each proposal, we crop its minimum bounding square box. We also study whether removing background pixels by using SAM's mask output performs better. We use DINO\({}_{f}\) and DINOv2\({}_{f}\) to compute feature representations.
### Benchmarking Results
**Quantitative results**. To evaluate the proposed InsDet protocol and dataset, we first train detectors from a COCO-pretrained backbone following the cut-paste-learn baseline. Table 2 lists detailed comparisons and Fig. 5 plots the precision-recall curves for the compared methods. First, we can see that detectors with stronger architectures perform better, e.g. DINO (27.99% AP) vs. FasterRCNN (19.54% AP). Second, non-learned methods outperform end-to-end trained models, e.g., SAM+DINOv2\({}_{f}\)
(41.61% AP) vs. DINO (27.99% AP). Third, all the methods perform poorly on _hard_ and _small_ instances, suggesting future work focusing on such cases.
Table 3 compares methods w.r.t the average recall (AR) metric. "AR@max10" means AR within the top-10 ranked detections. In computing AR, we rank detections by using the detection confidence scores of the learning-based methods (e.g., FasterRCNN) or similarity scores in the non-learned methods (e.g., SAM+DINO\({}_{f}\)). AR\({}_{s}\), AR\({}_{m}\), and AR\({}_{l}\) are breakdowns of AR for small, medium, and large testing object instances. Results show that (1) the non-learned methods that use SAM generally recall more instances than others, and (2) all methods suffer from small instances. In sum, results show that methods yielding higher recall achieve higher AP metrics (cf. Table 2).
**Qualitative results**. Fig. 6 visualizes qualitative results on two testing examples from the InsDet dataset. Stronger detectors, e.g., the non-learned method SAM+DINOv2\({}_{f}\), produce fewer false negatives. Even so, all detectors still struggle to detect instances under challenging conditions such as heavy occlusion or very small instance size. As shown in Fig. 5, the non-learned method SAM+DINOv2\({}_{f}\) outperforms end-to-end learned methods over a wide range of recall thresholds.
### Ablation Study
Due to the space limit, we ablate the instance crop and stable matching in the main paper and put more (including ablation studies for the cut-paste-learn methods) in the supplement.
**Proposal feature extraction in the non-learned method.** Given a box crop (encapsulating the proposal) generated by SAM in the non-learned method, we study how to process the crop to improve InsDet performance. Here, we can either crop and feed its minimum bounding box to compute DINOv2\({}_{f}\) features, or we can use the mask to remove the background in the box. Table 4 shows the comparison. Clearly, the latter performs remarkably better in both "hard" and "easy" scenarios.
**Proposal-instance match in the non-learned method.** After generating proposals by SAM, we need to compare them with instance profile images to get the final detection results. We study the
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{6}{c}{**AP**} & **AP\({}_{50}\)** & **AP\({}_{75}\)** \\ \cline{2-7} & avg & hard & easy & small & medium & large & & \\ \hline FasterRCNN [39] & 19.54 & 10.26 & 23.75 & 5.03 & 22.20 & 37.97 & 29.21 & 23.26 \\ RetinaNet [31] & 22.22 & 14.92 & 26.49 & 5.48 & 25.80 & 42.71 & 31.19 & 24.98 \\ CenterNet [51] & 21.12 & 11.85 & 25.70 & 5.90 & 24.15 & 40.38 & 32.72 & 23.60 \\ FCOS [47] & 22.40 & 13.22 & 28.68 & 6.17 & 26.46 & 38.13 & 32.80 & 25.47 \\ DINO [50] & 27.99 & 17.89 & 32.65 & 11.51 & 31.60 & 48.35 & 39.62 & 32.19 \\ SAM + DINO\({}_{f}\) & 36.97 & 22.38 & 43.88 & 11.93 & 40.85 & 62.67 & 44.13 & 40.42 \\ SAM + DINOv2\({}_{f}\) & **41.61** & **28.03** & **47.57** & **14.58** & **45.83** & **69.14** & **49.10** & **45.95** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Benchmarking results on our dataset. We summarize three salient conclusions. (1) End-to-end trained detectors perform better with stronger detector architectures, e.g., the transformer DINO (27.99 AP) outperforms FasterRCNN (19.54 AP). (2) Interestingly, the non-learned method SAM+DINOv2\({}_{f}\) performs the best (41.61 AP), significantly better than end-to-end learned detectors including DINO (27.99 AP). (3) All methods have much lower AP on hard testing images or small objects (e.g., SAM+DINOv2\({}_{f}\) yields 28.03 AP on hard vs. 47.57 AP on easy), showing that future work should focus on hard situations or small instances.**
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & **AR@max10** & **AR@max100** & **AR\({}_{s}\)@max100** & **AR\({}_{m}\)@max100** & **AR\({}_{l}\)@max100** \\ \hline FasterRCNN [39] & 26.24 & 39.24 & 14.83 & 44.87 & 60.05 \\ RetinaNet [31] & 26.33 & 49.38 & 22.04 & 56.76 & 69.69 \\ CenterNet [51] & 23.55 & 44.72 & 17.84 & 52.03 & 64.58 \\ FCOS [47] & 25.82 & 46.28 & 22.09 & 52.85 & 64.11 \\ DINO [50] & 29.84 & 54.22 & **32.00** & 59.43 & 72.92 \\ SAM + DINO\({}_{f}\) & 31.25 & 63.05 & 31.65 & 70.01 & **90.63** \\ SAM + DINOv2\({}_{f}\) & **40.02** & **63.06** & 31.11 & **70.40** & 90.36 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Benchmarking results w.r.t average recall (AR). “AR@max10” means AR within the top-10 ranked detections. In computing AR, we rank detections by using the detection confidence scores of the learning-based methods (e.g., FasterRCNN) or similarity scores in the non-learned methods (e.g., SAM+DINO\({}_{f}\)). AR\({}_{s}\), AR\({}_{m}\), and AR\({}_{l}\) are breakdowns of AR for small, medium and large testing object instances. Results show that (1) the non-learned methods that use SAM generally recall more instances than others, and (2) all methods suffer from small instances. In sum, results show that methods yielding higher recall achieve higher AP metrics (cf. Table 2).**
InsDet performance of the two matching algorithms. Rank \(\&\) Select is a greedy algorithm that iteratively finds the best match between any proposals and instances until no instances/proposals are left unmatched; stable matching produces an optimal list of matched proposals and instances such that there does not exist a pair in which both prefer other proposals/instances to their current correspondence under the matching. Table 5 compares these two methods, clearly showing that stable matching works better.
### Discussions
**Societal Impact**. InsDet is a crucial component in various robotic applications such as elderly-assistive agents. Hence, releasing a unified benchmarking protocol contributes to the broader community. While our dataset enables InsDet research to move forward, similar to other works, directly deploying algorithms developed on our dataset in real-world applications carries risk.
**Limitations**. We note several limitations in our current work. First, while our work uses standard cameras for data collection, we expect to use better and cheaper hardware (e.g., depth camera and IMU) in the future. Second, while the cut-paste-learn method we adopt does not consider geometric cues when synthesizing training images, we hope to incorporate such information to generate better and more realistic training images, e.g., pasting instances only on upward-facing surfaces like tables, desks, and floors. Third, while SAM+DINOv2\({}_{f}\) performs the best, this method is time-consuming (see a run-time study in the supplement); real-world applications should consider real-time requirements.
**Future work**. In view of the above limitations, the future work includes: (1) Exploring high-resolution images for more precise detection on _hard_ situations, e.g., one can combine proposals generated from multi-scale and multi-resolution images. (2) Developing faster algorithms, e.g., one can use multi-scale detectors to attend to regions of interest for progressive detection. (3) Bridging
Figure 6: Visual results of FasterRCNN, DINO, and SAM+DINOv2\({}_{f}\) on our InsDet dataset. The top row illustrates the sparse placement of instances (i.e., easy scenario), while the bottom contains more cluttered instances (i.e., hard scenario). We drop predicted instance names for brevity. SAM helps localize instances with more precise bounding boxes, e.g., as arrows labeled in the upper row. DINOv2\({}_{f}\) provides more precise recognition of localized instances, e.g., five instances in the right of the bottom row. Compared with DINO, SAM+DINOv2\({}_{f}\) is better at locating occluded instances.
Figure 5: Precision-recall curves with IoU=0.5 (AP50 in the legend) on our InsDet dataset. Stronger detectors perform better, e.g., DINO, a transformer-based detector significantly outperforms FasterRCNN. Furthermore, even with a simple non-learned method, leveraging pretrained models, e.g., SAM+DINOv2\({}_{f}\), outperforms end-to-end learned methods.
end-to-end fast models and powerful yet slow pretrained models, e.g., one can train lightweight adaptors atop pretrained models for better InsDet.
## 6 Conclusion
We explore the problem of Instance Detection (InsDet) by introducing a new dataset consisting of high-resolution images and formulating a realistic unified protocol. We revisit representative InsDet methods in the cut-paste-learn framework and design a non-learned method by leveraging publicly-available pretrained models. Extensive experiments show that the non-learned method significantly outperforms end-to-end InsDet models. Yet, the non-learned method is slow because running large pretrained models takes more time than running end-to-end trained models. Moreover, all methods struggle in hard situations (e.g., under heavy occlusion and a high level of scene clutter). This shows that our dataset serves as a challenging venue for the community to study InsDet.
|
2310.16602 | Parcel loss prediction in last-mile delivery: deep and non-deep
approaches with insights from Explainable AI | Within the domain of e-commerce retail, an important objective is the
reduction of parcel loss during the last-mile delivery phase. The
ever-increasing availability of data, including product, customer, and order
information, has made it possible for the application of machine learning in
parcel loss prediction. However, a significant challenge arises from the
inherent imbalance in the data, i.e., only a very low percentage of parcels are
lost. In this paper, we propose two machine learning approaches, namely, Data
Balance with Supervised Learning (DBSL) and Deep Hybrid Ensemble Learning
(DHEL), to accurately predict parcel loss. The practical implication of such
predictions is their value in aiding e-commerce retailers in optimizing
insurance-related decision-making policies. We conduct a comprehensive
evaluation of the proposed machine learning models using one year data from
Belgian shipments. The findings show that the DHEL model, which combines a
feed-forward autoencoder with a random forest, achieves the highest
classification performance. Furthermore, we use the techniques from Explainable
AI (XAI) to illustrate how prediction models can be used in enhancing business
processes and augmenting the overall value proposition for e-commerce retailers
in the last mile delivery. | Jan de Leeuw, Zaharah Bukhsh, Yingqian Zhang | 2023-10-25T12:46:34Z | http://arxiv.org/abs/2310.16602v1 | Parcel loss prediction in last-mile delivery: deep and non-deep approaches with insights from Explainable AI
###### Abstract
Within the domain of e-commerce retail, an important objective is the reduction of parcel loss during the last-mile delivery phase. The ever-increasing availability of data, including product, customer, and order information, has made it possible for the application of machine learning in parcel loss prediction. However, a significant challenge arises from the inherent imbalance in the data, i.e., only a very low percentage of parcels are lost. In this paper, we propose two machine learning approaches, namely, Data Balance with Supervised Learning (DBSL) and Deep Hybrid Ensemble Learning (DHEL), to accurately predict parcel loss. The practical implication of such predictions is their value in aiding e-commerce retailers in optimizing insurance-related decision-making policies. We conduct a comprehensive evaluation of the proposed machine learning models using one year data from Belgian shipments. The findings show that the DHEL model, which combines a feed-forward autoencoder with a random forest, achieves the highest classification performance. Furthermore, we use the techniques from Explainable AI (XAI) to illustrate how prediction models can be used in enhancing business processes and augmenting the overall value proposition for e-commerce retailers in the last mile delivery.
keywords: Last-mile Delivery, Parcel Loss Prediction, Machine learning, Anomaly Detection, Explainable AI
## 1 Introduction
The e-commerce sector plays a vital role in today's global economy. In 2020, e-commerce accounted for 15.5% of the global retail sales (Statista, 2020) and was projected to reach 22% by the end of 2023. This growth is driven by technological innovation and shifting consumer behavior patterns (Clement, 2019). Data-driven approaches like machine learning allow retailers to gain insights into customer purchasing trends, optimize pricing and demand forecasting, and
enhance pre- and post-purchase customer service. Products sold on e-commerce platforms require physical delivery, which is considered as one of the fundamental factors determining consumers' decision in selecting e-commerce retailers (Vakulenko et al., 2019).
The last-mile delivery is defined as the final step in the supply chain, in which the parcel is shipped from the last distribution center to the end customer or collection point (Gevaers et al., 2011). The last-mile delivery has emerged as a major cost center for e-commerce retailers, accounting for 30% of the total costs and 53% of total shipping costs (Dolan, 2021). The ability to meet the anticipated delivery dates and quantities (Bopage et al., 2019) is directly tied to customer satisfaction (Suguna et al., 2021). Failure to fulfill orders as promised can negatively impact the customer experience and brand perception. Therefore, optimizing last-mile logistics through improved planning and execution is strategically important. Unfortunately, parcel loss is often observed in last-mile delivery. To reduce the costs associated with lost shipments, especially for high-value items, many retailers opt to purchase insurance through their external logistics partners. These agreements typically stipulate that the delivery company will reimburse the economic damages from any parcels lost during last-mile transportation. However, selectively insuring only high-risk orders remains challenging without a systematic way to identify shipments more prone to getting lost.
The current approaches in parcel loss prediction tend to be non-data-driven and rule-based. The ever-increasing availability of data has made the application of machine learning (ML) to parcel loss prediction possible. Machine learning applications in similar research domains are sparse, although the potential is high. However, the parcel loss problem suffers from the class imbalance problem, as only a small percentage of delivered parcels is lost. For example, our industry partner observed that only 0,25% of deliveries resulted in losses. This brings a challenge to developing accurate machine learning models, especially in predicting the minority class (Xu and Chow, 2006). In this work, we develop machine learning prediction models that are accurate, fast, and generally applicable to current and future last-mile delivery service companies. Furthermore, model development and analyses seek to increase understanding of key drivers behind lost parcels. These actionable insights have the potential to enable strategic, data-driven improvements in operational planning and execution by retailers, subsequently leading to a reduction in the incidence of parcel losses.
Our work contributes to both theory and practice in the following ways: From a theoretical perspective, we contribute to the nascent literature at the intersection of machine learning and last-mile delivery logistics by:
* Providing the first application of machine learning for predictive analytics of parcel loss risks within last-mile delivery logistics.
* Proposing two machine learning approaches, DBSL and DHEL, designed specifically to address the challenges posed by highly imbalanced, non-time series input data related to customer, order, and product attributes. The proposed deep hybrid ensemble learning method (DHEL) takes inspiration from deep anomaly detection techniques, transcending their conventional use in time series or image-centric data domains. The DHEL detects, on average, 55,4% of all lost parcels, with an average balanced accuracy of 0,701, which is more accurate than the current business rule (42,2% and 0,561, respectively).
Practically, we aim to benefit e-commerce retailers and delivery service providers by:
* Enabling more accurate identification of at-risk shipments to optimize insurance spending and reduce costs. Through our case study with a large e-commerce retailer, we demonstrate how leveraging machine learning techniques can enrich business process insights and drive enhanced business value through cost optimization. Specifically, with our proposed DBSL model, the case study company could save up to EUR550.600,80 annually in insurance premium costs by selectively targeting only high-risk deliveries.
* Informing delivery process improvements to lower incident rates by identifying key predictive attributes such as customer, product, and contextual attributes associated with parcel losses. We leverage explainable AI techniques to reveal the relative importance of features. We also provide an elaborate overview of the tradeoff between prediction performance and interpretability of two proposed machine learning models. This deliberative analysis serves as a valuable guide for making informed decisions regarding the future adoption of these models within the e-commerce retail domain.
The rest of this paper is structured as follows. We discuss related work on parcel loss prediction and classification models with highly imbalanced data in Section 2. We then present a case study in Section 3. Thereafter, the available data is explored in Section 4 and data insights are extracted and discussed. We introduce our modeling approaches to predict parcel loss in Section 5. In Section 6, we discuss the results of the predictive models. Subsequently, we present the business impact and additional process insights gained from prediction models in Section 7. Section 8 concludes our study.
## 2 Literature Review
We first describe the parcel loss problem. Then, we discuss the applications of machine learning to related problems in logistics. Subsequently, we introduce existing techniques to handle highly imbalanced data.
_Parcel loss._ The last-mile delivery starts at the distribution depot. Parcels are sorted and loaded into transportation vehicles, which drive a predefined route of stops to deliver the loaded parcels. The logistics service modes for last-mile
delivery can be distinguished into two categories, i.e., direct delivery mode and indirect delivery mode (Li et al., 2020). In direct delivery, the parcel is directly delivered to the customer and handed to the customer face to face. This mode has the advantage of high safety, reliability, and customer satisfaction. However, it has relatively low efficiency and a high failure rate of first-time delivery. Moreover, with unattended deliver, i.e., delivery that is left unsecured at home, theft is the main problem and a form of fraud. McKinnon and Tallam (2003) study the relationship between unattended delivery and fraud, and demonstrate that crime risk in America depends on the size, value, and degree of concealment of the parcel and the nature of the neighborhood. A recent study by Stickle et al. (2020) states that visibility from the roadway and easily recognisable brands or other indicators of high-value parcels are critical identifiers for parcel fraud. In the indirect delivery mode, parcels are delivered to self-pick-up locations and the customer has to pick up the parcel from this location. This mode improves efficiency by reducing failure rates of first-time delivery, but comes at the cost of reduced customer satisfaction and potential fraud risk (Savelsbergh and Van Woensel, 2016). Researchers have also investigated different parcel tracking technologies over the past years by using RFID, barcodes, QR codes, or GPS. Systems automatically and continuously collect tracking data, such as the parcel's location, which can help decrease the amount of parcel loss (Ma et al., 2018). New methods, such as drones, autonomous vehicles, and trunk delivery, have been extensively studied, e.g., (Khoufi et al., 2019; Rojas Vloria et al., 2021). Chen et al. (2021) propose a blockchain-based intelligent anti-switch parcel tracing logistics system. In this system, logistics operators can continuously monitor the status query of parcels by attaching a sensor to the parcel. The abovementioned new technologies show promising opportunities for reduction of parcel loss, but currently lack reliability.
_Machine learning in logistics._ In the literature, machine learning has shown promising performance in several applications within last-mile logistics. For instance, the authors of (Pegado-Bardayo et al., 2023) use classical machine learning techniques, such as tree-based models and linear regression models, to predict how many delivery and pickup services will remain uncompleted on a given route within the working day of a courier. With a case study, they show the proposed approach achieves promising results. The studies of (de Araujo and Etemad, 2019) and (Gmira and A. Lodi, 2020) show that the deep learning models are better than non-deep ML models (i.e., Random Forests, XGBoost, and Support Vector Regression) in the task of estimating origin-destination travel time in parcel deliveries. Mo et al. (2023) propose a pair-wise attention-based pointer neural network to predict route trajectories, using drivers' historical delivery trajectory data. Mathew et al. (2021) present an end-to-end framework for detecting fraudulent transactions in food delivery based on self-generated weak labels. The proposed model is an ensemble of an autoencoder and an LSTM, using the reconstruction error of the autoencoder as input to a Multilayer Perceptron. Lorenc and Kuznar (2018) aim to predict the probability of cargo theft in road transport by using archive information about transportation, transport theft, type of transport, and transport value. In another related application area, i.e., baggage loss prediction at airports, Van Leeuwen et al. (2020) propose a gradient boosting machine to identify bags at risk, showing more accurate prediction compared to conventional decision rule methods.
To conclude, the literature reports some interesting relationships between features and the amount of parcel loss, such as the size, stock value, and delivery method. However, there are currently no methods for quantitative prediction of parcel loss prior to delivery. Yet, machine learning has been identified as a useful technique in other applications within last-mile logistics. In addition, for predicting parcel loss, an additional challenge is that the data is highly imbalanced, i.e., only very few parcels are lost. This makes it hard to directly apply standard ML approaches.
_Anomaly detection for imbalanced data._ Anomaly detection techniques can tackle the class imbalance problem by treating the minority class instances as outliers. Deep learning (DL) for anomaly detection utilizes neural networks, which have shown advantages over traditional algorithms (Chalapathy and Chawla, 2019). DL approaches are often well-adapted to jointly model the interactions between multiple variables beyond the specification of hyperparameters. Therefore, deep anomaly detection models require minimal tuning to obtain adequate results. Deep learning models can model complex, nonlinear relationships within the data. Moreover, in deep anomaly detection no feature engineering is required and the performance can scale with the availability of training data, making deep models suitable for data-rich problems (Choi et al., 2021). Semi-supervised deep anomaly detection techniques are widely adopted in the literature, because normal instances are easier to obtain. Semi-supervised techniques use existing labels of normal instances to separate them from outliers. Semi-supervised deep anomaly detection techniques use all training instances from the normal class to learn a discriminative boundary around these normal instances (Song et al., 2017). Test instances that do not belong to the normal class are then labeled as anomalous (Perera and Patel, 2019). Autoencoders are the most widely used approach for performing semi-supervised deep anomaly detection. Training of the autoencoder is done in a semi-supervised manner. The autoencoder is first fed with normal data, meaning only the majority class is presented to the autoencoder, which it learns to reproduce. Afterwards, the validation set, containing both labeled classes, is used for supervised parameter tuning by setting a reconstruction error threshold, distinguishing anomalies from the normal behavior. Semi-supervised learning can produce considerably improved performance over unsupervised techniques (Chalapathy and Chawla, 2019).
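As a minimal sketch of this semi-supervised scheme (assuming a simple feed-forward architecture and a quantile-based threshold, both chosen for illustration rather than taken from our experiments), an autoencoder can be trained on normal samples only and a reconstruction-error threshold selected on a labeled validation set:

```python
# Minimal sketch of semi-supervised anomaly detection with an autoencoder (Keras).
# X_train contains only normal samples; X_val and y_val are labeled (1 = anomaly).
import numpy as np
from tensorflow.keras import layers, models

def build_autoencoder(n_features):
    inp = layers.Input(shape=(n_features,))
    hidden = layers.Dense(64, activation="relu")(inp)
    bottleneck = layers.Dense(16, activation="relu")(hidden)
    hidden = layers.Dense(64, activation="relu")(bottleneck)
    out = layers.Dense(n_features, activation="linear")(hidden)
    ae = models.Model(inp, out)
    ae.compile(optimizer="adam", loss="mse")
    return ae

def fit_and_threshold(X_train, X_val, y_val, quantile=0.95):
    ae = build_autoencoder(X_train.shape[1])
    ae.fit(X_train, X_train, epochs=20, batch_size=256, verbose=0)  # learn to reconstruct normal data
    err = np.mean((ae.predict(X_val) - X_val) ** 2, axis=1)         # per-sample reconstruction error
    threshold = np.quantile(err[y_val == 0], quantile)              # e.g., 95th percentile of normal errors
    return ae, threshold   # at test time: error > threshold -> flag as anomaly
```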
Autoencoders have shown great potential for classification on highly skewed data. However, most existing applications handle either time-series or image input data. Hence, it remains unknown what the performance of deep anomaly detection techniques is on non-time-series, tabular data for parcel loss prediction.
## 3 Case Study
The case study was conducted with a large Dutch e-commerce retailer (referred to as _Company X_). Company X operates both online and through over ten physical retail stores located across the BeNeLux countries. Each day, thousands of online ordered products are picked from its warehouse and shipped to hubs, stores, and customers. Company X promises a guaranteed next-day delivery, seven days a week. To do so, it uses its own delivery network, which we refer to as OwnDN, and two external parcel delivery companies, referred to as ExtD1 and ExtD2, for the last-mile delivery, where more than 70 percent of all parcels are delivered via external delivery companies. Based on data in 2020, on average, 0,33% of all shipped parcels are lost in the process from the warehouse to the customer. There are various reasons for parcel loss, varying from process errors to fraud. However, Company X can only globally trace back the parcel, namely until the last received scan event. This last scan determines the phase in the supply chain where the parcel got lost. After analysis of parcel loss data in 2020, we notice that almost half of all lost parcels (around 45%) go missing at the customer, around 15% at the driver, and around 15% at the depot. This indicates that around 75% of all parcel loss occurs in the last-mile delivery. In addition, the high percentage of loss at the customer and driver sparks the idea that most parcel loss is due to fraudulent behavior of humans instead of process errors.
Company X attempts to reduce the costs of parcel loss in the last-mile delivery by using insurance on high-valued parcels delivered by external partners. The insurance is an agreement between Company X and the delivery partner, which states that the delivery partner covers the economic damage of parcel loss in the last-mile delivery. In the current situation, domain experts have established insurance rules based on stock value and product category. The rules determine each parcel's insurance value, which is the maximum amount of money Company X gets back from the delivery partner in case of parcel loss. We illustrate the insurance rule used for one delivery partner in Figure 1. It can be observed that the current insurance value is only based on the stock value of the order. Orders with a stock value below EUR250 are not insured. Given the insurance value of a parcel, the insurance cost is then determined.
Figure 1: Insurance rules used by an external delivery partner (ExtD2) of Company X
For confidentiality reasons, the exact costs are not shown. The insurance costs differ per delivery partner as the delivery partners differ in the amount of parcel loss. Company X is particularly interested in high predictive performance on the minority class, i.e., prediction of lost parcels, while maintaining accurate predictions on the majority class. Therefore, ML models should be able to overcome the data-imbalance problem. In the current situation, Company X cannot establish the cause of individual parcel losses. Hence, understanding which feature attributes are related to parcel loss could be used to further optimize the last-mile delivery.
## 4 Data Description and Pre-processing
The collected data consists of shipments in Belgium in 2020, with products having a stock value above EUR100. The initial data contains about one million shipped products. The data contain order, customer, and product related attributes (or features). Order related data describes information about the carrier, delivery method, delivery address, route, etc. Customer related features contain information about the customers who placed an order. This information is privacy sensitive, and hence is not used in further analysis and modelling. Product related features include size, weight, and stock value of each product.
To predict whether an order is lost, we create the target variable, _Is_lost_item_, which is true (1) if the product is lost in the last-mile delivery, and otherwise false (0). We observe the class imbalance problem in our use case, as the distribution of the created binary target variable is highly imbalanced, i.e., only 0,54% of products are lost, and 99,46% are normal parcel deliveries.
Figure 2a shows the correlation matrix of the numerical and boolean features. We notice that no feature shows a notable linear correlation with the target feature _Is_lost_item_. Moreover, there exist some correlations between predictive numerical and boolean features. It can be observed that
Figure 2: Correlation matrix of features with target variable
the _size_ and _weight_ features are moderately correlated. Moreover, the _price-related_ features are weakly correlated. Similarly, Figure 2b shows weak and moderate correlations between predictive features, such as _shipment_ features. The _address_ features, namely street, pc4, distribution_depot, and city, are weakly correlated. Moreover, some _product_ properties, such as product type and brand, are moderately correlated. Similar to the numerical features, no linear correlation between the features and the target variable is observed. This suggests linear models might not be able to provide accurate predictions.
We perform an additional exploratory analysis to explore the predictive power of different features and establish whether lost parcels have distinctive characteristics from normal parcels. For all numerical features, the boxplots and histograms are compared for two datasets, one containing lost products (LI) and the other exclusively normal products (not LI). In Fig. 3a, the boxplot of _size_length_, i.e., the length of the product (cm), is shown for both groups. It can be observed that there are some differences in the distribution of this predictive feature. It is observed that lost products are, on average, smaller than normal products. Similar differences are observed for other numerical features, which suggests that these features have some predictive power, even though no linear correlations were observed. The same analysis is performed for the boolean features. Both groups' mean and standard deviation slightly differ, indicating that the boolean features can be used for later predictive objectives. An example of this difference is visualized in Fig. 3b. The figure implicitly shows that more parcel loss occurs at B2B customers than at regular customers.
The same exploratory analysis is performed for categorical features. We observe that the percentage of products that belongs to the team 'Telefoon, Tablets & Accessoires', which are phones, tablets, and accessories, is significantly higher (42,73%) for LI's, compared to the percentage in the group of normal (not LI) products (24,20%). It seems thus that relatively many phones, tablets, and accessories are lost. Similar observations are made for other categorical features. For instance, there is a difference between the relative amount of parcels lost
Figure 3: Distribution of predictors LI’s vs. not LI’s
in Wallonia and Flanders. The features province and region thus seem to be important predictors of LI's. In addition, the payment method appears to be an important predictor of LI's. On the other hand, gender, type of order, and type of outlet seem to have less predictive power because no differences are observed.
Several sequential data preprocessing steps are performed on the collected raw dataset, including data cleaning and transformation. Data cleaning involves handling missing values and reducing data noise by handling data errors and outliers. The data transformation includes the encoding of features to the correct data format and data generation. The output of the data preparation phase is a dataset that can be used in the next phase, modeling.
We generate an additional feature named DateDeliveryISOMonth, representing the month of delivery, as it was observed that the month seemed to impact parcel loss quantity. Moreover, it was observed that orders placed on Saturday resulted more often in parcel loss. Hence, the feature weekday is generated, representing the day of the week on which the order is placed.
As many machine learning models require numerical values as inputs, we use one-hot encoding to transform all categorical features to numerical ones. This, however, results in high-dimensional datasets, especially if many different feature values are observed. Due to the large dataset size, we chose to relabel each categorical feature into a maximum of 20 categories. This data transformation results in a loss of information but decreases the probability of overfitting.
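A small pandas sketch of this step is given below; the column names are hypothetical, and infrequent categories are collapsed into an "other" bucket before one-hot encoding so that each categorical feature is capped at 20 levels.

```python
# Sketch of capping categorical cardinality at 20 levels before one-hot encoding
# (column names are hypothetical).
import pandas as pd

def cap_and_encode(df: pd.DataFrame, categorical_cols, max_levels: int = 20) -> pd.DataFrame:
    df = df.copy()
    for col in categorical_cols:
        top = df[col].value_counts().nlargest(max_levels - 1).index   # keep the most frequent values
        df[col] = df[col].where(df[col].isin(top), other="other")     # relabel the rest as "other"
    return pd.get_dummies(df, columns=categorical_cols)

# df_encoded = cap_and_encode(df, ["team_name", "brand_name", "province", "payment_method"])
```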
After data preprocessing, the dataset consists of 854.898 rows with 226 predictive features and one target variable. Each row represents one parcel, that can contain one or multiple products. This dataset used for modeling contains 2174 lost parcels, which represents 0,25% of the total dataset. In Table 1 a snapshot of the dataset after preprocessing is presented to provide an indication of the dataset that is used for modeling. The table shows the first parcel (index 0) was lost (Is_lost_item = 1), while the next two parcels were not (Is_lost_item = 0).
Finally, it should be noted that the continuous features are log-transformed to improve the predictive performance in the modeling phase. Nevertheless, these feature values should be transformed back during business evaluation to draw relevant conclusions.
## 5 Prediction Methods
In this section, we propose two ML approaches for predicting parcel loss in last-mile delivery with an imbalanced dataset. First, we study commonly used data
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & \begin{tabular}{c} **size-** \\ **width** \\ \end{tabular} & \begin{tabular}{c} **stock-** \\ **value** \\ \end{tabular} & **quantity** & \begin{tabular}{c} **.....** \\ **same** \\ \end{tabular} & \begin{tabular}{c} **team-name-** \\ **Team Audio** \\ \end{tabular} & \begin{tabular}{c} **brand-name** \\ **-Acer** \\ \end{tabular} & \begin{tabular}{c} **has-** \\ **phone** \\ \end{tabular} &
\begin{tabular}{c} **Is-lost** \\ **item** \\ \end{tabular} \\ \hline
0 & 2.230 & 6,561 & 1 &.... & 0 & 1 & 1 & 1 \\ \hline
1 & 1.705 & 4,667 & 1 &.... & 0 & 0 & 1 & 0 \\ \hline
2 & 2.501 & 4,976 & 1 &.... & 0 & 0 & 1 & 0 \\ \hline \end{tabular}
\end{table}
Table 1: Example of dataset after preprocessing.
imbalance techniques applied to standard supervised techniques. We name this method Data Balance Techniques with Supervised Learning (DBSL). Then, we introduce our novel Deep Hybrid Ensemble Learning (DHEL) method which utilizes an autoencoder and ensemble classifiers to more effectively handle imbalanced data and improve predictive performance.
A brief introduction to the standard supervised learning algorithms and autoencoders used in our study, as well as their variants, is provided in the Appendix. For further background information on these techniques, we refer the reader to a seminal book by Goodfellow et al. (2016). Details of DBSL and DHEL are provided in subsequent subsections.
### Data Balance Techniques with Supervised Learning (DBSL)
We study the performance of various data imbalance correction techniques combined with standard supervised learning algorithms for the parcel loss prediction task. Specifically, we experiment with four different resampling methods: Random Undersampling (RU), Near-Miss Undersampling, Random Undersampling Boosting (RUSBoost), and UnderBagging (UB). Each technique is used to balance the class distribution of the imbalanced parcel loss dataset.
**Random Undersampling (RU)** is the most naive method of undersampling, which tries to balance the class distributions by randomly selecting fewer examples from the majority class (Kotsiantis and Pintelas, 2003). RU is an efficient method; however, it can lead to the random removal of informative samples from the data.
**Near-Miss Undersampling** undersamples majority class instances based on the distance of the majority class to minority instances (Mani and Zhang, 2003). There are three versions of NM. In this paper, we use NearMiss-1 because the other two versions are computationally expensive. NearMiss-1 (hereafter NM) retains the majority class samples that are closest to the three closest examples from the minority class and removes the remaining majority samples. NM not only determines the most representative instances from the majority class but also covers the most easily misclassified ones (Bao et al., 2016).
**Random Undersampling Boosting (RUSBoost)** is a technique that combines random undersampling with boosting. Random undersampling balances class distributions by randomly removing majority class samples (Seiffert et al., 2009). Boosting then trains sequential weak learners that focus on misclassified samples from previous models. The combined votes from all weak learners produce an ensemble model. Random undersampling reduces overfitting to the majority class, while boosting combines weak learners to produce a strong final model.
**UnderBagging (UB)** incorporates the strength of RU with bagging. It first balances class distributions through undersampling the majority class (Raghuwanshi and Shukla, 2018). It then trains multiple models on random subsets of the balanced data, as in bagging. The goal is to benefit from both undersampling to balance classes and bagging to reduce variance through combining diverse models.
The balanced samples generated via these resampling strategies are then used to train five different supervised learning models: Decision Tree (DT), Random Forest (RF), Extreme Gradient Boosting (XGB), Logistic Regression (LR), and Support Vector Machine (SVM). These represent both linear and non-linear classification algorithms that are commonly applied. Figure 4 provides an overview of DBSL training and testing procedure.
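As an illustration of the DBSL pipeline (resample first, then fit a standard classifier), the sketch below combines random undersampling from imbalanced-learn with a random forest; the sampling ratio and hyperparameters are placeholders rather than our tuned values.

```python
# Sketch of DBSL: random undersampling of the majority class, then a supervised classifier.
from imblearn.under_sampling import RandomUnderSampler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score, roc_auc_score

def train_dbsl(X_train, y_train, X_test, y_test, sampling_strategy=0.5):
    # sampling_strategy=0.5 keeps two majority samples per minority sample (illustrative choice)
    rus = RandomUnderSampler(sampling_strategy=sampling_strategy, random_state=42)
    X_bal, y_bal = rus.fit_resample(X_train, y_train)

    clf = RandomForestClassifier(n_estimators=300, random_state=42)
    clf.fit(X_bal, y_bal)

    proba = clf.predict_proba(X_test)[:, 1]
    print("balanced accuracy:", balanced_accuracy_score(y_test, clf.predict(X_test)))
    print("ROC-AUC:", roc_auc_score(y_test, proba))
    return clf
```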
### Deep Hybrid Ensemble Learning (DHEL)
Autoencoders (AE) can learn representations of data by attempting to reconstruct their inputs. The reconstruction error measures how well each sample was represented in the latent space. Samples with higher errors may indicate anomalies or minorities that deviate from the majority. In the proposed Deep Hybrid Ensemble Learning (DHEL) approach, we employ an AE to extract features from the imbalanced parcel loss dataset, training it on normal data only (i.e., not lost). Rather than using the encoded features directly, we leverage the reconstruction error vector output by the decoder layer. These error measurements are then provided as inputs to an ensemble of supervised learning models. By utilizing the reconstruction failures highlighted by the AE, the classifiers aim to better identify hard-to-represent minority parcel losses. We examine DHEL using different autoencoder architectures, including variational and denoising variants, to determine the most effective configuration. A brief explanation of each autoencoder type is provided in the Appendix. The hybrid approach proposed in this research is inspired by Lin and Jiang (2021), who combine an autoencoder with a random forest for credit card fraud detection.
Figure 4: An overview of Data Balance Techniques with Supervised Learning (DBSL)
Figure 5 illustrates that in the deep hybrid approach with reconstruction error vector the training set contains only normal data instances and is solely used to train the AE network. The autoencoder reproduces the validation set as reconstruction error vectors \(E_{v}\), seen as the supervised classifier's input and 'training' data. The actual training data \(D_{train}\) is only used to train the AE to prevent data leakage. Hence, the validation data \(D_{validate}\) is used for training and hyperparameter optimization of the supervised classifier \(c\). Random search is combined with repeated stratified 5-fold cross-validation to retrieve the optimal hyperparameters for each supervised classifier \(c\). The stratified 5-fold cross-validation ensures that all five folds contain sufficient anomalous data instances. Stratification reduces the estimate's variance, making each fold representative of the whole dataset. Once the final hyperparameters are found, the complete validation set \(D_{validate}\) is afterwards used to train the classifier \(c\) as this is expected to improve the model's generalization ability. The test set \(D_{test}\) is only used to evaluate the final performance of the deep hybrid model.
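A condensed sketch of this pipeline is given below, assuming a compiled Keras autoencoder (e.g., built as in the earlier autoencoder sketch); the classifier choice and hyperparameters are illustrative rather than the tuned configuration reported later.

```python
# Sketch of DHEL: an autoencoder trained on normal data only; its per-feature reconstruction
# error vectors become the inputs of a supervised classifier (here a random forest).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def reconstruction_error_vectors(ae, X):
    return np.abs(ae.predict(X) - X)              # per-feature absolute reconstruction error

def train_dhel(ae, X_train_normal, X_val, y_val):
    # Step 1: fit the autoencoder on normal (not lost) parcels only.
    ae.fit(X_train_normal, X_train_normal, epochs=20, batch_size=256, verbose=0)
    # Step 2: train the supervised classifier on the validation set's error vectors E_v.
    E_val = reconstruction_error_vectors(ae, X_val)
    clf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=42)
    clf.fit(E_val, y_val)
    return clf

def predict_dhel(ae, clf, X_test):
    return clf.predict(reconstruction_error_vectors(ae, X_test))
```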
## 6 Performance of prediction models
The developed models are evaluated on the retrieved and preprocessed dataset described in Section 4. The autoencoders are implemented using Keras Tensorflow (Chollet, 2018). The supervised classification models are implemented using the sci-kit learn library (Pedregosa et al., 2011). The training and evaluation of the models is performed on a Processor Intel(R) Core i7-4710MQ.
We use the confusion matrix, shown in Table 2, to visualize the performances of different prediction models. As we have a binary classification problem, the outcomes of the prediction models are either positive (Class 1) or negative (Class 0). In our case, a positive prediction indicates the parcel being lost. We use the following evaluation measures to assess and compare prediction models. _Precision_, defined by \(\frac{TP}{TP+FP}\) (see Table 2), measures the relative amount of parcels predicted as lost that is actually lost. _Recall_, computed as \(\frac{TP}{TP+FN}\), measures the relative amount of lost parcels that are detected. Moreover, we adopt the
Figure 5: An overview of Deep Hybrid Ensemble Learning (DHEL) with reconstruction error
_Balanced Accuracy_ (BA) metric to evaluate the performance of prediction models across both classes in the presence of imbalanced data. BA is computed as \(\frac{1}{2}(\frac{TP}{P}+\frac{TN}{N})\). In addition, the _Receiver Operating Characteristics (ROC) curve_ is used to obtain the Area Under Curve score (AUC). The ROC curve summarizes the trade-off between the True Positive Rate (TPR) and the False Positive Rate (FPR) at different classification thresholds, and the ROC-AUC aggregates this trade-off into a single score. It can take a value between 0 and 1, where an AUC of 0,5 suggests the ROC falls on the diagonal and the prediction has no discriminatory ability to classify. An AUC of 1,0 means that the classifier is perfect at discrimination. Finally, we also show the True Negative Rate (TNR) value to measure how accurately the model detects the normal (not lost) parcels. These metrics align well with our use case, characterized by highly imbalanced data. We use the Balanced Accuracy (BA) and AUC (ROC_AUC) to choose the best models as they provide a more comprehensive representation of the model's performance. Other measures serve as additional explanations of the model's performance.
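These metrics can be reproduced directly with scikit-learn; the snippet below is a generic sketch (not tied to a specific model of ours) showing how they are read off a fitted classifier.

```python
# Sketch of the evaluation metrics used in this section, computed with scikit-learn.
from sklearn.metrics import (balanced_accuracy_score, confusion_matrix,
                             precision_score, recall_score, roc_auc_score)

def evaluate(clf, X_test, y_test):
    y_pred = clf.predict(X_test)
    y_score = clf.predict_proba(X_test)[:, 1]     # probability of the positive (lost) class
    tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
    return {
        "precision": precision_score(y_test, y_pred),
        "recall": recall_score(y_test, y_pred),
        "balanced_accuracy": balanced_accuracy_score(y_test, y_pred),
        "roc_auc": roc_auc_score(y_test, y_score),
        "tnr": tn / (tn + fp),                    # true negative rate on normal parcels
    }
```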
### Performance of the Business Rules
We use the current business insurance rules at Company X as the baseline method to compare with the various machine learning methods. The predictive performance of the business rules model is established using the 10% test set containing 85,273 normal parcels and 217 lost parcels. Table 3 presents the predictive performance of the business rules model and Table 4 shows the corresponding confusion matrix. In total, 42,4% of the lost parcels are correctly detected. The balanced accuracy (BA) and ROC-AUC score show identical values of 0,562, because the defined decision rules cannot predict probabilities but only binary outcomes. Therefore, no different thresholds can be evaluated, making the ROC-AUC score identical to the balanced accuracy score. The ROC-AUC score is just above 0,5, indicating that the performance is only slightly better than random guessing.
### Data Balance Techniques with Supervised Learning (DBSL) Results
We developed multiple supervised models and evaluated their predictive performance. Different strategies, such as sampling and ensemble learning, were evaluated as improvements to traditional classifiers to address the class imbalance. The sampling strategy is defined as the distribution of majority samples against minority samples. _Random Undersampling (RU)_ and _Near-Miss Undersampling (NM)_ are used as sampling methods, whereas _RUSBoost_ and _UnderBagging (UB)_ are examined as ensemble methods, which incorporate random undersampling in the algorithm. The data is scaled for the LR and SVM models; scaling is unnecessary for tree-based methods. Initially, stratified 5-fold cross-validation combined with the scikit-learn randomized search is performed on the training set to tune the hyperparameters of all models. The best hyperparameters are selected based on the balanced accuracy score. These optimized models are trained again on the entire 90% training set and evaluated on the 10% test set.
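The following sketch illustrates the RU-RF setup, assuming the imbalanced-learn library for random undersampling; the search space and sampling ratios are illustrative and not the exact values used in this study.

```python
# Sketch of the RU-RF setup, assuming the imbalanced-learn library. Embedding the
# sampler in a pipeline ensures undersampling is re-applied inside every CV fold.
from imblearn.pipeline import Pipeline
from imblearn.under_sampling import RandomUnderSampler
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, StratifiedKFold

pipe = Pipeline([
    ("ru", RandomUnderSampler(random_state=0)),
    ("rf", RandomForestClassifier(random_state=0)),
])
param_dist = {  # illustrative search space
    "ru__sampling_strategy": [0.05, 0.1, 0.5, 1.0],  # minority:majority ratio after sampling
    "rf__n_estimators": [100, 300, 500],
    "rf__max_depth": [5, 10, 20, None],
    "rf__min_samples_leaf": [1, 5, 10],
}
search = RandomizedSearchCV(
    pipe, param_dist, n_iter=25, scoring="balanced_accuracy",
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0), random_state=0)
# search.fit(X_train, y_train)  # 90% training set; evaluation uses the 10% test set
```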
\begin{table}
\begin{tabular}{|c|c|c|} \hline Actual / Predicted & Positive (Class 1) & Negative (Class 0) \\ \hline Positive (lost) & 92 & 125 \\ \hline Negative (not lost) & 25567 & 59706 \\ \hline \end{tabular}
\end{table}
Table 4: Confusion matrix of the business rules model on the 10% test set (rows: actual class, columns: predicted class).
Figure 6: **ROC-AUC scores of optimised models on the 10% test set**. Random Undersampling (RU), UnderBagging (UB), Near-Miss (NM) undersampling, Random Undersampling Boosting (RUSBoost)
Figure 7: **Balanced accuracy scores of optimised models on the 10% test set**. Random Undersampling (RU), UnderBagging (UB), Near-Miss (NM) undersampling, Random Undersampling Boosting (RUSBoost)
Figures 6 and 7 report the ROC-AUC and balanced accuracy scores of the tree-based models, along with LR and SVM. Note that we did not combine UnderBagging (UB) and RUSBoost with random forest and XGBoost, as these are themselves ensembles of decision trees. It can be observed that the LR model shows good predictive performance while being a relatively simple model. The DT model consistently performs better than the business rules, which indicates that machine learning improves decision-making on parcel loss. With random undersampling (RU), all models, except for the business rules, show comparable performance, with the RF model securing a ROC-AUC of 0,778 and a balanced accuracy of 0,700. The UB-LR model achieves the highest recall, detecting 65,9% of lost parcels with a ROC-AUC of 0,774 and a balanced accuracy of 0,698. The NM-undersampling and RUSBoost approaches do not improve the predictive performance of any model: NM is likely overfitting, whereas RUSBoost focuses on the misclassified normal parcels instead of the misclassified lost parcels. The majority voting in UnderBagging seems to prevent the classifiers from overfitting, causing an increase in performance of the ensemble model over the standard classifiers.
Table 5 shows the predictive performance of the optimized models. Remarkably, all machine learning techniques perform best in combination with random undersampling, applied either directly (RU) or within UnderBagging (UB). In general, it can be observed that parcel loss prediction is a complex problem, as the precision score is below 0,02 for all models, indicating that of all parcels predicted as lost, less than 2% are actually lost. In general, missing a TP is more costly than missing a TN; therefore, a relatively low precision seems less critical. The RU-SVM model shows the worst performance, with a relatively low ROC-AUC of 0,744 and a balanced accuracy of 0,652. However, its precision (0,021) is slightly higher than that of the other classifiers.
### Deep Hybrid Ensemble Learning (DHEL) results
We analyzed the performance of our proposed Deep Hybrid Ensemble Learning (DHEL) approach for predicting parcel losses. A key aspect of an effective AE model is the appropriate selection of hyperparameters and thresholds. Therefore, we first detail our process for hyperparameter tuning and threshold setting, followed by a performance evaluation of the different models.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
**Model** & **TNR** & **Precision** & **Recall** & **ROC-AUC** & **BA** \\ \hline BR & 0,700 & 0,004 & 0,424 & 0,562 & 0,562 \\ UB-DT & 0,888 & 0,011 & 0,470 & 0,765 & 0,679 \\ RU-RF & 0,912 & 0,014 & 0,488 & 0,778 & 0,700 \\ RU-XGB & 0,930 & 0,015 & 0,433 & 0,766 & 0,681 \\ UB-LR & 0,743 & 0,006 & 0,659 & 0,774 & 0,698 \\ RU-SVM & 0,958 & 0,021 & 0,346 & 0,744 & 0,652 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Performance of optimized models on 10% test set.
#### 6.3.1 Hyperparameter tuning and threshold setting of AEs
We used a validation set to optimize the hyperparameters of the AEs. The validation set contains both normal and anomalous instances. The optimal architecture and hyperparameters per autoencoder are determined using a random search. Table 6 provides an overview of the optimal parameter configurations of the AE, VAE and DAE.
To classify instances as anomalous, a threshold value is required. The threshold value is obtained using the MSE of the reconstruction errors on the validation set. As a rule of thumb, a modified Z-score of 3,5 is used as a cut-off value, meaning that all instances with a modified Z-score above 3,5 are classified as anomalies. However, manually adjusting the threshold per model can overestimate the models' performance. Finding a suitable threshold value requires a trade-off between precision and recall: recall is improved by lowering the threshold, while improving precision requires the opposite.
The balanced accuracy metric reflects this trade-off. Therefore, different threshold values can be evaluated on the balanced accuracy score obtained on the validation set, which can be used to determine the optimal threshold value. Table 7 presents the predictive performance of the regular AE on the validation set for different threshold values. It can be observed that higher threshold values result in lower recall but higher precision scores, and vice versa. Furthermore, it is observed that the ROC-AUC score is equal for all threshold values, which is expected because the ROC-AUC score is computed over all possible thresholds and is therefore independent of the chosen cut-off.
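The threshold-selection procedure can be sketched as follows; the candidate threshold grid and helper names are assumptions.

```python
# Sketch of the threshold selection: per-instance reconstruction MSE on the
# validation set is compared against candidate cut-offs and the value that
# maximises balanced accuracy is kept.
import numpy as np
from sklearn.metrics import balanced_accuracy_score

def select_threshold(ae, X_val, y_val, candidate_thresholds):
    recon = ae.predict(X_val, verbose=0)
    mse = np.mean(np.square(X_val - recon), axis=1)   # one error score per instance
    best_t, best_ba = None, -np.inf
    for t in candidate_thresholds:
        y_pred = (mse > t).astype(int)                # above threshold -> anomaly (lost)
        ba = balanced_accuracy_score(y_val, y_pred)
        if ba > best_ba:
            best_t, best_ba = t, ba
    return best_t, best_ba

# e.g. select_threshold(ae, X_val, y_val, np.linspace(0.02, 0.05, 31))
```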
\begin{table}
\begin{tabular}{l c c c} \hline \hline & **AE** & **VAE** & **DAE** \\ \hline Optimizer & Adam & Adam & Adam \\ Loss function & MSE & MSE & MSE \\ Activation function input/output layer & ReLU & Sigmoid & ReLU \\ Activation function hidden layers & Sigmoid & eLU & Sigmoid \\ Nodes per hidden layer & 8-4-8 & 8-4-8 & 8-4-2-4-8 \\ Dropout rate & 0.2 & 0.5 & 0.1 \\ Amount of input features & 25 & 15 & 25 \\ Latent dimensions & - & 4 & - \\ Standard deviation input noise & - & - & 0,2 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Optimal network architecture and hyperparameters for three types of autoencoders, found during parameter optimization on the 10% validation set
For the regular AE, a threshold value of 0,0391 is set based on the validation set outcomes, resulting in a precision of 0,040, recall of 0,406, ROC-AUC score of 0,611, and balanced accuracy of 0,592. Following the same approach, threshold values of 0,0015 and 0,0350 are set for the VAE and DAE, respectively.
#### 6.3.2 Results
Table 8 provides the performance of the deep hybrid models with optimized hyperparameters on the unseen 10% test set. The deep hybrid models using the logistic regression (LR) and SVM classifiers show relatively poor predictive performance. The SVM algorithm was already observed to underperform during supervised classification, and the poor performance of LR can be attributed to the interdependence of the reconstruction errors, which violates the assumption of no multicollinearity. The reconstruction error vector obtained from the autoencoder causes the input data to be correlated. Therefore, the relatively low performance of the deep hybrid LR model is expected to be due to this transformation caused by the AE models. The XGBoost and RF classifiers outperform the DT model. Boosting shows slightly lower performance compared to bagging, which was also observed during supervised classification. The best performance is obtained with a regular autoencoder and random forest (AE-RF). The AE-RF model shows promising results, with a ROC-AUC score of 0,759, a balanced accuracy score of 0,704, and, in total, correct detection of 51,3% of all lost parcels. Moreover, all deep hybrid models significantly outperform the business rules.
The more robust VAE and DAE do not improve the predictive performance, as all corresponding deep hybrid models show lower predictive performance than the regular AE ensembles. It is expected that the added random noise in the reconstruction errors of the DAE and VAE causes the classifier to focus on the wrong features during training. Moreover, the VAE was intended to make the deep hybrid model more robust by sampling from the Gaussian distribution of the latent representations. However, similar to the DAE, this additional noise in the input data is expected to make the model focus on random noise rather than on the patterns that distinguish the minority class instances.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Threshold** & **TNR** & **Precision** & **Recall** & **ROC-AUC** & **BA** \\ \hline
0,0200 & 0,000 & 0,022 & 1,000 & 0,611 & 0,500 \\
0,0300 & 0,012 & 0,023 & 0,992 & 0,611 & 0,502 \\
0,0350 & 0,397 & 0,026 & 0,709 & 0,611 & 0,554 \\
0,0380 & 0,697 & 0,034 & 0,467 & 0,611 & 0,582 \\
**0,0391** & **0,777** & **0,040** & **0,406** & **0,611** & **0,592** \\
0,0420 & 0,925 & 0,068 & 0,239 & 0,611 & 0,582 \\
0,0500 & 0,995 & 0,226 & 0,061 & 0,611 & 0,528 \\ \hline \hline \end{tabular}
\end{table}
Table 7: The 10% validation set performance of the regular autoencoder for different threshold values
### Comparison of models from DBSL and DHEL
Table 9 shows the average performance on a 10% test set of the best models of different learning approaches. It can be noted that all models improve classification performance over the current business rules, which validates the choice for the company to use machine learning in parcel loss prediction. The semi-supervised VAE shows the lowest classification performance. It was concluded that a rigid threshold value on the MSE of the reconstruction does not provide enough flexibility to perform accurate classification with autoencoders.
Random forest is observed to be the best machine learning algorithm for parcel loss prediction, as both the supervised and the deep hybrid model use this algorithm to perform classification. The deep hybrid AE-RF model uses the reconstruction error vector as input to the random forest model to perform classification. It can be observed that the predictive performance is similar for the RU-RF and AE-RF models. The supervised RU-RF model shows the highest average ROC-AUC performance (0,768). However, the average recall and balanced accuracy are higher for the deep hybrid AE-RF model. Hence, the results show that the best-performing model for parcel loss prediction is the deep hybrid AE-RF model. This model detects on average \(55,4\%\) of all lost parcels, with an average precision of 0,010, an average ROC-AUC of 0,763 and an average balanced accuracy of 0,701. Table 10 and Table 11 show the confusion matrices obtained from the 10% test set for the RU-RF and deep hybrid AE-RF models, respectively. It can be observed that the differences in actual predictions are small: the AE-RF model detects slightly more lost parcels (true positives), which comes at the cost of more false positives.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**Autoencoder** & **Classifier** & **TNR** & **Precision** & **Recall** & \begin{tabular}{c} **ROC\_** \\ **AUC** \\ \end{tabular} &
\begin{tabular}{c} **Balanced** \\ **Accuracy** \\ \end{tabular} \\ \hline AE & RF & 0,878 & 0,010 & 0,513 & **0,759** & **0,704** \\ AE & XGB & 0,984 & 0,051 & 0,330 & 0,757 & 0,657 \\ AE & LR & 0,736 & 0,005 & 0,505 & 0,669 & 0,620 \\ AE & SVM & 0,877 & 0,007 & 0,304 & 0,644 & 0,592 \\ AE & DT & 0,860 & 0,009 & 0,486 & 0,733 & 0,673 \\ VAE & RF & 0,093 & 0,015 & 0,395 & 0,732 & 0,665 \\ VAE & XGB & 0,984 & 0,048 & 0,321 & 0,732 & 0,652 \\ VAE & LR & 0,784 & 0,006 & 0,482 & 0,671 & 0,633 \\ VAE & SVM & 0,993 & 0,012 & 0,032 & 0,631 & 0,512 \\ VAE & DT & 0,952 & 0,019 & 0,358 & 0,684 & 0,655 \\ DAE & RF & 0,913 & 0,012 & 0,432 & 0,728 & 0,672 \\ DAE & XGB & 0,985 & 0,048 & 0,298 & 0,725 & 0,642 \\ DAE & LR & 0,799 & 0,005 & 0,417 & 0,650 & 0,608 \\ DAE & SVM & 0,994 & 0,012 & 0,198 & 0,608 & 0,580 \\ DAE & DT & 0,922 & 0,013 & 0,399 & 0,704 & 0,660 \\ Business rules & 0,700 & 0,004 & 0,424 & 0,562 & 0,562 \\ \hline \hline \end{tabular}
\end{table}
Table 8: 10% Test set results (85490 instances) of all deep hybrid models using the reconstruction error as input
## 7 Business Value and Managerial Implications
We discuss the business value and insights from the developed predictive models in this section.
### Insurance decision-making with predictive models
The developed predictive model is intended to be used at Company X as an alternative to the current insurance rules. That is, for each parcel to be delivered, the model predicts whether it will be lost during last-mile delivery. If the model predicts that the parcel is going to be lost, the insurance is used and vice-versa.
In this way, we can evaluate the monetary impact of the predictive models using misclassification costs. In addition, the company would like to increase its understanding of parcel loss in last-mile delivery and explore how models can be used to prevent parcel loss. Hence, we define the second business evaluation metric as the interpretability of the model, which relates to the business information that can be extracted. We use methods from Explainable AI (XAI) for this purpose.
### Misclassification costs
The misclassification costs are computed based on the number of incorrectly classified parcels: false positives (FP) and false negatives (FN). FP are parcels that are predicted to be lost while, in reality, they were not lost. Translated to the business process, these parcels would have been unnecessarily insured. The corresponding insurance cost function _IC(\(S_{i}\),\(P_{i}\))_ was explained in Section 3 and depends on the stock value \(S_{i}\) of parcel \(i\) and the delivery partner used \(P_{i}\). FN are parcels predicted as being normal (not lost), which in reality were lost. Hence, the misclassification cost of a false negative prediction for parcel \(i\) is equal to the parcel's stock value \(S_{i}\). Note that a true negative prediction \(TN_{i}\) or true positive prediction \(TP_{i}\) has no misclassification costs, as the model correctly classifies these parcels.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Actual / Predicted & Positive (Class 1) & Negative (Class 0) \\ \hline Positive (lost) & 106 & 111 \\ \hline Negative (not lost) & 7476 & 77797 \\ \hline \end{tabular}
\end{table}
Table 10: Confusion matrix of RU-RF model on the test set.
\begin{table}
\begin{tabular}{c c c c c} \hline Model & **Precision** & **Recall** & **ROC-AUC** & **BA** \\ \hline Business rules & 0,004 \(\pm\) 0,000 & 0,422 \(\pm\) 0,018 & 0,561 \(\pm\) 0,009 & 0,561 \(\pm\) 0,009 \\ RU-RF & 0,010 \(\pm\) 0,001 & 0,478 \(\pm\) 0,026 & 0,768 \(\pm\) 0,036 & 0,695 \(\pm\) 0,035 \\ AE-RF & 0,010 \(\pm\) 0,000 & 0,554 \(\pm\) 0,039 & 0,763 \(\pm\) 0,031 & 0,701 \(\pm\) 0,027 \\ \hline \end{tabular}
\end{table}
Table 9: Evaluation of the average performance of the best-performing models per learning approach on ten different 10% test sets with 85,490 instances
The misclassification cost \(MC_{i}\) for parcel \(i\) can be determined by \(MC_{i}=FP_{i}\cdot IC(S_{i},P_{i})+FN_{i}\cdot S_{i}\), where \(FP_{i}\) and \(FN_{i}\) are binary indicators of whether parcel \(i\) is a false positive or a false negative prediction, respectively. The total misclassification cost of a decision-making model is computed by summing the misclassification costs over all parcels.
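A minimal sketch of this computation is given below; `insurance_cost` is a placeholder for the insurance cost function \(IC(S_{i},P_{i})\) defined in Section 3.

```python
# Sketch of the misclassification cost computation; `insurance_cost` stands in for
# the cost function IC(S_i, P_i) from Section 3, whose exact form is assumed here.
def total_misclassification_cost(y_true, y_pred, stock_values, partners, insurance_cost):
    total = 0.0
    for yt, yp, s, p in zip(y_true, y_pred, stock_values, partners):
        if yp == 1 and yt == 0:      # false positive: parcel insured unnecessarily
            total += insurance_cost(s, p)
        elif yp == 0 and yt == 1:    # false negative: lost parcel was not insured
            total += s
        # true positives and true negatives incur no misclassification cost
    return total
```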
We compare the misclassification costs of the developed models with the Business Rules and with the scenarios where all parcels are insured or no parcels are insured. Table 12 presents the costs, computed on the 10% test set.
The misclassification costs of the current business rules are equal to EUR116.008,13. If all parcels were insured, the total misclassification costs would be significantly higher, i.e. EUR177.299,05. If no insurance were used, the misclassification costs would equal the total stock value of the lost parcels, EUR92.157,57. This confirms that the current business rules are too simple: they insure one third of all parcels above 100 euros, while only 0,25% are actually lost. The RU-RF model shows moderately higher misclassification costs (EUR60.948,05) than the AE-RF model, which results in the lowest misclassification costs of EUR55.140,82. The difference between the AE-RF model and the current situation is EUR60.867,31 on 10% of the data; the expected yearly decrease in costs is therefore EUR608.673,10 if this model were used. We can conclude that using machine learning models for insurance decision-making can reduce operational costs significantly.
### Insights into process
Besides the monetary savings, additional process insights can be obtained from the machine learning models using techniques from XAI, among which one of the most commonly used methods is SHAP (Lundberg & Lee, 2017). SHAP calculates the impact of each feature on the final prediction, which can be used to determine feature importance and explain the model. SHAP can be applied to black-box models such as random forests and neural networks.
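The sketch below illustrates how SHAP can be applied to the fitted random forest; the variable names (`rf_model`, `X_train`, `feature_names`) and the feature name in the dependence-plot example are illustrative, and the exact output format of `shap_values` may vary between library versions.

```python
# Sketch of the SHAP analysis for the fitted RU-RF model; variable names are illustrative.
import shap

explainer = shap.TreeExplainer(rf_model)          # model-specific explainer for tree ensembles
shap_values = explainer.shap_values(X_train)      # one array per class for classifiers
shap.summary_plot(shap_values[1], X_train,        # class 1 = parcel lost (Fig. 8)
                  feature_names=feature_names, max_display=20)
# Dependence plots (Fig. 9) reuse the same values, e.g.:
# shap.dependence_plot("dropoff_delivery", shap_values[1], X_train)
```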
\begin{table}
\begin{tabular}{|c|c|c|} \hline Actual / Predicted & Positive (Class 1) & Negative (Class 0) \\ \hline Positive (lost) & 112 & 105 \\ \hline Negative (not lost) & 10443 & 74830 \\ \hline \end{tabular}
\end{table}
Table 11: Confusion matrix of AE-RF model on the test set.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Model** & **TN** & **FP** & **FN** & **TP** & **Costs** \\ \hline Business rules & 59706 & 25567 & 125 & 92 & EUR116.008,13 \\ Insure nothing & 85273 & 0 & 218 & 0 & EUR92.157,57 \\ Insure all & 0 & 85273 & 0 & 218 & EUR177.299,05 \\ RU-RF & 77797 & 7476 & 111 & 106 & EUR60.948,05 \\ AE-RF & 74830 & 10443 & 105 & 112 & EUR55.140,82 \\ \hline \hline \end{tabular}
\end{table}
Table 12: Total misclassification costs on the 10% test set (85490 instances) of different decision-making models
However, for the best performing model, i.e., AE-RF, we are not aware of any existing techniques from XAI that can be used to interpret the input features, as AE and RF are separately trained on different datasets. More specifically, in AE-RF, an autoencoder adapts the input data, and then the reconstruction errors obtained by the autoencoder serve as inputs to a random forest classifier. Therefore, in the following, we only analyze the interpretability of the developed random forest model, RU-RF.
SHAP values are obtained from the training dataset and used to establish the feature importances of the random undersampled random forest model. Figure 8 shows the obtained summary plot, which visualizes the feature's importance, impact, and original value. The summary plot lists the 20 most important features of the model obtained during the model's training. The figure is vertically ordered on feature importance, decreasing from top to bottom. The horizontal position on the axis illustrates the impact of a feature. High impact, i.e. high SHAP values, push the prediction towards a positive one and thus parcel loss. The color in the plot provides information regarding the original value of the sample. Red means that the value of the sample is relatively high, while blue indicates a relatively low value of the sample. Combining the impact and the original value provides insight into the correlation of a feature with the target variable. Most interesting are the top features, as these provide the most information during prediction. Generally, the impact and values for features are separated well, which means that the SHAP values provide insight into the relation between parcel loss and feature values. However, it should be noted that the values do not serve as causal relationships but only provide insight into the associations between the features and the target variable learned by the model.
Figure 8 shows that higher values for the feature quantity push the binary prediction towards a positive one, representing parcel loss. This suggests that multi-product parcels are associated with parcel loss. Similar associations are observed for the stock value and size dimensions, which suggests that high-valued and relatively large parcels are expected to be more often lost. Another interesting observation relates to the carrier that is used to deliver parcels. Only OwnDN is listed among the 20 most important features, while ExtD1 and ExtD2 are observed to be less important predictors for parcel loss. It should be noted that this does not mean that there is no relationship between these carriers and parcel loss. The outcomes only give insight into the decision making of the model for final classification. Another remarkable observation is that the delivery method is not listed among the 20 most important features.
The model seems to associate various features related to the delivery address with parcel loss. The transfer area with ID = 3985, the city of Brussels, the province of Wallonia, and the delivery depot 'Anderlecht Mail' are associated with more positive parcel loss predictions. On the contrary, parcels delivered by OwnDN are associated with less parcel loss. Furthermore, the delivery moment seems to be taken into account in the prediction model. It can be observed that higher values for the created feature DateDeliveryISOMonth are associated with lost parcels. Domain experts state that this is probably due to the high amount of lost parcels during Christmas and Black Friday. Finally, the features representing product attributes are associated with parcel loss. Mobile phones, tablets, and Apple products are often predicted as lost by the model. It seems therefore that a relatively large amount of lost parcels is related to fraud instead of human errors in the process. It should be noted that some of the samples are centered around zero, indicating that the relationship is less strong if the values are not extreme. Overall this leads to the hypothesis that the model's strength lies in the combination of multiple feature values, which suggests that product and order characteristics are combined for prediction.
The existing literature indicates that shipment methods relate to parcel loss. Parcel lockers were expected to be reliable, while drop-off deliveries at pick-up points were prone to parcel loss. Nevertheless, during data exploration it was observed that the absolute number of deliveries via parcel lockers is low. It is expected that, due to the skewed distribution of these binary features, they are not listed among the 20 most important features and are thus largely ignored by the model. In terms of actions that the company can take to prevent parcel loss, product and customer features cannot be altered, while operational decisions such as the carrier selection and the shipment method can be optimized.
Figure 8: SHAP values for the RU-RF model
Therefore, to improve the business process, we analyze these features within the prediction model. Figure 9 shows the partial dependence plots of the four features related to the shipment method and carrier. A partial dependence plot shows the marginal effect of one feature on the model's predicted outcome; the color additionally indicates the feature it interacts with most. For example, Figure 9c shows solely positive SHAP values (y-axis) for drop-off deliveries with value 0 (x-axis). The value zero of this binary feature indicates that no drop-off is used, which means that the model relates the absence of drop-off deliveries to higher probabilities of parcel loss. On the contrary, the value 1 on the x-axis mostly corresponds to negative SHAP values on the y-axis; hence, drop-off deliveries are related to less parcel loss. Figure 9c also shows that this relationship is strengthened by the binary feature province_brussel: drop-off deliveries in Brussels are seen by the model as more reliable deliveries. The three other figures are analysed in the same manner. The carrier ExtD1 seems to have no clear relationship with the predicted outcome of the model. On the other hand, ExtD2 shows a marginal negative effect; ExtD2 deliveries are associated with normal parcel deliveries, while deliveries with other carriers are more susceptible to parcel loss. Finally, parcel lockers show a marginal positive effect: parcel locker deliveries are expected to be lost more often. These outcomes are very interesting because they conflict with the hypothesis from the literature, which suggested that parcel lockers were reliable and drop-off deliveries were not.
### Validation with domain experts
We discussed the established correlations between the target variable and predictive features with several domain experts at Company X. The stock value is known to be related to parcel loss, as the business rules were fully focused on this feature for insurance decision making. Furthermore, the quantity of products in the parcel is related to the stock value, because more products represent a higher stock value. In addition, domain experts found the relationship between parcel size and parcel loss interesting. Figure 8 shows that either relatively small or large parcels are related to positive parcel loss predictions. It is expected that fraudsters recognize small parcels from Company X as valuable phones, triggering potential fraud. On the other hand, small parcels are more easily lost due to process errors or could be delivered in the mailbox instead of to the door, which could trigger additional loss. The exact relationship between parcel size and parcel loss needs to be further analysed by Company X to determine potential process improvements. Large parcels are more easily recognized, and fraudsters could expect high-valued TVs or computers to be inside, triggering further fraudulent loss. Similarly, the product category is known to be related to parcel loss. The current business rules use the product categories phones, tablets and laptops in their decision making. The model shows that, besides these product categories, brands such as Apple are also related to parcel loss. Domain experts state that the delivery depot and location-based features that showed
associations to parcel loss were suspected but never empirically supported. Depots seem to be very valuable predictors for parcel loss and can help to improve business processes. Parcel loss mainly occurred in Brussels. Loss prevention specialists at Company X were familiar with this observed problem but could not explain these differences. It is expected that Brussels is prone to fraudulent behavior. The feature Month provided insight into the relationship between time and parcel loss. It was observed that the later months in the year showed relatively more parcel loss. However, as only one year of data is used in this study, it is hard to generalize this outcome. Domain experts state that the observed relation is probably due to the total amount of parcels delivered: during Christmas and Black Friday, higher volumes cause more process failures and less oversight of the processes, which explains the larger amount of observed parcel loss. Finally, it was observed that parcel lockers were susceptible to parcel loss, while drop-off deliveries show the opposite relationship. Domain experts state that drop-off deliveries are more reliable due to the extra person involved and the proof of delivery used upon drop-off and pick-up by customers. Moreover, parcel lockers were known to be susceptible to fraud in 2020, as this delivery method had only recently been introduced at that time. Domain experts expect that in the near future parcel lockers will reduce the amount of parcel loss.
Figure 9: Dependence plots of four binary features.
### Managerial implications
Looking at both the misclassification costs and the interpretability, we conclude that the most feasible model is the random undersampled random forest model (RU-RF) of DBSL, which is expected to save EUR550.600,80 yearly for Company X.
Our results confirm that the current business rules of Company X on insurance are outdated and that the use of predictive models could save a significant amount of costs. Based on our results, we recommend Company X to implement the random undersampled random forest model (RU-RF) to move insurance decision-making towards a more data-driven method. As shown in our case study, this implementation will decrease the costs related to parcel loss. To further improve the business operations, we suggest that process engineers analyze the predictions of the model. We have demonstrated that valuable insights into the drivers of parcel loss can be gained from the predictions. For instance, our analysis shows that the quantity, size, and stock value are expected to be positively related to parcel loss, whereas drop-off deliveries at pick-up points and deliveries with ExtD2 are expected to be negatively associated with parcel loss. Moreover, parcel lockers are currently likely to be prone to parcel loss. Finally, several product- and delivery address-related features are expected to affect the amount of parcel loss.
In addition, including the expected costs of parcel loss in the carrier selection process can improve efficiency and reduce the costs related to parcel loss. For instance, the carrier OwnDN was observed to decrease the amount of parcel loss. Instead of using insurance on 'high-risk' parcels, the effect of changing the carrier of such parcels to OwnDN should be explored. We expect that this will reduce the amount of parcel loss and increase customer satisfaction. Furthermore, our results suggest Company X should encourage drop-off deliveries, which are expected to decrease parcel loss. On the contrary, parcel locker deliveries should be avoided. Finally, Company X is recommended to further analyse parcel packaging and labeling, as our models show that stock value, quantity, and size are important factors of parcel loss. Fraudsters are expected to recognize high-valued parcels by their appearance, which increases parcel loss.
## 8 Conclusion
In this paper, we present a first study on using machine learning to predict parcel losses in last-mile delivery. We conducted a case study with Company X, one of the largest online retailers in the Netherlands. Our results show that all proposed machine learning methods are able to predict parcel losses much better than the conventional decision rules currently used by Company X. In particular, the proposed deep hybrid model (DHEL) gives the best performance among all tested machine learning approaches, but lacks interpretability. On the other hand, the proposed sampling with supervised learning method (DBSL), especially random undersampling with random forests, is able to generate useful insights to improve the operational decisions for Company X.
We want to emphasize that while this study utilizes data from Company X specifically, we believe that the developed Machine Learning methods hold potential for broad applicability across various e-commerce retailers. This is based on the shared characteristics of delivery data within this domain, notably the presence of substantial datasets characterized by imbalanced class distribution.
|
2305.03907 | Listen to Look into the Future: Audio-Visual Egocentric Gaze
Anticipation | Egocentric gaze anticipation serves as a key building block for the emerging
capability of Augmented Reality. Notably, gaze behavior is driven by both
visual cues and audio signals during daily activities. Motivated by this
observation, we introduce the first model that leverages both the video and
audio modalities for egocentric gaze anticipation. Specifically, we propose a
Contrastive Spatial-Temporal Separable (CSTS) fusion approach that adopts two
modules to separately capture audio-visual correlations in spatial and temporal
dimensions, and applies a contrastive loss on the re-weighted audio-visual
features from fusion modules for representation learning. We conduct extensive
ablation studies and thorough analysis using two egocentric video datasets:
Ego4D and Aria, to validate our model design. We demonstrate the audio improves
the performance by +2.5% and +2.4% on the two datasets. Our model also
outperforms the prior state-of-the-art methods by at least +1.9% and +1.6%.
Moreover, we provide visualizations to show the gaze anticipation results and
provide additional insights into audio-visual representation learning. The code
and data split are available on our website
(https://bolinlai.github.io/CSTS-EgoGazeAnticipation/). | Bolin Lai, Fiona Ryan, Wenqi Jia, Miao Liu, James M. Rehg | 2023-05-06T02:53:13Z | http://arxiv.org/abs/2305.03907v3 | # Listen to Look into the Future: Audio-Visual Egocentric Gaze Anticipation
###### Abstract
Egocentric gaze anticipation serves as a key building block for the emerging capability of Augmented Reality. Notably, gaze behavior is driven by both visual cues and audio signals during daily activities. Motivated by this observation, we introduce the first model that leverages both the video and audio modalities for egocentric gaze anticipation. Specifically, we propose a Contrastive Spatial-Temporal Separable (CSTS) fusion approach that adopts two modules to separately capture audio-visual correlations in spatial and temporal dimensions, and applies a contrastive loss on the re-weighted audio-visual features from fusion modules for representation learning. We conduct extensive ablation studies and thorough analysis using two egocentric video datasets: Ego4D and Aria, to validate our model design. We also demonstrate improvements over prior state-of-the-art methods. Moreover, we provide visualizations to show the gaze anticipation results and provide additional insights into audio-visual representation learning. More details can be found in our website ([https://bolinlai.github.io/CSTS-EgoGazeAnticipation](https://bolinlai.github.io/CSTS-EgoGazeAnticipation)).
## 1 Introduction
A person's eye movements during their daily activities are reflective of their intentions and goals (see [23] for a representative cognitive science study). The ability to predict the future gaze targets of the camera-wearer from egocentric video, known as _egocentric gaze anticipation_, is therefore a key step towards modeling and understanding cognitive processes and decision making. Furthermore, this capability could enable new applications in Augmented Reality and Wearable Computing, especially in social scenarios - for example, providing memory aids for patients with cognitive impairments. Attention anticipation could make it possible to reduce the latency of content delivery in such AR systems. However, forecasting the gaze fixations of a camera-wearer using only the egocentric view (i.e. without eye tracking at testing time) is very challenging due to the complexity of egocentric scene content and the dynamic nature of gaze behaviors.
We argue that audio signals can serve as an important auxiliary cue for egocentric gaze forecasting. This is illustrated in Fig. 1. During the input sequence which is given to the gaze forecasting model, the camera view shifts from the paper the subject is holding to the standing speaker, who asks a question, which the sitting speaker on the far right begins to answer.
Figure 1: Egocentric gaze anticipation problem setting. \(\tau_{o}\) denotes the observation time, and \(\tau_{a}\) denotes the anticipation time. Given the video frames and audio of the Input Video Sequence, we aim to predict the gaze fixation distribution for the time steps in the Gaze Anticipation Sequence. Green dots indicate the gaze targets in future frames and the heatmap shows the gaze anticipation result from our model.
In the anticipation clip, the camera wearer's gaze shifts towards the sitting person's head. The influence of audio signals on eye movements is also evidenced by neuroscience research [56]. Therefore, we address the problem of forecasting the gaze fixation of the camera-wearer in unseen future frames using a short egocentric video clip and corresponding audio. As shown in Fig. 1, the model's ability to fuse the audio and visual cues enables it to correctly predict the future attention to the seated subject.
Though several works have addressed egocentric gaze estimation [29, 30, 28, 41, 42, 60, 40], the problem of egocentric gaze _anticipation_ is largely understudied [67]. Moreover, no prior works on egocentric gaze modeling have explored leveraging the audio modality for egocentric gaze behavior modeling. Intuitively, in the spatial dimension, the visual region (_e.g_. sound source) that has a stronger correlation with audio signals is more likely to be the potential future gaze target. In the temporal dimension, events in the audio signal may drive both egocentric visual capture (via head movement) and gaze movements as the camera wearer responds to new sounds. Prior approaches for audio-visual representation learning [27, 26, 53, 7, 10, 1, 59, 6, 39, 48] directly feed visual and audio embeddings into one multi-modal fusion layer, and may run into the pitfall of capturing spurious correlations between audio and visual tokens. In contrast, we propose to separately model the spatial correlation and temporal correlation between the visual and audio representations for gaze anticipation.
Formally, we propose a novel **C**ontrastive **S**patial-**T**emporal **S**eparable (**CSTS**) audio-visual fusion method for egocentric gaze anticipation. Specifically, we input the egocentric video frames and the corresponding audio spectrograms into a video encoder and an audio encoder respectively. Then we leverage self-attention to develop a spatial fusion module and a temporal fusion module in parallel for modeling the spatial and temporal audio-visual correlation separately, strategically highlighting important audio-visual correlations. The output representations from the two branches are merged by channel-wise reweighting and fed into a visual decoder to predict the future gaze target. We use a multi-modal contrastive loss [1] on the reweighted representations from the fusion modules to facilitate audio-visual correspondence learning. We demonstrate the benefits of our approach on two egocentric video datasets that capture social scenarios and everyday activities: Ego4D [21] and Aria [34]. Our proposed model achieves state-of-the-art performance on egocentric gaze anticipation. Our contributions are summarized as follows:
* We introduce the first computational model that utilizes both visual and audio signals for modeling egocentric gaze behaviors.
* We propose a novel contrastive spatio-temporal separable fusion strategy to facilitate audio-visual representation learning for both egocentric gaze anticipation and estimation (see Supplement).
* We present comprehensive experimental results on the two egocentric video datasets: Ego4D [21] and Aria [34]. We first validate our model design through ablation studies, and then show our model outperforms prior works by at least 1.6% and 1.4%, in F1 score, respectively on the egocentric gaze anticipation task.
## 2 Related Work
Our work relates to three main streams of research: egocentric gaze modeling, sound source localization in video, and contrastive audio-visual representation learning. It is worth noting that no existing egocentric gaze modeling studies leverage audio signals. In contrast, our method draws inspiration from audio-visual sound source localization and representation learning research to propose a multimodal approach for egocentric gaze anticipation.
**Egocentric Gaze Modeling**. Modeling human gaze behavior in egocentric videos is an important topic in egocentric vision. Most prior efforts target egocentric gaze estimation [42, 40, 29, 30, 41, 28]. Huang _et al_. [29] propose learning temporal attention transitions from video features that reflect drastic gaze movements. Li _et al_. [42] and Huang _et al_. [28] utilize the correlation of gaze behaviors and actions, modeling them jointly with a convolutional network. Lai _et al_. [40] encode global scene context into a single global token and explicitly model the global-local correlations in the visual embedding for gaze estimation. In contrast, egocentric gaze anticipation, which seeks to predict future gaze targets from past video frames, addresses an understudied dimension of modeling gaze. Zhang _et al_. [67] introduce this task and utilize a convolutional network and a discriminator to generate future video frames, which are further used to anticipate future gaze targets. They enhance their model by adding an additional branch for gaze forecasting [66]. All previous efforts on both egocentric gaze estimation and anticipation model gaze behavior from only the visual properties of the video stream, and do not consider the relationship between audio signals and gaze behavior. In this work, we introduce the first model that leverages both visual and audio signals for egocentric gaze anticipation.
**Sound Source Localization in Video**. Several works have addressed spatially and temporally localizing sources of sound in video. Early works use statistical correlation between low level features of the audio and visual signals to determine visual sound source regions [24, 16, 37], expanding to more complex features that correlate motion of visual objects with audio [8, 31]. More recently, deep learning approaches are used to correlate audio features with visual appearance features, often employing a contrastive learning framework to align audio and visual features from the same source [7, 57, 51, 62, 17, 18]. Chen _et al_. [10] improve upon
this paradigm with a hard-negative sample mining mechanism, and Senocak _et al_. [58] propose a method for mining hard-positives from semantically similar features from different sources. Hu _et al_. find improvements from clustering audiovisual entities [25] and using class-aware methods [26]. Recently, Hu _et al_. address separating and localizing sound sources in mixtures [27]. Tsiami _et al_. consider the related problem of audiovisual saliency estimation, augmenting sound source localization with a visual saliency module [63]. A more specific, but well-studied variant of sound source localization is recognizing active speakers in a scene [11, 55, 9, 22, 54, 61, 38, 47, 65, 3, 4, 5]. Most recently, Jiang _et al_. propose an architecture for localizing speakers in egocentric video. Though gaze targets may relate to the sound sources, our goal is to anticipate the gaze behavior; unlike these works, we do not explicitly supervise the localization of sound sources and instead use the correlations between the video and audio to facilitate future gaze target prediction. Additionally, our model is trained and evaluated on naturalistic egocentric data with rapidly changing gaze targets and diverse sound sources, demanding a more careful design for multimodal fusion.
**Contrastive Audiovisual Representation Learning.** Our work draws from a rich literature on leveraging contrastive learning to learn audiovisual feature representations [6, 39, 50, 48, 49, 52, 46, 2, 1]. These works learn correspondences between audio and visual signals in a self-supervised manner, constructing positive pairs from matching video frames and audio segments, and negative pairs from all other pairwise combinations. We employ a similar contrastive loss to learn correspondences between co-occurring audio and visual features. However, while prior methods calculate the contrastive loss on the raw embedding from each modality, we propose to apply the contrastive loss on re-weighted audio and visual representations from our proposed spatial and temporal fusion mechanism.
## 3 Method
The egocentric gaze anticipation problem is illustrated in Fig. 1. Given egocentric video and audio from time \(t-\tau_{o}\) to \(t\), the goal is to predict the future gaze from \(t\) to \(t+\tau_{a}\) seconds. We denote the input video and audio as \(x\) and \(a\), respectively, and model the gaze fixation as a probabilistic distribution on a 2D image plane (following [42, 40]).
Notably, video and audio signals have correlations in both spatial and temporal dimensions: Spatially, the audio signal will have the strongest correlation with the regions of the video frames that correspond to the sound source. Temporally, in egocentric video, audio stimuli in the scene drive the wearer's head movement and how they orient the camera's field of view. A simple solution for capturing these correlations in a transformer architecture is to directly feed all video and audio tokens into a fusion layer that models all pairwise comparisons. However, we argue this strategy may capture spurious correlations across tokens and dilute information from the most important correlations. Our key insight is thus to design an audio-visual fusion mechanism that separately models the essential spatial and temporal correlations between the two modalities, and to leverage the contrastive learning to facilitate learning correspondences between the modalities.
Fig. 2 demonstrates the overview of our model.
Figure 2: Overview of the proposed model. The video embeddings \(\phi(x)\) and audio embeddings \(\psi(a)\) are obtained by two transformer-based encoders. We then model the correlations of visual and audio embeddings using two separate branches – (1) spatial fusion, which learns the spatial co-occurrence of audio signals and visual objects in each frame, and (2) temporal fusion, which captures the temporal correlations and possible gaze movement. A contrastive loss is adopted to facilitate audio-visual representation learning. We input the fused embeddings into a decoder for final gaze anticipation results.
We exploit the transformer-based backbone encoders \(\phi(x)\) and \(\psi(a)\) to extract the representations of the video frames \(x\) and audio signals \(a\). We then employ a **C**ontrastive **S**patial-**T**emporal **S**eparable (CSTS) audio-visual fusion approach. Specifically, a spatial fusion module captures the correlation between audio embeddings and spatial appearance-based features; a temporal fusion module captures the temporal correlation between the visual and audio embeddings; and a contrastive learning schema is adopted to facilitate audio-visual representation learning. Finally, spatially and temporally fused audio-visual features are merged and fed into a decoder for future gaze anticipation.
### Audio and Visual Feature Embedding
**Visual Feature Embedding**. We adopt the multi-scale vision transformer (MViT) architecture [14] as the video encoder \(\phi(x)\). \(\phi(x)\) splits the 3D video tensor input into multiple non-overlapping patches, and thereby extracts \(T\times H\times W\) visual tokens with feature dimension \(D\) from \(x\).
**Audio Feature Embedding**. To address the possible misalignment between audio and video streams, we follow [36] to adopt a sliding window approach for audio signal pre-processing. Specifically, for a video frame at time step \(t_{i}\), the corresponding audio segment has a range of \([t_{i}-\frac{1}{2}\Delta t_{w},t_{i}+\frac{1}{2}\Delta t_{w}]\). We then use STFT to convert all audio segments into log-spectrograms and feed the processed audio segments into a transformer-based audio encoder \(\psi(a)\). Since the audio stream carries sparser information than the video stream, we follow [19, 12] and adopt a lightweight transformer architecture for the audio encoder \(\psi(a)\). In this way, \(\psi(a)\) extracts \(T\times M\) tokens with feature dimension \(D\) from the audio inputs \(a\).
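The sliding-window preprocessing can be sketched as follows, assuming a PyTorch/torchaudio implementation; the STFT parameters are chosen only to roughly reproduce the reported \(256\times 256\) spectrogram size and are not taken from the original implementation.

```python
# Sketch of the sliding-window audio preprocessing, assuming torchaudio; the STFT
# parameters are chosen only to roughly reproduce the reported 256x256 spectrograms.
import torch
import torchaudio

SR = 24_000                        # audio resampled to 24 kHz
WIN = 1.28                         # sliding window Delta t_w in seconds
spec = torchaudio.transforms.Spectrogram(
    n_fft=510,                     # 510 // 2 + 1 = 256 frequency bands
    win_length=int(0.010 * SR),    # 10 ms STFT window
    hop_length=int(0.005 * SR),    # 5 ms hop -> roughly 256 time frames per window
    power=2.0)

def frame_audio_segments(waveform, frame_times):
    """waveform: (1, num_samples) mono audio; frame_times: timestamps (s) of sampled frames."""
    half = int(0.5 * WIN * SR)
    segments = []
    for t in frame_times:
        c = int(t * SR)
        seg = waveform[:, max(c - half, 0): c + half]
        seg = torch.nn.functional.pad(seg, (0, 2 * half - seg.shape[-1]))  # pad at clip borders
        segments.append(torch.log(spec(seg) + 1e-6))                       # log-spectrogram
    return torch.stack(segments)   # (T, 1, 256, num_time_frames)
```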
### Spatial-Temporal Separable Fusion
**Spatial Audio-Visual Fusion**. The Spatial Fusion branch identifies correlations between the audio signal corresponding to a video frame and its visual content in space. We first use convolutional operations to generate the spatial fusion audio embedding \(z_{a,s}\) with dimensions \(T\times 1\times D\) from the audio embedding \(\psi(a)\). This allows the model to extract a holistic audio embedding within each sliding window. We then input the visual embedding \(\phi(x)\) and the pooled audio embedding \(z_{a,s}\) into an in-frame self-attention layer \(\sigma\). In this layer, we mask out all cross-frame connections and only calculate the correlations among visual tokens within each frame and the corresponding single audio token.
Therefore, the input to the spatial fusion consists of \(T\) groups of visual tokens, and \(T\) single audio embeddings. Formally, we have:
\[\phi(x)=\left[\phi(x)^{(1)},...,\phi(x)^{(T)}\right], \tag{1}\]
\[z_{a,s}=\left[z_{a,s}^{(1)},...,z_{a,s}^{(T)}\right], \tag{2}\]
where \(\phi(x)^{(i)}\in\mathbb{R}^{1\times N\times D}\), \(z_{a,s}^{(i)}\in\mathbb{R}^{1\times 1\times D}\) with \(i\in\{1,...,T\}\), and \(N=H\times W\). Hence, the input from each time step is denoted as:
\[z_{s}^{(i)}=\left[\phi^{(i)}(x),z_{a,s}^{(i)}\right]\in\mathbb{R}^{1\times(N+ 1)\times D} \tag{3}\]
The in-frame self-attention operation for time step \(i\) can be written as:
\[\sigma(z_{s}^{(i)})=Softmax\left(\mathbf{Q}_{s}^{(i)}\mathbf{K}_{s}^{(i)^{T}}/\sqrt{D }\right)\mathbf{V}_{s}^{(i)}\in\mathbb{R}^{1\times(N+1)\times D}, \tag{4}\]
where \(\mathbf{Q_{s}^{(i)}},\mathbf{K}_{s}^{(i)},\mathbf{V}_{s}^{(i)}\) refer to query, key, and value of the spatial self-attention at time step \(i\), respectively. We apply Eq. 4 independently for each time step \(i\) and have the following overall in-frame self-attention:
\[\sigma(z_{s})=\left[\sigma(z_{s}^{(i)}),...,\sigma(z_{s}^{(T)})\right]\in \mathbb{R}^{T\times(N+1)\times D}. \tag{5}\]
In practice, we input all tokens to the in-frame self-attention layer simultaneously, mask out cross-frame correlations, and calculate Eq. 4 in one shot to speed up training. We further add two linear layers after the self-attention outputs \(\sigma(z_{s})\), following the standard self-attention layer. The output of the spatial module is finally denoted as \(u_{s}\in\mathbb{R}^{T\times(N+1)\times D}\).
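The one-shot computation can be realized with a block-diagonal attention mask; the sketch below, assuming PyTorch and a generic attention layer, is illustrative rather than the original implementation of \(\sigma\).

```python
# Sketch of the one-shot in-frame attention mask (Eqs. 4-5), assuming PyTorch.
# Tokens are ordered frame by frame (N visual tokens + 1 audio token per time step);
# the mask removes every cross-frame connection.
import torch

def in_frame_attention_mask(T, N, device=None):
    """Boolean mask of shape ((N+1)*T, (N+1)*T); True marks pairs that may NOT attend."""
    L = (N + 1) * T
    mask = torch.ones(L, L, dtype=torch.bool, device=device)
    for i in range(T):
        s, e = i * (N + 1), (i + 1) * (N + 1)
        mask[s:e, s:e] = False      # attention is kept only within the same frame
    return mask

# Illustrative use with a generic attention layer (shapes and layer are assumptions):
# attn = torch.nn.MultiheadAttention(embed_dim=D, num_heads=8, batch_first=True)
# out, _ = attn(z_s, z_s, z_s, attn_mask=in_frame_attention_mask(T, N, z_s.device))
```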
**Temporal Audio-Visual Fusion**. The Temporal Fusion branch models relationships between audio and visual content across time. We apply two convolutional layers to integrate the embedding from each modality at each time step into a single token. The resulting visual and audio tokens are denoted as \(z_{v,t}\in\mathbb{R}^{T\times 1\times D}\) and \(z_{a,t}\in\mathbb{R}^{T\times 1\times D}\), respectively. Then we feed \(z_{t}=[z_{v,t},z_{a,t}]\in\mathbb{R}^{2T\times 1\times D}\) into a cross-frame self-attention layer \(\pi\):
\[\pi(z_{t})=Softmax\left(\mathbf{Q}_{t}\mathbf{K}_{t}^{T}/\sqrt{D}\right)\mathbf{V}_{t}\in \mathbb{R}^{2T\times 1\times D}, \tag{6}\]
where \(\mathbf{Q}_{t},\mathbf{K}_{t},\mathbf{V}_{t}\) are query, key and value matrices with dimension \(2T\times 1\times D\). Similar to the spatial fusion, two additional linear layers are added after \(\pi(z_{t})\) and result in the final temporal fusion output \(u_{t}\in\mathbb{R}^{2T\times 1\times D}\).
**Merging of Two Fusion Modules**. After obtaining audio-visual representations from the two fusion modules, we merge the two branches by reweighting the output from spatial fusion with the temporal weights from temporal fusion in each channel to obtain a new representation for each modality that has been refined by multimodal spatial and temporal correlation. Specifically, we break down the output from spatial fusion \(u_{s}\in\mathbb{R}^{T\times(N+1)\times D}\) into \(u_{v,s}\in\mathbb{R}^{T\times N\times D}\) and \(u_{a,s}\in\mathbb{R}^{T\times 1\times D}\), and the output from temporal fusion \(u_{t}\in\mathbb{R}^{2T\times 1\times D}\) into \(u_{v,t}\in\mathbb{R}^{T\times 1\times D}\) and \(u_{a,t}\in\mathbb{R}^{T\times 1\times D}\). The reweighted visual representation is formulated as
\[u_{v}=u_{v,s}\otimes u_{v,t}\in\mathbb{R}^{T\times N\times D}, \tag{7}\]
where \(\otimes\) denotes element-wise multiplication with broadcast mechanism. \(u_{v}\) is then fed into a decoder to generate
final prediction for future gaze target. We follow [40] to add skip connections from the video encoder to the decoder and optimize the network with a KL-Divergence loss \(\mathcal{L}_{kld}\).
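The merge in Eq. 7 reduces to an element-wise product with broadcasting over the spatial token axis; the sketch below, with hypothetical values of \(T\), \(N\) and \(D\), illustrates the shapes involved.

```python
# Shape-level sketch of the channel-wise merge in Eq. 7 (T, N, D values are hypothetical).
import torch

T, N, D = 8, 256, 768
u_v_s = torch.randn(T, N, D)   # spatial-fusion visual tokens
u_v_t = torch.randn(T, 1, D)   # temporal-fusion visual tokens

u_v = u_v_s * u_v_t            # element-wise product, broadcast over the N axis
assert u_v.shape == (T, N, D)  # re-weighted visual representation fed to the decoder
```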
### Contrastive Learning for Audio-Visual Fusion
In addition to using the KL-Divergence loss to supervise gaze anticipation, we propose to leverage the intrinsic alignment of the visual and audio modalities to learn a more robust audio-visual representation using a contrastive learning scheme. Multi-modal contrastive loss has proven effective in self-supervised learning [2, 1]. Rather than calculating the contrastive loss directly on embedded features, we propose to use the reweighted video and audio representations from the spatial and temporal fusion modules. In our experiments, we show this is a more effective representation learning method for egocentric gaze anticipation.
To this end, we reweight the raw audio embedding \(\psi(a)\in\mathbb{R}^{T\times M\times D}\) from the audio encoder by temporal weights \(u_{a,t}\) from the temporal fusion module in a similar way to Eq. 7. We then get the reweighted audio feature as:
\[u_{a}=\psi(a)\otimes u_{a,t}\in\mathbb{R}^{T\times M\times D} \tag{8}\]
We don't use an additional learnable token to aggregate information from other tokens as prior works did [2, 1, 43]. We instead average all tokens of \(u_{v}\) and \(u_{a}\) respectively to obtain the single-vector representations \(u^{\prime}_{v},u^{\prime}_{a}\in\mathbb{R}^{1\times D}\) and then map them to a common space using linear layers followed by L2-norm. It can be formulated as \(w_{v}=Norm\left(f_{1}(u^{\prime}_{v})\right)\) and \(w_{a}=Norm\left(f_{2}(u^{\prime}_{a})\right)\), where \(f_{1}(\cdot),f_{2}(\cdot)\) are linear layers. The resulting visual vector and audio vector are denoted as \(w_{v},w_{a}\in\mathbb{R}^{1\times D^{\prime}}\), where \(D^{\prime}\) is the new dimension of the common space. Within each mini-batch, corresponding audio and visual embeddings are considered as positive pairs, and all other pairwise combinations are considered as negative pairs. Following [43], we calculate video-to-audio loss and audio-to-video loss separately. The video-to-audio contrastive loss is defined as
\[\mathcal{L}_{cntr}^{v2a}=-\frac{1}{|\mathcal{B}|}\sum_{i=1}^{|\mathcal{B}|} \log\frac{\exp({w_{v}^{(i)}}^{T}{w_{a}^{(i)}}/{\mathcal{T}})}{\sum_{j\in \mathcal{B}}\exp({w_{v}^{(i)}}^{T}{w_{a}^{(j)}}/{\mathcal{T}})}, \tag{9}\]
where \(\mathcal{B}\) is the training batch \(\mathcal{B}=\{1,2,\dots,n\}\) and \(\mathcal{T}\) is the temperature factor. Superscripts \((i)\) and \((j)\) denote the \(i\)-th and \(j\)-th samples in the batch. The audio-to-video loss is defined in a symmetric way. Finally, the contrastive loss is defined as \(\mathcal{L}_{cntr}=\mathcal{L}_{cntr}^{v2a}+\mathcal{L}_{cntr}^{a2v}\). \(\mathcal{L}_{kld}\) and \(\mathcal{L}_{cntr}\) are linearly combined with a parameter \(\alpha\) for the final training loss, \(\mathcal{L}=\mathcal{L}_{kld}+\alpha\mathcal{L}_{cntr}\).
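A sketch of the resulting symmetric objective is shown below, assuming PyTorch; the temperature value is illustrative.

```python
# Sketch of the symmetric audio-visual contrastive loss (Eq. 9 and its a2v counterpart),
# assuming PyTorch; the temperature value is illustrative.
import torch
import torch.nn.functional as F

def audio_visual_contrastive_loss(w_v, w_a, temperature=0.07):
    """w_v, w_a: (B, D') L2-normalised projections; matching rows are positive pairs."""
    logits = w_v @ w_a.t() / temperature              # (B, B) pairwise similarities
    targets = torch.arange(w_v.size(0), device=w_v.device)
    loss_v2a = F.cross_entropy(logits, targets)       # video-to-audio direction
    loss_a2v = F.cross_entropy(logits.t(), targets)   # audio-to-video direction
    return loss_v2a + loss_a2v
```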
### Implementation Details
In our experiments, we set observation time \(\tau_{o}\) as \(3\) seconds and anticipation time \(\tau_{a}\) as \(2\) seconds. For video input, we sample 8 frames from the observable segment and resize to a spatial size of \(256\times 256\). For audio input, following [36], we first resample the audio signal to 24kHz and set a time window with \(\Delta t_{w}=1.28s\) to crop the audio segment corresponding to each video frame. We then convert it to a log-spectrogram using a STFT with window size 10ms and hop length 5ms. The number of frequency bands is set as 256 resulting in a spectrogram matrix of size \(256\times 256\). The output of the decoder is the gaze distribution on 8 frames uniformly sampled from the 2-second anticipation time. More details about model architecture and training hyper-parameters can be found in the supplementary.
## 4 Experiments
We first introduce the datasets and evaluation metrics used in our experiments. We then present detailed ablation studies to demonstrate the contribution of each component in our method, and demonstrate the performance improvement over prior SOTA methods for gaze anticipation as well as gaze estimation models applied to the gaze anticipation task. Finally, we visualize the predictions and weights of our model to provide qualitative insight into our method.
### Experiment Setup
**Datasets**. We conduct experiments on two egocentric datasets1 that contain aligned video and audio streams and gaze tracking data - Ego4D [21] and Aria [34]. The Ego4D eye-tracking subset is collected in social settings and contains 31 hours of egocentric video from 80 participants. All videos have a fixed 30 fps frame rate and a spatial resolution of \(1088\times 1080\), and audio streams are recorded with a sampling rate of 44.1kHz. We use the train/test split released in [40] in our experiments, _i.e_., 15310 video segments for training and the other 5202 video segments for testing.
Footnote 1: Note that another widely used gaze estimation benchmark, EGTEA Gaze+ [42], did not release audio data and thus is not suitable for our study.
The Aria dataset contains 143 egocentric videos (totaling 7.5 hours) collected with Project Aria glasses. It covers a variety of everyday activities including cooking, exercising and spending time with friends. All videos have a fixed 30 fps frame rate and a spatial resolution of \(1408\times 1408\). A sliding window is used to trim long videos into 5-second video segments with a stride of 2 seconds. We use 107 videos (10456 segments) for training and 36 videos (2901 segments) for testing. We will release our split to facilitate future studies in this direction.
**Evaluation Metrics.** As suggested in recent work on egocentric gaze estimation [40], AUC score can easily get saturated due to the long-tailed distribution of gaze on 2D video frames. Therefore, we follow [40, 42] to adopt F1 score, recall and precision as our evaluation metrics.
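For reference, a generic way to compute these metrics from a predicted gaze heatmap and a binary ground-truth gaze mask is sketched below (our own simplification; the exact thresholding protocol of [40, 42] may differ).

```python
import numpy as np

def f1_recall_precision(pred_heatmap, gt_mask, threshold=0.5):
    """pred_heatmap: (H, W) predicted gaze probabilities in [0, 1].
    gt_mask: (H, W) binary mask of the ground-truth gaze region."""
    pred = pred_heatmap >= threshold
    gt = gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-8)
    return f1, recall, precision
```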
### Experimental Results
#### 4.2.1 Ablation Study
We first quantify the performance contribution of each key module of our proposed method. Specifically, we denote the model with only the spatial fusion module as _S-fusion_, the model with only the temporal fusion module as _T-fusion_, and the model that uses our proposed spatial-temporal fusion module without the contrastive learning scheme as _ST-fusion_.
As demonstrated in Table 1, compared with models trained solely on RGB frames (Vision only), S-fusion and T-fusion boost the F1 score by \(+1.4\%\) and \(+1.5\%\) on Ego4D and by \(+1.1\%\) and \(+1.1\%\) on Aria. This suggests the benefit of explicitly incorporating the audio signal for modeling gaze behavior. Moreover, the ST-fusion model further achieves an F1 score of 38.9% on Ego4D and 59.0% on Aria. These results support our claim that jointly modeling the spatial and temporal correlations between the video and audio signals plays a vital role for egocentric gaze anticipation. The contrastive loss further improves F1 by +0.5% and +0.9% on Ego4D and Aria, respectively, suggesting its notable contribution to audio-visual representation learning.
We provide additional analysis of our fusion strategy by considering three additional fusion strategies: (1) concatenating audio and visual embeddings channelwise (denoted as _Concat._) as in [36]; (2) feeding all embedded video and audio tokens together into a standard self-attention layer (denoted as _Vanilla SA_), which is inspired by [44]; and (3) a variant of CSTS which connects the two fusion modules sequentially, _i.e_. feeding embeddings to the temporal fusion module first and then feeding its output to the spatial fusion module (denoted as _Seq. ST-fusion_). Further details of the above fusion strategies are provided in the supplementary.
As shown in Table 2, both Concat. and Vanilla SA bring moderate improvement over the vision-only baseline. However, our proposed fusion strategy yields a larger performance boost, even without using the contrastive loss (ST-fusion). We speculate that this is because Concat. and Vanilla SA have limitations in effectively fusing the modalities. Concat. restricts fusion to channel-wise alignment and does not fully capture spatial and temporal correlations between the modalities, while Vanilla SA may dilute important spatial and temporal correlations by considering all pairwise correlations. Our CSTS fusion module instead guides attention-based fusion using intuition about the spatial and temporal alignment between the modalities. Furthermore, our model also consistently surpasses its sequential variant (Seq. ST-fusion) on both datasets. We believe this is because, in the sequential version, the two modalities are already mixed temporally before being passed to the spatial fusion module, which breaks the separability of our design. These results further indicate that our separable spatial and temporal fusion modules learn a better representation by independently considering correlations between the modalities over space and time.
#### 4.2.2 Analysis on Fusion and Contrastive Learning
We also evaluate the benefits of our contrastive learning scheme. Here, we consider another baseline (denoted as _Vanilla Contr_) that calculates the contrastive loss using raw video and audio embeddings (_i.e_. \(\phi(x)\) and \(\psi(a)\)), as is typical in prior work. Our method instead calculates the contrastive loss using the video and audio representations obtained from the multimodal fusion modules, leading to a performance improvement of +0.4% on Ego4D and +1.0% on Aria. This finding suggests that refining the representation for each modality using our spatial and temporal fusion module and reweighting strategy learns more informative representations for contrastive learning.
#### 4.2.3 Comparison with State-of-the-art Methods
Most existing works on egocentric gaze modeling target egocentric gaze estimation rather than anticipation. In order to provide a thorough comparison, in addition to comparing against SOTA egocentric gaze anticipation models (DFG [67], DFG+ [66]), we also adapt the recent SOTA egocentric gaze estimation model GLC [40] and all baselines from [40] (I3D-Res50 [64], MViT [14], GazeMLE [42] and AttnTrans [29]) to the anticipation task. Detailed results are summarized in Table 3.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c}{Ego4D} & \multicolumn{3}{c}{Aria} \\ \cline{2-7} & F1 & Rec. & Prec. & F1 & Rec. & Prec. \\ \hline Vision only & 37.2 & 54.1 & 28.3 & 57.5 & 62.4 & 53.3 \\ S-fusion & 38.6 & 54.1 & 30.1 & 58.6 & **67.1** & 52.0 \\ T-fusion & 38.7 & 53.8 & 30.1 & 58.6 & 65.9 & 52.8 \\ ST-fusion & 38.9 & 54.2 & 30.3 & 59.0 & 66.4 & 53.1 \\ \hline
**CSTS** & **39.4** & **54.9** & **30.7** & **59.9** & 66.8 & **54.3** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Ablations on each key component of our proposed model. The best results are highlighted with **boldface**. See Section 4.2.1 for further discussion.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c}{Ego4D} & \multicolumn{3}{c}{Aria} \\ \cline{2-7} & F1 & Rec. & Prec. & F1 & Rec. & Prec. \\ \hline Vision only & 37.2 & 54.1 & 28.3 & 57.5 & 62.4 & 53.3 \\ Concat. & 38.1 & 53.6 & 29.5 & 58.0 & 66.8 & 51.2 \\ Vanilla SA & 38.5 & 53.3 & 30.1 & 58.0 & 67.2 & 51.0 \\ Seq. ST-fusion & 38.5 & 53.6 & 30.1 & 58.6 & **67.3** & 52.0 \\ ST-fusion & 38.9 & 54.2 & 30.3 & 59.0 & 66.4 & 53.1 \\ Vanilla Contr & 39.0 & 53.7 & 30.6 & 58.9 & 66.5 & 52.9 \\ CSTS & **39.4** & **54.9** & **30.7** & **59.9** & 66.8 & **54.3** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Analysis on proposed fusion strategies and contrastive learning schema. The best results are highlighted with **boldface**. See Section 4.2.2 for further discussion.
Our method outperforms its direct competitor DFG+, which is the previous SOTA model for egocentric gaze anticipation, by +1.9% F1 on Ego4D and +2.3% F1 on Aria. Note that the original DFG and DFG+ used a less powerful backbone encoder, so for fair comparison, we reimplement their method using the same MViT backbone as our method. We also observe that methods originally designed for egocentric gaze estimation still work as strong baselines for the egocentric gaze anticipation task. Our proposed CSTS model also outperforms these methods, surpassing the recent SOTA for egocentric gaze estimation, GLC, by +1.4% F1 on Ego4D and +1.6% F1 on Aria.
In addition, we evaluate gaze anticipation on each anticipation time step independently and compare with previous methods in Fig. 3. Unsurprisingly, the anticipation problem becomes more challenging as the anticipation time step increases farther into the future. Our CSTS method consistently outperforms all baselines at all future time steps. We also note that our model produces new SOTA results on egocentric gaze estimation, demonstrating the generalizability and robustness of our approach across gaze modeling tasks. We include these results in the supplementary.
#### 4.2.4 Visualization of Predictions
We visually showcase the anticipation results of CSTS and the baselines in Fig. 5. We can see that GazeMLE [42] and AttnTrans [29] produce more ambiguous prediction heatmaps. Other methods fail to anticipate the true gaze target, and are likely misled by other salient objects. Our CSTS approach produces the best gaze anticipation results among all methods. We attribute this improvement to our novel model design that effectively leverages both audio and visual cues for forecasting the gaze targets.
Figure 4: Visualization of the spatial correlation weights. All video frames are sorted in a chronological order indexed by the numbers on the top-right corner.
Figure 3: The performance of gaze anticipation in each frame. Our model consistently outperforms all prior methods by a notable margin.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c}{Ego4D} & \multicolumn{3}{c}{Aria} \\ \cline{2-7} & F1 & Rec. & Prec. & F1 & Rec. & Prec. \\ \hline Center Prior & 13.6 & 9.4 & 24.2 & 24.9 & 17.3 & 44.4 \\ GazeMLE [42] & 36.3 & 52.5 & 27.8 & 56.8 & 64.1 & 51.0 \\ AttnTrans [29] & 37.0 & **55.0** & 27.9 & 57.4 & 65.6 & 51.0 \\ I3D-R50 [15] & 36.9 & 52.1 & 28.6 & 57.4 & 63.6 & 52.2 \\ MViT [14] & 37.2 & 54.1 & 28.3 & 57.5 & 62.4 & 53.3 \\ GLC [40] & 37.8 & 52.9 & 29.4 & 58.3 & 65.4 & 52.6 \\ DFG [67] & 37.2 & 53.2 & 28.6 & 57.4 & 63.6 & 52.3 \\ DFG+ [66] & 37.3 & 52.3 & 29.0 & 57.6 & 65.5 & 51.3 \\ CSTS & **39.4** & 54.9 & **30.7** & **59.9** & **66.8** & **54.3** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison with previous state-of-the-art models on egocentric gaze anticipation. We also adapt previous egocentric gaze estimation approaches to the anticipation setting for a more thorough comparison. The best results are highlighted with **boldface**. See Section 4.2.3 for further discussion.
#### 4.2.5 Visualization of Learned Correlations
We provide further insight into our model by visualizing the audio-visual correlation from the spatial fusion module. For each time step \(t\), we calculate the correlation of each visual token with the single audio token and map it back to the input frames. The correlation heatmaps are shown in Fig. 4. In the first example, the speaker in the middle speaks, then turns her head around to talk with a social partner in the background (frames 1-3). We observe that our model captures that the audio signal has the highest correlation with the spatial region of the speaker while she is speaking. Then, when she stops talking and turns her head towards the background speaker, the correlations are highest in the background regions, indicating the potential location of her social partner. The second example illustrates a similar phenomenon; the model captures the speaker at the beginning when she is talking, then attends to background locations when she stops. These examples suggest our model has the capability to model the audio-visual correlations in the spatial dimension and to learn a robust audio-visual representation.
## 5 Conclusion
In this paper, we propose a novel contrastive spatial-temporal separable fusion approach (CSTS) for egocentric
Figure 5: Egocentric gaze anticipation results from our model and other baselines. We show the results of four future time steps uniformly sampled from the anticipation segments. Green dots indicate the ground truth gaze location.
gaze anticipation. Our key contribution is to break down the fusion of the audio and visual modalities into a separate spatial fusion module for learning the spatial co-occurrence of salient visual features and audio signals, and a temporal fusion module for modeling the audio-visual correlations across different time steps. We further adopt a contrastive loss on the reweighted audio-visual representations from the fusion modules to facilitate multimodal representation learning. We demonstrate the benefits of our proposed model design on two egocentric video datasets: Ego4D and Aria. Our work is a key step for probing into the human cognitive process with computational models, and provides important insights into multimodal representation learning, visual forecasting, and egocentric video understanding.
|
2301.00721 | Tori Approximation of Families of Diagonally Invariant Measures | We approximate any portion of any orbit of the full diagonal group $A$ in the
space of unimodular lattices in $\RR^n$ using
a fixed proportion of a compact $A$-orbit.
Using those approximations for the appropriate sequence of orbits, we prove
the existence of
non-ergodic measures which are also weak limits of compactly supported
$A$-invariant measures. In fact, given any countably many $A$-invariant ergodic
measures, our methods show that there exists a sequence of compactly supported
periodic $A$-invariant measures such that the ergodic decomposition of its weak
limit has these measures as factors with positive weight. Using the same
methods, we prove that any compactly supported $A$-invariant and ergodic
measure is the weak limit of the restriction of different compactly supported
periodic measures to a fixed proportion of the time. In addition, for any $c\in
(0,1]$ we find a sequence of compactly supported periodic $A$-invariant
measures that converge weakly to $cm_{X_n}$ where $m_{X_n}$ denotes the Haar
measure on $X_n$. In particular, we prove the existence of partial escape of
mass for compact $A$-orbits. These results give affirmative answers to
questions posed by Shapira in ~\cite{ShapiraEscape}. Our proofs are based on a
modification of Shapira's proof in ~\cite{ShapiraEscape} and on a
generalization of a construction of Cassels, as well as on effective
equidistribution estimates of Hecke neighbors by Clozel, Oh and Ullmo, and a
number theoretic construction of a special number field. | Omri Nisan Solan, Yuval Yifrach | 2023-01-02T15:28:30Z | http://arxiv.org/abs/2301.00721v3 | # Tori Approximation of Families of Diagonally Invariant Measures
###### Abstract
We approximate any portion of any orbit of the full diagonal group \(A\) in the space of unimodular lattices in \(\mathbb{R}^{n}\) using a fixed proportion of a compact \(A\)-orbit. Using those approximations for the appropriate sequence of orbits, we prove the existence of non-ergodic measures which are also weak limits of compactly supported \(A\)-invariant measures. In fact, given any countably many \(A\)-invariant ergodic measures, our methods show that there exists a sequence of compactly supported periodic \(A\)-invariant measures such that the ergodic decomposition of its weak limit has these measures as factors with positive weight. Using the same methods, we prove that any compactly supported \(A\)-invariant and ergodic measure is the weak limit of the restriction of different compactly supported periodic measures to a fixed proportion of the time. In addition, for any \(c\in(0,1]\) we find a sequence of compactly supported periodic \(A\)-invariant measures that converge weakly to \(cm_{X_{n}}\) where \(m_{X_{n}}\) denotes the Haar measure on \(X_{n}\). In particular, we prove the existence of partial escape of mass for compact \(A\)-orbits. These results give affirmative answers to questions posed by Shapira in [17]. Our proofs are based on a modification of Shapira's proof in [17] and on a generalization of a construction of Cassels, as well as on effective equidistribution estimates of Hecke neighbors by Clozel, Oh and Ullmo, and a number theoretic construction of a special number field.
## 1 Introduction
Let \((X_{n},d_{X_{n}})\) denote the metric space of unimodular lattices in \(\mathbb{R}^{n}\) and let \(A<\mathrm{SL}_{n}(\mathbb{R})\) denote the subgroup of diagonal matrices with positive diagonal inside the group of real matrices with determinant \(1\). Among the ergodic \(A\)-invariant probability measures on \(X_{n}\), those which are supported on compact orbits stand out due to their connection to number theory. Indeed, there is a surjective map between the set of full \(\mathbb{Z}\)-modules inside totally real number fields and the set of compact \(A\)-orbits in \(X_{n}\). For any totally real degree \(n\) number field \(K\), let \(\sigma_{i}:K\hookrightarrow\mathbb{R};i=1,\ldots,n\) be some ordering of the natural embeddings of \(K\) and let \(\sigma=(\sigma_{1},\ldots,\sigma_{n}):K\rightarrow\mathbb{R}^{n}\) denote their concatenation. The compact orbit corresponding to a full module \(M\subset K\) is then the \(A\)-orbit of the
unimodular lattice \(\operatorname{cov}(\sigma(M))^{-1/n}\sigma(M)\) where \(\operatorname{cov}(\Lambda)\) denotes the co-volume of a lattice \(\Lambda\). This orbit is indeed compact, and every compact \(A\)-orbit is given in this manner. Denote:
\[\mathcal{M}(X_{n})=\{\text{finite measures on }X_{n}\};\]
\[\mathcal{M}(X_{n})_{e}^{A}=\{\text{ergodic $A$-invariant probability measures on }X_{n}\};\]
\(\mathcal{M}(X_{n})_{c}^{A}=\{\text{ergodic $A$-invariant probability measures on }X_{n}\text{ supported on compact orbits}\}\).
We endow \(\mathcal{M}(X_{n})\) with the weak-\(*\) topology, and study the closure \(\overline{\mathcal{M}(X_{n})_{c}^{A}}\) in it. It is known that, apart from measures of compact orbits, this closure contains the Haar measure \(m_{X_{n}}\) and the zero measure. The Haar measure is obtained by applying ergodicity to the result of Benoist and Oh [2], or more directly using Shapira and Zheng's [18]. The trivial measure is obtained by Shapira [17]. In this paper, we show that \(\overline{\mathcal{M}(X_{n})_{c}^{A}}\) contains several families of natural and important measures, thus answering positively the discussion in [10, §1.5] and answering [10, Conjecture 1.11] in the negative.
### Measures with Predetermined Ergodic Factors
The first family of measures we consider contains non-ergodic measures. Given any finitely many elements \(\mu_{1},\dots,\mu_{k}\in\mathcal{M}(X_{n})_{e}^{A}\), we find a measure \(\mu\in\overline{\mathcal{M}(X_{n})_{c}^{A}}\) such that the ergodic decomposition of \(\mu\) has positive weight on each of the \(\mu_{i}\)'s for \(i=1,\dots,k\). In [17, p.4 Q1, Q2], Shapira asks whether non-ergodic measures and measures with a given periodic \(A\)-invariant ergodic factor can be obtained as weak limits of \(\mathcal{M}(X_{n})_{c}^{A}\) elements. The following theorem shows in particular that this is indeed the case:
**Theorem 1.1**.: _Let \(\mu_{1},\mu_{2},\dots,\mu_{k}\in\mathcal{M}(X_{n})_{e}^{A}\) be a finite sequence of \(A\)-invariant ergodic measures. Then there exists \(\mu\in\overline{\mathcal{M}(X_{n})_{c}^{A}}\) such that the ergodic decomposition of \(\mu\) has positive weights on the \(\mu_{i}\)'s. In particular, for any \(\mu\in\mathcal{M}(X_{n})_{c}^{A}\) there exists \(\tilde{\mu}\in\overline{\mathcal{M}(X_{n})_{c}^{A}\setminus\{\mu\}}\) such that the ergodic decomposition of \(\tilde{\mu}\) contains \(\mu\) with positive weight._
**Remark 1.2** (The weights in the ergodic decomposition).: For a single measure \(\mu_{1}\), we get an explicit lower bound on the weight, namely \(\mu\geq\left(\frac{\lfloor n/2\rfloor}{4n^{2}(n^{2}-1)}\right)^{n-1}\mu_{1}\). More generally, for a finite sequence of measures \(\mu_{1},\dots,\mu_{k}\) we get \(\mu\geq\Theta(k^{1-n})\mu_{i}\).
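For instance, for \(n=3\) this bound reads \(\mu\geq\left(\frac{1}{288}\right)^{2}\mu_{1}\), since \(\lfloor 3/2\rfloor=1\) and \(4\cdot 3^{2}\cdot(3^{2}-1)=288\).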
**Remark 1.3** (Countably many measures).: Using the same techniques one can prove the same result for countably many measures. For a possibly infinite sequence \(\mu_{1},\mu_{2},\dots\) of measures and every sequence of positive constants with \(c_{1}+c_{2}+\dots=1\), we can construct \(\mu\) with \(\mu\geq\Theta(c_{i}^{n-1})\mu_{i}\) for every \(i\).
### Partial Escape of Mass and Entire Mass Approximations
The next measures we will find in \(\overline{\mathcal{M}(X_{n})_{c}^{A}}\) are measures with total mass strictly between \(0\) and \(1\). In his paper [17, p.4, Q2], Shapira proves that the zero measure is contained in \(\overline{\mathcal{M}(X_{n})_{c}^{A}}\) and poses the question of whether measures with total mass strictly between \(0\) and \(1\) also lie there. In [8] David and Shapira find a sequence of orbits with a lower bound on the escape of mass, but do not prove that the limit is nonzero. We answer Shapira's question in the affirmative in two ways. The first is the following corollary of Theorem 1.1:
**Corollary 1.4**.: _For every sequence \(\mu_{1},\mu_{2},\ldots,\mu_{k},\cdots\in\mathcal{M}(X_{n})_{e}^{A}\) there exists \(\mu\in\overline{\mathcal{M}(X_{n})_{c}^{A}}\) such that \(\mu(X_{n})\in(0,1)\) and the ergodic decomposition of \(\mu\) has positive weights on \(\mu_{i}\) for all \(i\)._
The next family of measures we find inside the closure of the compactly supported periodic measures is the family \(\{cm_{X_{n}}:c\in(0,1]\}\) where \(m_{X_{n}}\) is the Haar probability measure on \(X_{n}\). This family of measures is substantially different from the measure families we have approximated so far, since we can describe its entire ergodic decomposition (which is simply \(cm_{X_{n}}\)), rather than only some of the factors with no control over the remaining ones. In [18], Shapira and Zheng show that \(m_{X_{n}}\in\overline{\mathcal{M}(X_{n})_{c}^{A}}\). Here we prove a more general theorem.
**Theorem 1.5**.: _Let \(c\in(0,1]\). Then \(cm_{X_{n}}\in\overline{\mathcal{M}(X_{n})_{c}^{A}}\)._
In particular, the above theorem gives another answer to Shapira's question regarding the partial escape of mass.
**Remark 1.6** (Volume of the compact orbits).: Every compact orbit \(Ax\) has a volume \(\operatorname{Vol}(Ax)=\operatorname{Vol}(\mathbb{R}^{n-1}/\log\operatorname{ stab}_{A}(x))\) and a corresponding order \(R_{x}\). The compact orbits we construct for the proof of Theorem 1.1 have volume \(O(\log\operatorname{Disc}(R_{x}))\). As written, our proof of Theorem 1.5 doesn't provide estimates on the volumes. In Remark 5.13, we argue that similarly to the proof of Theorem 1.5, one can choose the approximating compact orbits \(Ax\) to have volume \(O(\operatorname{Disc}(R_{x})^{c^{\prime}})\) for some \(c^{\prime}>0\).
The regulator estimate in Remark 1.6, which yields a sequence of orbits whose volumes are polynomial in the discriminant of the corresponding order and which exhibit partial escape of mass, contradicts [10, Conjecture 1.11], which states that every such sequence of orbits must converge to an algebraic measure. The spirit of the conjecture is not disproved, and one could hope to correct the conjecture by considering only partial escape of mass. We do not believe this is the case for general orbits. Instead, we provide the following conjecture:
**Conjecture 1.7**.: _Let \(Ax_{i}\subset X_{n}\) be a sequence of compact orbits corresponding to orders \(R_{i}\). Suppose that \(\operatorname{Vol}(Ax_{i})>\operatorname{disc}(Ax_{i})^{\varepsilon}\) for some \(\varepsilon>0\). Here \(\operatorname{disc}(Ax_{i})\) is the discriminant of the orbit
\(Ax_{i}\). It is defined as in [10]. Then \(\lim_{i\to\infty}\mu_{Ax_{i}}\) is a combination of countably many periodic algebraic orbits of groups \(H_{j}\supsetneq A\), with total mass bounded from below as a function of \(\varepsilon\). Moreover, if \(K_{i}=R_{i}\otimes\mathbb{Q}\), and \((\operatorname{disc}(R_{i})/\operatorname{disc}K_{i})<\operatorname{disc}(R_{i })^{\delta}\) for some \(\delta=\delta(\varepsilon)\), then \(\lim_{i\to\infty}\mu_{Ax_{i}}\) is algebraic. We suppose the same will hold for general arithmetic \(G/\Gamma\) with the appropriate generalization of \(\operatorname{disc}(R_{i})/\operatorname{disc}K_{i}\). The reader may compare this discussion to a similar discussion carried out in [10, Corollary 1.7]._
### Method of Proof
We first sketch the proof of Theorem 1.1 in the particular case \(k=1\), where we approximate a single measure \(\mu\). We first show how to do this assuming a conjecture, and then show how to remove this dependence.
**Conjecture 1.8**.: _Let \(K\) be a number field. For every \(I\leq\mathcal{O}_{K}\), we get a measure \(\mu_{I}\) on a compact orbit associated to \(I\) (See Definition 2.6). The measure \(\mu_{I}\) depends only on the class of \(I\) in the class group \(\operatorname{Cl}_{\mathcal{O}_{K}}\). It is believed that_
\[\mu_{K}:=\frac{1}{\#\operatorname{Cl}_{\mathcal{O}_{K}}}\sum_{I\in \operatorname{Cl}_{\mathcal{O}_{K}}}\mu_{I}\xrightarrow{\operatorname{disc}( K)\to\infty}m_{X_{n}}, \tag{1.1}\]
_where \(m_{X_{n}}\) is the Haar measure on \(X_{n}\), and \(\operatorname{disc}(K)\) is the discriminant of \(K\). Moreover, we expect that \(d(\mu_{K},m_{X_{n}})=O(\operatorname{disc}(K)^{-\star})\), for \(\star>0\)._
Another ingredient of the proof is a construction of a number field with very small units compared to its discriminant. A construction of Shapira [17] following Cassels [4] gives a series of number fields \(K\) whose unit groups are generated by elements \(u\in\mathcal{O}_{K}^{\times}\) such that \(\log|u|=O(\log\operatorname{disc}(K))\) (which is exceptionally small compared to the bound given by the class number formula: \(\log|u|=O(\operatorname{disc}(K)^{1/(d-1)})\), provided that the unit lattice is sufficiently balanced). Applying Conjecture 1.8 to these number fields, we get that the different measures \(\mu_{I}\) for \(I\in\operatorname{Cl}_{\mathcal{O}_{K}}\) are equidistributed. Using this equidistribution we can approximate a generic point \(x\in X_{n}\) of our desired ergodic measure \(\mu\); namely, there is \(I\in\operatorname{Cl}_{\mathcal{O}_{K}}\) such that \(d(x,x^{\prime})=O(\operatorname{disc}^{-\star}(K))\) for some \(x^{\prime}\in\operatorname{supp}\mu_{I}\). Hence for a ball of radius \(r=\Theta(\log(\operatorname{disc}(K)))\) in \(A\) and every \(a\in B_{A}(r)\), the point \(ax\) is close to \(ax^{\prime}\in\operatorname{supp}(\mu_{I})\). On the other hand, the orbit \(Ax^{\prime}\) is not much larger: its periods are controlled by the size of the units, which are again \(O(\log(\operatorname{disc}(K)))\). This would imply that we can approximate every ball in an \(A\)-orbit with measures \(\mu_{I}\) for \(I\) in the class group of the special number field \(K\).
To overcome the need to use Conjecture 1.8, we prove a weaker form on a special collection of orders. We take a measure \(\mu_{I}\) as above and apply to it a \(p\)-Hecke operator \(T_{p}\). On the one hand, Hecke operators are known to have well-behaved equidistribution properties (See Clozel, Oh and Ullmo [5] for an effective equidistribution result on Hecke operators and [1, 2] for other results using
equidistribution of Hecke operators applied to closed orbits). We expect to have \(d(T_{p}\mu_{I},m_{X_{n}})=O(p^{-\star})\). On the other hand, the resulting measure \(T_{p}\mu_{I}\) is the average of measures on compact orbits, corresponding to modules of the sub-order \(R=\mathbb{Z}+p\mathcal{O}_{K}\subset\mathcal{O}_{K}\) of index \([\mathcal{O}_{K}:R]=p^{n-1}\).
While this guarantees that the orbits are equidistributed, to bound the sizes of the orbits we tweak the construction of \(K\): not only should \(\mathcal{O}_{K}\) have units of logarithmic size, but so should \(R\). This guarantees that \(T_{p}\mu_{I}\) is an average of measures of logarithmically small compact orbits, and enables the proof to work without Conjecture 1.8.
**Remark 1.9**.: Conjecture 1.8 seems very complicated. It was proven in case \(n=2\) in [9]. A non-effective version of it is proven for \(n=3\) under some splitting restriction in [11]. The proof uses a deep and nontrivial number theoretic result, namely, the subconvexity estimate of \(L\) functions of cubic fields.
The proof of Theorem 1.5 goes along similar lines. We start with a compact orbit \(Ax_{0}\), constructed using the techniques of [17]. As in [17], this orbit will lie in the cusp for a density-\(1\) proportion of its lifetime. To achieve a measure with partial escape of mass, we apply two Hecke operators. The first Hecke operator is responsible for pulling an appropriate proportion of the orbit out of the cusp. The second operator is used to get equidistribution from the portion of the orbit outside of the cusp (this can be done with one operator, see Remark 5.13). However, the result is not a single \(A\)-orbit. Instead, we prove by ergodicity that at least one of the \(A\)-orbits constituting the image of the orbit under the Hecke operators has to be almost equidistributed as well.
**Remark 1.10** (Comparison to [2] and [18]).: Plugging \(c=1\) into Theorem 1.5 yields a construction identical to the one in [2] (with extra steps which are irrelevant for the \(c=1\) case), namely the Hecke operator applied to a compact orbit. We obtain its equidistribution and then use ergodicity to find one compact orbit in its image which equidistributes. The construction in [18] is of the same kind; however, they show a way to sample a specific compact orbit that equidistributes, without using ergodicity in the same fashion.
### Further Research
The first natural improvement of Theorem 1.1 is the full approximation.
**Open Question 1.11**.: _Let \(\mu\in\mathcal{M}(X_{n})_{c}^{A}\). Can it be the limit measure of other measures in \(\mathcal{M}(X_{n})_{c}^{A}\)? In other words, is \(\mu\in\overline{\mathcal{M}(X_{n})_{c}^{A}\setminus\{\mu\}}\)?_
It can be seen that simply improving the bounds we give in this paper cannot give a positive answer to this question, for the following reason. Suppose \(n=3\), \(Ay\) is a compact orbit and \(Ax\) is an orbit. We can consider the set \(B=\{a\in A:d_{X_{3}}(ax,Ay)<\delta\}\). The connected components of \(B\)
are roughly hexagons. It can be seen that two such hexagons \(H_{1},H_{2}\) cannot have \(R\) long parts of the boundaries which are \(\delta R\) close to one another, for some \(\delta>0\) and all \(R>0\) sufficiently large.
Focusing on a different aspect of this work, we would like to find even less trivial limits of large \(A\)-orbits in the following sense:
**Open Question 1.12**.: _What are the possible limits \(\lim_{i\to\infty}\mu_{Ax_{i}}\) where \(Ax_{i}\) is a compact orbit corresponding to an order \(R_{i}\) satisfying that \(\operatorname{reg}(R_{i})>\operatorname{disc}(R_{i})^{\varepsilon}\)? Can it be a non-ergodic \(A\)-invariant probability measure? Can it contain as an ergodic component a compact orbit uniform measure \(\mu_{Ay}\)?_
Another possible direction for improvement is to work in different homogeneous spaces. Many questions can be asked, and here we choose to focus on a single homogeneous space:
**Open Question 1.13**.: _Let \(B\) be an \(n^{2}\)-dimensional central simple algebra over \(\mathbb{Q}\) which splits over \(\mathbb{R}\). Let \(\mathcal{O}_{B}\) be an order in \(B\). Let \(\Gamma=\operatorname{SL}_{1}(\mathcal{O}_{B})\subseteq\operatorname{SL}_{n}( \mathbb{R})\) a lattice. There are many compact orbits in \(\operatorname{SL}_{n}(\mathbb{R})/\Gamma\), coming from degree-\(n\) totally real number fields \(K\subset B\). What are the possible limits of these compact orbits? Can Haar measure be obtained? Can you extend Theorem 1.1 to this setting? Can you compute an explicit non-Haar limit measure (in contrast to Theorem 1.1 where we have a control only over a small portion of the measure)?_
### Acknowledgments
The second author would like to express his deep gratitude to Uri Shapira for his support and encouragement. Without them, this paper would not exist. We thank Uri Shapira for bringing the main questions answered in this paper into our attention and for many intriguing discussions with him which contributed a lot to this paper. Moreover, the first author thanks Andreas Wieser and Elon Lindenstrauss for many fruitful discussions. The second author acknowledges the support of ISF grants number 871/17. This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No. 754475). This work is part of the first author's Ph.D. thesis.
## 2 Notation and Preliminaries
**Definition 2.1** (\(O\)-notations).: _For two real functions \(f,g\) on a set \(A\) we write \(f\ll g\) if there exists a constant \(C\) independent of the parameters of \(f\) and \(g\) such that \(|f|\leq Cg\) on \(A\). The notation \(O(g)\) will refer to some implicit function \(f\) which satisfies \(f\ll g\). The notation \(\Theta(g)\) will refer to some implicit function \(f\) which satisfies \(g\ll f\ll g\). Whenever \(r\) is a parameter going to \(0\) or \(\infty\), the notation \(o_{r}(g)\) will refer to some implicit function \(f\) which satisfies \(f\ll g\cdot h\), for some implicit function \(h\to 0\) as \(r\) goes to \(0\) or \(\infty\) respectively. In this case, we sometimes write \(f=o_{r}(g)\). Whenever \(r\) is a parameter going to either \(\infty\) or \(0\) and
Fix \(\|\cdot\|\) to denote the supremum norm on \(\mathbb{R}^{n}\). Given a lattice \(\Lambda\subset\mathbb{R}^{n}\) we use \(\operatorname{cov}(\Lambda)\) to denote the co-volume of \(\Lambda\). Let \(X_{n}\) denote the space of unimodular lattices in \(\mathbb{R}^{n}\) and let \(d(\cdot,\cdot)\) denote the metric on \(X_{n}\) coming from the operator norm on linear operators on \((\mathbb{R}^{n},\|\cdot\|)\). Let \(\mathbb{R}^{n-1}_{0}=\{v\in\mathbb{R}^{n}:\sum_{i}v_{i}=0\}\), and we abuse notation and define \(\exp=\exp\circ\operatorname{diag}:\mathbb{R}^{n-1}_{0}\to A\) to be the standard parametrization. We denote by \(m_{X_{n}}\) the probability measure on \(X_{n}=\operatorname{SL}_{n}(\mathbb{R})/\operatorname{SL}_{n}(\mathbb{Z})\) coming from the Haar measure on \(\operatorname{SL}_{n}(\mathbb{R})\). We now present formally the relation between compact \(A\)-orbits in \(X_{n}\) and subgroups of number fields. Denote by \(L,U\subset\operatorname{SL}_{n}(\mathbb{R})\) the subgroups of lower and upper triangular matrices with diagonal \(1\).
**Definition 2.2** (Space of Measures).: _Let \(\mathcal{M}(X_{n})\) denote the space of finite measures on \(X_{n}\), endowed with the topology in which \(\mu_{k}\to\mu\) if and only if \(\mu_{k}(f)\to\mu(f)\) for every \(f\in C_{c}(X_{n})\). We define a metric on \(\mathcal{M}(X_{n})\) which induces this topology by letting, for any \(\mu_{1},\mu_{2}\in\mathcal{M}(X_{n})\):_

\[d(\mu_{1},\mu_{2})=\sup_{\epsilon>0}\sup\left\{\epsilon\left|\int f\,d\mu_{1}-\int f\,d\mu_{2}\right|:f:X_{n}\to\mathbb{R}\text{ is $1$-Lipschitz and supported on }\mathcal{K}_{\epsilon}\right\} \tag{2.1}\]

_where \(\mathcal{K}_{\epsilon}=\{x\in X_{n}:\lambda_{1}(x)>\epsilon\}\)._
The following definition is relevant to §4.
**Definition 2.3** (Special subgroups of \(\operatorname{SL}_{n}(\mathbb{R})\)).: _Let_
\[w_{0}=\left(\frac{n+1-2i}{2}\right)_{i=1}^{n}\in\mathbb{R}^{n-1}_{0},\qquad a _{t}=\exp(tw_{0})\text{ for all }t\in\mathbb{R}, \tag{2.2}\]
_and note that_
\[L=\{g\in\operatorname{SL}_{n}(\mathbb{R}):a_{t}ga_{-t}\xrightarrow{t\to\infty} I\},\quad U=\{g\in\operatorname{SL}_{n}(\mathbb{R}):a_{-t}ga_{t}\xrightarrow{t\to\infty} I\},\]
_that is, \(L,U\) are the contracting and expanding horospheres with respect to \(a_{t}\)._
**Definition 2.4**.: _For every degree-\(n\) totally real number field \(K\), denote by \(\operatorname{Lat}_{K}\) the set of free \(\mathbb{Z}\)-modules of rank \(n\) in \(K\), where we identify two lattices \(\Lambda_{1},\Lambda_{2}\subset K\) if \(\Lambda_{1}=k\Lambda_{2}\) for some \(k\in K^{\times}\). For every rank-\(n\) \(\mathbb{Z}\)-module \(\Lambda\subseteq K\), consider the lattice \(x_{\Lambda}:=\operatorname{cov}(\sigma(\Lambda))^{-1/n}\sigma(\Lambda)\in X_{n}\), where \(\sigma_{i}:K\hookrightarrow\mathbb{R};i=1,\ldots,n\) is some ordering of the natural embeddings of \(K\) and \(\sigma=(\sigma_{1},\ldots,\sigma_{n}):K\to\mathbb{R}^{n}\) denotes their concatenation. Denote \(\mathcal{O}_{\Lambda}=\{k\in K:k\Lambda\subseteq\Lambda\}\); this is a ring. Denote \(\mathcal{O}_{\Lambda}^{\times,>0}=\{u\in\mathcal{O}_{\Lambda}^{\times}:\sigma_{i}(u)>0,\ i=1,\ldots,n\}\). For every \(U\subseteq\mathcal{O}_{K}^{\times,>0}\) denote \(A_{U}=\{\operatorname{diag}(\sigma_{1}(u),\sigma_{2}(u),...,\sigma_{n}(u)):u\in U\}\)._
**Theorem 2.5**.: _For every totally real number field \(K\) of degree \(n\), every \([\Lambda]\in\operatorname{Lat}_{K}\), and every choice of the ordering of the real embeddings of \(K\), the orbit \(Ax_{\Lambda}\) is compact and independent of the representative \(\Lambda\subset K\) of \([\Lambda]\in\operatorname{Lat}_{K}\), and this is a one-to-one parametrization of all compact \(A\)-orbits in \(X_{n}\)._
This theorem is equivalent to [16], apart from the part about this parametrization being one-to-one, which is folklore and which we do not use in this paper.
**Definition 2.6**.: _For every point \(x\in X_{n}\) such that the orbit \(Ax\) is compact, denote by \(\mu_{Ax}\) the \(A\)-invariant probability measure on \(Ax\). For every \([\Lambda]\in\operatorname{Lat}_{K}\) denote \(\mu_{\Lambda}=\mu_{Ax_{\Lambda}}\)._
## 3 Effective Approximations
In this section, we prove an effective approximation theorem of points in \(X_{n}\) by points of compact orbits. Formally, the goal is to prove the following lemma.
**Lemma 3.1**.: _For every compact set \(K\subset X_{n}\) there exists a sequence \(t_{k}\xrightarrow{k\to\infty}\infty\) such that for every \(y\in K\) there exists a sequence \((y_{k})_{k}\subset X_{n}\) with \(d(y_{k},y)<\exp\left(-\frac{\lfloor n/2\rfloor}{4n(n^{2}-1)}t_{k}\right)\) such that \(y_{k}\) is stabilized by a subgroup \(\exp(\Lambda_{k})\subseteq A\), where \(\Lambda_{k}\subseteq\mathbb{R}_{0}^{n-1}\) is generated by vectors \((v_{k}^{(i)})_{i=1}^{n-1}\) such that \(\left|\left(v_{k}^{(i)}\right)_{j}-(1-\delta_{ij}n)t_{k}\right|=O(1)\)._
**Remark 3.2**.: Lemma 3.1 implies in particular that \(Ay_{k}\) is compact with total volume \(n^{n-3/2}t_{k}^{n-1}+O(t_{k}^{n-2})\) in the \(\mathbb{R}_{0}^{n-1}\) parametrization, and in addition that for every \(v\in\mathbb{R}_{0}^{n-1}\) and for \(y,y_{k}\) as in the lemma, we have:
\[d((\exp v).y_{k},(\exp v).y)\leq O\left(\exp\left(\max_{1\leq i,j\leq n}|v_{i} -v_{j}|-\frac{\lfloor n/2\rfloor}{4n(n^{2}-1)}t_{k}\right)\right). \tag{3.1}\]
Since the set \(B_{0}=\{v\in\mathbb{R}_{0}^{n-1}:\max_{1\leq i,j\leq n}|v_{i}-v_{j}|\leq 1\}\) has volume \(\operatorname{vol}(B_{0})=\frac{1}{\sqrt{n}}\), we deduce that a portion of \(\left(\frac{1}{n}\frac{\lfloor n/2\rfloor}{4n(n^{2}-1)}\right)^{n-1}+O(1/t_{k})\) of the orbit \(Ay_{k}\) approximates the orbit \(Ay\).
The proof of Lemma 3.1 will be composed of two components. The first is an approximation of lattices via Hecke neighbors. We will define Hecke neighbors in Definition 3.4 and obtain a good approximation by them in Lemma 3.6. The second component generates infinitely many points in \(X_{n}\) with compact \(A\)-orbits, with a control on the geometry of their stabilizers in \(A\) and the stabilizers of their sublattices of a given index. This gives us the arsenal of points on which we apply Lemma 3.6 to deduce Lemma 3.1.
### Hecke Density
The source of our good approximation comes from the quantitative version of the equidistribution of Hecke neighbors. There are many references to the equidistribution of Hecke neighbors such as [5, Theorem 1.1], [13, Theorem 3.7], [12, Theorem 1.2]. In this section we will cite the result of Laurent Clozel, Hee Oh and Emmanuel Ullmo [5] and deduce the approximation result we need.
**Definition 3.3** (Function space).: _Let \(L^{2}_{0}(X_{n})=\{f\in L^{2}(X_{n},m_{X_{n}}):\int_{X_{n}}fdm_{X_{n}}=0\}\) be the Hilbert space of \(L^{2}\) functions on \(X_{n}\) with zero mean._
**Definition 3.4** (Definition of \(p\)-Hecke neighbors and Hecke operator).: _For every sequence of integers \(k_{1}\leq k_{2}\leq...\leq k_{n}\) consider:_
\[a=\frac{1}{p^{(k_{1}+\cdots+k_{n})/n}}\operatorname{diag}(p^{k_{1}},p^{k_{2}}, \ldots,p^{k_{n}})\in\operatorname{SL}_{n}(\mathbb{R}).\]
_For every \(x=g\operatorname{SL}_{n}(\mathbb{Z})\in X_{n}\) denote \(T_{a}(x)=g\operatorname{SL}_{n}(\mathbb{Z})a\operatorname{SL}_{n}(\mathbb{Z})\). This set is finite since \(a\operatorname{SL}_{n}(\mathbb{Z})a^{-1}\) is commensurable to \(\operatorname{SL}_{n}(\mathbb{Z})\). The size \(\#T_{a}(x)=\#(\operatorname{SL}_{n}(\mathbb{Z})a\operatorname{SL}_{n}(\mathbb{Z})/\operatorname{SL}_{n}(\mathbb{Z}))\) depends only on \(k_{1},...,k_{n}\) and not on \(x\). Equivalently,_
\[T_{a}(x)=\left\{\frac{1}{\sqrt[n]{\text{cov}(x^{\prime})}}x^{\prime}:x^{\prime }\subseteq x\text{ with }x/x^{\prime}\cong\mathbb{Z}/p^{k_{1}}\mathbb{Z}\oplus\cdots\oplus \mathbb{Z}/p^{k_{n}}\mathbb{Z}\right\}.\]
_For every function \(f\in L^{2}(X_{n})\) define the Hecke action on functions_
\[T_{a}^{\mathrm{F}}(f)(x)=\frac{1}{\#T_{a}(x)}\sum_{x^{\prime}\in T_{a}(x)}f(x ^{\prime})\in L^{2}_{0}(X_{n}).\]
_For every point \(x\in X_{n}\), define the Hecke action on measures_
\[T_{a}^{\mathrm{M}}(x):=\frac{1}{\#T_{a}(x)}\sum_{x^{\prime}\in T_{a}(x)}\delta _{x^{\prime}}\]
_and for every measure \(\mu\) on \(X_{n}\) define_
\[T_{a}^{\mathrm{M}}(\mu):=\int_{X_{n}}T_{a}^{\mathrm{M}}(x)d\mu(x).\]
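To make the definition concrete, the following Python sketch (ours; the helper names are hypothetical) enumerates the \(p\)-Hecke neighbors of a lattice for the simplest choice \(k_{1}=\cdots=k_{n-1}=0\), \(k_{n}=1\), i.e. the index-\(p\) sublattices rescaled to covolume \(1\), and averages a test function over them as in \(T_{a}^{\mathrm{F}}\).

```python
import itertools
import numpy as np

def hecke_neighbors_basis(p, n):
    """Basis matrices (columns = basis vectors) of the index-p sublattices of
    Z^n, enumerated via column Hermite normal form; these are the p-Hecke
    neighbours for k = (0, ..., 0, 1)."""
    for j in range(n):                      # position of the diagonal entry p
        free = n - 1 - j                    # free entries in row j to the right
        for extra in itertools.product(range(p), repeat=free):
            B = np.eye(n, dtype=int)
            B[j, j] = p
            B[j, j + 1:] = extra
            yield B

def hecke_average(f, x_basis, p):
    """Average a test function f over the rescaled p-Hecke neighbours of the
    lattice spanned by the columns of x_basis (a sketch of T_a^F)."""
    n = x_basis.shape[0]
    vals = [f((x_basis @ B) / p ** (1.0 / n)) for B in hecke_neighbors_basis(p, n)]
    return sum(vals) / len(vals)

# sanity check: the number of neighbours is (p^n - 1)/(p - 1)
assert sum(1 for _ in hecke_neighbors_basis(3, 3)) == (3**3 - 1) // (3 - 1)
```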
The following theorem is a particular case of [5, Theorem 1.1], specialized for \(\operatorname{SL}_{n}\) as in [5, Example 5.1].
**Theorem 3.5**.: _For every prime \(p\) and \(k_{1}\leq k_{2}\leq...\leq k_{n},a\in\operatorname{SL}_{n}(\mathbb{R})\) as in Definition 3.4, the operator norm of \(T_{a}^{\mathrm{F}}\big{|}_{L^{2}_{0}(X_{n})}\) is bounded by:_
\[\left\|T_{a}^{\mathrm{F}}\big{|}_{L^{2}_{0}(X_{n})}\right\|\leq\prod_{i\leq n /2}\frac{1}{p^{(k_{n+1-i}-k_{i})/2}}\frac{(k_{n+1-i}-k_{i})(p-1)+(p+1)}{p+1} \leq p^{-\frac{1}{2}\sum_{i=1}^{n/2}(k_{n+1-i}-k_{i})}\cdot C(k_{1},...,k_{n}),\]
_where \(C(k_{1},...,k_{n})\) depends polynomially on \(k_{1},...,k_{n}\)._
**Lemma 3.6**.: _For every compact subset \(K\subset X_{n}\) there exists \(C=C(K)>0\) such that for any
\(x,y\in K\) and \(p,k_{1}\leq k_{2}\leq...\leq k_{n},a\in\mathrm{SL}_{n}(\mathbb{R})\) as in Definition 3.4, there exists a Hecke neighbor \(x^{\prime}\in T_{a}(x)\) such that:_
\[d\left(x^{\prime},y\right)\leq C(K)\left\|\left.T_{a}^{\mathrm{F}}\right|_{L_{0 }^{2}(X_{n})}\right\|^{1/(n^{2}-1)}, \tag{3.2}\]
_where \(C(K)>0\) is a constant depending only on the compact set \(K\) (and on \(n\))._
Proof.: Recall the right invariant Riemannian metric \(d_{\mathrm{SL}_{n}(\mathbb{R})}\) on \(\mathrm{SL}_{n}(\mathbb{R})\), and its descent to \(X_{n}\), the metric \(d_{X_{n}}\). Let \(r_{0}<\min(\mathrm{inj}(x),\mathrm{inj}(y))\), where \(\mathrm{inj}(x)\) is the injectivity radius of \(x\), that is, the maximal radius \(r\) such that the translation map \(g\mapsto gx\) is injective on \(B_{\mathrm{SL}_{n}(\mathbb{R})}(I;r)\), itself being the radius \(r\) ball around the identity in \(\mathrm{SL}_{n}(\mathbb{R})\). Let \(f_{x}=\chi_{B_{X_{n}}(x;r_{0})}\) be the indicator of the radius \(r_{0}\) ball around \(x\). Then \(\int_{X_{n}}f_{x}dm_{X_{n}}=\mathrm{vol}(B_{\mathrm{SL}_{n}(\mathbb{R})}(I;r_{0}))=\Theta(r_{0}^{n^{2}-1})\), and a similar equality holds for \(f_{y}=\chi_{B_{X_{n}}(y;r_{0})}\). Denote \(v_{r_{0}}=\mathrm{vol}(B_{\mathrm{SL}_{n}(\mathbb{R})}(I;r_{0}))\). Then \(\tilde{f}_{x}=f_{x}-v_{r_{0}}\in L_{0}^{2}(X_{n})\) and \(\tilde{f}_{y}=f_{y}-v_{r_{0}}\in L_{0}^{2}(X_{n})\) have norm \(\|\tilde{f}_{x}\|^{2}=\|\tilde{f}_{y}\|^{2}=v_{r_{0}}(1-v_{r_{0}})^{2}+(1-v_{r_{0}})v_{r_{0}}^{2}=(1-v_{r_{0}})v_{r_{0}}\). Consider \(\left\langle\tilde{f}_{x},T_{a}^{\mathrm{F}}(\tilde{f}_{y})\right\rangle\). On the one hand, it is at most \(\left\|\left.T_{a}^{\mathrm{F}}\right|_{L_{0}^{2}(X_{n})}\right\|\|\tilde{f}_{x}\|\|\tilde{f}_{y}\|=\left\|\left.T_{a}^{\mathrm{F}}\right|_{L_{0}^{2}(X_{n})}\right\|(1-v_{r_{0}})v_{r_{0}}\). On the other hand, consider the set:
\[T_{a}^{-1}(B_{X_{n}}(y,r_{0}))=\{x^{\prime}\in X_{n}:T_{a}(x^{\prime})\cap B_ {X_{n}}(y,r_{0})\neq\emptyset\}.\]
and note that \(T_{a}^{\mathrm{F}}(\tilde{f}_{y})|_{(T_{a}^{-1}(B_{X_{n}}(y,r_{0})))^{c}}\equiv-v_{r_{0}}\). Assume that \(T_{a}^{-1}(B_{X_{n}}(y,r_{0}))\cap B_{X_{n}}(x,r_{0})=\emptyset\), and get that \(T_{a}^{\mathrm{F}}(\tilde{f}_{y})|_{B_{X_{n}}(x,r_{0})}\equiv-v_{r_{0}}\). It follows that:
\[\left\langle\tilde{f}_{x},T_{a}^{\mathrm{F}}(\tilde{f}_{y})\right\rangle =\int_{X_{n}}\tilde{f}_{x}T_{a}^{\mathrm{F}}(\tilde{f}_{y})dm_{X_ {n}}=\int_{X_{n}}f_{x}T_{a}^{\mathrm{F}}(\tilde{f}_{y})dm_{X_{n}}=\int_{B_{X_{ n}}(x,r_{0})}T_{a}^{\mathrm{F}}(\tilde{f}_{y})dm_{X_{n}}\] \[=v_{r_{0}}\cdot(-v_{r_{0}})=-v_{r_{0}}^{2}.\]
We deduce that
\[v_{r_{0}}^{2}\leq\left\|\left.T_{a}^{\mathrm{F}}\right|_{L_{0}^{2}(X_{n})} \right\|\cdot\|\tilde{f}_{x}\|\cdot\|\tilde{f}_{y}\|=\left\|\left.T_{a}^{ \mathrm{F}}\right|_{L_{0}^{2}(X_{n})}\right\|(1-v_{r_{0}})v_{r_{0}},\]
and hence \(v_{r_{0}}\leq\left\|\left.T_{a}^{\mathrm{F}}\right|_{L_{0}^{2}(X_{n})}\right\|\). Using this logic in reverse, we deduce that if \(v_{r_{0}}>\left\|\left.T_{a}^{\mathrm{F}}\right|_{L_{0}^{2}(X_{n})}\right\|\) then \(T_{a}^{-1}(B_{X_{n}}(y,r_{0}))\cap B_{X_{n}}(x,r_{0})\neq\emptyset\), that is, there exist \(x^{\prime}=g_{0}x,y^{\prime}=g_{1}y\) such that \(y^{\prime}\in T_{a}(x^{\prime})\) and \(g_{0},g_{1}\in B_{\mathrm{SL}_{n}(\mathbb{R})}(I,r_{0})\). Thus \(g_{0}^{-1}y^{\prime}=g_{0}^{-1}g_{1}y\in g_{0}^{-1}T_{a}(x^{\prime})=T_{a}(x)\). On the other hand, \(g_{0}^{-1}y^{\prime}=g_{0}^{-1}g_{1}y\) satisfies \(d_{X_{n}}(g_{0}^{-1}g_{1}y,y)\leq 2r_{0}\).
Altogether, we have proved that if \(v_{r_{0}}>\left\|\left.T_{a}^{\mathrm{F}}\right|_{L_{0}^{2}(X_{n})}\right\|\) and \(r_{0}\leq\min(\mathrm{inj}(x),\mathrm{inj}(y))\) then there exists \(x^{\prime\prime}=g_{0}^{-1}g_{1}y\) with \(x^{\prime\prime}\in T_{a}(x)\) and \(d_{X_{n}}(x^{\prime\prime},y)\leq 2r_{0}\). Now, let \(K\subset X_{n}\) be a compact set. Denote the minimum of the injectivity radius on \(K\) by \(r_{K}>0\). If \(v_{r_{K}}>\left\|\left.T_{a}^{\mathrm{F}}\right|_{L_{0}^{2}(X_{n})}\right\|\), then we can find \(r_{0}=\Theta(\left\|\left.T_{a}^{\mathrm{F}}\right|_{L_{0}^{2}(X_{n})}\right\|^{1/(n^{2}-1)})\) with \(r_{0}<r_{K}\) and \(v_{r_{0}}>\left\|\left.T_{a}^{\mathrm{F}}\right|_{L_{0}^{2}(X_{n})}\right\|\), and the desired follows. If \(v_{r_{K}}\leq\left\|\left.T_{a}^{\mathrm{F}}\right|_{L_{0}^{2}(X_{n})}\right\|\), then the desired follows for \(C(K)=\frac{\mathrm{diam}(K)}{(v_{r_{K}})^{1/(n^{2}-1)}}\).
**Remark 3.7** (Composition of Hecke operators).: The composition of Hecke operators is a linear combination of different Hecke operators. This follows from the double-coset description, as the product of two finite double-cosets is a finite union of double-cosets. We mention explicitly the case of the operators \(a,a^{\prime}\) defined with \((-k,0,0,\ldots,0),(-l,0,0,\ldots,0)\) respectively, where \(0\leq k\leq l\). The composition is
\[T_{a}^{\mathrm{F}}\circ T_{a^{\prime}}^{\mathrm{F}}=T_{a^{\prime}}^{\mathrm{F} }\circ T_{a}^{\mathrm{F}}=\frac{p^{n-1}(p-1)}{p^{n}-1}T_{a_{k+l}}^{\mathrm{F}} +\sum_{i=1}^{l-1}\frac{p^{n-1}-1}{p^{n}-1}\cdot\frac{p-1}{p^{i}}\cdot T_{a_{k+l -i,i}}^{\mathrm{F}}+\frac{p^{n-1}-1}{p^{n}-1}\cdot\frac{1}{p^{l-1}}T_{a_{k,l}}^ {\mathrm{F}}\]
where \(a_{k^{\prime},l^{\prime}}\) corresponds to \(-k^{\prime}\leq-l^{\prime}\leq 0\leq 0\leq\cdots\leq 0\). A similar equality holds for \(T^{\mathrm{M}}\).
### Construction of special number field
**Theorem 3.8**.: _For every prime number \(p\equiv 1\mod 2n\) sufficiently big as a function of \(n\), one can find a totally real number field \(K\) with the following properties:_
1. _The unit group_ \(\mathcal{O}_{K}^{\times}\) _contains units_ \(u_{1},...,u_{n}\) _with_ \(u_{1}u_{2}\cdots u_{n}=1\)_._
2. _One can order the real embeddings_ \(\sigma_{1},...,\sigma_{n}:K\to\mathbb{R}\) _such that_ \(\sigma_{i}(u_{j})>0\) _for all_ \(1\leq i,j\leq n\)_, and_ \[\log\sigma_{i}(u_{j})=\begin{cases}-2n(n-1)\log p+O(1),&\text{if }i=j,\\ 2n\log p+O(1),&\text{if }i\neq j.\end{cases}\] (3.3)
3. _The units_ \(u_{i}\) _lie in the ring_ \(\mathbb{Z}+p\mathcal{O}_{K}\)_._
Proof.: Since \(p\equiv 1\mod 2n\), the polynomial \(x^{n}+1\) has \(n\) different solutions mod \(p\). By Hensel's Lemma (see [7]), it has \(n\) solutions mod \(p^{n}\); call them \(a_{1}^{\prime},\ldots,a_{n}^{\prime}\in\{1,...,p^{n}-1\}\). Let \(a_{i}=a_{i}^{\prime}+2(i-1)p^{n}\) for \(i=1,\ldots,n\), and note that \(p^{n}+1\leq a_{i+1}-a_{i}\leq 3p^{n}\) for all \(1\leq i\leq n-1\). Consider now the polynomial \(R(x)=\frac{(px-a_{1})(px-a_{2})\cdots(px-a_{n})-1}{p^{n}}\). It has integer coefficients since by assumption, \(\sum_{i_{1}<...<i_{k}}a_{i_{1}}\cdots a_{i_{k}}=0\mod p^{n}\) for all \(k=1,...,n-1\) and \(a_{1}a_{2}\cdots a_{n}=(-1)^{n}\mod p^{n}\).
To control the real embeddings of the resulting number field, we will approximate the real roots of \(R\).
By Taylor's theorem (see [14, §20.3]) applied at \(a_{i_{0}}/p\) we have
\[R(x)=R(a_{i_{0}}/p)+R^{\prime}(a_{i_{0}}/p)(x-a_{i_{0}}/p)+\frac{1}{2}R^{ \prime\prime}(x^{\prime})(x-a_{i_{0}}/p)^{2},\]
for some \(x^{\prime}\in[a_{i_{0}}/p,x]\) (this interval notation does not assume \(a_{i_{0}}/p\leq x\)). Note that
\[R(a_{i_{0}}/p)=-\frac{1}{p^{n}},\qquad R^{\prime}(a_{i_{0}}/p)=\prod_{j\neq i _{0}}(a_{i_{0}}/p-a_{j}/p)=(-1)^{n-i_{0}}\alpha_{i_{0}}p^{n^{2}-2n+1},\]
for some \(\alpha_{i_{0}}\geq 1\), with \(\alpha_{i_{0}}=\Theta(1)\), and for all \(x^{\prime}\in[-p^{n-1},2np^{n-1}]\) we have
\[|R^{\prime\prime}(x^{\prime})|=2\left|\sum_{1\leq k<l\leq n}\prod_{j\neq k,l}(x ^{\prime}-a_{j}/p)\right|=O(p^{n^{2}-3n+2}).\]
That implies that for all \(x\in[-p^{n-1},2np^{n-1}]\) we have
\[R(x)=-\frac{1}{p^{n}}+(-1)^{n-i_{0}}\alpha_{i_{0}}p^{n^{2}-2n+1}(x-a_{i_{0}}/p )+O(p^{n^{2}-3n+2})(x-a_{i_{0}}/p)^{2}, \tag{3.4}\]
Choose \(x^{\prime}_{i_{0}}\) with \(x^{\prime}_{i_{0}}-a_{i_{0}}/p=2\frac{(-1)^{n-i_{0}}}{p^{n^{2}-n+1}\alpha_{i_ {0}}}\). Note that \(|x^{\prime}_{i_{0}}-a_{i_{0}}/p|<1\), and hence \(x^{\prime}_{i_{0}}\in[-p^{n-1},2np^{n-1}]\). Then
\[R(x^{\prime}_{i_{0}})=-\frac{1}{p^{n}}+(-1)^{n-i_{0}}\alpha_{i_{0}}p^{n^{2}-2n +1}(x^{\prime}_{i_{0}}-a_{i_{0}}/p)+O(p^{-n^{2}-n})=\frac{1}{p^{n}}+O(p^{-n^{2 }-n}).\]
If \(p\) is sufficiently large as a function of \(n\) then \(R(x^{\prime}_{i_{0}})>0\). Consequently, by the intermediate value theorem, there is \(x_{i_{0}}\in[a_{i_{0}}/p,x^{\prime}_{i_{0}}]\) with \(R(x_{i_{0}})=0\). By Eq. (3.4) we deduce that:
\[0=R(x_{i_{0}})=-\frac{1}{p^{n}}+(-1)^{n-i_{0}}\alpha_{i_{0}}p^{n^{2}-2n+1}(x_{ i_{0}}-a_{i_{0}}/p)+O(p^{n^{2}-3n+2})(x_{i_{0}}-a_{i_{0}}/p)^{2}. \tag{3.5}\]
This implies:
\[\frac{1}{p^{n}} =(x_{i_{0}}-a_{i_{0}}/p)\left((-1)^{n-i_{0}}\alpha_{i_{0}}p^{n^{2 }-2n+1}+O(p^{n^{2}-3n+2})(x_{i_{0}}-a_{i_{0}}/p)\right)\] \[=(x_{i_{0}}-a_{i_{0}}/p)(-1)^{n-i_{0}}\alpha_{i_{0}}p^{n^{2}-2n+1 }\left(1+O\left(p^{-n^{2}}\right)\right),\]
and hence,
\[x_{i_{0}}=a_{i_{0}}/p+\frac{(-1)^{n-i_{0}}}{p^{n^{2}-n+1}\alpha_{i_{0}}}+O \left(\frac{1}{p^{2n^{2}-n+1}}\right).\]
Since \(|a_{i_{0}}/p-x_{i_{0}}|<1\) we deduce that all \(x_{i}\) are different, and hence these are the \(n\) distinct roots of \(R\). Since \(|a_{i}-a_{j}|\geq p^{n}\) for all \(i\neq j\) it follows that \(|a_{i}/p-x_{j}|\geq p^{n-1}-1>1\) for \(i\neq j\). It follows that \(R=\prod_{i=1}^{n}(x-x_{i})\) is irreducible: if \(R_{0}=\prod_{i\in I}(x-x_{i})\) is an integer polynomial with \(\emptyset\neq I\subsetneq\{1,\ldots,n\}\) and \(i_{0}\notin I\), then in the ring \(\mathbb{Z}[x]/(R_{0})\) the element \(px-a_{i_{0}}\) is a unit, but its norm in \(\mathbb{Z}[x]/(R_{0})\) is \(\prod_{i\in I}(px_{i}-a_{i_{0}})\), which is greater than \(1\) in absolute value.
The number field \(K=\mathbb{Q}[\alpha]\) where \(R(\alpha)=0\) has units \(u^{\prime}_{i}=p\alpha-a_{i}\). Let \(u_{i}=(u^{\prime}_{i})^{2}\). The real embeddings of \(K\) are \(\sigma_{i}:\alpha\mapsto x_{i}\), and they satisfy \(\log\sigma_{i}(u_{j})=2\log|px_{i}-a_{j}|\). Now Eq. (3.3) follows from the properties of the \(x_{i}\).
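The construction in this proof can be checked numerically for small parameters. The following sympy-based sketch (ours, purely illustrative; function names are hypothetical) lifts the roots of \(x^{n}+1\) from mod \(p\) to mod \(p^{n}\) by Hensel/Newton steps and forms the integer polynomial \(R\); the size estimates of the theorem, of course, only become meaningful for large \(p\).

```python
from sympy import Poly, symbols, ZZ

def roots_of_xn_plus_1(n, p):
    """Roots of x^n + 1 modulo p^n, by brute force mod p followed by Hensel
    lifting (assumes p ≡ 1 (mod 2n), so that n distinct roots exist mod p)."""
    roots = [a for a in range(p) if pow(a, n, p) == p - 1]
    assert len(roots) == n
    lifted = []
    for r in roots:
        for k in range(1, n):                        # lift to mod p^2, ..., p^n
            mod = p ** (k + 1)
            f_val = (pow(r, n, mod) + 1) % mod
            df_val = (n * pow(r, n - 1, mod)) % mod
            r = (r - f_val * pow(df_val, -1, mod)) % mod   # Newton/Hensel step
        lifted.append(r)
    return sorted(lifted)

def cassels_polynomial(n, p):
    """R(x) = ((px - a_1)...(px - a_n) - 1) / p^n, with a_i as in the proof;
    all coefficients are integers."""
    x = symbols('x')
    a = [r + 2 * i * p ** n for i, r in enumerate(roots_of_xn_plus_1(n, p))]
    prod = Poly(1, x, domain=ZZ)
    for ai in a:
        prod *= Poly(p * x - ai, x, domain=ZZ)
    coeffs = (prod - 1).all_coeffs()
    assert all(c % p ** n == 0 for c in coeffs)      # integrality check
    return Poly([c // p ** n for c in coeffs], x, domain=ZZ)

# example: n = 2, p = 5 (p ≡ 1 mod 4) gives R(x) = x^2 - 15x + 19
print(cassels_polynomial(2, 5))
```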
### Application of Hecke operator to compact orbits
In this section, we introduce the following lemma, which describes how to verify that a Hecke operator splits a compact orbit into sufficiently many compact diagonal orbits.
**Lemma 3.9**.: _Let \(K\) be a totally real number field of degree \(n\), let \(x_{\mathcal{O}_{K}}\in X_{n}\) the point with compact orbit corresponding to the \(\mathbb{Z}\)-module \(\mathcal{O}_{K}\) and let \(p,k_{1}\leq\cdots\leq k_{n},a\in\mathrm{SL}_{n}(\mathbb{R})\) as in Definition 3.4. Let \(U\subseteq\mathcal{O}_{K}^{\times,>0}\) be a subgroup and \(A_{U}\subset A\) the corresponding diagonal subgroup as in Definition 2.4. If \(U\subseteq\mathbb{Z}+p^{k}\mathcal{O}_{K}\) and \(0\leq k_{1}\leq k_{n}\leq k\) then \(A_{U}\subseteq\mathrm{stab}_{A}(x^{\prime})\) for every \(x^{\prime}\in T_{a}(x_{\mathcal{O}_{K}})\)._
Proof.: Let \(u\in U\). Since \(u\in\mathbb{Z}+p^{k}\mathcal{O}_{K}\), there is \(m\in\mathbb{Z}\) such that \(u\equiv m\mod p^{k}\). Thus for every \(\bar{b}\in\mathcal{O}_{K}/(p^{k})\) we have \(u\bar{b}=m\bar{b}\), that is, the multiplication by \(u\) action on \(\mathcal{O}_{K}/(p^{k})\) is in fact a multiplication by a scalar. Hence, the element \(a_{u}=\mathrm{diag}(\sigma_{1}(u),\sigma_{2}(u),\ldots,\sigma_{n}(u))\in A_{U}\) preserves \(x_{\mathcal{O}_{K}}\) and \(p^{k}x_{\mathcal{O}_{K}}\), and acts on the quotient by multiplication by the scalar \(m\). This implies that \(a_{u}\) acts on the set of mid-groups \(\{\Lambda:p^{k}x_{\mathcal{O}_{K}}\subseteq\Lambda\subseteq x_{\mathcal{O}_{ K}}\}\) trivially. Indeed, there is an isomorphism
\[\{\Lambda:p^{k}x_{\mathcal{O}_{K}}\subseteq\Lambda\subseteq x_{\mathcal{O}_{ K}}\}\cong\{\bar{\Lambda}\subseteq x_{\mathcal{O}_{K}}/p^{k}x_{\mathcal{O}_{K}}\},\]
and \(u\) acts on the right-hand side as multiplication by a scalar, which preserves all subgroups. This implies that \(a_{u}\) preserves all Hecke neighbors, by the second description of \(T_{a}\) (see Definition 3.4).
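As a small numerical illustration of the scalar-action argument (ours, not from the paper), take \(K=\mathbb{Q}(\sqrt{2})\), \(p=3\), \(k=1\), and the unit \(u=(1+\sqrt{2})^{4}=17+12\sqrt{2}\in\mathbb{Z}+3\mathcal{O}_{K}\): its multiplication matrix on the \(\mathbb{Z}\)-basis \(\{1,\sqrt{2}\}\) is a scalar matrix mod \(3\), so it fixes every subgroup of \(\mathcal{O}_{K}/(3)\) and hence every \(3\)-Hecke neighbor.

```python
import numpy as np

def mult_matrix(a, b):
    """Matrix of multiplication by a + b*sqrt(2) on the Z-basis {1, sqrt(2)}."""
    return np.array([[a, 2 * b],
                     [b, a]])

p = 3
u = mult_matrix(17, 12)   # u = (1 + sqrt(2))^4 = 17 + 12*sqrt(2), a unit in Z + 3*Z[sqrt(2)]
print(u % p)              # [[2 0], [0 2]] -- multiplication by the scalar 2 mod 3
```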
**Remark 3.10**.: Lemma 3.9 remains true if we replace \(x_{\mathcal{O}_{K}}\) by \(x_{\Lambda}\) for some \(\Lambda\in\mathrm{Lat}_{K}\) and \(U\) by a subgroup of \(\mathcal{O}_{\Lambda}\) satisfying \(U\subseteq\mathbb{Z}+p^{k}\mathcal{O}_{\Lambda}\). The proof is similar.
**Corollary 3.11**.: _There is a compact subset \(C\subset X_{n}\) such that for any prime \(p\equiv 1\mod 2n\) large enough as a function of \(n\), there exists \(x_{p}\in C\), and a lattice \(\Lambda\subset\mathbb{R}_{0}^{n-1}\) such that,_
1. _For every Hecke operator_ \(T_{a}\) _with_ \(0\leq k_{1}\leq\cdots\leq k_{n}\leq 1\) _and for every element_ \(y\in T_{a}(x_{p})\) _we have that_ \(\exp(\Lambda)\subseteq A\) _stabilizes_ \(y\)_._
2. _The lattice_ \(\Lambda\) _is generated by vectors_ \((v^{(i)})_{i=1}^{n-1}\) _satisfying_ \(\left|\left(v^{(i)}\right)_{j}-(2n-2n^{2}\delta_{ij})\log p\right|=O(1)\)_._
Proof.: Fix \(p\equiv 1\mod 2n\) sufficiently big for Theorem 3.8. Let \(K\) be the number field constructed in Theorem 3.8, and \(u_{1},\ldots,u_{n}\) the corresponding units. Let \((\sigma_{i})_{i=1}^{n}\) be the real embeddings of \(K\). Let \(\Lambda\) be the subgroup of \(\mathbb{R}_{0}^{n-1}\) generated by \((\log\sigma_{i}(u_{j}))_{i=1}^{n}\) for \(j=1,\ldots,n-1\). The bounds on \(u_{j}\) in Theorem 3.8 imply Part 2. Consider the point \(x_{\mathcal{O}_{K}}\in X_{n}\). It follows from [19, Proposition A.1] that there is a compact set \(C\subseteq X_{n}\) which intersects every \(A\)-orbit. Hence there is \(x_{p}=mx_{\mathcal{O}_{K}}\in C\) for some \(m\in A\). Since the \(\mathrm{SL}_{n}(\mathbb{R})\) action on \(X_{n}\) commutes with \(T_{a}\), to prove Part 1 it is sufficient to prove that \(\exp(\Lambda)\subseteq A\) stabilizes \(y\) for every \(y\in T_{a}(x_{\mathcal{O}_{K}})\). This follows from Lemma 3.9 and Property 3 of the units \(u_{1},\ldots,u_{n}\).
### Proof of Lemma 3.1
The points we construct are indexed by primes \(p\equiv 1\mod 2n\). Fix such a prime \(p\). Denote \(t_{p}^{\prime}=2n\log p\) and let \(x_{p}\) and \(\Lambda\) be as in Corollary 3.11 with \(t_{p}^{\prime}\) instead of \(t_{p}\). Note that Corollary 3.11 implies the exact bounds on a set of generators of \(\Lambda\) we need. Let \(k_{1}=\cdots=k_{\lfloor n/2\rfloor}=0,k_{\lfloor n/2\rfloor+1}=...=k_{n}=1\), and \(a\in A\) as in Definition 3.4. We claim that for some \(y_{p}\in T_{a}x_{p}\) we have \(d(y,y_{p})\ll\exp(-\frac{\lfloor n/2\rfloor}{4n(n^{2}-1)}t_{p}^{\prime})\). Indeed, by Theorem 3.5 we have \(\left\|T_{a}^{\mathrm{F}}\right|_{L_{0}^{2}(X_{n})}\right\|\leq p^{-\lfloor n /2\rfloor/2}\). Thus by Lemma 3.6, there is \(y_{p}\in T_{a}(x_{p})\) with
\[d(y_{p},y)=O\left(p^{-\frac{\lfloor n/2\rfloor}{2(n^{2}-1)}}\right)=O\left(\exp\left(-t_{p}^{\prime}\cdot\frac{\lfloor n/2\rfloor}{4n(n^{2}-1)}\right)\right).\]
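Indeed, with \(t_{p}^{\prime}=2n\log p\),

\[p^{-\frac{\lfloor n/2\rfloor}{2(n^{2}-1)}}=\exp\left(-\log p\cdot\frac{\lfloor n/2\rfloor}{2(n^{2}-1)}\right)=\exp\left(-\frac{t_{p}^{\prime}}{2n}\cdot\frac{\lfloor n/2\rfloor}{2(n^{2}-1)}\right)=\exp\left(-t_{p}^{\prime}\cdot\frac{\lfloor n/2\rfloor}{4n(n^{2}-1)}\right).\]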
We now choose \(t_{p}=t_{p}^{\prime}-O(1)\) to be smaller than \(t_{p}^{\prime}\) to ensure
\[d(y_{p},y)<\exp\left(-t_{p}\cdot\frac{\lfloor n/2\rfloor}{4n(n^{2}-1)}\right).\]
This preserves the bounds on the generators of \(\Lambda\).
Note that the implicit constant was provided by Theorem 3.5 and depends only on the compact set containing \(y\).
## 4 Approximation of Invariant Measures
In this section, we prove Theorem 1.1. We approximate measures by approximating their generic points. When we want to approximate a finite collection of measures simultaneously, we need to find regions of \(A\)-orbits that approximate all measures on subregions. The following lemma is our tool for this purpose:
**Lemma 4.1** (Diagonal closing lemma).: _For every compact set \(K\subseteq X_{n}\) there are compact sets \(K_{A}\subset A,K_{L}\subset L,K_{U}\subset U\), such that for every \(x_{0},x_{1}\in K\) there are \(k_{A}\in K_{A},k_{L}\in K_{L},k_{U}\in K_{U}\) such that \(k_{U}k_{A}k_{L}x_{0}=x_{1}\). Here \(A,L,U\) are defined as in Definition 2.3._
Proof.: First, we will show that for every \(x_{0},x_{1}\in X_{n}\) we have \(x_{0}\in LAUx_{1}\). Since \(U\) is the expanding horosphere of \(a_{t}\), almost every point in \(Ux_{1}\) is generic with respect to the forward \(a_{t}\) action and the Haar measure \(m_{X_{n}}\). Choose an \(a_{t}\)-generic point \(u_{0}x_{1}\in Ux_{1}\). The \(LU\)-decomposition implies that the product map \(U\times A\times L\to\mathrm{SL}_{n}(\mathbb{R})\) is one to one. Since \(U\times A\times L\) and \(\mathrm{SL}_{n}(\mathbb{R})\) are \(n^{2}-1\) dimensional, this implies that \(UAL\subseteq\mathrm{SL}_{n}(\mathbb{R})\) is open. Hence \(UALx_{0}\) is an open set in \(X_{n}\), and thus of positive measure. Hence:
\[\frac{1}{T}\int_{0}^{T}\chi_{a_{t}u_{0}x_{1}\in UALx_{0}}dt\xrightarrow{T\to \infty}m_{X_{n}}(UALx_{0})>0 \tag{4.1}\]
and hence for some \(t>0\) we have \(a_{t}u_{0}x_{1}\in UALx_{0}\). Since \(AU=UA\) we deduce that \(x_{0}\in LAUx_{1}\).
Let \(\|\cdot\|_{A}:A\to[0,\infty),\|\cdot\|_{L}:L\to[0,\infty),\|\cdot\|_{U}:U\to[0,\infty)\) be the distance from the identity. Define the map \(f:K^{2}\to[0,\infty)\) by
\[f(x_{0},x_{1})=\inf\{\max(\|a\|_{A},\|l\|_{L},\|u\|_{U}):a\in A,l\in L,u\in U,\ x_{0}=laux_{1}\}.\]
Since \(LAU\) is open and the multiplication map \(L\times A\times U\to LAU\) is a homeomorphism, if we have \(x_{0}=laux_{1}\) for \(a\in A,l\in L,u\in U\), then for every sufficiently close \(\hat{x}_{0}\sim x_{0},\hat{x}_{1}\sim x_{1}\) there are \(\hat{a}\in A,\hat{l}\in L,\hat{u}\in U\) arbitrarily close to \(a,l,u\) such that \(\hat{x}_{0}=\hat{l}\hat{a}\hat{u}\hat{x}_{1}\). This implies that \(f\) is upper semicontinuous. In particular, for every \((x_{0},x_{1})\in K^{2}\) we get that \(f\) is bounded in a neighborhood of \((x_{0},x_{1})\). Since \(K^{2}\) is compact this implies that \(f\) is bounded everywhere, as desired.
**Definition 4.2**.: _For \(v,v^{\prime}\in\mathbb{R}_{0}^{n-1}\), we say that \(v\prec v^{\prime}\) if for all \(i=1,\ldots,n-1\) we have \(v_{i+1}-v_{i}\leq v^{\prime}_{i+1}-v^{\prime}_{i}\). For every \(v_{0}\in\mathbb{R}_{0}^{n-1}\) with \(0\prec v_{0}\), define a box \(S_{v_{0}}=\{v\in\mathbb{R}_{0}^{n-1}:0\prec v\prec v_{0}\}\). A boxed map is a map \(f:S_{v_{0}}\to X_{n}\) of the form \(f(v)=\exp(v).x_{0}\) for some \(x_{0}\in X_{n}\). For every compact set \(K\subset X_{n}\), a boxed map \(f:S_{v_{0}}\to X_{n}\) is said to be \(K\)-bounded if \(f(0),f(v_{0})\in K\). Recall that \(w_{0}=(\frac{n+1-2i}{2})_{i=1}^{n}\)._
In the following lemma, we fix \(K,K^{\prime}\) and let \(R\to\infty\). The constants in the \(O\)-notations depend on \(K,K^{\prime}\) and hold as \(R\to\infty\).
**Corollary 4.3** (Gluing boxes).: _Let \(K\subset X_{n}\) be compact and \(K\subset K^{\prime}\subset X_{n}\) a compact neighborhood. In this corollary all \(O\) notations depend on \(K,K^{\prime}\). Let \(R>0\). Fix any two \(K\)-bounded boxed maps \(f_{1}:S_{v_{1}}\to X_{n},f_{2}:S_{v_{2}}\to X_{n}\) with \(v_{1},v_{2}\succ Rw_{0}\). Then there exists a \(K^{\prime}\)-bounded boxed map \(f:S_{v_{3}}\to X_{n}\) with_
* \(v_{3}=v_{1}+v_{2}+O_{K,K^{\prime}}(1)\)_;_
* _There are two points_ \(w_{1},w_{2}\in S_{v_{3}}\) _with_ \(\|w_{1}\|=o_{R}(R)\)_,_ \(\|w_{2}-v_{1}\|=o_{R}(R)\)_, and some_ \(0<\rho=o_{R}(R)\) _such that for all_ \(i=1,2\) _and_ \(v\in S_{v_{i}-2\rho w_{0}}\)_,_ \[d_{X_{n}}\left(f_{i}(\rho w_{0}+v),f(w_{i}+v)\right)=o_{R}(1).\]
_See Figure 1 for a visualization of these conditions._
Proof.: Let \(K_{A}\subset A,K_{L}\subset L,K_{U}\subset U\) as in Lemma 4.1, constructed for \(K\). Assume \(K_{A}=\exp(K_{0})\) for \(K_{0}\subseteq\{w\in\mathbb{R}_{0}^{n-1}:|w_{i}-w_{i+1}|<C_{0}\) for all \(i=1,\ldots,n-1\}\). Note that
\[\text{for all }\ T>0,v\geq Tw_{0},u\in K_{U}\ \text{ we have }\ d(\exp(-v)u\exp(v),I)=O(\exp(-T)), \tag{4.2}\]
and similarly,
\[\text{for all }\ T>0,v\geq Tw_{0},l\in K_{L}\ \ \text{we have }\ d(\exp(v)l\exp(-v),I)=O(\exp(-T)).\]
Applying Lemma 4.1 to \(f_{1}(v_{1}),f_{2}(0)\), we get that there are \(v_{0}\in\mathbb{R}_{0}^{n-1},k_{l}\in K_{L},k_{u}\in K_{U}\) such that
\[|(v_{0})_{i}-(v_{0})_{i+1}|<C_{0}\text{ for all }i=1,\ldots,n-1,\]
and
\[k_{l}\exp(v_{0})k_{u}f_{1}(v_{1})=f_{2}(0).\]
Let \(\rho=\sqrt{R}\), \(v_{3}=v_{1}+v_{2}+v_{0}\), \(x_{0}=\exp(-v_{1})k_{u}\exp(v_{1})f_{1}(0)\), let \(f:S_{v_{3}}\to X_{n}\) be defined by \(f(v)=\exp(v)x_{0}\) for all \(v\in S_{v_{3}}\), and set \(w_{1}=\rho w_{0}\) and \(w_{2}=v_{1}+\rho w_{0}+v_{0}\). By Eq. (4.2) we get that \(d(\exp(-v_{1})k_{u}\exp(v_{1}),I)=O(\exp(-R))=o_{R}(1)\). Hence we deduce that for \(R\) sufficiently big, \(x_{0}\in K^{\prime}\).
Moreover, for all \(v\in S_{v_{1}-2\rho w_{0}}\) we have
\[f(w_{1}+v)=\exp(w_{1}+v)x_{0}=\exp(w_{1}+v-v_{1})k_{u}\exp(v_{1}-(w_{1}+v))f_{1}(w_{1}+v).\]
Since \(v\leq v_{1}-2\rho w_{0}\) and \(w_{1}=\rho w_{0}\), we get that \(v_{1}-(w_{1}+v)\geq\rho w_{0}\), and hence
\[d\left(\exp(w_{1}+v-v_{1})k_{u}\exp(v_{1}-(w_{1}+v)),I\right)=o_{R}(1),\]
Figure 1: Plot of the boxes in Corollary 4.3. One can see \(S_{v_{3}}\), and in it \(S_{v_{1}}+w_{1},S_{v_{2}}+w_{2}\), and in them the regions approximated by \(f_{1},f_{2}\).
which implies that \(d(f(w_{1}+v),f_{1}(\rho w_{0}+v))=o_{R}(1)\).
Note that
\[f(v_{3})=\exp(v_{1}+v_{2}+v_{0})x_{0}=\exp(v_{2})\exp(v_{0})k_{u}f_{1}(v_{1})= \exp(v_{2})k_{l}^{-1}f_{2}(0)=\exp(v_{2})k_{l}^{-1}\exp(-v_{2})f_{2}(v_{2}).\]
From here the proof of the remaining conditions for \(f_{2}\) and \(f(v_{3})\) is symmetric to the approximation of \(f_{1}\) and \(f_{1}(0)\).
**Corollary 4.4**.: _For any \(\mu_{1},\ldots,\mu_{k}\) ergodic \(A\)-invariant measures on \(X_{n}\) and any \(R>0\) there exists \(x_{R}\in X_{n}\) such that for every \(i=1,\ldots,k\):_
\[\lim_{R\to\infty}\frac{1}{\operatorname{vol}(S_{Rw_{0}})}\left(v\mapsto\exp \left(v+iRw_{0}\right).x_{R}\right)_{*}m_{\mathbb{R}_{0}^{n-1}}\mid_{S_{Rw_{0 }}}=\mu_{i} \tag{4.3}\]
_In addition, the set \(\{x_{R}:R>0\}\) is pre-compact._
Proof.: Let \(K\) be a compact set with \(\mu_{i}(K)>0\) for each \(i\), and for each \(i=1,\ldots,k\) choose \(y_{i}\in K\) to be a generic point for the \(A\) action on \(\mu_{i}\). By ergodicity of \(\mu_{i}\):
\[\lim_{R\to\infty}\frac{1}{\operatorname{vol}(S_{Rw_{0}})}\int_{S_{Rw_{0}}} \chi_{\exp(v)y_{i}\in K}dm_{\mathbb{R}_{0}^{n-1}}=\mu_{i}(K)>0.\]
Denote
\[\rho_{i}=\sup\{\rho>0:\exp(v)y_{i}\notin K,\forall v\in Rw_{0}-S_{\rho w_{0}}\}.\]
Since \(\int_{Rw_{0}-S_{\rho_{i}w_{0}}}\chi_{\exp(v)y_{i}\in K}dm_{\mathbb{R}_{0}^{n-1}}=0\), we get that \(\frac{\rho_{i}}{R}\xrightarrow{R\to\infty}0\). By the definition of \(\rho_{i}\), there is \(v_{i}\in Rw_{0}-S_{\rho_{i}w_{0}}\) such that \(\exp(v_{i})y_{i}\in K\). Since \(\|v_{i}-Rw_{0}\|=O(\rho_{i})=o_{R}(R)\) we deduce that
\[\frac{1}{\operatorname{vol}(S_{v_{i}})}\left(v\mapsto\exp(v)y_{i}\right)_{*}m_ {\mathbb{R}_{0}^{n-1}}\mid_{S_{v_{i}}}\xrightarrow{R\to\infty}\mu_{i}.\]
The corollary follows now from iteratively applying Corollary 4.3 to glue the boxed maps \((v\mapsto\exp(v)y_{i})|_{S_{v_{i}}}\) to one boxed map \(f\), using an increasing set of compact neighborhoods \(K\subset K_{1}\subset K_{2}\subset\cdots\subset K_{k}\).
As a corollary of Lemma 3.1 and Corollary 4.4 we can prove Theorem 1.1:
Proof of Theorem 1.1.: Let \(\mu_{1},\mu_{2},\ldots,\mu_{k}\) be ergodic \(A\)-invariant measures on \(X_{n}\). For any \(R>0\), let \(x_{R}\in X_{n}\) be the point guaranteed by Corollary 4.4. Let \(K\) be the compact closure of \(\{x_{R}:R>0\}\).
Lemma 3.1 guarantees a sequence \((t_{m})_{m=0}^{\infty}\) with \(t_{m}\xrightarrow{m\to\infty}\infty\) such that for all \(R>0,m\geq 0\) there is \(y_{R,m}\in X_{n}\) such that \(d(x_{R},y_{R,m})<e^{-\alpha t_{m}}\), where \(\alpha=\frac{\lfloor n/2\rfloor}{4n(n^{2}-1)}>0\), and \(y_{R,m}\) is stabilized by a subgroup \(\exp(\Lambda_{R,m})\subseteq A\), where \(\Lambda_{R,m}\subseteq\mathbb{R}_{0}^{n-1}\) is generated by vectors \((v_{R,m}^{(i)})_{i=1}^{n-1}\) such that
\(\left|\left(v_{R,m}^{(i)}\right)_{j}-(1-n\delta_{ij})t_{m}\right|=O(1)\). Choose \(R_{m}=\frac{\alpha t_{m}}{k(n-1)}-\sqrt{t_{m}}\) so that for all \(v\in S_{kR_{m}w_{0}}\) we have \(d(\exp(v)x_{R_{m}},\exp(v)y_{R_{m},m})=o_{m}(1)\). This implies that
\[\lim_{m\to\infty}\frac{1}{\operatorname{vol}(S_{R_{m}w_{0}})}\left(v\mapsto \exp\left(v+iR_{m}w_{0}\right).y_{R_{m},m}\right)_{*}m_{\mathbb{R}_{0}^{n-1}} \mid_{S_{R_{m}w_{0}}}=\mu_{i}. \tag{4.4}\]
The choice of \(R_{m}\) guarantees that \(S_{kR_{m}w_{0}}\) injects into \(\mathbb{R}_{0}^{n-1}/\Lambda_{R_{m},m}\), and hence Eq. (4.4) implies that \(\lim_{m\to\infty}\mu_{Ay_{R_{m},m}}\) contains \(\mu_{i}\) as an ergodic component.
## 5 Entire Mass Approximations
First, we prove Corollary 1.4 of Theorem 1.1 and [17, Theorem 1.1].
Proof of Corollary 1.4.: Let \(\mu_{1},\ldots,\mu_{k}\in\mathcal{M}(X_{n})_{e}^{A}\). By [17, Theorem 1.1] we can find \((\nu_{m})_{m}\subset\mathcal{M}(X_{n})_{c}^{A}\) such that \(\nu_{m}\to 0\) weakly. By Remark 1.2 of Theorem 1.1, for any \(m\) there exists \(\mu^{(m)}\in\overline{\mathcal{M}(X_{n})_{c}^{A}}\) such that \(\nu_{m},\mu_{1},\ldots,\mu_{k}\) all appear in \(\mu^{(m)}\)'s ergodic decomposition with coefficients bounded below by \(\Theta(k^{2-n})\). Any weak limit \(\mu^{(\infty)}\) of \((\mu^{(m)})_{m}\) will belong to \(\overline{\mathcal{M}(X_{n})_{c}^{A}}\) and satisfy, since \(\nu_{m}\to 0\), that at least \(\Theta(k^{2-n})\) of the mass of \(\mu^{(m)}\) escapes, namely \(\mu^{(\infty)}(X_{n})\leq 1-\Theta(k^{2-n})\). On the other hand, for \(i=1,\ldots,k\), \(\mu_{i}\) will still appear in the ergodic decomposition of \(\mu^{(\infty)}\) with coefficient bounded below by \(\Theta(k^{2-n})\) as desired.
The remainder of this section will be dedicated to the proof of Theorem 1.5. The proof of this theorem will follow the same lines as the proof of Theorem 1.1: We will first construct a special number field and prove some result on its units. Then we will connect these results to Hecke operators and use the properties of Hecke operators to complete the proof. In contrast to the method in the previous section, here we will fix \(p\) to be the minimal prime which is congruent to \(1\mod 2n\).
### Distribution of Hecke neighbors of points close to the cusp
The following corollary analyzes the distribution properties of Hecke operators where we begin with a lattice high in the cusp. For that, we will use a result that follows from Theorem 3.5 but appeared first at [6, Theoreme 1.2].
**Corollary 5.1**.: _Fix \(x\in X_{n}\), and a prime number \(p\). For \(k_{1}\leq k_{2}\leq...\leq k_{n},a\in\operatorname{SL}_{n}(\mathbb{R})\) as in Definition 3.4 we have:_
\[T_{a}^{\mathrm{M}}(x)\xrightarrow{k_{n}-k_{1}\to\infty}m_{X_{n}}. \tag{5.1}\]
**Remark 5.2**.: The convergence is uniform on compact sets. This follows from the fact that \(T_{a}(gx)=gT_{a}(x)\) for all \(g\in\operatorname{SL}_{n}(\mathbb{R})\), and the result for the particular case \(x=\mathbb{Z}^{n}\in X_{n}\).
Recall the definition of Minkowski's successive minima \(\lambda_{i}(x)\) for \(x\in X_{n},i=1,\ldots,n\) from [3, Chapter VIII]. They satisfy that a closed set \(U\subseteq X_{n}\) is compact if and only if \(\inf_{x\in U}\lambda_{1}(x)>0\) if and only if \(\sup_{x\in U}\lambda_{n}(x)<\infty\).
**Theorem 5.3**.: _Fix a prime number \(p\). Denote by \(a_{p}=\operatorname{diag}(p^{-(n-1)/n},p^{1/n},p^{1/n},\ldots,p^{1/n})\). For every point \(x\in X_{n}\) and \(x^{\prime}\in T_{a_{p}}(x)\) we have_
\[p^{1/n}\lambda_{1}(x)\geq\lambda_{1}(x^{\prime}) \tag{5.2}\]
_In addition, for every \(\delta>0\) there exists a compact set \(C(\delta)\) such that the following holds. Let \((x_{i})_{i=1}^{\infty}\) be a sequence of lattices and let \(p_{i}\xrightarrow{i\to\infty}\infty\) be a sequence of primes. Assume that_
\[p_{i}^{\frac{1}{n}}\lambda_{1}(x_{i})\geq 1. \tag{5.3}\]
_Then_
\[\liminf_{i\to\infty}T_{a_{p_{i}}}^{\operatorname{M}}(x_{i})(C( \delta))>1-\delta. \tag{5.4}\]
**Remark 5.4**.: Although we state no-escape-of-mass in Eq. (5.4), one can prove an equidistribution result: If we replace Eq. (5.3) by
\[p_{i}^{\frac{1-\varepsilon}{n}}\lambda_{1}(x_{i})\geq 1. \tag{5.5}\]
for some \(\varepsilon>0\) we get that
\[T_{a_{p_{i}}}^{\operatorname{M}}(x_{i})\xrightarrow{i\to\infty}m_ {X_{n}}.\]
This result could simplify the proof of Theorem 1.5, but its proof is too complicated and can be avoided. The proof uses the fact that if \(k=k^{\prime}+s\) then \(T_{a^{k}}^{\operatorname{M}}\) is a large component of \(T_{a^{k^{\prime}}}^{\operatorname{M}}\circ T_{a^{s}}^{\operatorname{M}}\), and a \(p\)-adic interpretation of Hecke operators.
**Remark 5.5**.: With Eq. (5.5) instead of Eq. (5.3), Theorem 5.3 could be extended to the Hecke operators \(T_{a_{p_{i}}^{k_{i}}}\), with \(p_{i}^{k_{i}}\) in the place of \(p_{i}\). For different Hecke operators there are other thresholds, stated in terms of different Minkowski successive minima. The equidistribution result holds as well.
Proof of Theorem 5.3.: The bound on \(\lambda_{1}\) of every Hecke neighbor follows from the definition of Hecke operators. Since \(x\subset p^{-1/n}x^{\prime}\) we get \(\lambda_{1}(x^{\prime})\leq p^{1/n}\lambda_{1}(x)\).
Let \(x\in X_{n}\) be a lattice with \(\lambda_{1}(x)\geq p^{-\frac{1}{n}}\) and \(x^{\prime}\) be a random point in \(T_{a_{p}}(x)\). We show that for every \(\delta>0\) there is a compact set \(C\subset X_{n}\) depending only on \(\delta\) such that \(\mathbb{P}(x^{\prime}\in C)\geq 1-\delta\) provided that \(p\) is sufficiently large as a function of \(\delta\).
Let \(C_{r}=\{y\in X_{n}:\lambda_{1}(y)\geq r\}\) for some \(1>r>0\) which will be chosen later as a function of \(\delta\). To bound \(\mathbb{P}(x^{\prime}\notin C_{r})\), we will bound:
\[\mathbb{E}(\#(x^{\prime}\cap B(r)\setminus\{0\}))\geq\mathbb{P}(x^{\prime} \notin C_{r}).\]
Here \(B(r)\) is the radius \(r\) ball in \(\mathbb{R}^{n}\). Since \(x^{\prime}\subseteq p^{-(n-1)/n}x\), we have:
\[\mathbb{E}(\#(x^{\prime}\cap B(r)\setminus\{0\}))=\sum_{v\in B(r)\cap p^{-(n-1 )/n}x\setminus\{0\}}\mathbb{P}(v\in x^{\prime}). \tag{5.6}\]
To analyze the probability \(\mathbb{P}(p^{-(n-1)/n}v\in x^{\prime})\) for \(v\in x\), note that \(px\subseteq p^{(n-1)/n}x^{\prime}\) and \(p^{(n-1)/n}x^{\prime}/px\cong\mathbb{Z}/p\). In particular, if \(v\in px\) then \(\mathbb{P}(p^{-(n-1)/n}v\in x^{\prime})=1\). Otherwise, if \(v\in(x\setminus px)\cap p^{(n-1)/n}x^{\prime}\) then \(v\) determines \(x^{\prime}\) by the formula \(p^{(n-1)/n}x^{\prime}=px+v\mathbb{Z}\), and hence
\[\mathbb{P}(p^{-(n-1)/n}v\in x^{\prime})=\frac{1}{\#T_{a_{p}}(x)}=\frac{p-1}{p^{n}-1}<\frac{1}{p^{n-1}}.\]
Using these estimates we get:
\[\sum_{v\in B(r)\cap p^{-(n-1)/n}x\setminus\{0\}}\mathbb{P}(v\in x^{\prime}) \leq\sum_{v\in B(p^{(n-1)/n}r)\cap x\setminus px}\frac{1}{p^{n-1}}+\sum_{v\in B (p^{(n-1)/n}r)\cap px\setminus\{0\}}1. \tag{5.7}\]
The assumption on \(\lambda_{1}(x)\) implies that the second term in the right hand side of Eq. (5.7) vanishes. To bound the first term, we need to analyze \(\#B(p^{(n-1)/n}r)\cap x\). The following claim is a different wording of [15, Lemma 3.5].
**Claim 5.6**.: _For every lattice \(x\), \(R>0\) we have that if \(R>\lambda_{1}(x)\) we have_
\[\#\left(B(R)\cap x\setminus 0\right)\asymp\max_{i=1}^{n}\frac{R^{i}}{\lambda_{ 1}(x)\cdots\lambda_{i}(x)}\]
By Minkowski's theorem ([3, Chapter VIII, Theorem V]), \(\lambda_{1}(x)\cdots\lambda_{n}(x)\asymp\mathrm{cov}(x)\). In particular, \(\lambda_{n}(x)\ll p^{(n-1)/n}\). Thus,
\[\#B(p^{(n-1)/n}r)\cap x \leq\max_{i=1}^{n}\frac{\left(p^{(n-1)/n}r\right)^{i}}{\lambda_{ 1}(x)\cdots\lambda_{i}(x)}=\prod_{i=1}^{n}\max\left(\frac{p^{(n-1)/n}r}{ \lambda_{i}(x)},1\right) \tag{5.8}\] \[\asymp\prod_{i=1}^{n}\max\left(p^{(n-1)/n}r,\lambda_{i}(x)\right) \ll p^{(n-1)/n}r\cdot(p^{(n-1)/n})^{n-1}=rp^{n-1},\]
provided that \(r\geq\frac{1}{p^{(n-1)/n}}\).
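The equality between the maximum and the product in Eq. (5.8) uses that the successive minima are non-decreasing: writing \(R=p^{(n-1)/n}r\) and \(i_{0}=\max\{i:\lambda_{i}(x)\leq R\}\) (which is well defined since \(R>\lambda_{1}(x)\)),

\[\prod_{i=1}^{n}\max\left(\frac{R}{\lambda_{i}(x)},1\right)=\prod_{i=1}^{i_{0}}\frac{R}{\lambda_{i}(x)}=\frac{R^{i_{0}}}{\lambda_{1}(x)\cdots\lambda_{i_{0}}(x)}=\max_{i=1}^{n}\frac{R^{i}}{\lambda_{1}(x)\cdots\lambda_{i}(x)}.\]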
Altogether Eqs. (5.6), (5.7) and (5.8) imply that
\[\mathbb{P}(x^{\prime}\notin C_{r})\ll r.\]
For every \(\delta>0\) choose \(r(\delta)>0\) sufficiently small so that we have \(\mathbb{P}(x^{\prime}\in C_{r(\delta)})>1-\delta\) for all \(p\geq\delta^{-n/(n-1)}\).
**Definition 5.7** (Shapira's orbit).: _Let \(0<\eta<\frac{1}{2n}\) be fixed and let \(M>0\) be a size parameter. Let \(0\leq a_{1}<\cdots<a_{n}\leq M\) be integers with \(a_{i+1}-a_{i}\geq\eta M\), say, \(a_{i}=\lfloor iM/n\rfloor\). Let \(P(z)=(z-a_{1})(z-a_{2})\cdots(z-a_{n})-1\) and \(K=\mathbb{Q}[\alpha]\) where \(P(\alpha)=0\). Let \(x_{\mathbb{Z}[\alpha]}\) be as in Definition 2.4._
The theorem we state is an accumulation of results and computations given in Shapira [17].
**Definition 5.8**.: _For every \(\pi\in S_{n}\) (the permutation group), denote by \(F_{\pi}\subseteq\mathbb{R}_{0}^{n-1}\) the set of vectors \(F_{\pi}=\{v\in\mathbb{R}_{0}^{n-1}:v_{\pi(i)}\geq v_{\pi(i+1)}-1:i=1,\ldots,n\}\). Here we use cyclic index notations and \(\pi(n+1)=\pi(1)\). The set \(F_{\pi}\) is compact. Note that \(F_{\pi}\) is well defined for \(\pi\in S_{n}/C_{n}\) where \(C_{n}\) is the cyclic group of rotations._
We recall a result by Shapira.
**Theorem 5.9** (Shapira [17]).: _There is a bound \(M_{0}(\eta)\) such that for all \(M>M_{0}(\eta)\) the following occurs. Let \(x_{\mathbb{Z}[\alpha]}\) be as constructed above. It is stabilized by \(\exp\Lambda\) where \(\Lambda\) is generated by \(v^{(1)},\ldots,v^{(n)}\) satisfying \(v^{(1)}+\cdots+v^{(n)}=0\) and_
\[\left|\left(v^{(i)}\right)_{j}-(2-2n\delta_{ij})\log M\right|=O(1).\]
_Moreover there is a finite collection of points \(P_{0}\subseteq\mathbb{R}_{0}^{n-1}/\Lambda\) and a map \(\pi:P_{0}\to S_{n}/C_{n}\) such that_
\[\frac{\operatorname{vol}\left(\bigsqcup_{p\in P_{0}}p+(1-o_{M}(1))\log(M)F_{ \pi(p)}\right)}{\operatorname{vol}(\mathbb{R}_{0}^{n-1}/\Lambda)}=1-o_{M}(1),\]
_and for all \(p\in P_{0},v\in(1-o_{M}(1))\log(M)F_{\pi(p)}\) one has_
\[d_{X_{n}}(\exp(p+v)x_{\mathbb{Z}[\alpha]},\exp(v)\mathbb{Z}^{n})=o_{M}(1).\]
Denote by \(\text{min-co}:\mathbb{R}_{0}^{n-1}\rightarrow(-\infty,0]\) the minimum-of-the-coordinates function, and note that \(\text{min}_{F_{\pi}}(\text{min-co})=-\frac{n-1}{2}\). This function is important since
\[\lambda_{1}(\exp(v)\mathbb{Z}^{n})=\exp(\text{min-co}(v))\quad\text{for all} \quad v\in\mathbb{R}_{0}^{n-1}. \tag{5.9}\]
Let \(m_{F_{\pi}}\) be the uniform probability measure on \(F_{\pi}\) for all \(\pi\in S_{n}\). Note that \((\text{min-co})_{*}m_{F_{\pi}}\) is independent of \(\pi\in S_{n}\). We use the following corollary of Shapira's result, which follows from Eq. (5.9):
**Corollary 5.10**.: _In the setting of Theorem 5.9, we have_
\[d\left(\int\delta_{\frac{1}{\log M}\log\lambda_{1}(x^{\prime})}d\mu_{\mathbb{Z}[ \alpha]}(x^{\prime}),(\text{min-co})_{*}m_{F_{1}}\right)=o_{M}(1).\]
Proof of Theorem 1.5.: Let \(c\in(0,1]\) and let \(M>0\) be large enough. From now on, every measure in our construction will depend on \(M\). Since \(m_{F_{1}}\) is absolutely continuous with respect to Lebesgue, there is \(\eta>0\) such that \(m_{F_{1}}(\text{min-co}^{-1}[-\eta,0])=c\). Let \(p\) be the first prime number \(p>M^{n\eta}\). By the Prime Number Theorem, \(p=M^{\eta n+O(1/\log M)}\). Let \(q\) be the first prime number with \(q>\log M\). Again by the Prime Number Theorem, \(q=M^{O_{M}(\log\log M/\log M)}\). Altogether,
\[pq=M^{\eta n+O(\log\log M/\log M)}. \tag{5.10}\]
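Spelling this out: since there is a prime between \(\log M\) and \(2\log M\) (Bertrand's postulate), \(q\leq 2\log M\), hence

\[q=M^{\frac{\log q}{\log M}}=M^{O\left(\frac{\log\log M}{\log M}\right)},\qquad pq=M^{\eta n+O(1/\log M)}\cdot M^{O\left(\frac{\log\log M}{\log M}\right)}=M^{\eta n+O\left(\frac{\log\log M}{\log M}\right)}.\]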
Let \(a_{p},a_{q}\) be as in Theorem 5.3, constructed for \(p,q\) respectively. Let \(x_{\mathbb{Z}[\alpha]}=x_{\mathbb{Z}[\alpha]}(M)\) be as in Definition 5.7. The measure
\[\nu_{0}=T^{\text{M}}_{a_{q}}(T^{\text{M}}_{a_{p}}(\mu_{Ax_{\mathbb{Z}[\alpha] }})),\]
is supported on several compact orbits which we will now describe. Consider the set
\[B=T_{a_{q}}(T_{a_{p}}(x_{\mathbb{Z}[\alpha]}))\subset X_{n}.\]
Define a partition
\[B=B_{1}\sqcup B_{2}\sqcup\cdots\sqcup B_{r} \tag{5.11}\]
by saying that two points \(y_{0},y_{1}\in B\) are in the same set \(B_{i}\) if \(Ay_{0}=Ay_{1}\). Note that \(B\) is invariant under the \(\text{stab}_{A}(x_{\mathbb{Z}[\alpha]})\), and hence so are \(B_{i}\). For every \(x=ax_{\mathbb{Z}[\alpha]}\in Ax_{\mathbb{Z}[\alpha]}\) define \(T_{B_{i}}(x)=aB_{i}\). The element \(a\) is well defined up to \(\text{stab}_{A}(x_{\mathbb{Z}[\alpha]})\), and since \(B_{i}\) is invariant to this ambiguity, \(T_{B_{i}}(x)\) is well defined.
Define \(T^{\text{M}}_{B_{i}}(x)=\frac{1}{\#T_{B_{i}}(x)}\sum_{y\in T_{B_{i}}(x)} \delta_{y}\). We deduce that
\[\nu_{0}=\sum_{i=1}^{r}\beta_{i}T^{\text{M}}_{B_{i}}(\mu_{x_{\mathbb{Z}[\alpha ]}}),\]
for some \(\beta_{i}>0\) with \(\sum_{i=1}^{r}\beta_{i}=1\) (See Remark 5.12 for further discussion on \((\beta_{i})_{i=1}^{r}\)). We will now analyze \(\nu_{0}\) and \(T^{\text{M}}_{B_{i}}(\mu_{x_{\mathbb{Z}[\alpha]}})\).
**Analysis of Hecke-like operators on different parts of the compact orbit:** Distinguish two parts of \(Ax_{\mathbb{Z}[\alpha]}\):
\[(Ax_{\mathbb{Z}[\alpha]})^{-}=\{x^{\prime}\in Ax_{\mathbb{Z}[ \alpha]}:\lambda_{1}(x^{\prime})\leq\exp((-1-\varepsilon)\eta\log M)\} \tag{5.12}\] \[(Ax_{\mathbb{Z}[\alpha]})^{+}=\{x^{\prime}\in Ax_{\mathbb{Z}[ \alpha]}:\lambda_{1}(x^{\prime})\geq\exp((-1+\varepsilon)\eta\log M)\}. \tag{5.13}\]
Then by Corollary 5.10 we get that \(\mu_{\mathbb{Z}[\alpha]}((Ax_{\mathbb{Z}[\alpha]})^{-})=1-c+O(\varepsilon)+o_{M}(1)\) and \(\mu_{\mathbb{Z}[\alpha]}((Ax_{\mathbb{Z}[\alpha]})^{+})=c+O(\varepsilon)+o_{M}(1)\). Consequently, we get that
\[\mu_{\mathbb{Z}[\alpha]}(Ax_{\mathbb{Z}[\alpha]}\setminus((Ax_{\mathbb{Z}[ \alpha]})^{+}\cup(Ax_{\mathbb{Z}[\alpha]})^{-}))=O(\varepsilon)+o_{M}(1). \tag{5.14}\]
**Analysis on \((Ax_{\mathbb{Z}[\alpha]})^{-}\):** We will now analyze the escape of mass for the measures \(T^{\mathrm{M}}_{B_{i}}(\mu_{x_{\mathbb{Z}[\alpha]}})\). For all \(x^{\prime}\in(Ax_{\mathbb{Z}[\alpha]})^{-}\), Theorem 5.3 implies that \((pq)^{1/n}\lambda_{1}(x^{\prime})\geq\lambda_{1}(y^{\prime})\) for all \(y^{\prime}\in T_{a_{q}}(T_{a_{p}}(x^{\prime}))\). By Eqs. (5.12) and (5.10), we get an upper bound of
\[\lambda_{1}(y^{\prime})\leq M^{-\varepsilon+O(\log\log M/\log M)}.\]
Hence if \(\varepsilon\) tends to \(0\) slowly enough as a function of \(M\), say \(\varepsilon=\frac{1}{\log\log M}\), we get that for all \(i=1,\ldots,r\),
\[\int_{(Ax_{\mathbb{Z}[\alpha]})^{-}}\int\chi_{\mathcal{K}_{ \varepsilon}}d(T^{\mathrm{M}}_{B_{i}}(x^{\prime}))d\mu_{\mathbb{Z}[\alpha]}(x ^{\prime})=\int_{(Ax_{\mathbb{Z}[\alpha]})^{-}}\int\chi_{\mathcal{K}_{ \varepsilon}}dT^{\mathrm{M}}_{a_{q}}(T^{\mathrm{M}}_{a_{p}}(x^{\prime}))d\mu_ {\mathbb{Z}[\alpha]}(x^{\prime})=0. \tag{5.15}\]
Here \(\mathcal{K}_{\varepsilon}\) is defined as in Definition 2.2.
**Analysis on \((Ax_{\mathbb{Z}[\alpha]})^{+}\):** Let \(x^{\prime}\in(Ax_{\mathbb{Z}[\alpha]})^{+}\). By Eqs. (5.13) and the definition of \(p\) we get that
\[\lambda_{1}(x^{\prime})\geq p^{-(1-\varepsilon)/n}.\]
By Theorem 5.3, we get that \(T^{\mathrm{M}}_{a_{p}}(x^{\prime})\) has no escape of mass as \(M\to\infty\). By Corollary 5.1, we get that
\[T^{\mathrm{M}}_{a_{q}}(T^{\mathrm{M}}_{a_{p}}(x^{\prime}))\xrightarrow{M\to \infty}m_{X_{n}}. \tag{5.16}\]
**Definition 5.11** (Spaces of measures and ergodic decomposition).: _Let \(X_{n}^{*}=X_{n}\sqcup\{*\}\) denote the one point compactification of \(X_{n}\). This is a compact metric space. Hence, by the Banach-Alaoglu Theorem, the space of probability measures on \(X_{n}^{*}\), namely \(\mathcal{M}(X_{n}^{*})\), is again a compact metric space. The space of \(A\)-invariant probability measures, denoted \(\mathcal{M}(X_{n}^{*})^{A}\), is a closed subset, hence again a compact metric space. Again, by the Banach-Alaoglu Theorem, the space of probability measures on \(\mathcal{M}(X_{n}^{*})^{A}\), namely \(\mathcal{M}(\mathcal{M}(X_{n}^{*})^{A})\), is again a compact metric space. We interpret \(m_{X_{n}},\nu_{0},T^{\mathrm{M}}_{B_{i}}(\mu_{Ax_{\mathbb{Z}[\alpha]}})\) as points in \(\mathcal{M}(X_{n}^{*})^{A}\). Let \(\omega=\sum_{i=1}^{r}\beta_{i}\delta_{T^{\mathrm{M}}_{B_{i}}(\mu_{Ax_{\mathbb{Z}[\alpha]}})}\in\mathcal{M}(\mathcal{M}(X_{n}^{*})^{A})\). This is the ergodic decomposition of \(\nu_{0}\), and it satisfies \(\int d\omega=\nu_{0}\)._
In these terms we will analyze the limiting behavior we got earlier. Denote
\[\lim_{M\to\infty}\nu_{0}=\vec{\nu}_{0},\qquad\lim_{M\to\infty}\omega=\vec{ \omega}.\]
Eq. (5.15) implies that \(\omega\) is supported on measures giving mass at most \(c+o_{M}(1)\) to \(\mathcal{K}_{\varepsilon}\). Taking
\(M\) to infinity we get that \(\vec{\omega}\) is supported on measures giving mass at most \(c\) to \(X_{n}\), that is, giving mass at least \(1-c\) to the additional point \(*\). In formulas,
\[\vec{\omega}\left(\{\mu\in\mathcal{M}(X_{n}^{*}):\mu(\{*\})\geq 1-c\} \right)=1. \tag{5.17}\]
Hence
\[(1-c)\delta_{*}\leq\int d\vec{\omega}=\lim_{M\to\infty}\int d\omega=\vec{\nu}_{0}. \tag{5.18}\]
On the other hand, Eq (5.16) implies that the part of \(\nu_{0}\) coming from the application of \(T_{a_{q}}^{\mathrm{M}}\circ T_{a_{p}}^{\mathrm{M}}\) on \((Ax_{\mathbb{Z}[\alpha]})^{+}\) equidistributes as \(M\to\infty\). This implies that \(\vec{\nu}_{0}\geq cm_{X_{n}}\). Together with (5.18) we deduce that
\[\vec{\nu}_{0}=cm_{X_{n}}+(1-c)\delta_{*}. \tag{5.19}\]
This is the ergodic decomposition of \(\vec{\nu}_{0}\), which implies that \(\vec{\omega}\), whose integral is \(\vec{\nu}_{0}\), is supported on measures of the form \(c^{\prime}m_{X_{n}}+(1-c^{\prime})\delta_{*}\) for some \(c^{\prime}\). By Eq. (5.17) we deduce that almost surely \(c^{\prime}\leq c\). Since the integral \(\int d\vec{\omega}=cm_{X_{n}}+(1-c)\delta_{*}\), we deduce that \(\vec{\omega}=\delta_{cm_{X_{n}}+(1-c)\delta_{*}}\). In particular,
\[\min_{i=1}^{r}d_{\mathcal{M}(X_{n}^{*})}(T_{B_{i}}^{\mathrm{M}}( \mu_{Ax_{\mathbb{Z}[\alpha]}}),cm_{X_{n}}+(1-c)\delta_{*})=\min_{\mu\in\mathrm{ supp}(\omega)}d_{\mathcal{M}(X_{n}^{*})}(\mu,\mathrm{supp}(\vec{\omega}))\xrightarrow{M\to \infty}0,\]
as desired.
**Remark 5.12**.: The constants \(\beta_{i}\) can be computed, \(\beta_{i}=\frac{\#B_{i}}{\#B}\). However, this computation needs a disjointness result, \(T_{a_{q}}(y)\cap T_{a_{q}}(y^{\prime})=\emptyset\) for all \(y\neq y^{\prime}\in T_{p}(x_{\mathbb{Z}[\alpha]})\). This is not needed in the proof.
**Remark 5.13** (Variants on the proof).: **Using one Hecke operator:** In the proof above we use the composition of the Hecke operators \(T_{a_{q}}^{\mathrm{M}}\circ T_{a_{p}}^{\mathrm{M}}\). Had we proved the stronger version of Theorem 5.3 as suggested in Remark 5.4, we could use \(T_{a_{p}}^{\mathrm{M}}\) directly.
**Obtaining a lower bound on the volume:** We sketch a way to perturb the above proof to obtain a lower bound on the volume of the orbits which is polynomial in the discriminant. This lower bound will come from showing that many of the \(B_{i}\)'s defined in Eq. (5.11) can be made of polynomial size in the discriminant. The \(B_{i}\)'s correspond to orbits of the action of \(\mathcal{O}_{K}^{\times}\) on \(\mathbb{P}((\mathbb{Z}/p\mathbb{Z})^{n})\). To see that the orbits can be made large, one can use the number field analogue of Artin's conjecture, which implies that typically one of the \(B_{i}\)'s is very large. However, we can sketch a rigorous argument ensuring that most of the \(B_{i}\)'s are large. For that, let \(r\) be the smallest prime such that \(r\equiv 1\mod 2n\) and choose the \(a_{i}\)'s in Definition 5.7 such that \(a_{i}\mod r\) are the roots of \(x^{n}+1\mod r\). Moreover, choose the \(a_{i}\)'s such that \(a_{1}\cdot a_{2}\cdots a_{n}\not\equiv(-1)^{n}\mod r^{2}\). This choice of \(a_{i}\) ensures
that the corresponding number field \(K\) is totally ramified at \(r\). This can be used to show that the units in \(\mathcal{O}_{K}^{\times}\) generate a group of size \(\Theta(r^{nm})\) inside \((\mathcal{O}_{K}/r^{m}\mathcal{O}_{K})^{\times}\) for all \(m\). Now we replace the Hecke operators \(T_{a_{p}}\) in the proof by \(T_{a_{r}^{m}}\). Then, the lower bound on the orbit sizes implies the lower bound on the size of many \(B_{i}\)'s and therefore also the bound on the volumes.
|
2310.19531 | MiLe Loss: a New Loss for Mitigating the Bias of Learning Difficulties
in Generative Language Models | Generative language models are usually pretrained on large text corpus via
predicting the next token (i.e., sub-word/word/phrase) given the previous ones.
Recent works have demonstrated the impressive performance of large generative
language models on downstream tasks. However, existing generative language
models generally neglect an inherent challenge in text corpus during training,
i.e., the imbalance between frequent tokens and infrequent ones. It can lead a
language model to be dominated by common and easy-to-learn tokens, thereby
overlooking the infrequent and difficult-to-learn ones. To alleviate that, we
propose a MiLe Loss function for mitigating the bias of learning difficulties
with tokens. During training, it can dynamically assess the learning difficulty
of a to-be-learned token, according to the information entropy of the
corresponding predicted probability distribution over the vocabulary. Then it
scales the training loss adaptively, trying to lead the model to focus more on
the difficult-to-learn tokens. On the Pile dataset, we train generative
language models at different scales of 468M, 1.2B, and 6.7B parameters.
Experiments reveal that models incorporating the proposed MiLe Loss can gain
consistent performance improvement on downstream benchmarks. | Zhenpeng Su, Xing Wu, Xue Bai, Zijia Lin, Hui Chen, Guiguang Ding, Wei Zhou, Songlin Hu | 2023-10-30T13:33:21Z | http://arxiv.org/abs/2310.19531v7 | # InfoEntropy Loss to Mitigate Bias of Learning Difficulties
###### Abstract
Generative language models are usually pre-trained on large text corpus via predicting the next token (i.e., sub-word/word/phrase) given the previous ones. Recent works have demonstrated the impressive performance of large generative language models on downstream tasks. However, existing generative language models generally neglect an inherent challenge in text corpus during training, i.e., the imbalance between frequent tokens and infrequent ones. It can lead a language model to be dominated by common and easy-to-learn tokens, thereby overlooking the infrequent and difficult-to-learn ones. To alleviate that, we propose an **Information Entropy** Loss (InfoEntropy Loss) function. During training, it can dynamically assess the learning difficulty of a to-be-learned token, according to the information entropy of the corresponding predicted probability distribution over the vocabulary. Then it scales the training loss adaptively, trying to lead the model to focus more on the difficult-to-learn tokens. On the Pile dataset, we train generative language models at different scales of 468M, 1.2B, and 6.7B parameters. Experiments reveal that models incorporating the proposed InfoEntropy Loss can gain consistent performance improvement on downstream benchmarks.
## 1 Introduction
Generative language models like GPT-3 Brown et al. (2020) are generally pretrained on extensive textual data, in the manner of predicting the next token given the previous ones for each training text. Recently, large generative language models have been exhibiting impressive performance on various downstream natural language tasks, like dialogue system, classification, sequence labeling, etc. Touvron et al. (2023); Brown et al. (2020); Chowdhery et al. (2022), and attracting much attention from both academia and industry.
However, previous works have overlooked an inherent issue in natural language corpus that might affect the pretraining of a language model, i.e., frequent tokens far outnumber infrequent ones. Actually, Zipf's law Piantadosi (2014) highlights the inherent imbalance of tokens in natural language datasets, i.e., a few frequent tokens would dominate a dataset while many infrequent ones only form a minor portion. For instance, \(50\%\) of the Brown Corpus Francis and Kucera (1979), which comprises over a million tokens, is covered by only the top 135 most frequent tokens.
The imbalance of tokens is essentially a class imbalance problem. We argue that infrequent tokens are difficult to learn due to their fewer occurrences, in contrast to the frequent ones that can be learned adequately Lin et al. (2017). To confirm that, we utilize the remarkable language model LLaMA Touvron et al. (2023) with 6.7B parameters on the Pile Gao et al. (2021) validation set and perform a detailed perplexity (PPL) analysis at the token level. It's worth noting that a higher perplexity is indicative of a token's higher learning difficulty. In our analysis, all tokens are grouped into three frequency buckets: high, medium, and low, based on their counts in the whole Pile dataset1. Here, we calculate the frequency of each token and sort them in descending order of frequency. Then, we categorize the top tokens that cover \(80\%\) of the dataset as tokens of high frequency, those that cover the extra \(15\%\) (i.e., \(80\%-95\%\)) of the dataset as
\begin{table}
\begin{tabular}{c|c c c} \hline \hline Frequency Bucket & high & medium & low \\ \hline PPL & 4.323 & 13.541 & 15.517 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The average perplexity (PPL) for tokens in different frequency buckets.
tokens of medium frequency, and the remaining \(5\%\) as tokens of low frequency. As shown in Table 1, for the tokens of high frequency, LLaMA derives a much lower average perplexity (\(4.394\)) than those of medium (\(13.891\)) or low (\(15.814\)) frequency. That confirms our assumption: token imbalance can lead to the bias of learning difficulties. More explicitly, those frequent and easy-to-learn tokens (i.e., classes) might overwhelm the model and make it neglect the infrequent and difficult-to-learn ones during training Lin et al. (2017). Therefore, we emphasize that the latter kinds of tokens should be given more attention during language model pretraining.
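As a rough illustration of the bucketing procedure, a minimal sketch is given below (illustrative only; `token_counts`, mapping token ids to their counts in the corpus, is assumed to be precomputed, and the thresholds follow the \(80\%/95\%\) split described above). Per-bucket perplexity can then be obtained by aggregating the token-level losses within each bucket.

```python
from collections import Counter

def bucket_by_frequency(token_counts: Counter, high_cov: float = 0.80, med_cov: float = 0.95):
    """Assign each token to a high/medium/low-frequency bucket by cumulative corpus coverage."""
    total = sum(token_counts.values())
    buckets, covered = {}, 0.0
    for token, count in token_counts.most_common():  # most frequent tokens first
        covered += count / total
        if covered <= high_cov:
            buckets[token] = "high"
        elif covered <= med_cov:
            buckets[token] = "medium"
        else:
            buckets[token] = "low"
    return buckets
```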
It is a straightforward idea to use the notable Focal Loss Lin et al. (2017) from the field of object detection as an alternative to the prevalent Cross-Entropy Loss for the next token prediction. This modification aims to intensify the language model's focus on the infrequent and difficult-to-learn tokens. Focal Loss is a dynamically scaled version of the Cross-Entropy Loss, where the scaling factor decreases as the predicted probability w.r.t the ground-truth token increases. Specifically, Focal Loss decreases the weights of the easy-to-learn tokens, as their predicted probabilities are higher, and meanwhile increases the weights of the difficult-to-learn ones, as their predicted probabilities are lower. In that way, it compels the language model to pay more attention to difficult-to-learn tokens.
Nevertheless, Focal Loss Lin et al. (2017) only takes into account the probability w.r.t the ground-truth token when assessing its learning difficulty, and is intuitively designed for the multi-class classification problem where an object is only associated with a single class label. Indeed, in language model pretraining, when predicting the next token given the previous ones, there might exist multiple valid tokens besides the ground-truth one. This makes predicting the next token more like a multi-label classification problem, where an object can be associated with multiple class labels. For example, as shown in Figure 1, given the previous tokens "I like playing ", there are multiple valid next tokens, like "basketball", "football", "golf", etc. Suppose the target training token sequence is "I like playing basketball". As the valid tokens would divide up almost the total probability (i.e., 1.0), the ground-truth token "basketball" would be given a smaller probability (e.g., 0.18). Then Focal Loss would treat "basketball" for the position as a difficult-to-learn token. However, as all the other valid tokens are also correct for the position in the view of language modeling, only allowing "basketball" to be predicted is unsuitable. Thus, the learning difficulty assessed by Focal Loss for "basketball" is imperfect in such a multi-label classification case.
In this paper, we propose a new loss function termed Information Entropy Loss (InfoEntropy Loss) to better enable a language model to pay more attention to the difficult-to-learn tokens in such multi-label classification cases. We observe that when a next target token is easy-to-learn, the minor valid tokens would divide up almost the total probability while others are associated with very low probabilities, resulting in a low information entropy of the predicted probability distribution over the vocabulary. On the contrary, if a next token is difficult-to-learn, the predicted probability distribution would be more uniform, resulting in a higher information entropy. Therefore, instead of relying on the single probability of the ground-truth token as Focal Loss does, the proposed InfoEntropy Loss uses the information entropy of the predicted probability distribution for assessing learning difficulties, which can better handle cases with multiple valid tokens. Then, tokens exhibiting high-entropy, possibly being difficult-to-learn, will be assigned increased weights during language model pretraining.
To validate the effectiveness of the proposed InfoEntropy Loss, we train three different-sized models on the Pile dataset Gao et al. (2021). Experimental results indicate that InfoEntropy Loss steadily outperforms Focal Loss and Cross-Entropy Loss on downstream benchmarks.
Our contributions can be summarized as follows.
* We highlight the bias of learning difficulties in generative language models, which is mainly caused by the inherent token imbalance in textual training data.
Figure 1: An example where predicting the next token is more like a multi-label classification problem.
* We propose a new loss function termed InfoEntropy Loss to enhance Focal Loss for mitigating the bias of learning difficulties.
* We validate the effectiveness of the proposed InfoEntropy Loss with extensive experiments. Experimental results show that it consistently outperforms Focal Loss and Cross-Entropy Loss.
## 2 Related Works
### Language Models
Language Models are statistical models that aim to maximize the likelihood of the training sequences of tokens (Touvron et al., 2023). Early language models are based on the statistics of \(n\)-grams (Bahl et al., 1983; Katz, 1987; Kneser and Ney, 1995). Then the focus has shifted toward neural-network-based models. Recurrent Neural Networks (Mikolov et al., 2010) and their variants, e.g., LSTMs (Graves, 2013), have been successful in this regard. Those models are capable of learning complex patterns in textual data and have achieved remarkable results in various language modeling tasks.
Recently, Transformers are commonly used as the backbone network for language models. Representative works include BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), GPT-2 (Radford et al., 2019), UniLM (Dong et al., 2019), and T5 (Raffel et al., 2020), etc. Since the advent of GPT-3 (Brown et al., 2020) with 175 billion parameters, which achieves outstanding performance in various downstream tasks, the research landscape has increasingly pivoted towards large generative language models. Notable works like Gopher (Rae et al., 2021), Pythia (Biderman et al., 2023), PaLM (Chowdhery et al., 2022), GLaM (Du et al., 2022), OPT (Zhang et al., 2022) and LLaMA (Touvron et al., 2023) have also been proposed.
However, previous works do not consider the bias of learning difficulties among tokens, which is mainly caused by the inherent token imbalance in the textual training data. They probably overlook some difficult-to-learn but informative tokens during model training. To tackle that, in this paper we introduce InfoEntropy Loss, aiming to lead generative language models to pay more attention to those tokens.
### Class Imbalance
Class Imbalance refers to a highly skewed distribution of classes in the training data, which means that the number of instances in some classes is significantly higher than those in the other classes (Yang and Xu, 2020). A commonly used solution is to perform data re-sampling, where the minority classes are up-sampled (Chawla et al., 2002; Ando and Huang, 2017; Pouyanfar et al., 2018; Shen et al., 2016), and the majority classes are down-sampled (Lee et al., 2016; Buda et al., 2018). Other works (Cui et al., 2019; Dong et al., 2019; Lin et al., 2017) have also proposed enhanced loss functions to mitigate issues caused by class imbalance, e.g., Focal Loss.
In language modeling, to mitigate the mentioned bias of learning difficulties caused by the inherent token imbalance, one may simply refer to the data re-sampling method. However, data re-sampling at the token level, i.e., up-sampling infrequent tokens and down-sampling frequent ones, will probably break the semantics of training texts. Meanwhile, re-sampling at the coarse-grained sentence/paragraph/document/domain level will equally increase/decrease the number of both kinds of tokens, and thus cannot well tackle the token imbalance.
Therefore, we consider enhancing the loss function to alleviate the bias of learning difficulties among tokens for generative language models, enabling them to pay more attention to those difficult-to-learn but informative tokens. Firstly, we attempted to use the notable Focal Loss. However, since predicting the next token in generative language models is more like a multi-label classification problem as analyzed before, Focal Loss struggles to give suitable scaling factors for cases with multiple valid next tokens. To tackle that, we introduce the InfoEntropy Loss.
## 3 Method
### Preliminaries
**Language Model Pretraining.** As mentioned before, a generative language model is generally trained via predicting the next token (i.e., sub-word/word/phrase), one by one, based on the previous ones for each training text, aiming to maximize the likelihood. Formally, given a training text \(T\) consisting of \(n\) tokens, i.e., \(T=[t_{1},\dots,t_{i-1},t_{i},\dots,t_{n}]\), when predicting a target token \(t_{i}\), the generative language model takes the previous ones \(\mathbf{t}=[t_{1},t_{2},...,t_{i-1}]\) as input, and then generates a probability distribution \(\mathbf{p}\) over the vocabulary as output. In nearly all implementations,
the Cross-Entropy loss is employed as the loss function, to maximize the predicted probability \(\mathbf{p}_{t_{i}}\) w.r.t the ground-truth token \(t_{i}\). Considering that the recent state-of-the-art deep language models (LM) predominantly leverage the Transformer architecture Vaswani et al. (2017), the training loss \(\mathcal{L}_{CE}\) of the generative language model can be formulated as follows.
\[\mathcal{L}_{CE}=-\log(\mathbf{p}_{t_{i}}) \tag{1}\] \[s.t.,\quad\mathbf{p}=\text{softmax}(W\mathbf{H}_{i-1}^{last}) \tag{2}\] \[\mathbf{H}^{last}=\text{Transformer}(\text{Embedding}(\mathbf{t})) \tag{3}\]
Here, \(\mathbf{H}^{last}\) denotes the hidden states of the last layer of the Transformer architecture, which consists of the hidden states w.r.t the previous tokens \(\mathbf{t}=[t_{1},t_{2},...,t_{i-1}]\), i.e., \(\mathbf{H}^{last}=[\mathbf{H}_{1}^{last},\mathbf{H}_{2}^{last},\ldots, \mathbf{H}_{i-1}^{last}]\). With \(\mathbf{H}_{i-1}^{last}\), a linear projection layer \(W\) is introduced to derive the predicted probability distribution \(\mathbf{p}\) over the vocabulary, with a softmax operation.
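As a concrete reference for Equations (1)-(3), a minimal PyTorch sketch of the per-position Cross-Entropy term is shown below (illustrative names only; `hidden` stands for the last-layer states \(\mathbf{H}^{last}\), `W` for the projection layer, and the Transformer forward pass of Eq. (3) is assumed to have already produced `hidden`).

```python
import torch
import torch.nn.functional as F

def next_token_ce_loss(hidden: torch.Tensor, W: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    # hidden: (num_positions, d_model) last-layer hidden states H^last
    # W:      (vocab_size, d_model) output projection; targets: (num_positions,) next tokens
    logits = hidden @ W.T                              # (num_positions, vocab_size)
    log_probs = F.log_softmax(logits, dim=-1)          # log p over the vocabulary (Eq. 2)
    log_p_t = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)  # log p_{t_i}
    return -log_p_t.mean()                             # Eq. (1), averaged over positions
```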
**Focal Loss for Classification.** Focal Loss is originally proposed for object detection to address the issue of extreme foreground-background class imbalance encountered during the training of one-stage object detectors Lin et al. (2017). Focal Loss can lead a classification model to concentrate more on a sparse set of difficult-to-learn classes and prevent the abundance of easy-to-learn classes from overwhelming the model during training. Actually, Focal Loss is an extension of Cross-Entropy Loss, with an extra dynamic scaling factor, as formulated below.
\[\mathcal{L}_{FL}^{0}=-(1-p)^{\gamma}\log(p) \tag{4}\]
Here, \(p\) is the predicted probability w.r.t the ground-truth class, and \(\gamma\) is a hyperparameter with \(\gamma\geq 0\). It can be seen that when \(\gamma=0\), Focal Loss would degenerate to Cross-Entropy Loss. As \(p\) decreases, i.e., getting more-difficult-to-learn, the dynamic scaling factor \((1-p)^{\gamma}\) increases, thus giving more attention (i.e., higher weights) to the difficult-to-learn classes.
### Focal Loss for Language Models
Generative language models are commonly trained on massive textual corpora, which exhibit inherent token imbalance as revealed by Zipf's law Piantadosi (2014). Such an imbalance of tokens can lead to two primary challenges: 1) Training efficiency becomes sub-optimal, since the large number of easy-to-learn tokens (i.e., classes) provides only marginal learning signals Lin et al. (2017). 2) The training process can be overwhelmed by the large proportion of frequent and easy-to-learn tokens, and thus pays insufficient attention to the infrequent, difficult-to-learn but informative tokens, which might lead to performance degradation.
As revealed in Equation (1), training a generative language model is essentially a classification problem. Therefore to mitigate the bias of learning difficulties caused by the inherent token imbalance, Focal Loss can be applied. Specifically, we can use the Focal Loss as a substitute for the Cross-Entropy Loss in Equation (1) to train a generative language model as follows.
\[\mathcal{L}_{FL}=-(1-\mathbf{p}_{t_{i}})^{\gamma}\log(\mathbf{p}_{t_{i}}) \tag{5}\]
Here, the dynamic scaling factor \((1-\mathbf{p}_{t_{i}})^{\gamma}\) is derived based on the predicted probability \(\mathbf{p}_{t_{i}}\) of the to-be-learned token \(t_{i}\). Similarly, as the probability \(\mathbf{p}_{t_{i}}\) decreases (i.e., being more difficult to learn), the scaling factor \((1-\mathbf{p}_{t_{i}})^{\gamma}\) increases correspondingly. Therefore, more-difficult-to-learn tokens will receive higher loss weights.
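For illustration, Equation (5) admits the following minimal PyTorch sketch; it is not our exact pretraining code, and whether the scaling factor is detached from the gradient is an implementation choice left open here.

```python
import torch.nn.functional as F

def focal_lm_loss(logits, targets, gamma=1.0):
    """Token-level Focal Loss (Eq. 5): scale -log p_{t_i} by (1 - p_{t_i})^gamma."""
    log_probs = F.log_softmax(logits, dim=-1)                        # (batch, seq, vocab)
    target_logp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    target_p = target_logp.exp()                                     # p_{t_i}
    return (-((1.0 - target_p) ** gamma) * target_logp).mean()
```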
### Proposed InfoEntropy Loss
However, as illustrated in Figure 1 and analyzed before, in language model pretraining, predicting the next token is more like a multi-label classification problem. When there are multiple valid next tokens for a given sequence of previous tokens, the learning difficulty assessed by Focal Loss is imperfect.
To tackle that, we propose InfoEntropy Loss, which derives a dynamic scaling factor from the information entropy of the predicted probability distribution \(\mathbf{p}\) over the vocabulary, instead of the single probability \(\mathbf{p}_{t_{i}}\) used by Focal Loss. InfoEntropy Loss is naturally suited to cases with multiple valid tokens. It is inspired by the following observations: 1) when a next token is easy to learn, the few valid tokens divide up almost all of the probability mass (which sums to 1.0) while the others receive very low probabilities (i.e., \(\mathbf{p}\) is highly concentrated), resulting in a low information entropy; 2) when a next token is difficult to learn, the predicted probability distribution is more uniform, resulting in a higher information entropy.
Specifically, InfoEntropy Loss can be formulated as follows in language model pretraining.
\[\mathcal{L}_{IL}=-(1-\sum_{j}\mathbf{p}_{j}\log(\mathbf{p}_{j}))^{\gamma}\log (\mathbf{p}_{t_{i}}) \tag{6}\]
Here, \(-\sum_{j}\mathbf{p}_{j}\log(\mathbf{p}_{j})\geq 0\) is the information entropy of the predicted probability distribution \(\mathbf{p}\) over the vocabulary. Note that when \(\mathbf{p}\) is a uniform distribution, i.e., \(p_{j}=\frac{1}{N}\) with \(N\) being the vocabulary size for all \(j\), the information entropy reaches its upper bound \(\log(N)\). Therefore, the dynamic scaling factor \((1-\sum_{j}\mathbf{p}_{j}\log(\mathbf{p}_{j}))\) is bounded in \([1,1+\log(N)]\). When a next token is difficult to learn, the corresponding higher information entropy results in a higher scaling factor, and thus InfoEntropy Loss increases the loss weights for such tokens. Conversely, InfoEntropy Loss decreases the loss weights for easy-to-learn tokens, according to their lower information entropies.
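A corresponding sketch of Equation (6) is given below; like the previous snippet, it is an illustrative token-level implementation under a standard PyTorch setup rather than the exact pretraining code.

```python
import torch.nn.functional as F

def infoentropy_lm_loss(logits, targets, gamma=1.0):
    """InfoEntropy Loss (Eq. 6): scale -log p_{t_i} by (1 + H(p))^gamma,
    where H(p) = -sum_j p_j log p_j is the entropy of the predicted
    distribution over the vocabulary."""
    log_probs = F.log_softmax(logits, dim=-1)                        # (batch, seq, vocab)
    probs = log_probs.exp()
    entropy = -(probs * log_probs).sum(dim=-1)                       # H(p) >= 0
    target_logp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # The scaling factor (1 + H(p)) is bounded in [1, 1 + log(vocab_size)].
    return (-((1.0 + entropy) ** gamma) * target_logp).mean()
```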
## 4 Experiments
We train three generative language models of different capacities, i.e., 468M, 1.2B, and 6.7B parameters, on the open-source Pile dataset Gao et al. (2021) as Biderman et al. (2023); Xie et al. (2023); Carlini et al. (2023), and make comparisons among different loss functions.
### The Pile dataset
The Pile dataset is a public large-scale corpus for language model pretraining, which has over 825GB English texts across 22 domains. For experiments, we tokenize it using the remarkable LLaMA tokenizer Touvron et al. (2023) with a 32k-sized vocabulary. As the number of tokens changes with a new tokenizer, we follow Xie et al. (2023) to re-calculate the sampling weight for each domain. Specifically, we chunk the dataset into sequences of 1,024 tokens, and then for each domain, we multiply its corresponding number of sequences with its domain-specific epochs reported in Gao et al. (2021). Finally, we normalize all the multiplication results to obtain the sampling weights listed in Table 3.
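The re-weighting procedure can be summarised by the short sketch below; the per-domain sequence counts shown are placeholders rather than the actual Pile statistics, and the real weights are those listed in Table 3.

```python
def domain_sampling_weights(num_sequences, domain_epochs):
    """Multiply each domain's sequence count by its domain-specific epochs,
    then normalise over all domains (both arguments: dicts keyed by domain)."""
    raw = {d: num_sequences[d] * domain_epochs[d] for d in num_sequences}
    total = sum(raw.values())
    return {d: v / total for d, v in raw.items()}

# Placeholder example, not real Pile statistics.
weights = domain_sampling_weights(
    num_sequences={"ArXiv": 1_000_000, "Github": 600_000, "Wikipedia(en)": 700_000},
    domain_epochs={"ArXiv": 2.0, "Github": 1.0, "Wikipedia(en)": 3.0},
)
```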
### Experimental setup
We train three generative language models with 468M, 1.2B, and 6.7B parameters, respectively. Specifically, the architectures of the 468M-parameter and the 1.2B-parameter models, including the dimensionality of hidden states, the number of layers, etc., are identical to those of the 410M-parameter and the 1.0B-parameter models outlined in Biderman et al. (2023). The minor differences in parameter sizes are attributed to the variations of vocabulary size in the embedding layer. As for the 6.7B-parameter model, its architecture is identical to LLaMA-7B Touvron et al. (2023). The corresponding hyperparameters for each model can be found in Table 2. Following LLaMA Touvron et al. (2023), we use the AdamW optimizer Loshchilov and Hutter (2019) with a learning rate of \(3.0e^{-4}\), \(2k\) warmup steps, and a cosine learning rate decay schedule. Following Lin et al. (2017), the hyperparameter \(\gamma\) is set as \(1.0\) for both Focal Loss and the proposed InfoEntropy Loss, unless explicitly stated otherwise. Due to the computational budget and following the pretraining settings of Xie et al. (2023), all models are pretrained with 100B tokens.
Following Touvron et al. (2023); Brown et al. (2020); Rae et al. (2021); Hoffmann et al. (2022), we primarily evaluate all models on tasks of commonsense reasoning, closed-book question answering, and massive multitask language understanding. For fair comparisons, we utilize the open-source pipeline lm-evaluation-harness Gao et al. (2021) for evaluation, as in Biderman et al. (2023); Dettmers and Zettlemoyer (2023).
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline model size & dimension & \(n\) heads & \(n\) layers & learning rate & batch size & seq length \\ \hline
468M & 1024 & 16 & 24 & \(3.0e^{-4}\) & 1024 & 1024 \\
1.2B & 2048 & 8 & 16 & \(3.0e^{-4}\) & 1024 & 1024 \\
6.7B & 4096 & 32 & 32 & \(3.0e^{-4}\) & 2048 & 2048 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Model sizes, architectures, and optimization hyper-parameters.
\begin{table}
\begin{tabular}{l l l l} \hline \hline & Weights & & Weights \\ \hline ArXiv & 0.1997 & OpenSubtitles & 0.0239 \\ BookCorpus2 & 0.0100 & OpenWebText2 & 0.1735 \\ Books3 & 0.1640 & PhilPapers & 0.0073 \\ DM Mathematics & 0.0502 & Pile-CC & 0.1551 \\ Enron Emails & 0.0030 & PubMed Abstracts & 0.0536 \\ EuroPar & 0.0156 & PubMed Central & 0.2823 \\ FreeLaw & 0.0895 & StackExchange & 0.1027 \\ Github & 0.0962 & USPTO Backgrounds & 0.0586 \\ Gutenberg(PG-19) & 0.0481 & Ubuntu IRC & 0.0229 \\ HackerNews & 0.0117 & Wikipedia(en) & 0.1121 \\ NIH ExPorter & 0.0047 & YoutubeSubtitles & 0.0151 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Sampling weights on the Pile dataset.
### Experimental Results
**Common Sense Reasoning** Following Touvron et al. (2023); Brown et al. (2020); Rae et al. (2021); Hoffmann et al. (2022), we employ 8 widely used benchmark datasets for the evaluation of common sense reasoning, including BoolQ Clark et al. (2019), HellaSwag Zellers et al. (2019), LAMBADA Paperno et al. (2016), OpenBookQA Mihaylov et al. (2018), PIQA Bisk et al. (2020), SIQA Sap et al. (2019), StoryCloze Mostafazadeh et al. (2016), and Winogrande Sakaguchi et al. (2020). We report model accuracy for zero-shot and few-shot settings in Table 4, following Touvron et al. (2023); Brown et al. (2020).
We can observe that the proposed InfoEntropy Loss substantially outperforms both Cross-Entropy Loss and Focal Loss on different setups with different model capacities. Specifically, for models with 468M and 6.7B parameters on 0/1/5-shot settings, InfoEntropy Loss consistently achieves superior performance to both compared baselines. As for the 1.2B-parameter model, although InfoEntropy Loss yields slightly lower average performance than Focal Loss, it still delivers the highest performance on 6 out of the 8 datasets and steadily outperforms Cross-Entropy Loss on most datasets.
These results clearly demonstrate the effectiveness of the proposed InfoEntropy Loss. We attribute it to that InfoEntropy Loss compels language models to allocate more attention to those difficult-to-learn yet informative tokens during pretraining, which mitigates the bias of learning difficulties among tokens. Moreover, the consistent performance superiority of InfoEntropy Loss over Focal Loss also validates that, relying on the information entropy of the predicted probability distribution over the vocabulary to assess the learning difficulties of tokens is more reasonable.
**Closed Book Question Answering** Following Brown et al. (2020); Touvron et al. (2023), for the task of closed book question answering, we evaluate the performance of the largest 6.7B-parameter models with different loss functions on two benchmark datasets, i.e., TriviaQA Joshi et al. (2017) and WebQuestions Berant et al. (2013).
\begin{table}
\begin{tabular}{l l|c c c c c c c|c} \hline \hline & \multicolumn{1}{c}{BoolQ} & \multicolumn{1}{c}{HellaSwag} & LAMBADA & OpenBookQA & PIQA & SIQA & StoryCloze & Winogrande & Avg \\ \hline \multicolumn{12}{l}{468\(M\)} \\ \hline \multirow{4}{*}{0-shot} & Cross-Entropy Loss & 57.52 & 40.73 & 39.10 & 30.60 & 67.08 & 40.79 & 63.55 & 53.75 & 49.14 \\ & Focal Loss & 58.35 & 41.17 & 40.09 & **32.80** & 67.25 & **41.91** & 63.07 & 51.70 & 49.54 \\ & InfoEntropy Loss & **59.57** & **41.27** & **41.34** & 30.00 & **67.25** & 41.61 & **63.60** & **54.78** & **49.93** \\ \hline \multirow{4}{*}{1-shot} & Cross-Entropy Loss & 54.22 & 40.86 & 37.16 & 30.40 & **67.28** & 41.66 & 62.69 & 53.04 & 48.48 \\ & Focal Loss & 53.64 & **41.04** & 37.88 & **32.20** & 67.14 & **44.27** & 62.16 & 52.64 & 48.87 \\ & InfoEntropy Loss & **55.23** & 40.90 & **38.75** & 32.00 & 67.68 & 43.35 & **63.23** & **55.88** & **49.63** \\ \hline \multirow{4}{*}{5-shot} & Cross-Entropy Loss & 50.89 & 41.06 & 36.27 & 28.80 & **67.68** & 43.39 & 62.37 & 50.99 & 47.68 \\ & Focal Loss & 48.10 & **41.80** & 38.50 & **31.40** & 67.19 & **46.01** & **63.01** & 52.09 & 48.51 \\ & InfoEntropy Loss & **52.29** & 41.53 & **39.05** & 28.80 & 67.41 & 45.39 & 62.85 & **54.06** & **48.92** \\ \hline \multicolumn{12}{l}{_1.2B_} \\ \hline \multirow{4}{*}{0-shot} & Cross-Entropy Loss & 55.96 & 47.48 & 45.76 & 32.20 & 69.64 & 42.43 & 65.47 & 54.54 & 51.69 \\ & Focal Loss & **62.02** & 47.61 & 46.87 & 33.00 & 69.59 & **42.02** & 65.63 & 55.01 & **52.72** \\ & InfoEntropy Loss & 56.94 & **47.64** & **47.37** & **33.80** & **70.13** & 41.91 & **66.06** & **55.96** & 52.48 \\ \hline \multirow{4}{*}{1-shot} & Cross-Entropy Loss & 54.71 & 47.37 & 42.13 & 34.40 & 69.42 & 44.78 & 65.26 & **56.27** & 51.79 \\ & Focal Loss & **62.35** & **47.41** & 43.88 & 32.60 & 69.15 & 45.04 & 65.42 & 54.85 & **52.59** \\ & InfoEntropy Loss & 54.95 & 47.39 & **45.08** & **34.00** & **70.13** & **45.04** & **65.58** & 54.85 & 52.13 \\ \hline \multirow{4}{*}{5-shot} & Cross-Entropy Loss & 55.72 & 47.74 & 41.55 & 33.00 & 69.86 & 45.04 & 66.11 & 55.64 & 51.83 \\ & Focal Loss & **62.17** & **48.00** & 42.87 & 32.00 & 69.75 & 45.60 & 66.01 & 56.20 & **52.82** \\ & InfoEntropy Loss & 55.38 & 47.78 & **45.00** & **34.00** & **70.13** & **45.26** & **66.22** & **56.83** & 52.70 \\ \hline \multicolumn{12}{l}{_6.7B_} \\ \hline \multirow{4}{*}{0-shot} & Cross-Entropy Loss & **62.14** & 58.91 & 55.54 & 34.40 & 73.61 & 44.06 & 70.66 & 61.40 & 57.59 \\ & Focal Loss & 59.72 & 59.59 & 55.64 & **36.60** & 73.94 & 43.04 & 70.12 & **61.88** & 57.57 \\ & InfoEntropy Loss & 60.89 & **59.63** & **57.73** & 35.20 & **73.99** & **44.06** & **71.25** & 61.01 & **57.97** \\ \hline \multirow{4}{*}{1-shot} & Cross-Entropy Loss & 59.24 & 58.68 & 53.48 & 37.00 & 73.99 & 47.90 & 70.60 & 60.69 & 57.70 \\ & Focal Loss & 58.53 & 59.23 & 52.59 & 35.60 & **74.27** & 48.06 & 69.96 & 59.91 & 57.27 \\ \cline{1-1} & InfoEntropy Loss & **60.46** & **59.56** & **55.35** & **38.00** & 73.29 & **48.57** & **70.87** & **61.01** & **58.39** \\ \hline \multirow{4}{*}{5-shot} & Cross-Entropy Loss & 61.28 & 59.44 & 54.01 & 37.00 & **74.16** & 49.03 & 71.30 & 63.06 & 58.66 \\ \cline{1-1} & Focal Loss & 57.98 & **60.10** & 55.91 & 36.80 & 74.05 & 50.0 & 70.44 & 62.90 & 58.52 \\ \cline{1-1} & InfoEntropy Loss & **62.20** & 60.06 & **58.16** & **37.80** & 73.61 & **50.67** & **71.67** & **63.30** & **59.68** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Zero-shot and few-shot performance (i.e., accuracy) of models at different scales on common sense reasoning benchmarks.
We report the exact match performance for the zero-shot and few-shot settings in Table 5.
It can be seen that language models trained with the proposed InfoEntropy Loss achieve superior performance across most settings. Compared with Cross-Entropy Loss, InfoEntropy Loss achieves substantial performance improvement in 5 out of 6 settings. Particularly, on TriviaQA, InfoEntropy Loss achieves a maximum performance improvement of 3.55% (0-shot) over Cross-Entropy Loss. Compared with Focal Loss, InfoEntropy Loss also exhibits consistent superiority. Notably, in the 0-shot setting on TriviaQA, InfoEntropy Loss outperforms Focal Loss by 4.17%.
**Massive Multitask Language Understanding** We further validate the effectiveness of the proposed InfoEntropy Loss on the MMLU (Massive Multitask Language Understanding) benchmark Hendrycks et al. (2021). MMLU consists of multiple-choice questions covering 57 subjects, including STEM, social sciences, humanities, etc., and serves as a standard benchmark for evaluating the multitask capability of pretrained language models. Following LLaMA Touvron et al. (2023), we evaluate the 6.7B-parameter models in the 5-shot setting. Among the candidate choices, we select the one with the highest probability normalized by the number of tokens.
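The length-normalised scoring rule can be sketched as follows; the snippet assumes a HuggingFace-style causal LM interface and is a simplified illustration of what lm-evaluation-harness does, not the harness code itself.

```python
import torch
import torch.nn.functional as F

def choice_score(model, prompt_ids, choice_ids):
    """Log-likelihood of a candidate answer, normalised by its token count."""
    input_ids = torch.cat([prompt_ids, choice_ids]).unsqueeze(0)
    logits = model(input_ids).logits[0]              # (seq_len, vocab)
    log_probs = F.log_softmax(logits[:-1], dim=-1)   # position i predicts token i+1
    answer_logp = log_probs[-len(choice_ids):].gather(-1, choice_ids.unsqueeze(-1))
    return answer_logp.sum().item() / len(choice_ids)

# The choice with the highest normalised score is taken as the prediction.
```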
As shown in Table 6, InfoEntropy Loss exhibits superior performance on average. Compared with Cross-Entropy Loss, InfoEntropy Loss obtains performance improvements of 0.32%, 1.28%, and 0.91% for the fields of STEM, Humanities, and Other, respectively. For the field of Social Sciences, the performance decline may be attributed to InfoEntropy Loss tending to treat Social Sciences samples as easier-to-learn ones; we intend to study this in depth in future work. Compared with Focal Loss, InfoEntropy Loss also yields superior performance on all fields except STEM. All the results above further demonstrate the effectiveness and soundness of the proposed InfoEntropy Loss.
## 5 Analyses
We conduct further experiments to provide more insightful analyses on the proposed InfoEntropy Loss.
### Impact of \(\gamma\)
We aim to discern the performance change of the proposed InfoEntropy Loss on language models with different values of \(\gamma\), i.e., the hyperparameter in Equation (6). It's worth noting that when \(\gamma\) is set to 0, InfoEntropy Loss is functionally equivalent to Cross-Entropy Loss. As \(\gamma\) increases, the language model becomes more focused on the difficult-to-learn tokens, i.e., those with higher information entropy. Here we conduct a grid search for \(\gamma\) on language models of various scales (i.e., 468M, 1.2B, and 6.7B parameters), and use the average performance in 5-shot learning for the Common Sense Reasoning task that covers the most benchmarks as the evaluation metric.
As shown in Figure 2, when \(\gamma\) increases from \(0\) to \(5\) for the 468M-parameter model or increases from \(0\) to \(2\) for the 1.2B-parameter/6.7B-parameter models, the performances of InfoEntropy Loss consistently surpass those of Cross-Entropy Loss. The results clearly demonstrate that the performance of InfoEntropy Loss is not very sensitive to the setting of the hyperparameter \(\gamma\), which shows practical applicability. As expected, when \(\gamma\) increases to a relatively large value, the performance of InfoEntropy Loss declines, because too much attention is given to the difficult-to-learn tokens, and the easy-to-learn ones get overlooked as a result.
\begin{table}
\begin{tabular}{l|c c c} \hline \hline & \begin{tabular}{c} Cross-Entropy \\ Loss \\ \end{tabular} & \begin{tabular}{c} Focal \\ Loss \\ \end{tabular} &
\begin{tabular}{c} InfoEntropy \\ Loss \\ \end{tabular} \\ \hline STEM & 29.59 & **29.99** & 29.91 \\ Social Sciences & **29.64** & 27.57 & 28.07 \\ Humanities & 27.00 & 27.35 & **28.28** \\ Other & 29.94 & 29.34 & **30.85** \\ \hline Avg & 29.38 & 28.90 & **29.68** \\ \hline \hline \end{tabular}
\end{table}
Table 6: The 5-shot learning performance of 6.7B-parameter models on MMLU.
\begin{table}
\begin{tabular}{l|c c c} \hline \hline & 0-shot & 1-shot & 5-shot \\ \hline _TriviaQA_ & & & \\ \hline Cross-Entropy Loss & 17.09 & 21.98 & 26.33 \\ Focal Loss & 16.47 & 23.03 & 27.31 \\ InfoEntropy Loss & **20.64** & **23.42** & **28.75** \\ \hline _WebQuestions_ & & & \\ \hline Cross-Entropy Loss & **5.22** & 9.79 & 14.17 \\ Focal Loss & 4.53 & 9.60 & **14.62** \\ InfoEntropy Loss & 5.02 & **9.89** & 14.57 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Zero-shot and few-shot exact match performance of 6.7B-parameter models on closed-book question-answering benchmarks.
### Perplexity on the Pile Validation Set
Here we further discuss how the proposed InfoEntropy Loss affects the perplexity of pretrained language models on the Pile validation set.
Table 7 reports the perplexity of the largest 6.7B-parameter models trained with \(\gamma\) increasing from \(0\) to \(5\) for InfoEntropy Loss. Among them, \(\gamma=0\) is equivalent to Cross-Entropy Loss. Notably, when \(\gamma=0.5\), the perplexity obtained by InfoEntropy Loss is lower than that by Cross-Entropy Loss (i.e., \(\gamma=0\)). However, as we increase \(\gamma\), the perplexity of InfoEntropy Loss also increases and becomes higher than that of Cross-Entropy Loss. The increase of perplexity can be attributed to: 1) the measurement of perplexity is directly related to the exponentiation of Cross-Entropy Loss, and thus optimizing Cross-Entropy Loss during training is consistent with optimizing the perplexity; 2) the objective function of InfoEntropy Loss somewhat diverges from that of perplexity due to the dynamic scaling factor, and thus optimizing it may lead to an increase of perplexity.
To thoroughly inspect how the perplexity increases, we conduct a fine-grained analysis of perplexity at the token level. Similar to the perplexity analysis before, we group all tokens into three learning-difficulty levels based on their corresponding frequencies, i.e., easy, medium, and difficult. Specifically, we categorize the top tokens that cover \(80\%\) of the Pile dataset as easy, those that cover the extra \(15\%\) (i.e., \(80\%-95\%\)) of the Pile dataset as medium, and the remaining \(5\%\) as difficult. The average perplexity for tokens in each learning-difficulty level, obtained by Cross-Entropy Loss and the proposed InfoEntropy Loss with \(\gamma=1\), is shown in Figure 3. It can be seen that, compared with Cross-Entropy Loss, InfoEntropy Loss results in an unnoticeable increase in perplexity for the easy tokens, while for the medium or the difficult tokens, InfoEntropy Loss substantially reduces their perplexity with a noticeable decline. Given that easy tokens dominate the dataset, the overall increase in perplexity is expected. However, the substantial decline of perplexity for the medium or the difficult tokens further demonstrates the effectiveness of InfoEntropy Loss in guiding language models to focus more on infrequent, difficult-to-learn but informative tokens and thereby mitigating the bias of learning difficulties during training.
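The grouping rule used here can be sketched as follows; the function is illustrative and assumes an array of per-token corpus counts indexed by vocabulary id.

```python
import numpy as np

def difficulty_buckets(token_counts):
    """Assign each vocabulary token to 'easy' / 'medium' / 'difficult' based on
    the cumulative corpus coverage of tokens sorted from most to least frequent
    (top 80% -> easy, next 15% -> medium, remaining 5% -> difficult)."""
    token_counts = np.asarray(token_counts, dtype=np.float64)
    order = np.argsort(token_counts)[::-1]                  # most frequent first
    coverage = np.cumsum(token_counts[order]) / token_counts.sum()
    buckets = np.empty(len(token_counts), dtype=object)
    buckets[order[coverage <= 0.80]] = "easy"
    buckets[order[(coverage > 0.80) & (coverage <= 0.95)]] = "medium"
    buckets[order[coverage > 0.95]] = "difficult"
    return buckets
```

The per-group perplexity is then the exponential of the mean negative log-likelihood over the tokens falling in each bucket.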
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline \(\gamma\) & 0 & 0.5 & 1 & 2 & 5 \\ \hline PPL & 5.473 & 5.467 & 5.492 & 5.608 & 6.317 \\ \hline \hline \end{tabular}
\end{table}
Table 7: The perplexity (PPL) on the Pile validation set under different \(\gamma\) values for InfoEntropy Loss. Among them, \(\gamma=0\) equals Cross-Entropy Loss.
Figure 3: The average perplexity (i.e., PPL) for tokens in different learning-difficulty levels.
Figure 2: The performance of InfoEntropy Loss and Cross-Entropy Loss in 5-shot learning with different \(\gamma\) values.
## 6 Conclusions
In this paper, we present our observation of the bias of learning difficulties among tokens during language model pretraining, mainly caused by the inherent token imbalance in textual training data. We initially introduce Focal Loss as an attempt to mitigate the bias of learning difficulties. However, we find that considering the single probability of the ground-truth next token for assessing its learning difficulty is unreasonable, especially in cases with multiple valid next tokens. To tackle that, we propose InfoEntropy Loss, which assesses the learning difficulty of a token by taking into account the global information entropy of the predicted probability distribution over the vocabulary. Extensive experiments demonstrate that, compared with both Cross-Entropy Loss and Focal Loss, the proposed InfoEntropy Loss achieves superior performance for various downstream tasks in zero-shot and few-shot learning settings.
## 7 Limitations
In the proposed InfoEntropy Loss, we scale the Cross-Entropy Loss based on information entropy to lead a generative language model to allocate more attention to difficult-to-learn tokens, which yields superior performance. Yet the effectiveness of InfoEntropy Loss may be influenced by the quality of the training data. Specifically, as noisy data samples are generally outliers, the predicted probability distributions on them would typically exhibit high information entropy. Thus, too many noisy samples may make InfoEntropy Loss amplify their corresponding loss weights too much, causing negative impacts on the model performance. We leave the investigation of how noisy data samples affect InfoEntropy Loss to our future research.
|
2308.03723 | Dimensionality Reduction for Improving Out-of-Distribution Detection in
Medical Image Segmentation | Clinically deployed segmentation models are known to fail on data outside of
their training distribution. As these models perform well on most cases, it is
imperative to detect out-of-distribution (OOD) images at inference to protect
against automation bias. This work applies the Mahalanobis distance post hoc to
the bottleneck features of a Swin UNETR model that segments the liver on
T1-weighted magnetic resonance imaging. By reducing the dimensions of the
bottleneck features with principal component analysis, OOD images were detected
with high performance and minimal computational load. | McKell Woodland, Nihil Patel, Mais Al Taie, Joshua P. Yung, Tucker J. Netherton, Ankit B. Patel, Kristy K. Brock | 2023-08-07T16:58:48Z | http://arxiv.org/abs/2308.03723v2 | # Dimensionality Reduction for Improving Out-of-Distribution Detection in Medical Image Segmentation
###### Abstract
Clinically deployed segmentation models are known to fail on data outside of their training distribution. As these models perform well on most cases, it is imperative to detect out-of-distribution (OOD) images at inference to protect against automation bias. This work applies the Mahalanobis distance post hoc to the bottleneck features of a Swin UNETR model that segments the liver on T1-weighted magnetic resonance imaging. By reducing the dimensions of the bottleneck features with principal component analysis, OOD images were detected with high performance and minimal computational load.
Keywords:Out-of-distribution detection, Swin UNETR, Segmentation, Mahalanobis distance, Principal component analysis
## 1 Introduction
Deep learning (DL) models struggle to generalize to information that was not present while the model was being trained [1]. This problem is exacerbated in the medical field, as collecting large-scale, annotated, and diverse training datasets is challenging due to the cost of labeling, the presence of rare cases, and patient privacy. Even models that have demonstrated high performance during external validation may fail when presented with novel information after clinical deployment. This is illustrated by the work of Anderson et al. [2]: on test data, 96% of their DL-based liver segmentations were deemed clinically acceptable, with the majority of their autosegmentations being preferred over manual segmentations. The two images that the model failed on contained cases that were not present during training - namely, ascites and a stent.
While autosegmentations are typically evaluated manually and corrected, if need be, by a clinician before they are used in patient treatment, the main concern is automation bias, where physicians may become too reliant on model output. Protecting against automation bias is especially important for clinically deployed segmentation models, as these segmentations influence the amount of radiation that a patient will receive during treatment. In a review study, Goddard et al. found that automation bias in healthcare
can be reduced by displaying low confidence values for recommendations that are likely incorrect [3].
Displaying confidence values that correspond to the likelihood that a DL-based prediction is correct is a non-trivial problem as DL models are inherently poorly calibrated [4]. While some methods attempt to calibrate the model [5, 6, 7, 8], others define an out-of-distribution (OOD) detection score [9, 10]. OOD detection operates under the assumption that the model is unlikely to perform well on data outside of the model's training distribution. While these methods perform well in theoretical settings, they often do not perform well in real-world scenarios [11]. This is especially true when these techniques are applied to medical images [12]. In fact, Cao et al. found that no method performed better than random guessing when applied to unseen medical conditions or artifacts [13].
The Mahalanobis distance is one of the most utilized OOD detection methods due to its simplicity [9]. One of the major reasons it struggles in practice is due to the curse of dimensionality. As it is a distance, it loses meaning in high-dimensional spaces and thus cannot be applied to images directly. In the classification domain, great success was achieved when the Mahalanobis distance was applied to embeddings extracted from pretrained transformers [14]. Similarly, Gonzalez et al., applied the Mahalanobis distance to embeddings extracted from an nnU-Net for medical image segmentation [15]. The major problem is that embeddings from 3D segmentation models are an order of magnitude larger than the embeddings from 2D classification models.
We build upon previous work by applying the Mahalanobis distance to principal component analysis (PCA) projected embeddings extracted from a pretrained Swin Transformer-based segmentation model. The main contributions of our paper are as follows:
1. Applying the Mahalanobis distance to a Swin UNETR model for OOD detection.
2. Reducing the dimensionality of bottleneck features using PCA before the Mahalanobis distance is applied.
3. Proposing a successful OOD detection pipeline that has minimal computation load and can be applied post hoc to any U-Net-based segmentation model.
## 2 Methods
### Data
The training dataset comprised 337 T1-weighted liver magnetic resonance imaging exams (MRIs). The T1-weighted images came from the Duke Liver MRI [16], AMOS [17, 18], and CHAOS [19, 20] datasets. 27 T1-weighted liver MRIs from The University of Texas MD Anderson Cancer Center were employed for testing the segmentation model.
To protect against automation bias, OOD images should be defined as images that differ enough from the training distribution that the segmentation model is likely to fail on them. As such, the model's test data is split into in-distribution (ID) and OOD
categories based on model performance. Specifically, an image is labelled ID if it has a Dice similarity coefficient (DSC) of at least 95%. Accordingly, an image is labelled OOD if it has a DSC under 95%. This follows Hendrycks et al., in the classification domain, who defined OOD data to be data that was incorrectly classified [21].
An additional 23 T1-weighted liver MRIs were acquired from The University of Texas MD Anderson Cancer Center for the OOD evaluation. All these images were flagged by physicians for poor image quality in a clinical setting. 14 images contained motion artifacts, 7 contained truncation artifacts, and the remaining two images each contained a single artifact: one magnetic susceptibility and one spike noise. None had associated ground truth liver segmentations.
All test images were retrospectively acquired under an approved internal review board protocol. All images were preprocessed by reorientation to Right-Anterior-Superior (RAS), resampling to a uniform spacing (1.5, 1.5, 2.0) mm, and normalization using each image's mean and standard deviation. Example test images are shown in Figure 1.
Figure 1: Sample images from the test dataset. (Top) Images that were determined to be OOD by poor performance of the segmentation algorithm. (Middle) Images that were flagged for poor image quality in the clinic. (Bottom) Images that were determined to be ID by good performance of the segmentation algorithm.
### Segmentation Model
A Swin UNETR model [22, 23] was trained to segment the T1-weighted MRIs. The encoder portion of the model was pretrained using self-distilled masked imaging (SMIT) [24] utilizing 3,610 unlabeled head and neck computed tomography scans (CTs) from the Beyond the Cranial Vault (BTCV) Segmentation Challenge dataset [25]. The official Swin UNETR codebase1, built on top of the Medical Open Network for AI (MONAI) [26], was utilized for the pretrained weights and training. All default parameters were used, with no hyperparameter searches performed. Models were trained on a single node of a Kubernetes cluster with eight A100 graphic processing units. The final model was selected according to the weights with the highest validation DSC. It was evaluated on test images with the DSC and the Hausdorff distance.
Footnote 1: github.com/The-Veeraraghavan-Lab/SMIT
### Out-of-Distribution Detection
The Mahalanobis distance \(D\) measures the distance between a point \(x\) and a distribution with mean \(\mu\) and covariance matrix \(\Sigma\), \(D^{2}=(x-\mu)^{T}\Sigma^{-1}(x-\mu)\)[27]. Lee et al. first proposed using the Mahalanobis distance for OOD detection by using it to calculate the distance between test images embedded by a classifier and a Gaussian distribution fit to class-conditional embeddings of the training images [9]. Similarly, Gonzalez et al. used the Mahalanobis distance for OOD detection in segmentation networks by extracting embeddings from the encoder of a nnU-Net [15]. As distances in high dimensions are subject to the curse of dimensionality, both sets of authors decreased the dimensionality of the embeddings through average pooling. Lee et al. suggested pooling the embeddings such that the height and width dimensions are singular [9].
In our work, encoded representations of all images were extracted from the bottleneck features of the Swin UNETR models. Images were resized to (256, 128, 128) to ensure a uniform size of the encoded representations (768, 8, 4, 4). A Gaussian distribution was fit on the encodings of the training data. The Mahalanobis distance between the embedding of each test image and the Gaussian distribution was calculated. All calculations were performed on an Intel(r) Xeon(r) E5-2698 v4 @ 2.20GHz central processing unit.
As distances in extremely high-dimensional spaces often lose meaning [28], experiments were performed on the effect of decreasing the size of the bottleneck features with average pooling, principal component analysis (PCA), uniform manifold approximation and projection (UMAP) [29], and t-distributed stochastic neighbor embeddings (t-SNE) [30]. For average pooling, features were pooled in both 2- and 3-dimensions with kernel size \(j\) and stride \(k\) for \((j,k)\in\{(2,1),(2,2),(3,1),(3,2),(4,1)\}\). For PCA, each embedding was flattened and standardized. For both PCA and UMAP, a hyperparameter search was performed over the number of components \(n\) such that \(n\in\{2,4,8,16,32,64,128,256\}\). Average pooling was performed using the PyTorch Python package and PCA and t-SNE were performed using the scikit-learn Python
package. UMAP was performed using the UMAP Python package [31]. Outside of the hyperparameter searches mentioned above, default parameters were used.
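For illustration, the core of the pipeline described above (flatten the bottleneck features, standardize, project with PCA, then fit a Gaussian to the training projections and score test images with the Mahalanobis distance) can be sketched as follows; function and variable names are illustrative, and the released repository should be consulted for the exact implementation.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

def fit_mahalanobis(train_feats, n_components=2):
    """train_feats: bottleneck features of shape (N, 768, 8, 4, 4)."""
    X = train_feats.reshape(len(train_feats), -1)
    scaler = StandardScaler().fit(X)
    pca = PCA(n_components=n_components).fit(scaler.transform(X))
    Z = pca.transform(scaler.transform(X))
    mu = Z.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(Z, rowvar=False))
    return scaler, pca, mu, cov_inv

def mahalanobis_score(feats, scaler, pca, mu, cov_inv):
    Z = pca.transform(scaler.transform(feats.reshape(len(feats), -1))) - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", Z, cov_inv, Z))
```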
OOD detection was evaluated with the area under the receiver operating characteristic curve (AUROC), area under the precision-recall curve (AUPR), and false positive rate at 75% true positive rate (FPR75). For all calculations, OOD was considered as the positive class. As both UMAP and t-SNE are stochastic, the average was taken over 10 iterations of the algorithms. Our code can be found at github.com/mckellwoodland/dimen_reduce_mahal.
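The three metrics can be computed from the ID and OOD scores as sketched below, with OOD treated as the positive class; this is an illustrative helper rather than the evaluation code of the released repository.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score, roc_curve

def ood_metrics(scores_id, scores_ood):
    """Return AUROC, AUPR, and the false positive rate at 75% true positive rate."""
    y = np.concatenate([np.zeros(len(scores_id)), np.ones(len(scores_ood))])
    s = np.concatenate([scores_id, scores_ood])
    auroc = roc_auc_score(y, s)
    aupr = average_precision_score(y, s)
    fpr, tpr, _ = roc_curve(y, s)
    fpr75 = fpr[np.searchsorted(tpr, 0.75)]   # first operating point reaching 75% TPR
    return auroc, aupr, fpr75
```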
## 3 Results
The Swin UNETR achieved a mean DSC of 96% and a mean Hausdorff distance of 14 mm. 13 images had a DSC over 95% and were thus classified as ID. The remaining 14 images were classified as OOD. Figure 3 displays visual examples of the segmentation quality of the model.
The calculation of the Mahalanobis distance, as originally defined, was computationally intractable. The inverse of the covariance matrix took \(\sim\)72 minutes to compute (Table 1). Once saved, it takes 75.5 GB to store the inverse. Once the matrix is in memory, it takes \(\sim\)2 seconds for each Mahalanobis distance calculation. The average (\(\pm\)SD) Mahalanobis distance on training data was 1203.02 (\(\pm\)24.66); whereas, the average (\(\pm\)SD) Mahalanobis distance on test data was \(1.47\times 10^{9}\) (\(\pm\)8.66\(\times 10^{8}\)) and \(1.52\times 10^{9}\)(\(\pm\)9.10\(\times\)10\({}^{8}\)) for ID and OOD images respectively. The high dimensionality of the calculation resulted in poor OOD detection performance (Table 1).
Reducing the dimensionality of the embeddings not only made the Mahalanobis distance calculation more computationally feasible, but also improved the OOD detection (Table 1). While the search over average pooling hyperparameters proved to be volatile (Table S1 in the Supplementary Material), the best results were achieved with 3D pooling that resulted in the height and width dimensions being singular, supporting the suggestion of Lee et al. [9].
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline
**Experiment** & **AUROC** & **AUPR** & **FPR75** & **Computation Time** \\ \hline Baseline & 0.51 & 0.60 & 0.85 & 4327.4080 \\ \hline AveragePool3D(3, 2) & 0.76 & 0.84 & 0.38 & 0.1450 \\ \hline AveragePool3D(4, 1) & 0.70 & 0.75 & 0.31 & 0.5721 \\ \hline UMAP(2), n=10 & 0.79 (\(\pm\)0.05) & 0.85 (\(\pm\)0.04) & 0.36 (\(\pm\)0.13) & 0.0002 (\(\pm\)0.0000) \\ \hline t-SNE, n=10 & 0.82 (\(\pm\)0.05) & 0.87 (\(\pm\)0.04) & 0.27 (\(\pm\)0.14) & 0.0003 (\(\pm\)0.0003) \\ \hline \end{tabular}
\end{table}
Table 1: The AUROCs (\(\uparrow\)), AUPRs (\(\uparrow\)), and FPR75s (\(\downarrow\)) for the OOD detection. \(\uparrow\) means that higher is better, whereas \(\downarrow\) means lower is better. Computation time is the time it takes to compute the inverse of the covariance matrix in seconds. Bold text denotes the best performance. The baseline experiment is the Mahalanobis distance calculated on the original bottleneck features. AveragePool2D(j, k) represents embeddings that were 2D average pooled with kernel size j and stride k. Similar notation applies for 3D embeddings. UMAP(n) and PCA(n) represent the respective dimensionality reduction technique being performed with n components. Only the best performing average pooling, UMAP, PCA results were included in this table. Refer to Tables S1-S3 in the Supplementary Material for the results of the full hyperparameter searches.
Figure 2: Visualization of embeddings with two components. (Top) PCA projections. (Middle) t-SNE projections. (Bottom) UMAP projections. Projections for all data are in the left column. Projections for the training data by class are in the right column. The black ellipses are the covariance ellipses (one and two standard deviations) for the training distribution.
The best results were achieved with PCA (Table 1). Reducing the dimensionality to only two principal components was sufficient to achieve 90% AUROC, 93% AUPR, and 8% FPR75. Figure 2 demonstrates that most in-distribution test images were mapped within one standard deviation of the mean of the training distribution (the one image that was not contained a motion artifact); whereas, most OOD test images were mapped outside of the first standard deviation. The four OOD images mapped within one standard deviation had an average DSC of 88%; whereas, the OOD images mapped outside of one standard deviation had an average DSC of 79%. Additionally, 18 out of the 23 images that contained MRI artifacts were mapped outside of the first standard deviation. Furthermore, the two principal components visually cluster the different distributions within the training distribution. The 26 images from the AMOS dataset that were mapped outside of the second standard deviation were blurry. While UMAP and t-SNE did not perform as well as PCA, they still clustered the datasets in the training distribution and mapped OOD data outside of the first standard deviation. Notably, both UMAP and t-SNE mapped the data with imaging artifacts far from the training distribution. Figure 3 displays several images with the lowest and highest Mahalanobis distances for PCA with 2 components. Low distances were associated with high segmentation performance.
Figure 3: Segmentations of images that contain low and high Mahalanobis distances (calculated on the PCA-projected embeddings with two components). Examples of images with low Mahalanobis distances are in the top row; whereas, examples with high distances are in the bottom row. Green is the ground truth segmentation; red is the automated segmentation. MD refers to the Mahalanobis distance. A higher DSC corresponds with better segmentation performance, whereas a higher distance corresponds to the image being OOD.
## 4 Conclusion
In this work, the Mahalanobis distance was applied to dimensionality-reduced bottleneck features of a Swin UNETR. The resulting pipeline was able to embed an entire 3D medical image into only two principal components. These two components were sufficient to visually cluster datasets drawn from different institutions. Additionally, only two components were required for detecting images that the segmentation model performed poorly on with high performance. In a clinical setting, a warning that the model likely failed could be added to images with large Mahalanobis distances. This would protect against automation bias, which would in turn protect patients whose scans have irregular attributes. The entire pipeline could be added post hoc to any trained segmentation model and would incur minimal computational costs.
## 5 Acknowledgments
Research reported in this publication was supported in part by the Tumor Measurement Initiative through the MD Anderson Strategic Initiative Development Program (STRIDE), the Helen Black Image Guided Fund, the Image Guided Cancer Therapy Research Program at The University of Texas MD Anderson Cancer Center, a generous gift from the Apache Corporation, and the National Cancer Institute of the National Institutes of Health under award numbers R01CA221971, P30CA016672, and R01CA235564. |
2310.09722 | A singlet-triplet hole-spin qubit in MOS silicon | Holes in silicon quantum dots are promising for spin qubit applications due
to the strong intrinsic spin-orbit coupling. The spin-orbit coupling produces
complex hole-spin dynamics, providing opportunities to further optimize spin
qubits. Here, we demonstrate a singlet-triplet qubit using hole states in a
planar metal-oxide-semiconductor double quantum dot. We observe rapid qubit
control with singlet-triplet oscillations up to 400 MHz. The qubit exhibits
promising coherence, with a maximum dephasing time of 600 ns, which is enhanced
to 1.3 us using refocusing techniques. We investigate the magnetic field
anisotropy of the eigenstates, and determine a magnetic field orientation to
improve the qubit initialisation fidelity. These results present a step forward
for spin qubit technology, by implementing a high quality singlet-triplet
hole-spin qubit in planar architecture suitable for scaling up to 2D arrays of
coupled qubits. | S. D. Liles, D. J. Halverson, Z. Wang, A. Shamim, R. S. Eggli, I. K. Jin, J. Hillier, K. Kumar, I. Vorreiter, M. Rendell, J. H. Huang, C. C. Escott, F. E. Hudson, W. H. Lim, D. Culcer, A. S. Dzurak, A. R. Hamilton | 2023-10-15T03:39:29Z | http://arxiv.org/abs/2310.09722v1 | # A singlet-triplet hole-spin qubit in MOS silicon.
###### Abstract
Holes in silicon quantum dots are promising for spin qubit applications due to the strong intrinsic spin-orbit coupling. The spin-orbit coupling produces complex hole-spin dynamics, providing opportunities to further optimize spin qubits. Here, we demonstrate a singlet-triplet qubit using hole states in a planar metal-oxide-semiconductor double quantum dot. We demonstrate rapid qubit control with singlet-triplet oscillations up to 400 MHz. The qubit exhibits promising coherence, with a maximum dephasing time of 600 ns, which is enhanced to 1.3 \(\mu\)s using refocusing techniques. We investigate the magnetic field anisotropy of the eigenstates, and determine a magnetic field orientation to improve the qubit initialisation fidelity. These results present a step forward for spin qubit technology, by implementing a high quality singlet-triplet hole-spin qubit in planar architecture suitable for scaling up to 2D arrays of coupled qubits.
\({}^{1}\)School of Physics, University of New South Wales, Sydney NSW 2052, Australia.
\({}^{2}\)Department of Physics, University of Basel, Klingelbergstrasse 82, CH-4056 Basel, Switzerland.
\({}^{3}\) RIKEN, 2-1, Hirosawa, Wako-shi, Saitama 351-0198, Japan.
\({}^{4}\)School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney NSW 2052, Australia.
\({}^{5}\)Diraq, Sydney NSW, Australia.
+
Footnote †: \({\dagger}\) Corresponding author - [email protected]
## Introduction
Spin qubits in group IV materials are promising for semiconductor-based quantum computation applications [1, 2, 3]. The most straightforward spin qubit is the single-spin qubit (Loss-DiVincenzo qubit [1]), which encodes information using the \(\left|\uparrow\right\rangle\) and \(\left|\downarrow\right\rangle\) spin states. An alternative is the singlet-triplet qubit, which uses the singlet (S = \((\left|\uparrow\downarrow\right\rangle-\left|\downarrow\uparrow\right\rangle) /\sqrt{2}\)) and unpolarised-triplet (\(T_{0}=(\left|\uparrow\downarrow\right\rangle+\left|\downarrow\uparrow \right\rangle)/\sqrt{2}\)) states of two exchange-coupled spins [4, 5, 6, 7]. While using two spins rather than one increases the fabrication footprint and the complexity of the eigenstates, singlet-triplet qubits offer advantages over single-spin qubits [8]. Singlet-triplet qubits can be operated at very low magnetic fields (<5 mT), which enables compatibility with magnetic-sensitive components such as superconducting resonators [9, 10, 11]. Additionally, singlet-triplet qubits can be controlled using lower frequency control pulses, with spectral components generally not exceeding 100 MHz. This reduces the cost and complexity of control hardware compared with single-spin qubits, which typically require GHz phase-controlled tones. Removing these GHz control tones has advantages since the power they dissipate can degrade qubit quality [12, 13, 14, 15]. Further, developing singlet-triplet qubits provides technological advances, since singlet-triplet systems form the building blocks for novel devices including exchange-only [16] and resonant-exchange qubits [17, 18]. Hole-spins in Group IV materials offer significant opportunities for use as fast coherent spin qubits [19, 20, 21, 22, 23] because of the strong intrinsic spin-orbit coupling, which is not present for electron spins. The intrinsic spin-orbit coupling allows rapid electrical manipulation of hole-spin qubits, without the need for additional bulky device features such as micro-magnets or ESR strip-lines. Further, the g-factor [24, 25, 26, 27] and spin-orbit coupling [28, 29] for holes are both tunable, providing a wide range of in-situ control over hole-qubits. In addition, hole-spins have the potential for enhanced coherence times due to suppressed hyperfine coupling [30], and the potential for configuring decoherence sweet-spots by tuning the spin-orbit interaction [31, 32, 33, 34].
Despite the opportunities holes offer, there are currently only limited studies of hole-based singlet-triplet qubits. Recently, a hole-spin singlet-triplet qubit was demonstrated in Ge [35], where the strong spin-orbit coupling resulted in non-trivial qubit dynamics [36]. However, spin-orbit effects in Si devices differ from those in Ge devices [37, 38]; therefore, similar investigations in silicon would provide a valuable understanding of silicon hole-spin effects. Recent experiments in silicon FinFETs have revealed an anisotropic exchange coupling for
holes due to the spin-orbit interaction [39], which may provide unique functionalities for hole-spin singlet-triplet qubits. Indeed, theoretical predictions have suggested that the non-trivial relationship between spin-orbit coupling and the site-dependent g-tensors may allow hole-spin singlet-triplet qubits to avoid leakage errors [33]. However, to date, there have been no demonstrations of a singlet-triplet qubit using holes in silicon.
In this work we demonstrate a hole-spin singlet-triplet qubit formed in a planar MOS silicon double quantum dot. The planar structure provides a platform suitable for scaling up to the large arrays of coupled qubits needed for quantum circuits and error correction [40, 41, 42]. Additionally, the planar layout enables the straightforward implementation of a charge sensor. Using this charge sensor, we identify the exact hole occupation of the double dot, which is critical for experimental reproducibility and detailed theoretical modelling of this system. In addition to characterising the key parameters of the qubit, we perform an investigation into the anisotropy of the two-hole eigenstates. By comparing the experimental results with a model that includes spin-orbit coupling and anisotropic site dependent g-tensors, we identify key features in the eigenstates that allow the improvement of the initialisation fidelity and reduction in the readout errors.
### Device and operating regime
The hole-spin singlet-triplet qubit is formed using a planar-silicon double quantum dot device, fabricated using industrially compatible CMOS techniques. Figure 1a) shows a model 3D cross section of the double quantum dot region. Multilayer palladium gates define the double quantum dot with P1 and P2 operating as plunger gates, while J\({}_{g}\) provides in-situ control of the interdot tunnel coupling \(t_{c}\)[43]. The device employs ambipolar charge sensing [44], with an adjacent \(n\)MOS SET allowing the absolute charge occupation of each quantum dot to be determined.
Figure 1b) shows a stability diagram measured using the charge sensor. We perform all measurements in the (2,8)-(1,9) configuration, which is equivalent to a (2,0)-(1,1) spin system due to orbital shell filling [45].
Figure 1: **Device operating point and energy spectrum**. a) A 3D model of the device, showing a cross section through the double quantum dot region. A full SEM image is shown in Extended Figure E1a. The tunneling to the P1 dot was fast (<100 ns), while loading onto P2 dot is slow (>40 \(\mu\)s). This asymmetry in tunneling allows latched readout. b) Stability diagram of the (2,8)-(1,9) transition. The labels (N,M) indicate the number of holes in the P1 and P2 dot respectively. The colour scale is sensor current (I\({}_{\mathrm{z}}\)) in pA, and a full stability diagram down to (0,0) is presented in Extended Data Figure E1b. Zero detuning (\(\epsilon=0\)) is defined as the (2,8) and (1,9) charge degeneracy point, and positive detuning when the spins are separated into (1,9). Full details of the key points are discussed in the methods. c) Eigenstates calculated using the singlet-triplet Hamiltonian \(H_{ST}\) defined in the methods. The coloured arrows indicate the energy transitions observed in the preceding experiments, and \(\Delta_{ST}\) indicates the size of the avoided crossing between \(|S\rangle\) and \(|T_{\pm}\rangle\). Orange dashed lines show how the (1,9) states evolve in absence of spin-orbit coupling. d) The results of a spin funnel experiment with the pulse sequence shown in inset. The spin-funnel experiment was performed by initialising \(|S_{2}\)s\({}_{\mathrm{z}}\)s\({}_{\mathrm{z}}\)s\({}_{\mathrm{}}\), followed by a rapid pulse to a point along the detuning axis (\(\epsilon\)). At each \(B_{x}\) and \(\epsilon\) the state was allowed to evolve for a fixed separation time, \(\tau_{\mathrm{z}}\)=100 ns, followed by a pulse to the readout point. The change in sensor signal (\(\Delta I\)) due to this pulse indicates the likelihood of the returned state being singlet (low \(\Delta I\)) or triplet (high \(\Delta I\)) (described in more detail in the methods). Red arrows indicate the \(\Delta g\)-driven oscillations. e) The same pulse procedure as in d) except the magnetic field is fixed at B\({}_{x}\)=5 mT and we investigate the effect of varying the separation time (\(\tau_{S}\)) at each detuning (\(\epsilon\)). f) The corresponding FFT at each detuning. Transparent lines indicated the best fit of the observed energy splittings to the eigenstates of \(H_{ST}\), and the colours correspond to the transitions indicated in c).
We initialised singlet states by dwelling deep in (2,8) where \(\left|S_{(2,8)}\right\rangle\) is the lowest energy eigenstate (point I). Manipulation of the state was performed by pulsing to a position along the detuning axis (\(\epsilon\)) and dwelling there for a variable time \(\tau_{s}\). Readout of the state was performed by pulsing to point R (following the dashed trajectory), where latched Pauli-Spin-Blockade [46, 47] readout allowed identification of either the blocked triplet or the unblocked singlet states based on the average sensor current (see methods and Extended Data Figure E1).
### System Hamiltonian and eigenenergies
To model the two-spin system we consider a 5\(\times\)5 Hamiltonian, \(H_{ST}\), which includes Zeeman, spin-orbit and orbital terms. The full details of the two-hole singlet-triplet Hamiltonian are provided in Supplementary section S1. For the Zeeman Hamiltonian, we include independent 3x3 symmetric g-tensors for the left and right dot, \(\overleftarrow{\mathcal{G}}_{L}\) and \(\overleftarrow{\mathcal{G}}_{R}\) respectively. Hole-spins in silicon are known to have strongly anisotropic g-tensors [24, 26, 39], where variations in the g-tensor are produced by non-uniform strain [26, 48], spin-orbit coupling [49] and differences in the confinement profile between the two dots [50]. Hence, we do not assume that \(\overleftarrow{\mathcal{G}}_{L}\) and \(\overleftarrow{\mathcal{G}}_{R}\) are correlated or share the same principal spin-axes. For the spin-orbit Hamiltonian we include a spin-orbit vector, \(\vec{t}_{so}=(t_{x},t_{y},t_{z})\), parameterising the effect of spin-orbit coupling in the laboratory reference frame indicated in Figure 1a).
In Figure 1c) we plot the eigenenergies of \(H_{ST}\) as a function of detuning. At negative detuning the eigenstates are the \(\left(\left|T_{+}\right\rangle,\left|T_{0}\right\rangle,\left|T_{-}\right\rangle,\left|S\right\rangle,\left|S_{(2,8)}\right\rangle\right)\) basis states. At large positive detuning the eigenstates evolve into the \(\left|S_{2,8}\right\rangle\) state and the four two-spin states \(\left(\left|\uparrow\uparrow\right\rangle,\left|\uparrow\downarrow\right\rangle,\left|\downarrow\uparrow\right\rangle,\left|\downarrow\downarrow\right\rangle\right)\), which are defined by the sum or difference of the Zeeman energy in the two dots. The \(\left|\uparrow\downarrow\right\rangle\) and \(\left|\downarrow\uparrow\right\rangle\) eigenstates have energy splitting given by
\[E_{ST_{0}}=\sqrt{J(\epsilon,t_{c})^{2}+\Delta E_{z}^{2}} \tag{1}\]
where
\[J(\epsilon) = \sqrt{\frac{\epsilon^{2}}{4}+2t_{c}^{2}}-\frac{\epsilon}{2}\] \[\Delta E_{z} = |\Delta g^{*}|\,\mu_{B}\,|\vec{B}|\]
\(\epsilon\) is the detuning energy, \(t_{c}\) is the interdot tunnel coupling, \(\Delta g^{*}\) is the difference in the effective g-factors for the applied magnetic field vector \(\vec{B}\) (see Supplement S1), and \(\mu_{B}\) is the Bohr magneton. Since strong spin-orbit coupling results in an anisotropy in \(\left|\Delta g^{*}\right|\) with respect to magnetic field orientation, we expect \(E_{ST_{0}}\) to exhibit a non-trivial anisotropy [39]. An avoided crossing occurs between the \(\left|S_{2,8}\right\rangle\) and \(\left|T_{\pm}\right\rangle\), with the amplitude of the avoided crossing (\(\Delta_{ST_{\pm}}\)) determined by the interplay between the spin-orbit vector and the difference in the projection of the g-tensors for the given field orientation [36].
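As an illustrative numerical sketch of Equation (1), the expected \(\left|S\right\rangle\)-\(\left|T_{0}\right\rangle\) splitting can be evaluated as a function of detuning using values representative of the fits reported below (\(t_{c}\approx 9\) \(\mu\)eV and effective g-factors of approximately 0.8 and 1.2 at \(B_{x}=5\) mT); the constants and detuning range are for illustration only.

```python
import numpy as np

mu_B = 57.88          # Bohr magneton in ueV/T
t_c = 9.0             # interdot tunnel coupling in ueV (representative value)
B = 5e-3              # magnetic field magnitude in T
delta_g = abs(1.2 - 0.8)

eps = np.linspace(0.0, 2000.0, 500)                      # detuning in ueV
J = np.sqrt(eps**2 / 4.0 + 2.0 * t_c**2) - eps / 2.0     # exchange energy J(eps)
dEz = delta_g * mu_B * B                                 # Zeeman energy difference
E_ST0 = np.sqrt(J**2 + dEz**2)                           # |S>-|T0> splitting in ueV

f_GHz = E_ST0 / 4.136   # oscillation frequency, using h ~ 4.136 ueV/GHz
```

At large positive detuning the exchange term vanishes and the splitting saturates at \(\Delta E_{z}\), while near zero detuning it is dominated by the exchange energy.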
Figure 1d) shows the charge sensor response of a spin-funnel experiment used to characterise the singlet-triplet system [5]. The spin-funnel experiment was performed by allowing a singlet state to time-evolve for \(\tau_{s}\) = 100 ns at each \(B_{x}\) and \(\epsilon\). The change in sensor signal (\(\Delta I\)) then measures the likelihood that the final state is either singlet (low \(\Delta I\)) or triplet (high \(\Delta I\)). A clear funnel edge is visible in \(\Delta I\) when the detuning point coincides with the \(\left|S\right\rangle\) and \(\left|T_{\pm}\right\rangle\) avoided crossing [5]. In addition, on the positive detuning side of the funnel edge we see oscillations that result from \(\Delta g\)-driven \(\left|S\right\rangle\leftrightarrow\)\(\left|T_{0}\right\rangle\) oscillations (red arrows).
In Figure 1e) we demonstrate the time evolution of the singlet at each detuning. The experimental procedure is the same as in Figure 1d); however, here we varied the separation time (\(\tau_{S}\)) at each detuning (\(\epsilon\)) and held the magnetic field constant at \(B_{x}\) = 5 mT. Figure 1f) shows the FFT of \(\Delta I\) at each \(\epsilon\), revealing three clear oscillation frequencies, each with a distinct detuning dependence. Each oscillation frequency results from mixing between the three lowest eigenstates at the separation detuning (\(\epsilon\)). The lowest frequency (red) results from oscillations between the \(\left|S\right\rangle\leftrightarrow\)\(\left|T_{0}\right\rangle\) states, the middle frequency (blue) from oscillations between \(\left|S\right\rangle\leftrightarrow\)\(\left|T_{\pm}\right\rangle\), and the highest frequency (green) from oscillations between \(\left|T_{0}\right\rangle\leftrightarrow\)\(\left|T_{\pm}\right\rangle\). The corresponding transitions are indicated by coloured arrows in Figure 1c), and a full description is provided in the methods.
We fit the observed frequencies in Figure 1f) to the eigenenergies of the singlet-triplet Hamiltonian \(H_{ST}\) and extract key parameters of the two-hole system. Transparent lines in Figure 1f) show the best fit, demonstrating good agreement between the observed and theoretical eigenenergies. Based on the best fit we extract \(t_{c}=9\pm 1\)\(\mu\)eV and two effective g-factors of \(0.8\pm 0.1\) and \(1.2\pm 0.1\) for \(B_{x}\).
### Anisotropic g-tensors and spin-orbit coupling
To characterise the key parameters of the two-hole system we investigate the effect of magnetic field orientation on the two-hole eigenenergies. Figure 2a) shows \(\Delta I\) as a function of \(\tau_{S}\) for a range of magnetic field orientations in the x-z plane and Figure 2b) shows the resulting FFT of \(\Delta I\). Figures 2c-d) repeat the same experiment for a rotation of the magnetic field through the x-y plane. Clear anisotropy with
respect to magnetic field orientation can be observed, which results from the interplay between spin-orbit coupling and the orientation of the g-tensors. The visibility of the higher frequency (blue and green) oscillations also shows a strong dependence on the magnetic field orientation. In particular, the FFT amplitude of the higher frequency (blue and green) oscillations is suppressed for \(B_{x}\) and enhanced for \(B_{y}\) and \(B_{z}\).
The 3x3 g-tensors for each dot and the spin-orbit orientation can be extracted by fitting the data in Figure 2a-d) to the eigenenergies of \(H_{ST}\). The fitting procedure is discussed in Supplementary sections S3-S5. The transparent lines in Figures 2b) and d) indicate the frequency of the respective FFT peaks for the optimal fit parameters. For the optimal fit we find (\(t_{so}^{x},t_{so}^{y},t_{so}^{z}\)) = (-37 \(\pm\) 2, 107 \(\pm\) 4, 0 \(\pm\) 20) neV, giving \(|\vec{t}_{so}|\) = 0.12 \(\mu\)eV. Notably, \(\vec{t}_{so}\) is oriented in-plane with the 2DHG, consistent with expectations for heavy holes in planar silicon [51]. Further, the in-plane spin-orbit vector has components in both \(t_{so}^{x}\) and \(t_{so}^{y}\), indicating that a combination of Rashba (oriented perpendicular to the double dot axis) and Dresselhaus (oriented parallel to the double dot axis) spin-orbit components are present [52]. The full g-tensors are presented in the Extended Data Figure E4. We find that the orientations of the g-tensor principal axes for the left and right dots are slightly misaligned, which may result from differences in confinement profile or non-uniform strain between the left and right dot. The observed misalignment of the g-tensor principal axes suggests that accurate modelling of multiple quantum dot systems in silicon should incorporate site-dependent g-tensors with differing principal axes.
The anisotropy in the FFT amplitudes in Figure 2a-d) is caused by the probability of transitioning from \(|S_{2,8}\rangle\) into \(|T_{-}\rangle\) during the pulse from (2,8) to (1,9). When pulsing from (2,8) to (1,9) the \(\Delta_{ST_{\pm}}\) avoided crossing causes the initial \(|S_{2,8}\rangle\) state to be split between \(|S\rangle\) and \(|T_{-}\rangle\) with a ratio determined by the Landau-Zener transition probability [53] (see methods). Larger \(\Delta_{ST_{\pm}}\) favours \(|T_{-}\rangle\) states, while smaller \(\Delta_{ST_{\pm}}\) favours \(|S\rangle\). In Figure 2e) we plot the energy spectrum of the two-hole system for various in-plane magnetic field orientations. The magnetic field orientation strongly influences the magnitude of \(\Delta_{ST_{\pm}}\) and the position in detuning (\(\epsilon_{\Delta}\)) at which the
Figure 2: **Eigenenergy anisotropy with respect to magnetic field orientation**. a-b) Shows the sensor signal and resulting FFT when using the pulse sequence given in Fig 1d) (Initialise-Separate-Readout). Detuning is fixed at \(\epsilon\) = 1.9 meV and a 10 mT magnetic field is rotated by 180\({}^{\circ}\) through the x-z plane. c-d) Shows the same experiment for a rotation through the sample x-y plane. Transparent solid lines show the optimal fit of \(H_{ST}\) to the experimental data. A zoom-in of the first 60 ns of c) is shown in Extended Data E2a), highlighting the multiple frequencies present. See Supplement section S2.B for the full 360\({}^{\circ}\) data set in the x-y plane. e) Shows the eigenenergies when a \(|B|\) = 10 mT magnetic field is applied at \(\theta\) = 0\({}^{\circ}\) (purple), 110\({}^{\circ}\) (brown), and 180\({}^{\circ}\) (cyan) in the x-y plane, respectively. The y-axis ticks are in 0.5 \(\mu\)eV, and x-axis ticks are separated by 1 meV. The energy splittings corresponding to the three FFT peaks in d) are indicated by the red, blue and green vertical lines. The size and location of the \(|S\rangle\)-\(|T_{-}\rangle\) avoided crossing vary with field orientation (black dashed circle), resulting in anisotropy in the Landau-Zener transition probability between \(|S_{2,8}\rangle\)\(\rightarrow\)\(|T_{-}\rangle\) during the ramp-in/ramp-out. The black dashed lines indicate the splitting of the initial state when pulsing across the \(\Delta_{ST}\) avoided crossing. f) The solid black line is the calculated probability of the \(|S_{2,8}\rangle\) loading into \(|T_{-}\rangle\) during the separation pulse (\(P_{T_{-}}\), left axis). Blue markers indicate the amplitude of the \(|S\rangle\)\(\leftrightarrow\)\(|T_{-}\rangle\) FFT peak in d) (transparent blue). Both data are plotted as a function of in-plane magnetic field angle. The trend in \(P_{T_{-}}\) correlates with the amplitude of the \(|S\rangle\)\(\leftrightarrow\)\(|T_{-}\rangle\) oscillations in d). This correlation between \(P_{T_{-}}\) and the \(|S\rangle\)\(\leftrightarrow\)\(|T_{-}\rangle\) oscillation amplitude is expected since the \(|S\rangle\)\(\leftrightarrow\)\(|T_{-}\rangle\) oscillations are enhanced as the probability of loading the \(|T_{-}\rangle\) state increases.
avoided-crossing occurs. As a result, the magnetic field orientation impacts the likelihood of populating the \(|T_{-}\rangle\) state during the separation ramp and thus impacts the amplitude of the \(|S\rangle{\leftrightarrow}|T_{-}\rangle\) FFT peak.
We simulated the experimental pulse sequence using QuTiP [54] and calculated the \(|S_{2,8}\rangle{\rightarrow}|T_{-}\rangle\) transition probability (\(P_{T_{-}}\)), which yielded good agreement with the measured FFT amplitude. In Figure 2f), the solid line shows the calculated \(P_{T_{-}}\) using the optimal fit parameters for a range of in-plane magnetic field orientations (see Supplement section S3 for details). The circles in Figure 2f) show the observed amplitudes of the \(|S\rangle{\leftrightarrow}|T_{-}\rangle\) FFT peaks from Figure 2d) (blue). The trend in \(P_{T_{-}}\) matches the anisotropy in the measured amplitudes of the \(|S\rangle{\leftrightarrow}|T_{-}\rangle\) FFT peaks from Figure 2d), with both exhibiting a peak around \(\theta\) = 120\({}^{\circ}\), and an asymmetric reduction towards 0\({}^{\circ}\) and 180\({}^{\circ}\). The correlation between the FFT amplitude and the calculated \(P_{T_{-}}\) demonstrates that the model \(H_{ST}\) and optimal fit parameters capture the dynamics of the hole-spin qubit well (see Extended Data Figure E2a).
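The calculation of \(P_{T_{-}}\) above comes from a time-dependent simulation of the full pulse sequence; purely as a point of reference, the sketch below evaluates the textbook Landau-Zener estimate for a single linear ramp through the avoided crossing. The coupling strength and ramp rate used here are assumptions for illustration, not values extracted from the experiment.

```python
import numpy as np

HBAR = 6.582e-10   # hbar in ueV*s

def p_adiabatic(delta_ueV, ramp_rate_ueV_per_s):
    """Probability of following the adiabatic branch (ending in |T->) for a linear ramp.
    delta_ueV is the S-T- coupling (half the minimum gap); ramp_rate is d(epsilon)/dt."""
    p_diabatic = np.exp(-2 * np.pi * delta_ueV**2 / (HBAR * ramp_rate_ueV_per_s))
    return 1.0 - p_diabatic

# e.g. a 0.06 ueV coupling swept through 2 meV of detuning in 10 ns:
print(p_adiabatic(0.06, 2000 / 10e-9))
```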
We now consider how hole-spin singlet-triplet qubits can be further optimised by using the anisotropic response of the system to a magnetic field. For the singlet-triplet qubit studied here, an optimal initialisation protocol would suppress the likelihood of the \(|S_{2,8}\rangle\) state loading into the \(|T_{-}\rangle\) leakage state. In the presence of large spin-orbit coupling and/or large \(\Delta g\), even rapid separation pulses may be unable to satisfy the non-adiabatic Landau-Zener requirement imposed by the large \(\Delta_{ST_{-}}\) avoided crossing. However, with knowledge of the orientation of the spin-orbit vector (\(\vec{t}_{so}\)) and the g-tensors, we can identify optimum field orientations to minimise \(\Delta_{ST_{-}}\) and thus enhance initialisation fidelity. Indeed, in Figure 2f) we have shown that the \(B_{x}\) magnetic field orientation suppresses loading into the \(|T_{-}\rangle\) state, and therefore is an optimal field orientation for the singlet-triplet qubit in this system.
### Coherent \(\Delta\)g-driven oscillations
We now turn to experiments to characterise the hole-spin qubit. Here, the qubit is defined using the \(|S\rangle\) and \(|T_{0}\rangle\) states of the double quantum dot. The simplified Hamiltonian for this system can be written as
\[H_{ST}=\frac{J}{2}\sigma_{z}+\frac{\Delta E_{z}}{2}\sigma_{x} \tag{2}\]
where \(J\) defines the exchange energy, \(\Delta E_{z}=|\Delta g^{*}|\mu_{B}|B|\), \(|B|\) is the magnitude of the applied field, \(\sigma_{x,z}\) are the respective Pauli matrices and \(\mu_{B}\) is the Bohr magneton. The Bloch sphere for this qubit system is shown in Figure 3a). Rotations around the Bloch sphere can be driven by controlling \(J\) and \(\Delta E_{z}\) at the separation point [7, 55], and Figure 3b) shows a schematic of the pulse sequence used.
Figure 3c) plots the measured singlet probability \(P_{S}\) as a function of separation time \(\tau_{S}\) for three different \(|B_{x}|\), demonstrating oscillations in \(P_{S}\). Solid lines in Figure 3c) show the best fit of the data to the equation
\[P_{S}=A\cos(2\pi f_{R}\tau_{s}+\phi)\exp\!\left[-\left(\frac{\tau_{s}}{T_{2}^{*}}\right)^{\alpha}\right]+C \tag{3}\]
where \(f_{R}\) is the Rabi frequency, \(\tau_{s}\) is the separation time, \(\phi\) is a phase offset, \(T_{2}^{*}\) is the qubit dephasing time, A is the oscillation amplitude, C is an offset, and \(\alpha\) captures the noise colour (see Supplementary section S2.A).
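As a concrete illustration, the following scipy sketch fits Eqn. 3 to a \(P_{S}(\tau_{s})\) trace to extract \(f_{R}\) and \(T_{2}^{*}\). The synthetic data and initial guesses below are placeholders, not experimental values.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_oscillation(tau_s, A, f_R, phi, T2, alpha, C):
    # Eqn. 3: decaying cosine with a stretched-exponential envelope
    return A * np.cos(2 * np.pi * f_R * tau_s + phi) * np.exp(-(tau_s / T2) ** alpha) + C

tau_s = np.linspace(1e-9, 500e-9, 200)                    # separation times in seconds
p_s = damped_oscillation(tau_s, 0.4, 30e6, 0.0, 200e-9, 1.5, 0.5) \
      + 0.02 * np.random.randn(tau_s.size)                # stand-in for measured P_S data

p0 = [0.4, 30e6, 0.0, 200e-9, 1.5, 0.5]                   # rough initial guesses
popt, _ = curve_fit(damped_oscillation, tau_s, p_s, p0=p0)
print(f"f_R = {popt[1] / 1e6:.1f} MHz, T2* = {popt[3] * 1e9:.0f} ns")
```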
Analysis of the \(|S\rangle{\leftrightarrow}|T_{0}\rangle\) oscillations over a range of \(B_{x}\) was used to characterise the qubit control frequency (\(f_{R}\)) and the coherence time (\(T_{2}^{*}\)). Figures 3d) and e) show a colour map of the \(|S\rangle{\leftrightarrow}|T_{0}\rangle\) oscillations and a corresponding FFT at each \(B_{x}\). We resolve \(|S\rangle{\leftrightarrow}|T_{0}\rangle\) oscillations up to 150 MHz at 30 mT and have observed up to 400 MHz at 80 mT (Extended data Figure E3). To extract \(\Delta g\) and \(J\) we fit \(f_{R}\) for each \(B_{x}\) to Eqn. 1. The fit yields \(J\) = 6 MHz at the separation point (\(\epsilon\) = 1.9 meV), and \(|\Delta g|\) = 0.41 which is in agreement with the effective g-factors extracted from Figure 1f. In Extended data Figure E3d) we show electrical control over \(\Delta g^{*}\) using the \(J_{g}\) gate, with a trend of \(d\Delta g^{*}/dJ_{g}\ \approx\) 0.9 V\({}^{-1}\), demonstrating in-situ electrical control of the qubit control frequency.
We now show the decoherence in this qubit can be explained by fluctuations in both \(\epsilon\) and \(\Delta E_{Z}\) and we quantify their magnitudes. Figure 3f) shows \(T_{2}^{*}\) for a range of \(B_{x}\), where each \(T_{2}^{*}\) has been extracted using a fit to Eqn 3. Dephasing is caused by fluctuations in the energy splitting between the \(|S\rangle\) and \(|T_{0}\rangle\) states [56] (Eqn 1). Hence, the variation in \(T_{2}^{*}\) can be modelled using
\[\frac{1}{T_{2}^{*}}=\frac{\pi\sqrt{2}}{h}\sqrt{\left(\frac{J}{E_{ST_{0}}}\frac{dJ}{d\epsilon}\delta\epsilon\right)^{2}+\left(\frac{\Delta E_{Z}}{E_{ST_{0}}}\delta\Delta E_{Z}\right)^{2}} \tag{4}\]
where \(\delta\epsilon\) is the noise in detuning and \(\delta\Delta E_{Z}\) is the effective magnetic noise. A common fit of the data in Figures 3f) and 4f) (discussed later) was used to extract \(\delta\epsilon=30\ \mu\)eV and \(\delta\Delta E_{Z}\) = 4 neV. These values are comparable to those previously reported for electrons in silicon micro-magnet devices [57] and for holes in planar Ge devices [35]. The similarity in \(\delta\epsilon\) between holes in planar Si and planar Ge suggests that the highly disordered SiO\({}_{2}\) oxide does not significantly enhance the effect of charge noise compared to planar Ge heterostructures, where the quantum dot is buried tens of nanometers below the surface. Further, the similarity in \(\delta\epsilon\) between
this work in \(p\)MOS silicon and studies of electrons in \(n\)MOS silicon [57] suggests that the level of charge noise is not impacted by the polarity of the gate bias.
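For reference, the short numpy sketch below evaluates Eqn. 4 as a function of magnetic field, in the spirit of the fit in Figure 3f). The exchange energy, its detuning derivative and the noise amplitudes are treated as fixed inputs; the numbers are assumptions loosely based on the values quoted in the text.

```python
import numpy as np

H_PLANCK = 4.1357e-9   # Planck constant in ueV*s
MU_B = 57.88           # Bohr magneton in ueV/T

def t2_star(B_T, dg=0.41, J=0.025, dJ_deps=1e-4, d_eps=30.0, d_dEz=4e-3):
    """Eqn. 4 with all energies in ueV and B in tesla; returns T2* in seconds."""
    dEz = dg * MU_B * B_T                      # Zeeman energy difference
    E_st0 = np.sqrt(J**2 + dEz**2)             # S-T0 splitting
    rate = (np.pi * np.sqrt(2) / H_PLANCK) * np.sqrt(
        (J / E_st0 * dJ_deps * d_eps) ** 2 + (dEz / E_st0 * d_dEz) ** 2)
    return 1.0 / rate

for B in (0.005, 0.015, 0.030):
    print(f"B = {B*1e3:.0f} mT: T2* = {t2_star(B)*1e9:.0f} ns")
```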
In Figure 3g) we plot \(T_{2}^{*}\) of the \(\Delta\)g-driven oscillations as a function of the fridge mixing chamber temperature (\(T_{MC}\)). \(T_{2}^{*}\) is approximately independent of \(T_{MC}\) up to 400 mK (\(k_{B}T\) = 34 \(\mu\)eV ), where \(T_{2}^{*}\) begins to drop. The \(T_{2}^{*}\) behaviour shows the same trend at \(B_{x}\) = 15 mT (\(\Delta E_{ST_{0}}\) = 0.35 \(\mu\)eV) and \(B_{x}\) = 10 mT (\(\Delta E_{ST_{0}}\) = 0.23 \(\mu\)eV). Interestingly, we find that the noise colour (\(\alpha\)) appears to whiten as the temperature of the fridge increases, consistent with recent experiments in other hole spin qubits [20] (see Supplement section S2.A).
Finally, in Figure 3h) we show that the quality factor (\(Q=f_{R}\times T_{2}^{*}\)) of the \(\Delta\)g-driven singlet-triplet oscillations increases as the control speed increases. The quality factor quantifies the number of coherent oscillations that can be completed within the coherence time. A promising feature of this qubit is that Q increases with increasing \(f_{R}\), indicating that the qubit control speed can be increased without degrading the quality.
### Coherent exchange-driven oscillations
In order to achieve full control of the singlet-triplet qubit it is necessary to produce control pulses around two orthogonal axes of the Bloch sphere. We use exchange-driven oscillations to rotate the qubit around the z-axis of the Bloch sphere. By combining \(\Delta g\)-driven (x-axis) and exchange-driven (z-axis) rotations, it is possible to achieve full control
Figure 3: \(\Delta g\)**-driven coherent oscillations**. a) Shows a schematic of the Bloch sphere for the singlet-triplet qubit. The Zeeman energy difference \(\Delta E_{z}\) produces oscillations about the x-axis (\(|S\rangle\)\(\leftrightarrow\)\(|T_{0}\rangle\) oscillations), while exchange coupling \(J\) produces oscillations about the z-axis (i.e. \(|\uparrow\downarrow\rangle\)\(\leftrightarrow\)\(|\downarrow\uparrow\rangle\) oscillations). When both \(J\) and \(\Delta E_{z}\) are non-zero, the angle of rotation with respect to the Bloch sphere z-axis is given by \(\theta^{\prime}\) = arctan(\(\frac{\Delta E_{z}}{J}\)) as indicated by the red trajectory. b) Shows the pulse sequence used to drive the \(\Delta g\)-driven oscillations. We initialised in \(|S_{2,8}\rangle\), then applied a rapid separation pulse along the detuning axis. The system was then held at a fixed positive detuning (\(\epsilon\)) for a separation time (\(\tau_{S}\)), before performing readout. c) Shows the observed \(|S\rangle\)\(\leftrightarrow\)\(|T_{0}\rangle\) oscillations as a function of separation time, for \(B_{x}\) = 20 mT, 5 mT and 2 mT. The solid red line is the best fit of Eqn. 3 to the data. The singlet probability, \(P_{S}\), at each \(\tau_{S}\) is extracted from the change in the normalised sensor signal \(\Delta I\) (shown on right axis, see methods for full details). d) Shows the \(|S\rangle\)\(\leftrightarrow\)\(|T_{0}\rangle\) oscillations over a field range of \(B_{x}\) = \(\pm\)30 mT. e) Shows the FFT of the oscillations observed in d). A linear increase in the oscillation frequency is expected since \(hf\) = \(\Delta E_{ST_{0}}\) (Eqn. 1). The \(B_{x}\) magnetic field orientation was used for these experiments since it results in the least leakage into \(|T_{-}\rangle\) (Figure 2). However, residual loading of the \(|T_{-}\rangle\) leakage state results in weak \(|S\rangle\)\(\leftrightarrow\)\(|T_{-}\rangle\) (blue) and \(|T_{0}\rangle\)\(\leftrightarrow\)\(|T_{-}\rangle\) (green) oscillations for \(B_{x}\). f) Shows the \(T_{2}^{*}\) for different magnetic fields, where the solid line is the best fit of Eqn. 4 to both this data and the data in Figure 4f). g) Shows the effect of mixing chamber temperature on \(T_{2}^{*}\) for two different magnetic fields. h) The qubit quality factor (\(Q\)) as a function of the \(\Delta\)g-driven oscillation frequency (\(f_{R}\)), measured at \(T_{MC}\) = 30 mK. All data in Figure 3 was collected with \(J_{g}\) = 1.2 V.
over the qubit Bloch sphere.
Figures 4a) and 4b) show the experimental procedure for exchange-driven oscillations. A separation pulse with calibrated \(\tau_{S}\) is first used to perform a \(\Delta g\)-driven \(\frac{\pi}{2}\) rotation to bring the state to the equator of the Bloch sphere. A rapid pulse to low detuning is then applied, which suddenly increases J, producing a change in qubit rotation axis. The system is held at the exchange-point for \(\tau_{E}\) to drive oscillations around the z-axis, then a second \(\Delta g\)-driven \(\frac{\pi}{2}\) rotation is applied, followed by a pulse to the readout position.
Figure 4c) demonstrates exchange-driven oscillations at three different detuning positions (\(\epsilon\)). Reducing \(\epsilon\) at the exchange position increases the exchange energy J, so that the angle of rotation tends towards \(0^{\circ}\) with respect to the Bloch sphere z-axis (since J \(\gg\Delta g\mu_{B}B\)). In Figure 4d) we plot \(J\) as a function of \(\epsilon\) at the exchange point. The solid line shows the best fit of \(J(\epsilon,t_{c})\), allowing the extraction of the tunnel coupling (see Figure caption). This experiment was repeated for a range of \(J_{g}\) gate voltages, and the resulting dependence of the tunnel coupling (\(t_{c}\)) on the \(J_{g}\) gate voltage is shown in Figure 4e), demonstrating smooth control of \(t_{c}\). Therefore, the exchange-driven oscillations are highly tunable, since \(J\) can be electrically tuned either by varying \(\epsilon\) with the plunger gates, or by tuning \(t_{c}\) using the \(J_{g}\)-gate. These results demonstrate coherent exchange-driven z-axis control of the singlet-triplet qubit.
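A minimal sketch of this extraction, assuming the exchange-vs-detuning form \(J(\epsilon,t_{c})=\sqrt{\epsilon^{2}/4+2t_{c}^{2}}-\epsilon/2\) given in the Figure 4 caption; the measured \((\epsilon,J)\) points are replaced here by synthetic placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def J_model(eps_ueV, t_c_ueV):
    # Exchange energy vs detuning for tunnel coupling t_c (all energies in ueV)
    return np.sqrt(eps_ueV**2 / 4 + 2 * t_c_ueV**2) - eps_ueV / 2

eps_meas = np.array([500.0, 800.0, 1200.0, 1600.0])        # detuning values (ueV), illustrative
J_meas = J_model(eps_meas, 9.0) * (1 + 0.05 * np.random.randn(eps_meas.size))

(t_c_fit,), _ = curve_fit(J_model, eps_meas, J_meas, p0=[10.0])
print(f"t_c = {t_c_fit:.1f} ueV")
```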
Figure 4f) presents \(T_{2}^{*}\) as a function of \(\epsilon\), which
Figure 4: **Exchange-driven coherent oscillations**. a) Shows a schematic of the state evolution and b) shows the pulse sequence used to achieve exchange-driven oscillations. A \(\Delta g\)-driven \(\pi/2\) pulse brings the qubit to the equator (red dashed trajectory), then a rapid pulse to low detuning is used to suddenly increase J, changing the angle of rotation, resulting in exchange-driven oscillations around the equator (red trajectory). The orientation of the oscillation is tilted by \(\theta^{\prime}\) from the z-axis. c) Exchange-driven oscillations for three different detunings (\(\epsilon\) = 0.5 meV, 0.8 meV, and 1.6 meV). Experiments were performed with a fixed magnetic field of \(B_{x}\) = 2 mT, such that a \(\Delta g\)-driven \(\pi/2\) pulse takes 65 ns. The solid line shows a best fit to Eqn. 3, allowing extraction of J and \(T_{2}^{*}\) at each \(\epsilon\). d) Exchange energy as a function of detuning, measured using the exchange pulse. The solid red line shows the fit to \(J(\epsilon,t_{c})=\sqrt{\frac{\epsilon^{2}}{4}+2t_{c}^{2}}-\frac{\epsilon}{2}\), allowing the extraction of the tunnel coupling \(t_{c}\). e) Tunnel coupling extracted for a range of different \(J_{g}\) gate voltages, where each \(t_{c}\) is extracted from the \(\epsilon\) dependence of the exchange-oscillation frequency. f) Dephasing time \(T_{2}^{*}\) as a function of the detuning. The solid red line is a joint fit of Eqn. 4 to this data and the data in Figure 3f). All data in c), d) and f) was collected for \(J_{g}\) = 1.2 V, and the inset in f) shows the dephasing time as a function of the detuning for \(J_{g}\) = -1.4 V.
is used to characterise the coherence time of the exchange oscillations. The solid line shows the best fit to Eqn. 4, obtained by jointly fitting Figure 4f) and Figure 3f). The trend in \(T_{2}^{*}\) is well explained by Eqn. 4, where charge noise (\(\delta\epsilon\)) dominates at low detuning due to enhanced \(dJ/d\epsilon\), while Zeeman noise (\(\delta\Delta E_{Z}\)) dominates at large detuning where \(dJ/d\epsilon\to 0\).
### Spin echo measurement
Finally, we investigate the use of spin-refocusing to enhance the qubit coherence time. Given that J and \(\Delta E_{Z}\) are non-zero, rotations around the Bloch sphere occur at some angle offset from the pure x-axis or z-axis. Since the qubit trajectory is not solely around the x-axis or z-axis, the precise form of the refocusing pulse will vary as the qubit evolves. Therefore, complicated pulse engineering is required for perfect refocusing pulses [58]. Here, we implement a simplified procedure [35], which employs a \(\pi\) rotation using an exchange pulse to enhance the observed coherence analogous to a Hahn echo. Figures 5a) and b) show the qubit evolution and pulse sequence for the refocusing procedure respectively. The spin echo experiment allowed the qubit to freely evolve for a time \(t_{f}\), during which \(N\) exchange-driven \(\pi\) rotations are carefully interlaced to provide the refocusing echo. Full details of the refocusing procedure are provided in the methods.
We demonstrate the enhancement of the qubit coherence by observing the residual \(\Delta g\)-driven singlet-triplet oscillations after the free evolution time, \(t_{f}\). Figure 5c) shows the singlet-triplet oscillations after \(t_{f}\) = 1000 ns for zero (red), one (blue) and two (black) refocusing pulses. When zero refocusing pulses are applied the singlet-triplet
Figure 5: **Spin echo measurement at \(B_{x}=\) 1.6 mT**. a) Shows an example trajectory of the qubit state for the refocusing pulse schematic of b). Free evolution driven by \(\Delta g\) is allowed for a period of time \(\tau_{S}(n)=(2n+1/2)t_{\pi}\) (green trajectory), such that after any \(\tau_{S}(n)\) the qubit will be at the equator. A refocusing pulse is incorporated as a \(\pi\) exchange pulse, followed by a second period of free evolution for \(\tau_{S}(n)\). The total time to perform this sequence is the free evolution time, \(\tau_{f}\), which can be varied by increasing \(n\), or repeating the cycle to include multiple refocusing pulses. The full pulse sequence results in the qubit refocusing to \(|S\rangle\) after \(\tau_{f}\). c) Residual \(|S\rangle-|T_{0}\rangle\) oscillations after a free evolution time (\(\tau_{f}\)) of 1000 ns for no refocusing pulses (red), one refocusing pulse (blue) and two refocusing pulses (black). d) Normalised peak-to-peak amplitude of the residual \(|S\rangle-|T_{0}\rangle\) oscillations as a function of free evolution time. For one (blue) and two (black) refocusing pulses the oscillations are clearly extended compared with the data for no refocusing pulse. The amplitude is normalised against the shortest free evolution for each data set in order to account for the fidelity of the \(\pi\) exchange pulse. We extract \(T_{2}^{Echo}\) based on a fit to the decay envelope of Eqn. 3, using \(\alpha=2\) (see Supplement section S2.A). The experiments were performed for \(B_{x}\) = 1.6 mT.
oscillations are completely lost after 1000 ns of free evolution. However, with refocusing pulses the singlet-triplet oscillations are visible even after \(t_{f}>\) 1000 ns. Figure 5d) shows the normalised amplitude of the residual \(\Delta g\)-driven singlet-triplet oscillations observed for a range of free evolution times. The application of one and two refocusing pulses clearly enhances the coherence of the qubit. We fit the decay of the peak amplitude to extract \(T_{2}^{Echo}\) = 1220\(\pm\)150 ns for one refocusing pulse, and \(T_{2}^{Echo}\) = 1300\(\pm\)200 ns for two refocusing pulses. When no refocusing pulses are applied we find \(T_{2}^{Echo}\) = 550\(\pm\)50 ns, consistent with the measurements in Figure 3. Although we see an improvement of 120% by applying one pulse, we see no significant improvement when using two refocusing pulses. This suggests the maximum \(T_{2}^{Echo}\) may have been reached for this simplified refocusing procedure.
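A minimal sketch of the timing of this simplified refocusing sequence, following the \(\tau_{S}(n)=(2n+1/2)t_{\pi}\) rule from the Figure 5 caption; the values of \(n\), the number of refocusing pulses and \(t_{\pi}\) here are placeholders.

```python
def echo_schedule(n, n_pulses, t_pi):
    """Free evolution for tau_S(n), a pi exchange pulse, then free evolution again,
    repeated n_pulses times; returns the segment list and the total free-evolution time."""
    tau_s = (2 * n + 0.5) * t_pi
    segments = []
    for _ in range(n_pulses):
        segments += [("free", tau_s), ("pi_exchange", t_pi), ("free", tau_s)]
    total_free = sum(t for kind, t in segments if kind == "free")
    return segments, total_free

segments, t_f = echo_schedule(n=3, n_pulses=1, t_pi=20e-9)
print(t_f, segments)
```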
## Conclusion
In this work we have demonstrated a hole-spin singlet-triplet qubit in planar silicon. We demonstrate rapid \(|S\rangle{\leftrightarrow}|T_{0}\rangle\) oscillations exceeding 400 MHz, two-axis control via \(\Delta g\)-driven and exchange-driven oscillations, and enhancement of the qubit coherence time to >1 \(\mu\)s using spin-echo procedures. Developing a complete model of the energy spectrum provided insight into spin-qubit dynamics under rapid pulses across the \(|S\rangle{\leftrightarrow}|T_{-}\rangle\) avoided crossing. The experimentally observed effects were well described by the model Hamiltonian, pointing to methods to further optimise initialisation protocols in hole-spin qubits.
|
2302.06600 | Task-Specific Skill Localization in Fine-tuned Language Models | Pre-trained language models can be fine-tuned to solve diverse NLP tasks,
including in few-shot settings. Thus fine-tuning allows the model to quickly
pick up task-specific ``skills,'' but there has been limited study of where
these newly-learnt skills reside inside the massive model. This paper
introduces the term skill localization for this problem and proposes a
solution. Given the downstream task and a model fine-tuned on that task, a
simple optimization is used to identify a very small subset of parameters
($\sim0.01$% of model parameters) responsible for ($>95$%) of the model's
performance, in the sense that grafting the fine-tuned values for just this
tiny subset onto the pre-trained model gives performance almost as well as the
fine-tuned model. While reminiscent of recent works on parameter-efficient
fine-tuning, the novel aspects here are that: (i) No further re-training is
needed on the subset (unlike, say, with lottery tickets). (ii) Notable
improvements are seen over vanilla fine-tuning with respect to calibration of
predictions in-distribution ($40$-$90$% error reduction) as well as the quality
of predictions out-of-distribution (OOD). In models trained on multiple tasks,
a stronger notion of skill localization is observed, where the sparse regions
corresponding to different tasks are almost disjoint, and their overlap (when
it happens) is a proxy for task similarity. Experiments suggest that
localization via grafting can assist certain forms of continual learning. | Abhishek Panigrahi, Nikunj Saunshi, Haoyu Zhao, Sanjeev Arora | 2023-02-13T18:55:52Z | http://arxiv.org/abs/2302.06600v2 | # Task-Specific Skill Localization in Fine-tuned Language Models
###### Abstract
Pre-trained language models can be fine-tuned to solve diverse NLP tasks, including in few-shot settings. Thus fine-tuning allows the model to quickly pick up task-specific "skills," but there has been limited study of _where_ these newly-learnt skills reside inside the massive model. This paper introduces the term _skill localization_ for this problem and proposes a solution. Given the downstream task and a model fine-tuned on that task, a simple optimization is used to identify a very small subset of parameters (\(\sim 0.01\)% of model parameters) responsible for (\(>95\)%) of the model's performance, in the sense that _grafing_ the fine-tuned values for just this tiny subset onto the pre-trained model gives performance almost as well as the fine-tuned model. While reminiscent of recent works on parameter-efficient fine-tuning, the novel aspects here are that: (i) No further re-training is needed on the subset (unlike, say, with lottery tickets). (ii) Notable improvements are seen over vanilla fine-tuning with respect to calibration of predictions in-distribution (\(40\)-\(90\)% error reduction) as well as the quality of predictions out-of-distribution (OOD). In models trained on multiple tasks, a stronger notion of skill localization is observed, where the sparse regions corresponding to different tasks are almost disjoint, and their overlap (when it happens) is a proxy for task similarity. Experiments suggest that localization via grafting can assist certain forms of continual learning.
## 1 Introduction
Pre-trained language models (Liu et al., 2019; Devlin et al., 2019) have shown huge success after fine-tuning (FT) on many downstream tasks. With as few as 32 training examples, fine-tuning these giant models beats head tuning (Gao et al., 2021). Thus fine-tuning is quick to acquire new "skills" like sentiment identification, but locating where these skills reside in the net is an open question. _A priori_, one expects them to be diffused and not be "localized" in a meaningful way. A better understanding of the skill's location could improve fine-tuned models, say, with respect to accuracy, calibration, or out-of-domain generalization, or help mitigate catastrophic forgetting in multi-task settings. Existing methods for parameter-efficient fine-tuning (e.g., using _lottery tickets_ (Frankle and Carbin, 2018; Chen et al., 2020)) suggest the existence of more compact descriptions of the fine-tuned model, but they involve re-training the net and do not give insight into where the skills existed after vanilla fine-tuning. (Section 6 discusses past works.)
This paper gives a simple way to pinpoint tiny regions in the pre-trained model where the skills acquired via fine-tuning can be localized. In particular, although fine-tuning could have updated hundreds of millions of parameters, we can identify (Section 2) a tiny subset of (a few thousand) parameters whose values are sufficient to solve the task, in the following sense: "grafting" the values of this tiny subset of parameters onto the pre-trained model, without changing any of the other parameters, almost recovers the performance of the original fine-tuned model. We call this tiny subset of parameters a "grafting region." Note that finding sparse grafting regions allows for compact storage of the fine-tuned model, which could be important in settings where the model is being fine-tuned for a large number of tasks or users. Crucially, we find that grafted models have other desirable properties like calibration and OOD generalization. Our main contributions are as follows.
* Section 2 formalizes skill localization via _grafting_, and gives a simple optimization procedure to find such regions. Section 3 shows that in a RoBERTa (GPT-2 resp.) model fine-tuned on GLUE tasks, regions with just \(0.01\)% (\(0.05\)% resp.) of parameters can recover \(95\)% of the accuracy of the fine-tuned models, without further re-training.
* Section 4 shows that our grafted models have much better calibrated outputs than vanilla fine-tuning, a significant result because calibrating models can be difficult on small datasets. Grafted models, without re-training, also often have much better out-of-distribution (OOD) performance. However, re-training a sparse region (including prior parameter-efficient fine-tuning methods like BitFit) does
not afford the same OOD benefits. These findings suggest that retaining sparse grafting regions provides a purer, more transferable way to capture the skill, while avoiding over-fitting to idiosyncrasies of the specific dataset. The section also discusses the generalization mystery of fine-tuning and how the graft regions begin to explain it.
* Section 5 explores consequences of our skill localization in multi-task and continual learning settings. We show that when FT learns multiple tasks together, skills from different tasks localize in somewhat disjoint regions, where the degree of overlap between regions for two tasks seems to correlate with their intuitive similarity. We also observed some degree of compositionality: grafting the pre-trained net using regions of a subset of tasks works well for only those (and related) tasks but not others.
## 2 Skill Localization through Model Grafting
Humans have concise ways of describing their skills in a modular fashion using natural language, typically as a combination of more basic skills. Such descriptions are challenging for skills in deep nets. In the context of fine-tuning, recent papers have approached this by equating "skill" with the ability to perform a specific fine-tuning task. They correlate this ability with activations of certain subsets of neurons; for instance, Wang et al. (2022) find "skill neurons" in prompt-tuned nets (not FT nets). While interesting, such notions suffer from the limitation that the activations depend both on the input to the net and on a large set of parameters. Ideally, we would pinpoint the skill for the entire task in terms of specific net parameters, and in a compact way.
A naive attempt at a parameter-centered formalization would be to identify parameters that change a lot during fine-tuning. However, this turns out to be neither concise nor closely connected to the task in question; see Figure 1.
### Model Grafting
To formalize skill localization, we introduce _model grafting_. Given a pre-trained model with parameters \(\mathbf{\theta}_{\text{pre}}\) and a fine-tuned model \(\mathbf{\theta}_{\text{ft}}\), we can think of a binary mask \(\mathbf{\gamma}\in\{0,1\}^{|\mathbf{\theta}_{\text{ft}}|}\) as identifying a subset of parameters, also called a "region." A _grafting_ of \(\mathbf{\theta}_{\text{ft}}\) in the region \(\mathbf{\gamma}\) onto \(\mathbf{\theta}_{\text{pre}}\) is defined as
\[\overline{\mathbf{\theta}_{\text{ft}}}(\mathbf{\gamma})=\mathbf{\gamma}\odot\mathbf{\theta}_{\text{ft}}+(1-\mathbf{\gamma})\odot\mathbf{\theta}_{\text{pre}}. \tag{1}\]
In other words, for parameters in the region corresponding to \(\mathbf{\gamma}\), \(\overline{\mathbf{\theta}_{\text{ft}}}(\mathbf{\gamma})\) gets its values from \(\mathbf{\theta}_{\text{ft}}\), while all other parameters default to \(\mathbf{\theta}_{\text{pre}}\), yielding a "grafted model." This is reminiscent of model stitching (Bansal et al., 2021), where layers of one model are grafted onto the remaining layers of another model. But we allow any subset of parameters to be grafted, thus potentially affecting only a very tiny fraction of parameters. We desire two competing properties:
_Good Localization:_ the region \(\mathbf{\gamma}\) is sparse, i.e. \(\|\mathbf{\gamma}\|_{0}\) is tiny
_Skill retention_: \(\mathcal{L}_{\mathcal{T}}(\overline{\mathbf{\theta}_{\text{ft}}}(\mathbf{\gamma}))\approx\mathcal{L}_{\mathcal{T}}(\mathbf{\theta}_{\text{ft}})\), and both are small, i.e., the grafted model recovers the fine-tuning performance
where \(\mathcal{L}_{\mathcal{T}}\) is some metric for performance on \(\mathcal{T}\) (e.g. classification error or logistic loss). We note that we can rewrite Equation (1) as \(\overline{\mathbf{\theta}_{\text{ft}}}(\mathbf{\gamma})=\mathbf{\theta}_{\text{pre}}+\mathbf{\gamma}\odot(\mathbf{\theta}_{\text{ft}}-\mathbf{\theta}_{\text{pre}})\). Thus while \(\mathbf{\gamma}\) denotes the location of the skill, \(\mathbf{\gamma}\odot(\mathbf{\theta}_{\text{ft}}-\mathbf{\theta}_{\text{pre}})\) gives a succinct representation of the core skills acquired. This characterization suggests a natural way to learn a grafting region \(\mathbf{\gamma}\) as well; see Section 2.3.
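A minimal PyTorch sketch of Equation (1); the state-dict interface below is our own assumption about how the checkpoints are stored and is not the authors' code.

```python
import torch

def graft(theta_pre: dict, theta_ft: dict, gamma: dict) -> dict:
    """theta_graft = gamma * theta_ft + (1 - gamma) * theta_pre, applied parameter-wise."""
    return {name: gamma[name] * theta_ft[name] + (1 - gamma[name]) * theta_pre[name]
            for name in theta_pre}

# Usage sketch: gamma has the same keys/shapes as the state dicts (1 = take the fine-tuned value).
# grafted_state = graft(pretrained.state_dict(), finetuned.state_dict(), gamma)
# model.load_state_dict(grafted_state)
```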
### Differences from Sparsity-based Fine-Tuning
Grafting with sparse regions (aka _sparse grafting_), while reminiscent of efficient FT methods, has key differences.
**Lottery tickets:** The lottery ticket hypothesis (LTH) (Frankle and Carbin, 2018) aims to prune the model by finding a sparse sub-network that, when re-trained from scratch, can recover the performance of vanilla training. Grafting is fundamentally different in two ways: (a) parameters outside the grafting region are set to pre-trained values rather than 0, and (b) no re-training is needed. Our underlying motivation is to find skills in the standard fine-tuned model, whereas re-training can change the training mechanism itself.
**Parameter-efficient fine-tuning:** Methods like BitFit (Ben Zaken et al., 2022) find that updating only a small subset of parameters during fine-tuning (e.g. just biases) is very effective. However, the mechanism of parameter-efficient FT is very different from vanilla FT (e.g. biases are not important for vanilla FT). Furthermore, the sparsity of biases (\(0.05\%\)) is 5x what task-dependent grafting can achieve. Besides, BitFit fails to provide the calibration benefits of grafting; see Table 1.
### Optimization Procedure to Learn Grafted Models
We now describe a simple optimization procedure to learn a mask \(\mathbf{\gamma}\) such that the grafted model \(\overline{\mathbf{\theta}_{\text{ft}}}(\mathbf{\gamma})\) from Equation (1)
Figure 1: Accuracies of the grafting regions learned using our procedure in Section 2.3 and regions corresponding to the top-\(s\) parameters based on magnitude of movement during FT. The learned region performs much better at low sparsity levels.
retains the skills to do well on the task \(\mathcal{T}\):
\[\operatorname*{argmin}_{\boldsymbol{\gamma}\in\{0,1\}^{|\boldsymbol{\theta}_{\mathrm{ft}}|},\ \|\boldsymbol{\gamma}\|_{0}\leq s}\mathcal{L}_{\mathcal{T}}(\boldsymbol{\gamma}\odot\boldsymbol{\theta}_{\mathrm{ft}}+(1-\boldsymbol{\gamma})\odot\boldsymbol{\theta}_{\mathrm{pre}}) \tag{2}\]
Due to optimization considerations, we re-parametrize \(\boldsymbol{\gamma}\) as a sigmoid of a real-valued vector \(\boldsymbol{S}\), i.e. \(\boldsymbol{\gamma}=\sigma(\boldsymbol{S})\). Furthermore, to control the sparsity level \(s\), we build the mask \(\boldsymbol{\gamma}\) on top of an initial candidate mask \(\boldsymbol{\gamma}_{\mathrm{base}}\in\{0,1\}^{|\boldsymbol{\theta}_{\mathrm{ft}}|}\). So the general optimization problem reduces to solving
\[\operatorname*{argmin}_{\boldsymbol{S}\in\mathbb{R}^{|\boldsymbol{\theta}_{\mathrm{ft}}|}} \mathcal{L}_{\mathcal{T}}(\boldsymbol{\gamma}\odot\boldsymbol{\theta}_{\mathrm{ft}}+(1-\boldsymbol{\gamma})\odot\boldsymbol{\theta}_{\mathrm{pre}}) \tag{3}\] \[\boldsymbol{\gamma}\coloneqq\boldsymbol{\gamma}_{\mathrm{base}}\odot(1-\sigma(\boldsymbol{S}))+(1-\boldsymbol{\gamma}_{\mathrm{base}})\odot\sigma(\boldsymbol{S}) \tag{4}\]
Our optimization procedure aims to make minimal changes (addition or deletion) to \(\boldsymbol{\gamma}_{\mathrm{base}}\) while getting low task loss. We achieve minimal changes by initializing \(\boldsymbol{S}\) such that \(\sigma(\boldsymbol{S})\approx\boldsymbol{0}\) and by taking only a few gradient steps to train \(\boldsymbol{S}\). (One could also use \(\ell_{1}\) regularization on \(\sigma(\boldsymbol{S})\), but using a few gradient steps seems to suffice.) A natural and effective choice for \(\boldsymbol{\gamma}_{\mathrm{base}}\) turns out to be the top few parameters based on their movement \(|\boldsymbol{\theta}_{\mathrm{ft}}-\boldsymbol{\theta}_{\mathrm{pre}}|\). While \(\boldsymbol{\gamma}_{\mathrm{base}}\) by itself is not great (see Figure 1), it tends to agree with the final localization in many coordinates.
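A hedged PyTorch sketch of this procedure (Equations (3)-(4)). It assumes the model parameters are flattened into single tensors and that a `task_loss` callable evaluates \(\mathcal{L}_{\mathcal{T}}\) for a given parameter vector; the step count and learning rate mirror the setup described in Section 3, but this is an illustration, not the authors' implementation.

```python
import torch

def learn_graft_region(theta_pre, theta_ft, gamma_base, task_loss, steps=100, lr=1e7):
    delta = theta_ft - theta_pre                                   # movement during FT
    S = torch.full_like(theta_pre, -10.0, requires_grad=True)      # sigma(S) ~ 0 at init
    opt = torch.optim.SGD([S], lr=lr)
    for _ in range(steps):
        sig = torch.sigmoid(S)
        gamma = gamma_base * (1 - sig) + (1 - gamma_base) * sig    # Eqn. (4)
        loss = task_loss(theta_pre + gamma * delta)                # Eqn. (3)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        sig = torch.sigmoid(S)
        gamma = gamma_base * (1 - sig) + (1 - gamma_base) * sig
    return (gamma > 0.5).float()                                   # final binary grafting region

# gamma_base: top-s parameters by movement |theta_ft - theta_pre|, e.g. for s = 1e-5:
# k = int(1e-5 * theta_pre.numel())
# idx = (theta_ft - theta_pre).abs().topk(k).indices
# gamma_base = torch.zeros_like(theta_pre); gamma_base[idx] = 1.0
```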
## 3 Evaluating Grafting for Skill Localization
**Experimental Setup.** We fine-tuned the pre-trained RoBERTa-base (Liu et al., 2019) model on 13 different tasks from GLUE (Wang et al., 2018), including sentiment analysis, topic classification, natural language inference and paraphrase detection datasets. All experiments, unless specified otherwise, use prompt-based fine-tuning with the human-generated prompts from Gao et al. (2021), and the SGD optimizer, which achieves similar performance to AdamW (Loshchilov and Hutter, 2017) after fixing the embedding layer (which was also observed in (Kumar et al., 2022), but in the vision setting). For \(64\)-shot experiments, we use 5 few-shot datasets to get one fine-tuned model for each, whereas for \(4096\)-shot we fine-tuned 3 models per task. Unless mentioned otherwise, we always report our performance on the 4096-shot setting. Other training and hyperparameter details can be found in Appendix A.2.
Model grafting experiments optimize Equation (3) using gradient descent for \(100\) steps with learning rate \(10^{7}\). For patches of varying sizes, we use \(\boldsymbol{\gamma}_{\mathrm{base}}\) as the top-\(s\) fraction of parameters based on their movement \(|\boldsymbol{\theta}_{\mathrm{ft}}-\boldsymbol{\theta}_{\mathrm{pre}}|\), for \(s\) in \([0,1]\).
### Sparse Grafting Retains Skills
The first experiment compares the performance of model grafting with sparse regions versus vanilla fine-tuning. For each downstream task and prompt fine-tuned model, we learn a grafting region \(\boldsymbol{\gamma}\) of sparsity at most \(0.01\)% by building on top of \(\boldsymbol{\gamma}_{\mathrm{base}}:=\text{top-}(10^{-5})\), where top-\(s\) selects the top \(s\) fraction of parameters based on parameter movement. We report the accuracies of these grafted models in Table 2. The main observation is that sparse grafting can recover at least \(95\)% of FT performance for all datasets. Additionally, the grafted models have a high agreement (on test set labels) with the original FT models: \(93\%\) (resp. \(86\%\)) for single-sentence (resp. two-sentence) experiments, suggesting good skill "localization."
Performance of \(\boldsymbol{\gamma}_{\mathrm{base}}\): We compare the performance of the learned regions \(\gamma\) using optimization versus that of \(\boldsymbol{\gamma}_{\mathrm{base}}\) using top-\(s\) most-changed parameters, at different levels of sparsity in Figure 1. We find that the optimization method is much more effective, especially at lower sparsity levels. Exploring other ways to learn the regions, perhaps directly from the pre-trained model, is an interesting open question.
**Gains from re-training the grafting regions.** To check whether the learned region forms a meaningful sub-network within the model, we re-train starting from pre-trained initialization, but only make updates to parameters in the learned region \(\boldsymbol{\gamma}\), akin to parameter-efficient FT. Table 1 shows that re-training the sparse regions from scratch performs well, almost always better than the grafted models. This suggests that the sparse sub-network represented by \(\boldsymbol{\gamma}\) is also trainable while being significantly sparser (\(0.01\)%) than the set of biases used in BitFit (\(0.05\)%).
Figure 3: (a) Grafting regions that only contain biases of the model don’t give good skill localization. (b) Localizing with lottery ticket pruning (setting remaining parameters to 0) does not perform well at any sparsity level without re-training.
Figure 2: Testing the existence of sparse grafting regions for prompt-based FT and standard FT (which uses a linear head on top of the [CLS] token). Skill localization is equally good for both FT approaches.
**Differences with BitFit and lottery tickets:** Since BitFit succeeds by only training biases, we check whether biases can form good grafting regions. Figure 3(a) answers this in the negative, which implies a stark difference in mechanism between standard fine-tuning and BitFit. Furthermore, we check whether lottery-ticket style pruning can work without re-training, i.e., whether we can learn a sparse region such that setting the other parameters to 0 yields a good model. We find in Figure 3(b) that even regions as dense as \(90\)% of parameters fail to capture any skill, without re-training. The high density is consistent with prior works on lottery tickets for language model fine-tuning (Chen et al., 2020), where the sparsity is usually higher than \(10\)%, much denser compared to our grafting regions of \(0.01\)%.
**Distribution of graft parameters:** We perform a closer analysis of the distribution of graft parameters for different downstream tasks in Figures 4 and 5. Firstly we observe, in Figure 4, that most of the graft region is concentrated in the middle layers for most tasks; AG News is an exception. Furthermore, a closer look into the distribution of the graft parameters in Figure 5 reveals an interesting pattern. Most of the graft parameters are concentrated in three parameter types: (1) Value parameters of the attention module, (2) the first layer of the feed-forward module, and (3) LayerNorm parameters. This observation could potentially provide a deeper understanding of the mechanism of fine-tuning and the role of pre-training. The LayerNorm parameters in the grafting region are uniformly distributed across layers for all tasks, as evident in the bottom right of Figure 4. The layer-wise distribution of the value parameters and the first
\begin{table}
\begin{tabular}{l|c c|c c|c c c|c c} \hline \hline & \multicolumn{2}{c|}{FT} & \multicolumn{2}{c|}{Grafting} & \multicolumn{2}{c|}{Graft re-training} & \multicolumn{2}{c}{BitFit} \\ \hline Dataset & Acc./F1 & ECE & Acc./F1 & ECE & Acc./F1 & ECE & Acc./F1 & ECE \\ \hline SST-2 & 92.3 (0.3) & 7.4 (0.3) & 92.4 (0.1) & 3.1 (0.4) & 92.2 (0.7) & 3.9 (0.7) & 92.4 (0.6) & 6.7 (0.8) \\ AGNews & 92.7 (0.4) & 6.8 (0.3) & 91.1 (0.9) & 0.9 (0.2) & 91.2 (0.1) & 1.0 (0.2) & 93.0 (0.2) & 4.4 (0.2) \\ QNLI & 88.0 (0.8) & 10.2 (0.0) & 84.7 (0.6) & 1.0 (0.3) & 87.4 (0.5) & 2.0 (1.1) & 87.8 (0.6) & 8.1 (2.3) \\ QQP\({}^{\dagger}\) & 79.6 (0.1) & 10.1 (4.2) & 76.3 (0.6) & 3.5 (0.7) & 78.1 (0.7) & 4.3 (1.3) & 79.4 (0.3) & 9.7 (1.2) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparing the accuracy (\({}^{\dagger}\)F1) in % and the calibration error (ECE) for fine-tuned model (FT), model grafting, graft re-training for grafting region with sparsity \(0.01\)%, and BitFit. Results are in the 4096-shot setting. Re-training the grafting regions from scratch is not only good, but performs slightly better than the grafted model, implying that the grafting regions form a sub-network of their own. It also retains the good calibration of non re-trained grafting. BitFit updates all biases (sparsity \(0.05\)%) and achieves slightly better accuracy than grafting, but it has significantly worse calibration error compared to the grafted model.
Figure 4: Distribution of graft parameters in different regions of the model across layers. (TL) All parameters, (TR) Value parameters of attention module, (BL) first layer parameters of the feedforward module, (BR) LayerNorm parameters (feedforward and attention combined). For most of the tasks, the graft parameters are concentrated more in the middle layers.
Figure 5: Distribution of graft parameters in attention and feed-forward modules. For feedforward module, Intermediate denotes its first layer and Output denotes its second layer. Most of the graft parameters are concentrated in the Value parameters of the attention module, the first layer of the feedforward module, and the LayerNorm parameters.
layer parameters of the feedforward module in the grafting region show varying patterns among different tasks.
### Other Fine-Tuning Paradigms
Experiments in the previous sections use prompt-based fine-tuning with SGD optimizer. In this section, we check whether sparse grafting regions (i.e. skill localization) exist in models fine-tuned differently.
**Standard FT.** Instead of prompt-based FT, we consider fine-tuning with a linear head on top of the [CLS] token representation, which was the _standard FT_ approach before prompt-tuning (Liu et al., 2019). Figure 2 confirms that similar sparse localization is also possible for Standard FT.
**AdamW optimizer.** In Figure 6 we test skill localization with the AdamW (Loshchilov and Hutter, 2017) optimizer on prompt-based FT. Unlike SGD, we find that fine-tuning with AdamW _does not_ yield sparse grafted models with good performance. However, adding an explicit \(\ell_{1}\) regularization (with strength \(0.001\)) on the parameter movement \(\|\theta_{\text{ft}}-\theta_{\text{pre}}\|_{1}\) can recover sparse grafts. This suggests that \(\ell_{1}\) regularization could be a way to encourage skill localization. An extensive exploration of this is left for future work.
### Auto-Regressive Language Model
Given the tremendous recent success of auto-regressive models (Brown et al., 2020), we fine-tune a pre-trained GPT-2 (small) model (Radford et al., 2019) on GLUE tasks using prompts from Holtzman et al. (2021). Firstly, we find that skill localization is possible for GPT-2 based fine-tuning as well, albeit requiring denser regions with \(0.05\)% of parameters as opposed to the \(0.01\)% required by the RoBERTa model; Table 3 summarizes the results. Overall, the performance of GPT-2 fine-tuned models is worse than a similarly sized RoBERTa model, which is consistent with prior work. GPT-2 requiring denser regions and having worse generalization is consistent with our connection between sparsity and generalization from Section 4.3.
\begin{table}
\begin{tabular}{l|c c|c c|c c|c c|c} \hline \hline & \multicolumn{4}{c|}{64-shot} & \multicolumn{4}{c}{4096-shot} \\ \hline & \multicolumn{2}{c|}{FT} & \multicolumn{2}{c|}{Graft} & \multicolumn{2}{c|}{FT} & \multicolumn{2}{c|}{Graft} & \multicolumn{2}{c}{} \\ \hline Dataset & Acc. & ECE & Acc. & ECE & Acc. & ECE & Acc. & ECE & Agreement \\ \hline \hline \multicolumn{10}{c}{Single sentence tasks} \\ \hline SST-2 & 90.5 (0.4) & 9.7 (0.3) & 89.7 (0.2) & 7.8 (0.6) & 92.3 (0.3) & 7.4 (0.3) & 92.4 (0.1) & 3.1 (0.4) & 95.3 (0.6) \\ CR & 90.2 (0.6) & 8.2 (2.3) & 89.5 (1.1) & 5.7 (1.9) & 91.7 (0.2) & 8.0 (0.3) & 91.7 (0.5) & 5.0 (0.3) & 96.6 (0.5) \\ MR & 85.0 (1.2) & 22.9 (2.1) & 85.2 (1.7) & 10.8 (2.3) & 89.7 (0.3) & 9.0 (0.6) & 89.1 (1.1) & 1.5 (0.2) & 93.6 (0.8) \\ MPQA & 85.4 (0.9) & 14.2 (0.9) & 84.1 (1.2) & 11.4 (18) & 88.9 (0.6) & 10.5 (0.6) & 88.1 (0.4) & 3.3 (0.2) & 93.3 (0.2) \\ TREC & 93.1 (1.7) & 6.1 (1.1) & 86.8 (0.7) & 4.8 (1.2) & - & - & - & - & - \\ AGNews & 88.2 (0.3) & 10.1 (0.5) & 86.8 (0.3) & 7.1 (0.5) & 92.7 (0.4) & 6.8 (0.3) & 91.1 (0.2) & 0.9 (0.2) & 95.1 (0.5) \\ Subj & 91.2 (1.2) & 5.9 (1.2) & 91.7 (1.2) & 2.6 (1.2) & 96.7 (0.1) & 3.0 (0.1) & 95.5 (0.1) & 1.2 (0.1) & 97.3 (0.2) \\ \hline \multicolumn{10}{c}{Two sentence tasks} \\ \hline QNLI & 77.8 (0.9) & 21.1 (0.9) & 76.7 (1.5) & 12.3 (0.8) & 88.0 (0.8) & 10.2 (0.0) & 84.7 (0.6) & 1.0 (0.3) & 88.9 (0.3) \\ SNLI & 76.5 (1.7) & 20.8 (1.6) & 72.1 (1.9) & 14.4 (3.1) & 86.4 (0.3) & 10.6 (1.7) & 82.7 (0.5) & 1.1 (0.4) & 87.5 (1.5) \\ MNLI & 67.5 (2.1) & 29.5 (2.0) & 64.6 (2.5) & 20.4 (1.4) & 81.8 (0.1) & 14.8 (0.7) & 78.0 (0.4) & 1.5 (0.0) & 86.4 (0.4) \\ RTE & 66.9 (3.5) & 31.0 (3.5) & 66.2 (5.0) & 21.5 (3.9) & 80.0 (2.2) & 20.2 (1.7) & 77.7 (0.5) & 8.5 (1.7) & 88.6 (1.7) \\ MRPC\({}^{\dagger}\) & 82.5 (1.9) & 22.9 (2.1) & 76.9 (2.4) & 19.1 (3.5) & 90.0 (0.7) & 13.0 (0.8) & 86.2 (0.5) & 6.2 (0.1) & 88.6 (1.7) \\ QQP\({}^{\dagger}\) & 68.7 (1.3) & 26.5 (3.7) & 66.9 (1.0) & 17.3 (2.1) & 79.6 (0.8) & 10.1 (4.2) & 76.3 (0.4) & 3.5 (0.7) & 93.3 (4.9) \\ \hline \hline \end{tabular}
\end{table}
Table 2: For each downstream task, we learn a grafting region \(\mathbf{\gamma}\) using our optimization procedure in Equation (3). The grafting regions for all tasks have sparsity at most \(0.01\%\) (\(<8500\) parameters). We report test accuracy (\({}^{\dagger}\)F1) and the calibration error using ECE of the fine-tuned model and the grafted model for each task. The main findings are (1) The grafted model can retrieve \(>95\%\) of the FT accuracy, while being better calibrated than the original model itself. For single-sentence tasks (4096-shot) the grafted model shows only a \(0.7\%\) drop in accuracy but an improvement of \(5\%\) in the calibration error. Similarly for two-sentence tasks, the grafted model shows a \(3.4\%\) drop in accuracy with an improvement of \(9.9\%\) in the calibration error.
Figure 6: Grafting accuracy for FT with SGD and AdamW. For both SST-2 and QNLI, the AdamW trained model is much worse at skill localization through grafting. However, a small \(\ell_{1}\) regularization on the parameter movement during FT recovers localization.
## 4 Calibration, OOD and Generalization
Usually "skill" denotes flexibility and competence at a task, and machine learning has standard notions for testing this.
### Skill Localization improves Calibration
Human skill in a task usually includes some awareness of when the task has not been performed well. In ML, _calibration_ is an attempt to formalize this. Suppose a classification model outputs, given input \(x\) and each possible label \(y\), a probability \(\Pr[y|x]\) that the label \(y\) is correct for \(x\). In order to be _well-calibrated_, this probability should be meaningful, i.e., among all \(x,y\) such that \(\Pr[y|x]=a\), the expected fraction of these where \(y\) is the correct (ground truth) label should also be around \(a\). It is well-known that usual softmax outputs in ML models are not well-calibrated. (This can be mitigated given enough held-out data by re-calibrating the output.) Could skill localization help with calibration? This could be interesting in low-data settings where re-calibration is impossible.
Table 2 reports the calibration error, using the ECE metric (Naeini et al., 2015) (described in Appendix A.3), for sparse grafted models on tasks from the GLUE dataset. Sparsity levels that cause \(<5\)% reduction in accuracy lead to \(40-90\)% reduction in ECE. Vanilla fine-tuning is highly overconfident in wrong predictions, and grafted models avoid this; see histograms in Figure 7.
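For reference, a minimal sketch of the binned ECE metric (Naeini et al., 2015) used above; the variable names and bin count are placeholders.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence and average |accuracy - confidence|, weighted by bin size."""
    confidences = np.asarray(confidences)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# e.g. expected_calibration_error(max_softmax_probs, predictions == labels)
```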
**Comparison with re-training.** Our sparse grafting involves no re-training. Does re-training just the grafted parameters affect calibration? Table 1 shows that impressive calibration persists after re-training. This suggests that the sparse grafting region identified by our method is, in some sense, fundamentally suited to the task.
Note that several recent papers have tried to sparsify fine-tuned nets by identifying sub-networks of interest and re-training their parameters - one example is BitFit. Table 1 finds that BitFit is better calibrated than vanilla fine-tuning, but worse than our grafted model after re-training. This suggests that the sparse regions identified by our procedure are better at localizing the skill.
### Out-of-Distribution Generalization
Human skills extend at least a bit to new settings; e.g., skill at throwing a baseball should also lead to ability to throw a tennis ball. We evaluated in-distribution (ID) and out-of-distribution (OOD) accuracies for grafted models of varying sparsities in Figure 8. We find that grafted models may suffer a little on ID performance but match or significantly outperform vanilla fine-tuning on OOD. This
\begin{table}
\begin{tabular}{l|c c c|c c c|c c|c} \hline \hline & \multicolumn{4}{c|}{\(64\)-shot} & \multicolumn{4}{c}{\(4096\)-shot} \\ \hline & \multicolumn{2}{c|}{FT} & \multicolumn{2}{c|}{Graft} & \multicolumn{2}{c|}{FT} & \multicolumn{2}{c|}{Graft} & \multicolumn{2}{c|}{} \\ \hline Dataset & Acc. & ECE & Acc. & ECE & Acc. & ECE & Acc. & ECE & Agreement \\ \hline \multicolumn{10}{c}{Single sentence tasks} \\ \hline SST-2 & 85.0 (0.9) & 11.8 (2.0) & 84.8 (0.8) & 7.3 (2.1) & 90.6 (0.6) & 5.8 (1.7) & 88.8 (0.5) & 3.4 (0.8) & 93.6 (0.9) \\ CR & 87.4 (1.0) & 11.1 (1.1) & 87.5 (0.8) & 7.4 (0.9) & 87.3 (1.5) & 11.2 (1.3) & 88.5 (0.6) & 5.4 (1.6) & 92.8 (1.6) \\ MR & 80.2 (2.0) & 17.0 (2.0) & 80.5 (1.6) & 11.0 (1.4) & 86.7 (0.2) & 6.0 (2.2) & 86.0 (0.4) & 2.0 (0.2) & 91.6 (1.0) \\ MPQA & 85.2 (1.2) & 11.9 (1.3) & 85.5 (0.5) & 6.7 (1.9) & 88.4 (0.6) & 7.6 (2.1) & 88.1 (0.2) & 3.7 (1.4) & 93.7 (1.3) \\ AGNews & 87.2 (0.8) & 9.6 (0.5) & 84.3 (0.8) & 6.0 (0.8) & 92.2 (0.4) & 2.8 (0.9) & 88.6 (0.9) & 1.6 (0.3) & 91.9 (0.7) \\ Subj & 89.5 (1.1) & 7.5 (0.4) & 85.0 (2.2) & 2.8 (1.3) & 95.8 (0.2) & 3.3 (0.3) & 93.5 (0.3) & 1.5 (0.6) & 95.2 (0.5) \\ \hline \multicolumn{10}{c}{Two sentence tasks} \\ \hline QNLI & 53.9 (3.3) & 30.5 (13.0) & 54.1 (3.2) & 18.4 (7.3) & 81.9 (0.3) & 6.1 (4.8) & 79.8 (0.2) & 2.5 (0.9) & 87.8 (1.6) \\ SNLI & 56.0 (3.2) & 33.4 (4.3) & 50.4 (3.1) & 23.3 (4.4) & 80.7 (0.6) & 8.2 (2.7) & 76.3 (0.5) & 1.7 (0.7) & 84.0 (1.2) \\ MNLI & 45.1 (0.5) & 41.3 (2.1) & 42.2 (5.0) & 30.7 (3.1) & 72.3 (0.8) & 12.2 (4.5) & 67.6 (0.6) & 4.3 (2.6) & 80.1 (1.2) \\ RTE & 51.8 (5.3) & 36.8 (4.5) & 51.9 (2.9) & 23.8 (2.5) & 66.7 (1.5) & 27.3 (2.9) & 65.7 (1.5) & 16.8 (1.8) & 79.6 (2.9) \\ MRPC\({}^{\dagger}\) & 81.1 (0.2) & 13.2 (7.5) & 78.5 (1.7) & 6.3 (3.1) & 85.2 (0.5) & 15.8 (3.5) & 80.3 (1.5) & 8.3 (2.3) & 82.9 (2.2) \\ QQP\({}^{\dagger}\) & **56.8 (0.7)** & **25.6 (7.9)** & **54.6 (1.8)** & **16.1 (5.9)** & 76.1 (0.2) & **13.5 (2.4)** & 73.0 (0.5) & 5.9 (1.9) & 87.4 (1.1) \\ \hline \hline \end{tabular}
\end{table}
Table 3: We fine-tune GPT-2 models (small) (Radford et al., 2019) on different downstream tasks using prompts from (Holtzman et al., 2021). FT on GPT-2 (small) performs worse than RoBERTa-base in both 64-shot and 4096-shot settings. In fact, we observe that FT gives close to random guessing performance on most two sentence tasks in the 64-shot setting. For each downstream task, we learn a grafting region \(\boldsymbol{\gamma}\) using our optimization procedure in Equation (3). The grafting regions for all tasks have sparsity at most \(0.05\%\) (\(<42500\) parameters). We report test accuracy (\({}^{\dagger}\)F1) and the calibration error using ECE of the fine-tuned model and the grafted model for each task. The main findings are (1) The grafted model can retrieve \(>95\%\) of the FT accuracy while being better calibrated than the original model itself. For single-sentence tasks (4096-shot) the grafted model shows only a \(1.2\%\) drop in accuracy but an improvement of \(3.2\%\) in the calibration error. Similarly for two-sentence tasks, the grafted model shows a \(3.4\%\) drop in accuracy with an improvement of \(7.3\%\) in the calibration error.
suggests that grafting has indeed captured "core" skills.
**Small distribution shifts.** When the distribution shift between tasks is intuitively small (e.g. SST-2 to IMDb or MNLI to SNLI), the vanilla fine-tuned model itself is robust enough; grafting provides little or no advantage. Similar findings appear in Hendrycks et al. (2020).
**Large distribution shifts.** The sentiment analysis task MPQA uses text from news articles, while SST-2/Yelp/Amazon use reviews. We find that models fine-tuned on MPQA perform poorly when tested on SST-2 and Yelp. However, the grafted model for MPQA performs at least \(5\%\) better than the fine-tuned model. Similar results hold for NLI datasets. The QNLI task consists of question/answer pairs, whereas MNLI and SNLI (see Footnote 1) have pairs of assertions as inputs. This distribution shift is enough to make vanilla fine-tuning on QNLI perform poorly on MNLI and SNLI, but the grafted model for QNLI again performs around \(5\%\) better.
Footnote 1: We consider only contradiction and entailment labels here.
**Comparison with WiSE-FT.** Often there is no magic bullet for doing well on both ID and OOD generalization. For image data (with a model pre-trained on CLIP), it was shown that WiSE-FT (Wortsman et al., 2022), which linearly interpolates between \(\mathbf{\theta}_{\text{ft}}\) and \(\mathbf{\theta}_{\text{pre}}\), does best in the ID-OOD trade-off. Figure 8 explores similar ideas for NLP tasks. Model grafting is better than WiSE-FT for one ID-OOD pair, but the opposite is true for a different pair. Applying the WiSE-FT idea to the grafted model (i.e., interpolating between the grafted model and the pre-trained model), "WiSE-Graft," achieves an ID-OOD trade-off competitive with WiSE-FT.
### Understanding Generalization for Fine-tuning
Fine-tuning a vast pre-trained model on a small dataset seems iffy since we are finding the best model in a very large class of models \(\Theta\), and according to the classic understanding of generalization, the error could be as high as \(\sup_{\mathbf{\theta}\in\Theta}|\mathcal{L}_{\text{test}}(\mathbf{\theta})-\mathcal{L }_{\text{train}}(\mathbf{\theta})|\). This bound is too pessimistic for most deep learning settings (Nagarajan and Kolter, 2019), including fine-tuning, since the training data can be easily fit perfectly in these settings.
Understanding generalization of the grafted model is more tractable because of the small size of the grafting region. Empirically, we find that re-training on the grafted parameters fails to make \(|\mathcal{L}_{\text{test}}(\mathbf{\theta})-\mathcal{L}_{\text{train}}(\mathbf{\theta})|\) higher than \(1\%\) once the dataset has a few thousand datapoints; the small size of the grafting region can be formalized as a "complexity parameter" controlling this gap. Appendix C explores this further using classical generalization theory.
## 5 Multi-Task and Continual Learning
Previous sections have shown that sparse grafts can localize skills when fine-tuning on a single task. In this section, we test a stronger version of skill localization involving multiple tasks in the following settings: (i) multi-task learning, where the model is fine-tuned on many tasks together, and (ii) continual learning, where the model is fine-tuned on one task at a time.
### Multi-Task Learning
We perform multi-task learning (MT) by fine-tuning a RoBERTa-base model with SGD on 8 different datasets (\(4096\)-shot setting for each) simultaneously. The datasets represent four different classes of tasks: NLI, sentiment analysis, paraphrasing, and classification. Firstly, the resulting MT model achieves test accuracy comparable with the task-specific FT models, suggesting none of the _gradient interference_ observed in some settings (Yu et al., 2020). For skill localization, we learn task-specific sparse regions: for each task \(i\), we optimize \(\mathcal{L}_{i}\) (i.e., performance on task \(i\)) from Equation (3) using the MT model parameters as \(\mathbf{\theta}_{\text{ft}}\) and \(\mathbf{\gamma}_{\text{base}}=\mathbf{0}\). (Note that grafted models for tasks \(i,j\) have the same value for parameters that are contained in both \(\mathbf{\gamma}_{i}\) and \(\mathbf{\gamma}_{j}\).) Results are presented in Figure 9.
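As a rough illustration, the PyTorch-style sketch below shows one way such a task-specific region could be learned on top of the MT model. It treats the grafted parameters as an interpolation \(\gamma\odot\theta_{\text{ft}}+(1-\gamma)\odot\theta_{\text{pre}}\) with a relaxed mask \(\gamma\) and an \(\ell_{1}\) sparsity penalty; the exact mask parameterization, thresholding, and hyperparameters of Equation (3) are assumptions here, not taken from the text.

```python
import torch
from torch.func import functional_call

def learn_graft_region(model, theta_pre, theta_ft, task_loader, loss_fn,
                       steps=1000, lr=0.1, l1_weight=1e-3):
    """Learn a sparse mask gamma; grafted params = gamma*theta_ft + (1-gamma)*theta_pre.

    theta_pre / theta_ft: dicts of plain tensors keyed by parameter name,
    e.g. {n: p.detach().clone() for n, p in model.named_parameters()}.
    """
    # One mask logit per parameter, initialized so that gamma starts near 0
    # (the gamma_base = 0 initialization mentioned in the text).
    logits = {n: torch.full_like(p, -3.0, requires_grad=True)
              for n, p in theta_ft.items()}
    opt = torch.optim.Adam(list(logits.values()), lr=lr)
    data_iter = iter(task_loader)
    for _ in range(steps):
        try:
            inputs, labels = next(data_iter)
        except StopIteration:
            data_iter = iter(task_loader)
            inputs, labels = next(data_iter)
        gamma = {n: torch.sigmoid(l) for n, l in logits.items()}
        grafted = {n: gamma[n] * theta_ft[n] + (1 - gamma[n]) * theta_pre[n]
                   for n in gamma}
        outputs = functional_call(model, grafted, (inputs,))
        sparsity = sum(g.abs().sum() for g in gamma.values())
        loss = loss_fn(outputs, labels) + l1_weight * sparsity
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Binarize the relaxed mask into the final grafting region.
    return {n: (torch.sigmoid(l) > 0.5) for n, l in logits.items()}
```

Only the mask is optimized; the underlying model weights stay fixed, which matches the post-hoc nature of grafting.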
We find skill localization continues to exist in multi-task models, and now also provides signal about task similarity (through region overlap) and affords interesting compositional properties (through the union of regions).
**Region overlap and task similarity.** Figure 9(a) shows that the grafting regions for different tasks have very little overlap
\begin{table}
\begin{tabular}{l|c c c} \hline \hline OOD task & Grafting & Graft re-training & BitFit \\ \hline \multicolumn{4}{c}{ID task: SST-2} \\ \hline Yelp & 89.5 (0.3) & 88.9 (1.0) & 89.0 (0.3) \\ IMDb & 81.5 (0.7) & 81.2 (1.4) & 81.3 (0.7) \\ \hline \multicolumn{4}{c}{ID task: QNLI} \\ \hline MNLI(0/1) & 71.8 (1.8) & 67.0 (2.3) & 60.5 (4.9) \\ SNLI(0/1) & 80.1 (2.9) & 66.4 (7.1) & 57.4 (8.0) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparing the OOD performance of model grafting, graft re-training and BitFit. For NLI tasks, the OOD accuracy of graft re-training is \(5\%\) worse than model grafting. BitFit completely fails on OOD tasks when the distribution shift is large.
Figure 7: Histogram of top prediction probabilities for FT model and grafted model on QNLI (4096-shot). (**left**) The model assigns high confidence on most examples. (**right**) The grafted model has diverse confidence levels, explaining its superior calibration.
(defined as \(\frac{|\mathbf{\gamma}_{i}\cap\mathbf{\gamma}_{j}|}{|\mathbf{\gamma}_{j}|}\) for tasks \(i,j\)). However, similar tasks show slightly more overlap compared to other pairs, e.g. (SST-2, CR), and (SNLI, MNLI). This is some evidence of skill-sharing across similar tasks.
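The overlap statistic itself is simple to compute; a minimal sketch, assuming each region is stored as a dictionary of boolean masks (one per parameter tensor, as returned by the sketch above), is:

```python
import torch

def region_overlap(gamma_i, gamma_j):
    """Asymmetric overlap |gamma_i ∩ gamma_j| / |gamma_j| between two graft regions."""
    intersection = sum((gamma_i[n] & gamma_j[n]).sum().item() for n in gamma_j)
    size_j = sum(mask.sum().item() for mask in gamma_j.values())
    return intersection / max(size_j, 1)
```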
**Skill isolation and transfer.** In Figure 9(b) we find that grafted models for a single task, which presumably isolate the skills for just that task, indeed only help that particular task and a few closely related tasks. We measure the effect of task \(i\) on task \(j\) by grafting only the parameters in the region \(\mathbf{\gamma}_{i}\), and measuring the performance gain on task \(j\) compared to the performance gain of the MT model. For a task \(t\), if \(P_{\mathbf{\gamma},t}\) is the accuracy of the model grafted with \(\mathbf{\gamma}\), \(P_{\mathbf{0},t}\) is the pre-trained accuracy, and \(P_{\mathbf{1},t}\) is the MT model accuracy, then the relative performance gain of grafting region \(\mathbf{\gamma}\) is
\[\mathrm{Rel}_{\mathbf{\gamma},t}=(P_{\mathbf{\gamma},t}-P_{\mathbf{0},t})/(P_{\mathbf{1},t}-P _{\mathbf{0},t}) \tag{5}\]
We find that some similar pairs of tasks, like (SST-2, CR) and (SNLI, MNLI), show transfer, i.e., grafting the region from one task helps with the other. Interestingly, for some tasks that are seemingly similar (e.g., QQP and MRPC) the effect seems to be asymmetric, i.e., \(\mathbf{\gamma}_{\mathrm{MRPC}}\) helps with QQP, but \(\mathbf{\gamma}_{\mathrm{QQP}}\) does not help with MRPC. Furthermore, we observe that \(\mathbf{\gamma}_{\mathrm{QNLI}}\) helps with QQP paraphrasing, presumably because both have questions in their inputs.
**Skill compositionality through region unions.** Since grafting for a single task works for that task as well as related tasks, we ask a more ambitious question: _Can grafting multiple regions lead to skill isolation for that subset of tasks?_ A priori one would guess "No," because the grafting regions were independently trained on individual tasks without any compositionality requirements. Surprisingly, we find the answer is a qualified "yes." Figure 9(c) presents compositionality results for 5 groups of tasks. For each group \(G\), we take the union of regions \(\mathbf{\gamma}_{G}=\cup_{i\in G}\mathbf{\gamma}_{i}\) and evaluate the relative performance gains for tasks using the grafted model of \(\mathbf{\gamma}_{G}\). We indeed find that composing tasks for a subset in this way retains around \(70\%\) of the accuracy gains for that subset and related tasks, but not for other tasks. We also tried slightly fine-tuning the union region \(\mathbf{\gamma}_{G}\) to optimize the joint loss on tasks in \(G\), i.e., \(\sum_{i\in G}\mathcal{L}_{i}(\mathbf{\gamma})\), taking only 10 gradient steps for quick adaptation. Figure 9(d) shows that this makes the accuracy gain on relevant tasks even higher (\(\sim 80\%\)) without affecting gains for other tasks by much. The emergence of this compositionality property, we believe, is very interesting and deserves further exploration.
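A compact sketch of the composition step (union of boolean-mask regions, grafting, and the relative gain of Equation (5)) could look as follows; the conventions match the earlier sketches and are illustrative rather than the paper's exact code.

```python
import torch

def union_region(regions):
    """Union gamma_G of a list of graft regions (dicts of boolean masks)."""
    return {n: torch.stack([r[n] for r in regions]).any(dim=0) for n in regions[0]}

def graft_parameters(theta_pre, theta_ft, gamma):
    """Keep fine-tuned values inside the region, pre-trained values outside."""
    return {n: torch.where(gamma[n], theta_ft[n], theta_pre[n]) for n in gamma}

def relative_gain(acc_graft, acc_pre, acc_mt):
    """Rel_{gamma,t} of Equation (5)."""
    return (acc_graft - acc_pre) / (acc_mt - acc_pre)
```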
**AdamW MT training.** We also check skill localization in a model trained with AdamW (see Figure 12 in the Appendix). Interestingly, we find that the task-specific grafts show small overlap across tasks and only perform well on similar tasks, indicating localization even in the absence of an explicit \(\ell_{1}\) regularization during training. This is in stark contrast to single-task trained models, which had failed to show any skill localization without an explicit \(\ell_{1}\) regularization (Figures (a) and (b)). We speculate that forcing the model to do well on multiple tasks together naturally encourages the model to localize skills.
Figure 8: Comparing the zero-shot OOD performance of the FT model and grafting in various settings. (b,c) We observe at least a \(5\%\) gap between the performance of the two when the distribution shifts are large. (c) The gap gets worse as the number of available in-distribution samples increases. (e) For transfer in the NLI task, the optimal (ID, OOD) point is (80.5, 79.6), and for grafting (84.2, 80.0). (f) For transfer in the sentiment task, WiSE-FT on sparse grafted models (WiSE-Graft) gets a competitive ID-OOD curve.
### Forget-free Continual Learning
Continual learning aims to train a model sequentially, seeing one task at a time. A frequent complication is _catastrophic forgetting_ (see chapter 4 in Chen and Liu (2018)): training on new tasks can greatly hurt performance on earlier tasks. Here skill localization could help: once skills for previous tasks have been localized, we can freeze them and only update the rest of the net for new tasks. We use this localization idea, through our grafting procedure, to perform forget-free continual learning, i.e. without forgetting anything about previous tasks.
The main idea is to train only those parameters in a task's grafting region that do not intersect with the grafting regions of previously encountered tasks. During inference, inspired by Kang et al. (2022), we use the grafted model built from the union of the grafting regions of the current task and the previous tasks (a code sketch is given at the end of this subsection). While this requires resetting parameters to pre-trained values for each evaluation, the total memory needed to retain all skills is proportional to \(Ts\) instead of \(Td\), where \(T\) is the total number of tasks, \(s\) (\(\sim 5000\)) is the size of the grafting regions, and \(d\) is the total number of parameters in the model (\(\sim 100\)M). Preliminary experiments in Table 5 on a sequence of three tasks suggest a significant benefit of skill localization. Appendix B.3 provides more details on training and evaluation procedures, and includes a discussion on why grafting helps
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline Method & QNLI & AG news & SST-2 \\ \hline FT & 88.0 (0.8) & 93.1 (0.1) & 92.1 (0.1) \\ ContinualFT & 67.5 (5.3) & 87.6 (2.5) & 92.0 (0.5) \\ GraftingContinual & 86.5 (0.7) & 90.8 (0.1) & 92.5 (0.3) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Continual learning on the sequence of tasks QNLI, AG news, SST-2. The naive continual FT leads to a \(20\%\) drop in accuracy for QNLI, owing to catastrophic forgetting. Grafting continual FT (our procedure) can retain the performance on QNLI, while minimally affecting the performance of newer tasks.
Figure 9: Ablations on the task-specific grafting region \(\mathbf{\gamma}_{i}\) for task \(i\), learned by optimizing \(\mathcal{L}_{i}\) on the MT model. Section 5.1 has details of the experiments and the metrics being reported. In all panels, we evaluate the effect of the graft region of the task in row \(i\) on the task in column \(j\). Panel (a) measures the asymmetric overlap in the regions, defined as \(\frac{|\mathbf{\gamma}_{i}\cap\mathbf{\gamma}_{j}|}{|\mathbf{\gamma}_{j}|}\) for tasks in row \(i\) and column \(j\). Panels (b), (c), (d) evaluate the relative accuracy gain of the task in column \(j\) using the graft regions of the task(s) in row \(i\); refer to Equation (5) for the precise expression. **Observations:** (a) Similar tasks, like (SST-2, CR) and (SNLI, MNLI), show relatively higher overlap in grafting regions. (b) The grafted model of a task only does well on itself and a few similar tasks. (c) The grafted model with the union of regions for a subset \(G\) of tasks only does well on the tasks in \(G\) and similar tasks (all values higher than \(0.7\)). (d) Allowing a few steps of GD to purify the union of grafting regions improves the grafted model’s performance on the desired set of tasks.
in this case. Other explorations of skill localization and grafting for continual learning are left for future work.
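A schematic version of this recipe is sketched below: parameters already claimed by earlier tasks are frozen, only the sparse per-task values are stored, and evaluation grafts the union of the relevant regions onto the pre-trained weights. The helpers `learn_region` and `train_masked` are placeholders for the region-learning and masked-training steps; their details, like the rest of the sketch, are illustrative assumptions rather than the exact procedure of Appendix B.3.

```python
import torch

def continual_grafting(theta_pre, tasks, learn_region, train_masked):
    """Sequentially learn disjoint graft regions; store only the sparse per-task values."""
    used = {n: torch.zeros_like(p, dtype=torch.bool) for n, p in theta_pre.items()}
    stored = []                                              # per task: (region, values inside it)
    theta = {n: p.clone() for n, p in theta_pre.items()}
    for task in tasks:
        region = learn_region(theta, task)                   # candidate region for this task
        region = {n: region[n] & ~used[n] for n in region}   # keep earlier skills frozen
        theta = train_masked(theta, region, task)            # update only parameters in `region`
        used = {n: used[n] | region[n] for n in used}
        stored.append((region, {n: theta[n][region[n]].clone() for n in region}))
    return stored

def eval_parameters(theta_pre, stored, upto):
    """Graft the union of regions of tasks 0..upto onto the pre-trained weights."""
    theta = {n: p.clone() for n, p in theta_pre.items()}
    for region, values in stored[:upto + 1]:
        for n in region:
            theta[n][region[n]] = values[n]
    return theta
```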
## 6 Related Work
**Knowledge/skills.** Li et al. (2022) show that the feed-forward activations are sparse in large pre-trained models. Dai et al. (2021) discover knowledge neurons in BERT, whose activations correlate with specific facts, whereas Burns et al. (2022) find latent knowledge in the internal representations of language models. Furthermore, Meng et al. (2022) and Hase et al. (2023) show that language models localize knowledge in the feed-forward layers of the pre-trained models. Wang et al. (2022) find specific "skill" neurons highly predictive of the downstream task in soft prompt-tuning (Li & Liang, 2021) of language models.
**Parameter-efficient fine-tuning.** Houlsby et al. (2019) only train a small set of trainable parameters added between layers of the network. Gordon et al. (2020) use an \(\ell_{0}\) regularizer to update a few layers during fine-tuning. BitFit (Ben Zaken et al., 2022) only updates biases during FT, and performs comparably to vanilla FT. (Our graft regions have fewer parameters.)
**Lottery ticket hypothesis.** The lottery ticket hypothesis (Frankle & Carbin, 2018) asserts that a trained neural network can be re-trained using a small sub-network, while setting other parameters to 0, and still reach the same performance. Lottery tickets for pre-trained language models are studied in (Chen et al., 2020; Prasanna et al., 2020; Liang et al., 2021). To the best of our knowledge, LTH results in sub-networks much denser than our grafts. While Gong et al. (2022) claim to find a much sparser "lottery ticket" that transfers to different GLUE tasks, to the best of our understanding, they set the parameters outside a ticket to their pre-trained values and not to 0. Compared to Gong et al. (2022), our grafted model is post-hoc (no re-training is needed) and is sparser (0.01% on RoBERTa-base for ours vs. 0.05% on RoBERTa-large for their dominant ticket).
**OOD generalization and distribution shifts.** Diffenderfer et al. (2021), Zhang et al. (2021), and Liu et al. (2022) show that re-trained lottery tickets can be more robust to distribution shift. Lee et al. (2022) alleviate distribution shift by only fine-tuning specific layers of the whole model. Li et al. (2022) show the efficacy of sparsifying the activations of the feed-forward layers of the model.
**Multi-task training.** Misra et al. (2016) stitch networks from different tasks together, Long et al. (2017) learn relations between tasks to enhance performance, and Lu et al. (2017) apply adaptive weight-sharing between the networks for different tasks. Multi-task training in NLP is also studied in (Collobert & Weston, 2008; Liu et al., 2016; Gupta et al., 2016; Lu et al., 2017; Liu et al., 2019). These approaches involve training additional task-specific parameters, whereas ours does not. Also, we attempt to understand the localization of skills within the model post-hoc.
Saunshi et al. (2021); Wei et al. (2021); Malladi et al. (2022) mathematically study head-tuning, prompt-tuning and fine-tuning of language models for few-shot downstream tasks.
## 7 Conclusions and future directions
By successfully demonstrating the ability to do a sparse "graft" of the skill on top of the pre-trained model, this paper makes a start on localizing newly acquired skills inside fine-tuned language models. We hope our first-cut method will improve with further work, potentially yielding better understanding as well as applications in multi-task and continual learning, which we also begin to address in Section 5.2. We hope these may yield new insights on how to compose skills, decompose the identified skill into finer skills, and give applications to unlearning and federated learning. One open problem in the multi-task setting is a method to find, for any subset \(S\subseteq\{1,\dots,m\}\) of tasks, a model that does well on all tasks in \(S\). (The naive method would train models for all \(2^{m}\) subsets of tasks.) Our approach of finding task-specific regions and using their unions shows promise for small \(m\).
**Acknowledgments.** We thank Danqi Chen for feedback on an earlier version of this work and Saurabh Garg for pointing us to WiSE-FT. We also thank Tianyu Gao and Mengzhou Xia for clarifications on the prompt-based fine-tuning codebase. This work is supported by funding from NSF, ONR, Simons Foundation, DARPA and SRC.
|
2303.11055 | Parameter-Free Channel Attention for Image Classification and
Super-Resolution | The channel attention mechanism is a useful technique widely employed in deep
convolutional neural networks to boost the performance for image processing
tasks, e.g., image classification and image super-resolution. It is usually
designed as a parameterized sub-network and embedded into the convolutional
layers of the network to learn more powerful feature representations. However,
current channel attention induces more parameters and therefore leads to higher
computational costs. To deal with this issue, in this work, we propose a
Parameter-Free Channel Attention (PFCA) module to boost the performance of
popular image classification and image super-resolution networks, but
completely sweep out the parameter growth of channel attention. Experiments on
CIFAR-100, ImageNet, and DIV2K validate that our PFCA module improves the
performance of ResNet on image classification and improves the performance of
MSRResNet on image super-resolution tasks, respectively, while bringing little
growth of parameters and FLOPs. | Yuxuan Shi, Lingxiao Yang, Wangpeng An, Xiantong Zhen, Liuqing Wang | 2023-03-20T12:08:58Z | http://arxiv.org/abs/2303.11055v1 | # Parameter-Free Channel Attention for Image Classification and Super-Resolution
###### Abstract
The channel attention mechanism is a useful technique widely employed in deep convolutional neural networks to boost the performance for image processing tasks, _e.g._, image classification and image super-resolution. It is usually designed as a parameterized sub-network and embedded into the convolutional layers of the network to learn more powerful feature representations. However, current channel attention induces more parameters and therefore leads to higher computational costs. To deal with this issue, in this work, we propose a Parameter-Free Channel Attention (PFCA) module to boost the performance of popular image classification and image super-resolution networks, but completely sweep out the parameter growth of channel attention. Experiments on CIFAR-100, ImageNet, and DIV2K validate that our PFCA module improves the performance of ResNet on image classification and improves the performance of MSRResNet on image super-resolution tasks, respectively, while bringing little growth of parameters and FLOPs.
Parameter-free channel attention, image classification, image super-resolution
## I Introduction
Attention mechanisms [1, 2] have been validated to be very effective at boosting the performance of deep neural networks [1, 2, 3]. Among them, channel attention [1] is a commonly used attention mechanism that assigns adaptive weights to different feature maps to improve the performance of image classification networks. Therefore, it has become a key component of deep network architectures. Besides, the effectiveness of channel attention [1] for image super-resolution has also been illustrated in the deep residual channel attention network [4] and the cascading residual network [5].
A parameter-free attention mechanism was studied in [6] to assign position-wise importance over the feature maps via spatial attention without parameter growth. A parameter-free attention mechanism was also studied from the neuroscience perspective in [7]. Different from current attention mechanisms that need to be trained from data, that work utilizes a fixed perception mechanism to assign pixel-wise importance over the feature maps without additional parameters. Compared with popular parameterized channel or spatial attention mechanisms [1, 8], parameter-free attention mechanisms can achieve better results [6, 7] while avoiding additional parameter growth, or can reduce the model complexity and computational costs while preserving the network performance.
In this paper, we develop a Parameter-Free Channel Attention (PFCA) module to replace the standard channel attention module [1] in popular image classification and image super-resolution networks. Our PFCA module exploits useful statistical information in the different channels of a feature map, and is a plug-and-play module that can be embedded into a network to enhance its feature representation capability. By embedding our PFCA into ResNets and MSRResNet, experiments on CIFAR-100, ImageNet, and DIV2K demonstrate that our PFCA module is effective at boosting the performance of ResNet-18, ResNet-50, ResNet-101, and MSRResNet on image classification and super-resolution, respectively, while bringing little computational cost and parameter growth.
The rest of this paper is organized as follows. In Section II, we briefly introduce the related works. In Section III, we present our PFCA module for image classification and super-resolution tasks. Extensive experiments are conducted in Section IV to evaluate the performance of our PFCA module. Section V concludes this work.
## II Related Works
### _Image Classification_
Image classification has been widely tackled by convolutional neural networks (CNNs) ever since AlexNet [9]. VGG [10] then extended the network's depth and width, and GoogLeNet [11] introduced an inception module to reduce the parameters of convolutional networks. The design of the CNN architecture affects its feature extraction ability. In this work, we use the deep residual network (ResNet) [12], the most frequently used CNN architecture, as the backbone for image classification tasks. The squeeze-and-excitation network (SE-Net) [1] validated that an attention mechanism helps a network concentrate on important channels of its feature maps. Improvements in optimization also help the network training process [13]. The study of feature representation has likewise been shown to be useful for classification [14].
### _Image Super-Resolution Task_
Using a convolutional neural network for image super-resolution (SR) was first proposed in SRCNN [15]. The following ESPCNN [16] directly uses the original LR image as the network input, instead of the upsampled image used in SRCNN, to reduce the computational load. After that, VDSR [17] introduced the concept of residual image super-resolution, which has proved effective in SR tasks. The residual block was then introduced in [18] to help design SR network architectures and has been widely used in many following works. However, these methods can hardly be applied in practical scenarios due to their heavy computation, which motivated efficient SR networks. Methods including depth-wise convolutions [19] and group convolutions [20, 21, 22] have been presented in order to save computation. The attention mechanism is also used to improve SR performance: RCAN [4] uses a channel attention module to make its network concentrate on important channels, and PAN [3] introduces a pixel attention module to adaptively weight different pixels of the feature maps.
### _Attention Mechanisms_
Attention mechanisms have attracted great interest since they were successfully applied to natural language processing tasks [23]; they are mainly used to help networks concentrate on part of the information in the data flow by generating weight maps of the corresponding dimensions. For image processing tasks, there are mainly three types of attention mechanisms, depending on the dimension of the generated weight maps: channel attention (CA) [1], spatial attention (SA) [2], and pixel attention (PA) [3]. Self-attention [24] is also very popular apart from these three. Among the above mechanisms, channel attention was first proposed in SENet [1] and then refined in SK-Net [25]. After that, the Convolutional Block Attention Module (CBAM) [8] combines channel attention and spatial attention for stronger attention capability. The Bottleneck Attention Module (BAM) [26] also uses two attention modules and adds their attention matrices to obtain the final attention map. SA-Net [27] splits the channel feature maps and feeds them separately into a channel attention module and a spatial attention module. Apart from these, there also exists temporal attention, i.e., attention at the time level [28].
## III Parameter-Free Channel Attention
### _Preliminary on Channel Attention Mechanism_
To introduce the Parameter-Free Channel Attention (PFCA) module, we first review the channel attention mechanism. Consider the input feature map \(\mathbf{X}\in\mathbb{R}^{N\times C\times H\times W}\), where \(N\) is the batch size, \(C\) is the number of channels, and \(H\times W\) is the spatial size of the feature map. The channel attention module computes a weight map from the channel-level vector obtained by applying a pooling layer, such as average pooling, to \(\mathbf{X}\). The squeeze-and-excitation process [1], a well-known module consisting of two parameterized multilayer perceptrons (MLPs), integrates the channel-level information and outputs the weight map. The weighted attention map is multiplied by the input \(\mathbf{X}\) to get the final output \(\mathbf{Y}\). This process can be described as
\[\mathbf{Y}=\mathbf{X}\cdot\text{Attention}(\text{Pooling}(\mathbf{X})) \tag{1}\]
Channel attention is like a gate that controls the flow of feature maps at the channel level, making some of these feature maps more important for the subsequent network modules. In other words, the two-MLP part functions as an "importance" interpreter for the feature maps. In this work, we aim to provide a method that is similar to channel attention but does not introduce additional parameters.
### _Parameter-Free Channel Attention_
The Parameter-Free Channel Attention (PFCA) module plays a similar role to the channel attention (CA) mechanism. PFCA is inspired by the simple attention module [7], which provides a parameter-free pixel-level attention; we apply this idea to channel-level attention. The attention map \(\mathbf{V}=(\mathbf{V}_{1},\mathbf{V}_{2},...,\mathbf{V}_{C})\in\mathbb{R}^{N\times C\times 1\times 1}\) is obtained from the channel-level vector \(\mathbf{U}=(\mathbf{U}_{1},\mathbf{U}_{2},...,\mathbf{U}_{C})\in\mathbb{R}^{N\times C\times 1\times 1}\), which is also provided by a pooling layer, e.g., an average pooling layer. The attention map \(\mathbf{V}\) is computed from \(\mathbf{U}\) by the element-wise operation below:
\[\mathbf{V}_{j}=\frac{(\mathbf{U}_{j}-\mu)^{2}+2(\sigma^{2}+\lambda)}{4(\sigma ^{2}+\lambda)}\,\ j=1,2,...,C \tag{2}\]
Here \(\mu\) and \(\sigma^{2}\) are the mean and variance of the channel vector \(\mathbf{U}\), computed along the channel dimension, and \(\lambda\) is a small value to control the variance, set to \(10^{-4}\) in the following tasks. After that, the weighted attention map is generated by applying the sigmoid activation to \(\mathbf{V}\) and is then multiplied by the input \(\mathbf{X}\) to get the output:
\[\mathbf{Y}=\mathbf{X}\cdot\text{sigmoid}(\mathbf{V}) \tag{3}\]
The overall PFCA module is illustrated in Figure 1. The pooling layer and the activation part are the same as in channel attention; the difference is that we use a fixed, parameter-free computation to estimate the channel importance.
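As an illustration, Eqs. (2)-(3) translate directly into a few lines of PyTorch; the sketch below is written by us from the equations and should not be read as the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PFCA(nn.Module):
    """Parameter-Free Channel Attention following Eqs. (2)-(3)."""
    def __init__(self, lam: float = 1e-4):
        super().__init__()
        self.lam = lam                               # lambda in Eq. (2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # U: channel-level vector from global average pooling, shape (N, C, 1, 1)
        u = F.adaptive_avg_pool2d(x, 1)
        mu = u.mean(dim=1, keepdim=True)             # mean over the channel dimension
        var = u.var(dim=1, unbiased=False, keepdim=True)
        v = ((u - mu) ** 2 + 2 * (var + self.lam)) / (4 * (var + self.lam))   # Eq. (2)
        return x * torch.sigmoid(v)                  # Eq. (3)
```

Since the module has no learnable parameters, it can be dropped into an existing network without changing its parameter count.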
### _Image Classification and Super-Resolution Networks_
Our PFCA module is designed to improve the performance of existing networks, so we insert the PFCA module into commonly used CNN models. However, the network structures differ when inserting the proposed module into the target models, since they are designed for
Fig. 1: Illustration of our Parameter-Free Channel Attention (PFCA) module. The module gets the feature map \(\mathbf{X}\) as the input and outputs feature map \(\mathbf{Y}\). \(\mathbf{U}\) and \(\mathbf{V}\) are the channel level vector generated after the global pooling layer and sigmoid activation respectively. \(\mu\) and \(\sigma^{2}\) are the channel-level mean and variance of \(\mathbf{U}\) and \(\lambda\) is used to control variance.
different tasks. To make the PFCA module more applicable, here we choose the widely used residual blocks [12] to build our image classification and image super-resolution models.
**Image classification**. Here, we use the ResNet-18/50/101 [12] as our image classification backbones. We first embed our PFCA module into the basic residual block in ResNet-18, and into the bottleneck residual block in ResNet-50 and ResNet-101 [12]. The corresponding revised residual block and bottleneck residual block are illustrated in Figure 2. The PFCA module is put before the addition operation in the basic residual block and the bottleneck residual block [12], similar to [1]. Then we replace the basic residual block in ResNet-18 by our revised residual block, and replace the bottleneck residual block in ResNet-50 and ResNet-101 by our revised bottleneck residual block.
**Image super-resolution**. Here, we employ MSRResNet [29] as the baseline super-resolution model, which uses SRResNet [18] as the backbone but removes all the Batch Normalization blocks [30]. The baseline MSRResNet utilizes a compact residual block instead of the original one in [18]. We then put the PFCA module between the second convolution layer and the addition operation in the compact residual block, as shown in Figure 3, and replace the compact residual block in MSRResNet by our revised one.
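A minimal sketch of the modified compact residual block of Figure 3(b), reusing the `PFCA` module from the previous sketch, is shown below; the kernel size and channel width are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlockPFCA(nn.Module):
    """Compact residual block (no BN) with PFCA inserted before the skip addition."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.pfca = PFCA()           # the parameter-free module defined above

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.conv2(F.relu(self.conv1(x)))
        out = self.pfca(out)         # re-weight channels, then add the skip connection
        return x + out
```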
## IV Experiments
### _Experiment Details_
In our experiments, the image classification task is performed on the ImageNet 2012 dataset [33] and the CIFAR-100 dataset [31]. The ImageNet 2012 dataset contains 1.28 million training images and 50,000 validation images with 1000 classes in total. The CIFAR-100 dataset contains 50,000 training images and 10,000 test images with 100 classes. The image super-resolution task is performed on the DIV2K dataset [34], which contains 800 training images. Experiments are performed using the PyTorch framework with one NVIDIA RTX 3090 GPU.
### _Image Classification_
When training classification networks on the ImageNet dataset, input images are resized to \(224\times 224\). We use the same data augmentation method as in [12, 1]. For optimization, the SGD optimizer is used with momentum 0.9 and a mini-batch size of 128. The learning rate is initially set to 0.1 and is reduced to \(0.1\) of the previous value every 30 epochs. Weight decay is set to \(10^{-4}\). The networks are trained for 100 epochs from scratch. Classification results are presented in Table I. FLOPs are computed for a single image of size \(224\times 224\). The results indicate that adding the PFCA module effectively increases both top-1 and top-5 classification accuracy.
We follow the same optimization strategy as in [31] to train ResNet on the CIFAR dataset, where the batch size is 128 and the model is trained for 200 epochs. The initial learning rate is set to 0.05 and divided by 5 at epochs 60, 120, and 160, and the weight decay is \(5\times 10^{-4}\). The L1 loss function is used to train the network, and SGD is used with momentum set to 0.9. We report top-1 and top-5 accuracy as test metrics on both datasets. Classification results are presented in Table I; our PFCA module also helps increase the classification accuracy on the CIFAR-100 dataset.
### _Image Super-Resolution_
We further verify the effectiveness of our proposed Parameter-Free Channel Attention module on the image super-resolution task. To compare different attention mechanisms, we train four candidate models: the baseline MSRResNet [29] with no attention, with PA [3], with CA, and with PFCA. The number of residual blocks is 16. We train the models on the DIV2K training dataset [34], which contains 800 high-resolution images. The HR images are randomly cropped with patch size \(256\times 256\), and the mini-batch size is set to 32. The cosine annealing learning-rate schedule is used during training, with the learning rate initially set to \(2\times 10^{-4}\) and the minimum
Fig. 3: Illustration of the modified compact residual block (a), and the proposed residual block with PFCA module (b) in our image super-resolution model. PFCA is inserted after the second convolution layer and before the addition operation in each compact residual block.
Fig. 2: Illustration of the basic block (a) and the bottleneck block (b) with the PFCA module used in our image classification model. In both blocks, PFCA is inserted before the addition operation.
value set to \(10^{-7}\). We set the weight decay to \(5\times 10^{-8}\), and the restart period to 250k iterations. All models are trained for 1000k iterations in total. The corresponding LR images are obtained by downsampling the HR images by a factor of 4 using the bicubic method. After the training stage, we use five standard benchmark datasets, Set5 [20], Set14 [35], B100 [36], Urban100 [19] and Manga109 [21], to test the performance. The evaluation metrics are the average peak signal-to-noise ratio (PSNR) and the average structural similarity index (SSIM) on each test set. All test metrics are calculated on the Y channel of the images. In addition to comparing MSRResNet [29] and our model, we also compare with the SRCNN [15] and EDSR [32] results reported in their papers, since the former is the first CNN model for super-resolution and the latter has a similar architecture to ours but with many more parameters.
**Objective results** are provided in Table II. We observe that adding PFCA to the residual block improves the PSNR and SSIM values compared to the baseline MSRResNet. Adding different attention modules brings improvements, but at different parameter costs. Note that the PSNR/SSIM values of PFCA are similar to those of CA, which means PFCA achieves performance comparable to CA with little parameter growth.
**Comparisons on visual quality** are presented in Figure 4. Images are chosen from the test set, and the visual results and their corresponding test metrics are listed below each image. The results show that the model with PFCA restores local textures better. Thus, the PFCA module effectively helps improve the visual quality.
## V Conclusion
In this paper, we proposed a Parameter-Free Channel Attention (PFCA) module that estimates channel importance without introducing additional parameters. We inserted the PFCA module into an image classification model as well as an image super-resolution model and then performed experiments on image classification and image super-resolution. The experimental results validate that our PFCA module functions as an effective attention module in deep networks. Some results also indicate that the fixed structure of PFCA needs further improvement to surpass CA.
|
2307.04811 | Non-local interference in arrival time | Although position and time have different mathematical roles in quantum
mechanics, with one being an operator and the other being a parameter, there is
a space-time duality in quantum phenomena: a lot of quantum phenomena that were
first observed in the spatial domain were later observed in the temporal domain
as well. In this context, we propose a modified version of the
double-double-slit experiment using entangled atom pairs to observe a non-local
interference in the arrival time distribution, which is analogous to the
non-local interference observed in the arrival position distribution. However,
computing the arrival time distribution in quantum mechanics is a challenging
open problem, and so to overcome this problem we employ a Bohmian treatment.
Based on this approach, we numerically demonstrate that there is a
complementary relationship between the one-particle and two-particle
interference visibilities in the arrival time distribution, which is analogous
to the complementary relationship observed in the position distribution. These
results can be used to test the Bohmian arrival time distribution in a strict
manner, i.e., where the semiclassical approximation breaks down. Moreover, our
approach to investigating this experiment can be applied to a wide range of
phenomena, and it seems that the predicted non-local temporal interference and
associated complementary relationship are universal behaviors of entangled
quantum systems that may manifest in various phenomena. | Ali Ayatollah Rafsanjani, MohammadJavad Kazemi, Vahid Hosseinzadeh, Mehdi Golshani | 2023-07-03T18:26:56Z | http://arxiv.org/abs/2307.04811v2 | # Non-local interference in arrival time
###### Abstract
Although position and time have different mathematical roles in quantum mechanics, with one being an operator and the other being a parameter, there is a space-time duality in quantum phenomena--a lot of quantum phenomena that were first observed in the spatial domain were later observed in the temporal domain as well. In this context, we propose a modified version of the double-double-slit experiment using entangled atom pairs to observe a non-local interference in the arrival time distribution, which is analogous to the non-local interference observed in the arrival position distribution [1; 2]. However, computing the arrival time distribution in quantum mechanics is a challenging open problem [3; 4], and so to overcome this problem we employ a Bohmian treatment. Based on this approach, we numerically demonstrate that there is a complementary relationship between the one-particle and two-particle interference visibilities in the arrival time distribution, which is analogous to the complementary relationship observed in the position distribution [5; 6]. These results can be used to test the Bohmian arrival time distribution in a strict manner, i.e., where the semiclassical approximation breaks down. Moreover, our approach to investigating this experiment can be applied to a wide range of phenomena, and it seems that the predicted non-local temporal interference and associated complementary relationship are universal behaviors of entangled quantum systems that may manifest in various phenomena.
## I Introduction
In quantum theory, several effects that were initially observed in the spatial domain have subsequently been observed in the time domain. These effects include a wide range of phenomena such as diffraction in time [7; 8; 9; 10], interference in time [11; 12; 13; 14], Anderson localization in time [15; 16] and several others [17; 18; 19; 20; 21]. To extend this line of research, we propose a simple experimental setup that can be used to observe a non-local interference in _arrival time_, which is analogous to the non-local interference in arrival position observed in entangled particle systems [2; 22; 23; 24; 25; 26; 27].
The proposed experimental setup involves a double-double-slit arrangement in which a source emits pairs of entangled atoms toward slits [28; 24]. Such entangled atoms can be produced, for example, via a four-wave mixing process in colliding Bose-Einstein condensates [29; 30]. As shown in Fig. 1, the atoms fall due to the influence of gravity, and then they reach horizontal fast single-particle detectors, which record the arrival time and arrival position of the particles. In fact, a similar arrangement has previously been proposed for observing non-local two-particle interference in arrival _position_ distribution [24]. The critical difference between our setup and theirs is that the slits in our setup are not placed at the same height. This leads to height-separated wave packets that spread in space during falling and overlap each other. Moreover, we do not consider the horizontal screens at the same height, so the particles may be detected at completely different timescales. Our study indicates that these apparently small differences lead to significant interference in the two-particle arrival time distribution, which did not exist in the previous versions of the experiment. This phenomenon is experimentally observable, thanks to the current single-atom detection technology. Our numerical study shows that the required space-time resolution in particle detection is achievable using current single-atom detectors, such as the recent delay-line detectors described in [30; 31] or the detector used in [32; 33].
The theoretical analysis of the proposed experiment is more complex than that of the conventional double-
Figure 1: Schematic representation of the double-double-slit setup. The source emits pairs of entangled particles, each of which passes through a double-slit and then undergoes a free fall. Two arrays of fast particle detectors are placed on both sides, recording the detection events. \(Y_{L}/Y_{R}\) represent the vertical distance from the origin to the left/right detection screen, \(2l_{y}\) is the vertical distance between slits, and \(2l_{x}\) is the horizontal distance between the slits.
double-slit experiment, for at least two reasons. Firstly, since the two particles are not observed simultaneously, the wave function of the two particles collapses to a single-particle wave function at the time when the first particle is detected, in the middle of the experiment. Secondly, the theoretical analysis of the arrival time distribution is more complex than that of the arrival position distribution. This is because, in the mathematical framework of orthodox quantum mechanics, position is represented by a self-adjoint operator, while time is just treated as a parameter [34]. As a result, the Born rule cannot be directly applied to time measurements. This fact, coupled with other issues such as the quantum Zeno paradox [35, 36], leads to some ambiguities in calculating the arrival time distribution [37, 38, 39, 40, 41, 42]. In fact, there is no agreed-upon method for calculating the arrival time distribution, although several different proposals have been put forth based on various interpretations of quantum theory [43, 44, 45, 46, 47, 48, 49, 50, 51, 52].
Most of the arrival time distribution proposals are limited to simple cases, such as a free one-particle in one dimension, and are not yet fully extended to more complex situations, such as our double-double-slit setup. Nevertheless, the Bohmian treatment seems suitable for analyzing the proposed setup since it can be unambiguously generalized for multi-particle systems in the presence of external potentials. Thus, in this paper, we investigate the proposed experiment using the recent developments in the Bohmian arrival time for entangled particle systems, including detector back-effect [27, 52]. The results could contribute to a better understanding of the non-local nature of quantum mechanics in the time domain. Moreover, beyond the proposed setup, our theoretical approach has potential applications in related fields such as atomic ghost imaging [30, 53], quantum test of the weak equivalence principle with entangled atoms [54], and state tomography via time-of-flight measurements [55, 56, 33].
This paper is organized as follows. We present the theoretical framework in Sec. II. We then discuss the numerical results and the physical insights derived from them in Sec. III, including the signal locality, the complementarity between one-particle and two-particle interference visibilities, and the screen back-effect. In Sec. IV, we compare the Bohmian approach with the semiclassical approximation. We conclude with a summary and an outlook in Sec. V.
## II Theoretical framework
Bohmian mechanics, also known as pilot wave theory, is a coherent realistic version of quantum theory, which avoids the measurement problem [57, 58]. In the Bohmian interpretation, in contrast to the orthodox interpretation, the wave function does not give a complete description of the quantum system. Instead, the actual trajectories of particles are taken into account as well, and this can provide a more intuitive picture of quantum phenomena [59]. Nonetheless, it has been proved that in the quantum equilibrium condition [60, 61], Bohmian mechanics is experimentally equivalent to orthodox quantum mechanics [62, 63]_insofar as the latter is unambiguous_[64, 65, 66]; e.g., in usual position or momentum measurements at a specific time. In recent years, Bohmian mechanics has gained renewed interest for various reasons [67, 68, 69, 70]. One of these reasons is the fact that Bohmian trajectories can lead to clear predictions for quantum characteristic times, such as tunneling time duration [71, 72] and arrival time distribution [65, 27, 44].
Here, we investigate the proposed double-double slit setup using Bohmian tools. According to Bohmian Mechanics, the state of a two-particle system is determined by the wave function \(\Psi(\mathbf{r}_{1},\mathbf{r}_{2})\) and the particles' actual positions \((\mathbf{R}_{1},\mathbf{R}_{2})\). The time evolution of the wave function is given by a two-particle Schrodinger equation
\[i\hbar\frac{\partial}{\partial t}\Psi_{t}(\mathbf{r}_{1},\mathbf{r}_{2})=\sum_{i=1,2} \frac{\hbar^{2}}{2m_{i}}\nabla_{i}^{2}\Psi_{t}+V_{i}(\mathbf{r}_{i})\Psi_{t}, \tag{1}\]
which in the proposed setup \(V_{i}(\mathbf{r}_{i})=-m_{i}\mathbf{g}.\mathbf{r}_{i}\) and \(\mathbf{g}\) represents the gravitational field. The particles dynamics are given by two first-order differential equations in configuration space, the "guidance equations",
\[\frac{d}{dt}\mathbf{R}_{i}(t)=\mathbf{v}_{i}^{\Psi_{t}}(\mathbf{R}_{1}(t),\mathbf{R}_{2}(t)), \tag{2}\]
where \(i=1,2\) and \(\mathbf{v}_{i}^{\Psi_{t}}\) are the velocity fields associated with the wave function \(\Psi_{t}\); i.e. \(\mathbf{v}_{i}^{\Psi_{t}}\)=\((\hbar/m_{i})\Im(\nabla_{i}\Psi_{t}/\Psi_{t})\)[73]. When the particle 1, for example, is detected at time \(t=t_{c}\), the two-particle wave function collapses _effectively_ to a one-particle wave function, i.e. as \(\Psi_{t_{c}}(\mathbf{r}_{1},\mathbf{r}_{2})\rightarrow\psi_{t_{c}}(\mathbf{r}_{2})\), where [74, 75],
\[\psi_{t_{c}}(\mathbf{r}_{2})=\Psi_{t_{c}}(\mathbf{R}_{1}(t_{c}),\mathbf{r}_{2}), \tag{3}\]
which is known as the "conditional wave function" in Bohmian formalism [73, 76]. For \(t>t_{c}\), the time evolution of the wave function is given by following the one-particle Schrodinger equation
\[i\hbar\frac{\partial}{\partial t}\psi_{t}(\mathbf{r}_{2})=\frac{-\hbar^{2}}{2m_{2}} \nabla_{2}^{2}\psi_{t}(\mathbf{r}_{2})+V_{2}(\mathbf{r}_{2})\psi_{t}(\mathbf{r}_{2}), \tag{4}\]
and the remaining particle motion is determined by the associated one-particle guidance equation,
\[\frac{d}{dt}\mathbf{R}_{2}(t)=\mathbf{v}_{2}^{\psi_{t}}(\mathbf{R}_{2}(t)), \tag{5}\]
where \(\mathbf{v}_{2}^{\psi_{t}}\)=\((\hbar/m_{2})\Im(\nabla_{2}\psi_{t}/\psi_{t})\). It is important to note that, in general, a conditional wave function does not obey the Schrodinger equation [77]. However, in a measurement situation, the interaction of the detected particle with the environment (including the detection screen) cancels any entanglement between undetected and detected particles, due to the decoherence process
[78]. Therefore, in this situation, after the measurement process, the conditional wave function represents the "effective wave function" of the undetected particle [77; 79], which satisfies the one-particle Schrodinger equation [77; 52].
We focus our study on the propagation of the wave function from the slits to the detection screens. Thus, one can consider the initial wave function as follows [80; 6; 81]:
\[\Psi_{t_{0}}(\mathbf{r}_{1},\mathbf{r}_{2})\!=\!N\bigg{[}(\frac{1-\eta}{2})\Psi_{t_{0} }^{(\times)}(\mathbf{r}_{1},\mathbf{r}_{2})\!+\!(\frac{1+\eta}{2})\Psi_{t_{0}}^{(||)} (\mathbf{r}_{1},\mathbf{r}_{2})\bigg{]}\,,\]
in which \(N\) is a normalization constant,
\[\Psi_{t_{0}}^{(\times)}(\mathbf{r}_{1},\mathbf{r}_{2}) = [g_{u}^{+}(\mathbf{r}_{1})g_{d}^{-}(\mathbf{r}_{2})+g_{d}^{+}(\mathbf{r}_{1})g_{u}^{-}(\mathbf{r}_{2})]+1\leftrightarrow 2,\] \[\Psi_{t_{0}}^{(||)}(\mathbf{r}_{1},\mathbf{r}_{2}) = [g_{u}^{+}(\mathbf{r}_{1})g_{u}^{-}(\mathbf{r}_{2})+g_{d}^{+}(\mathbf{r}_{1})g_{d}^{-}(\mathbf{r}_{2})]+1\leftrightarrow 2,\]
and
\[g_{u}^{\pm}(x,y) = G(x;\sigma_{x},\pm l_{x},\pm u_{x})G(y;\sigma_{y},+l_{y},+u_{y}),\] \[g_{d}^{\pm}(x,y) = G(x;\sigma_{x},\pm l_{x},\pm u_{x})G(y;\sigma_{y},-l_{y},-u_{y}),\]
where \(G\) is a Gaussian wave function
\[G(x;\sigma,l,u)=Ne^{-(x-l)^{2}/4\sigma^{2}+imu(x-l)/\hbar}.\]
The Gaussian-type initial wave function is a minimal model which is commonly used in the literature (e.g., see [81; 82; 83; 6; 25]).
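For concreteness, a small NumPy sketch of Eq. (6), evaluated pointwise, is given below. The overall normalization constant \(N\) is omitted, the helium-4 mass is an illustrative standard value, and the remaining parameter values are the ones quoted later in Sec. III; none of this is taken from the authors' code.

```python
import numpy as np

hbar = 1.054571817e-34
m = 6.64e-27                      # helium-4 mass (kg), an illustrative value
sigma_x = sigma_y = 1e-6          # packet widths (m)
l_x, l_y = 5e-3, 1e-5             # slit half-separations (m)
u_x, u_y = 20.0, 0.0              # initial packet velocities (m/s)

def G(x, sigma, l, u):
    """Gaussian packet G(x; sigma, l, u) of Eq. (6), up to normalization."""
    return np.exp(-(x - l) ** 2 / (4 * sigma ** 2) + 1j * m * u * (x - l) / hbar)

def g(r, x_sign, y_sign):
    """g_u^± (y_sign=+1) or g_d^± (y_sign=-1); x_sign=±1 picks the ± superscript."""
    x, y = r
    return (G(x, sigma_x, x_sign * l_x, x_sign * u_x)
            * G(y, sigma_y, y_sign * l_y, y_sign * u_y))

def psi0(r1, r2, eta):
    """Initial wave function of Eq. (6) for entanglement parameter eta (unnormalized)."""
    def sym(a_sign, b_sign):
        # g_a^+(r1) g_b^-(r2) + (1 <-> 2)
        return g(r1, +1, a_sign) * g(r2, -1, b_sign) + g(r2, +1, a_sign) * g(r1, -1, b_sign)
    psi_cross = sym(+1, -1) + sym(-1, +1)      # (x): opposite slits
    psi_par = sym(+1, +1) + sym(-1, -1)        # (||): same slits
    return 0.5 * (1 - eta) * psi_cross + 0.5 * (1 + eta) * psi_par

# Example: amplitude with both particles near their respective slit centers
print(abs(psi0((l_x, l_y), (-l_x, -l_y), eta=-1.0)) ** 2)
```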
We are interested in the joint distribution of the arrival times and arrival positions of the particle pairs on the left/right screens. The probability density behind this distribution can be formally written as
\[P(t_{L},x_{L};t_{R},x_{R}) =\int d\mathbf{R}^{0}\ |\Psi_{0}(\mathbf{R}^{0})|^{2}\] \[\quad\times\prod_{i=L,R}\delta(t_{i}-T_{i}(\mathbf{R}^{0}))\delta(x_{i }-X_{i}(\mathbf{R}^{0})),\]
where \(T_{L,R}(\mathbf{R}_{1}^{0},\mathbf{R}_{2}^{0})\) and \(X_{L,R}(\mathbf{R}_{1}^{0},\mathbf{R}_{2}^{0})\) are the arrival time and position of the particle with initial condition \((\mathbf{R}_{1}^{0},\mathbf{R}_{2}^{0})\) at the left and right screen, respectively. Note how the above joint distribution, and therefore any marginal distribution derived from it, is sensitive to the Bohmian dynamics through the functions \(T\) and \(X\), and also to the Born rule through \(|\Psi_{0}(\mathbf{R}^{0})|^{2}\). The joint two-particle arrival time probability density is then defined as
\[\Pi(t_{L},t_{R})=\int\int P(t_{L},x_{L};t_{R},x_{R})dx_{L}dx_{R}. \tag{7}\]
The right and left marginal arrival time probability densities are also defined correspondingly as,
\[\Pi_{L}(t_{L}) =\int\Pi(t_{L},t_{R})dt_{R},\] \[\Pi_{R}(t_{R}) =\int\Pi(t_{L},t_{R})dt_{L}.\]
In practice, the trajectories of the particles and the resulting arrival time distributions are numerically computed for an ensemble of particles whose initial positions are sampled from \(|\Psi_{0}|^{2}\); the corresponding results are described in the next section.
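As an illustration of this procedure, the following is a schematic single-particle, one-dimensional toy version in dimensionless units (\(\hbar=m=1\)): the wave function of a two-Gaussian superposition falling under a uniform force is propagated by a split-step method, an ensemble of Bohmian positions is sampled from \(|\psi_{0}|^{2}\) and advanced with the guidance equation, and the first-crossing times at a "detector" plane are recorded. The grid, time step, and parameter values are illustrative choices; the actual computation uses the full two-particle wave function, the physical parameters of Sec. III, and the collapse rule of Eq. (3).

```python
import numpy as np

# Grid and (dimensionless) parameters: hbar = m = 1
N, y_min, y_max = 8192, -150.0, 40.0
y = np.linspace(y_min, y_max, N, endpoint=False)
dy = y[1] - y[0]
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dy)
g, l_y, sigma = 0.05, 5.0, 1.0        # uniform force, half slit separation, packet width
y_det = -40.0                         # "detector" plane
dt, n_steps = 0.01, 5000

# Initial state: superposition of two Gaussian packets (the two slits), at rest
psi = np.exp(-(y - l_y) ** 2 / (4 * sigma ** 2)) + np.exp(-(y + l_y) ** 2 / (4 * sigma ** 2))
psi = psi.astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dy)

# Sample Bohmian initial positions from |psi_0|^2 (inverse-CDF sampling)
rng = np.random.default_rng(0)
cdf = np.cumsum(np.abs(psi) ** 2 * dy)
cdf /= cdf[-1]
n_traj = 2000
Y = np.interp(rng.random(n_traj), cdf, y)
arrival = np.full(n_traj, np.nan)

# Split-step factors for the linear potential V(y) = g*y
half_V = np.exp(-0.5j * dt * g * y)
kinetic = np.exp(-0.5j * dt * k ** 2)

for step in range(n_steps):
    t = step * dt
    # Guidance equation: v(y, t) = Im( psi'(y) / psi(y) )
    grad = np.gradient(psi, dy)
    dens = np.abs(psi) ** 2
    v = np.zeros(N)
    ok = dens > 1e-12 * dens.max()
    v[ok] = np.imag(grad[ok] / psi[ok])
    # Advance undetected trajectories; record first crossings of the detector plane
    alive = np.isnan(arrival)
    Y[alive] += dt * np.interp(Y[alive], y, v)
    arrival[alive & (Y < y_det)] = t
    # Advance the wave function by one time step (Strang splitting)
    psi = half_V * np.fft.ifft(kinetic * np.fft.fft(half_V * psi))

detected = ~np.isnan(arrival)
print(f"{detected.sum()} of {n_traj} trajectories arrived; "
      f"mean arrival time = {arrival[detected].mean():.2f}")
```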
## III Results and Discussions
In the numerical studies of this work, the parameters of the proposed setup have been chosen as \(l_{x}\!=\!5\times 10^{-3}\) m and \(l_{y}\!=\!10^{-5}\) m. Moreover, the initial wave-packet widths and velocities are fixed at \(\sigma_{x}\!=\!\sigma_{y}\!=\!10^{-6}\) m, \(u_{x}\!=\!20\) m/s and \(u_{y}\!=\!0\), respectively. These values are consistent with the setup proposed in reference [92], in which colliding helium-4 atoms have been considered for producing an initial entangled state [93]. However, we also consider heavier atom pairs, which lead to a more visible interference pattern for some values of the parameters and locations of the screens.
In Fig. 2, some of the Bohmian trajectories are plotted for maximally anti-correlated helium atom pairs. In this figure, the cyan trajectories are computed without considering the collapse effect, and the black ones with it. One can see that some of the black trajectories start to deviate from the cyan ones as the screens detect the counterpart particles and the conditional wave function begins to guide the undetected particles. It is worth noting that the ensemble of trajectories can be experimentally reconstructed using weak measurement techniques [1, 94, 25], which can be used as a test of this result.
Figure 3: One- and two-particle interference patterns of sodium atoms for different entanglement levels \(\eta\), with and without the collapse effect. The scatter plots show the joint distributions of arrival times to the horizontal screens, with the dark-cyan plots ignoring the collapse effect and the black plots considering it. The left and right screens are placed at \(Y_{L}=4\) mm and \(Y_{R}=8\) cm, respectively. The histograms in each panel show the marginal distributions of arrival times to the right screen.
### Complementary relation of visibilities
In Fig. 3, the joint arrival time distribution \(\Pi(t_{L},t_{R})\) and the right marginal distribution \(\Pi_{R}(t_{R})\) are plotted for sodium atom pairs in two cases: with the collapse effect in black and without it in dark-cyan. In this figure, we see the one-particle and two-particle temporal interference patterns for fixed screen locations (\(Y_{L}=4\) mm and \(Y_{R}=80\) mm) and different values of the entanglement parameter \(\eta\). The marginal distributions are generated using \(10^{6}\) particle trajectories; however, for clarity, only \(10^{4}\) points are shown in the joint scatter plots. As previously mentioned, maximum entanglement occurs when \(|\eta|=1\), and when \(\eta=0\) the particles are entirely uncorrelated. As one can see in Fig. 3, the visibilities of the joint and marginal distributions have an inverse relation; when the one-particle interference visibility is maximal, the two-particle interference visibility is minimal, and vice versa. This behavior represents a _temporal_ counterpart to the complementarity between the one-particle and two-particle interference visibilities of the arrival _position_ pattern, which can be observed in a double-double-slit configuration [6; 82]. In fact, an analogous behavior has been observed in the momentum distribution of entangled atom pairs [5]. However, it is important to remark that the "time-of-flight measurement" technique, which is usually used to measure the momentum distribution in the context of cold-atom experiments, is a position measurement after a specific large time, not an arrival time measurement at a specific position [3].
A quantitative study of such a complementarity relation was first given in the pioneering works of Jaeger, Horne, Shimony, and Vaidman for discrete systems [95; 96]. These works show that there is a trade-off between the two visibilities, such that their squares add up to one or less, \(W^{2}+V^{2}\leq 1\), where \(V\) and \(W\) are the one-particle and two-particle interference visibilities, respectively [95]. Recently, this complementarity relation has been studied for continuous variables, i.e., position and momentum, in a double-double-slit configuration [6]. Here we numerically study this complementarity relation for arrival time interference patterns. In this regard, the visibilities associated with the arrival time interference patterns of Fig. 3 are shown in Fig. 4 (for more details of the visibility estimation method, see Appendix A). As one can see in Fig. 4, the one-particle and two-particle visibilities show opposite behavior, and the sum of their squared values satisfies the above complementarity relation.
### Collapse effect and signal locality
As one can see in Fig. 3, the correction of the two-particle arrival time distributions due to the collapse effect decreases as the entanglement is turned off, and for \(\eta=0\) the interference patterns with and without the correction are the same. In fact, in this case we have \(\Pi(t_{L},t_{R})=\Pi_{R}(t_{R})\Pi_{L}(t_{L})\). In Fig. 5, the one-particle and two-particle temporal interference patterns are depicted for different positions of the left screen, while the entanglement parameter is fixed to \(\eta=-1\). The difference between the patterns computed without the collapse effect (dark-cyan plots) and with it (black plots) is evident. The closer the left screen is to the slits, the earlier the wave function reduction occurs, and the more visible its effect is on the joint distribution. Note that, although the collapse effect changes the particles' trajectories and the resulting joint distribution, it does not change the one-particle distribution patterns. This shows that the no-signaling condition is maintained, despite the manifestly non-local Bohmian dynamics: the right marginal arrival time distribution, as a local observable quantity, turns out to be independent of whether there is any screen on the left, and if there is one, it is not sensitive to the location of that detection screen. Note that this fact is not trivial, because the well-known no-signaling theorem is proved for observable distributions provided by a POVM. However, in the general case, the _intrinsic_ Bohmian arrival time distribution cannot be described by a POVM, at least when the detector back-effect is ignored [3; 97]. In the next subsection, we discuss the detector back-effect in more detail.
Figure 4: Entanglement entropy and visibilities of one- and two-particle interference patterns of arrival time, as functions of \(\eta\). Panel (a) shows the entanglement entropy calculated from the initial wave function given by Eq. (6). Panel (b) shows the one-particle visibility \(V\), the two-particle visibility \(W\), and their squared sum \(V^{2}+W^{2}\), represented by blue square, black diamond, and gray circle markers, respectively. The setup parameters and initial conditions are the same as in Fig. 3, and the error bars are estimated from the counting statistics.
### Detector back-effect
The arrival distributions computed so far should be called _ideal_ or _intrinsic_ distributions [97], since the influence of the detector prior to particle detection has been ignored in our theoretical treatment. Such an idealization is commonly used in most previous studies of the Bohmian arrival time distribution (for example, see [12; 97; 98; 99; 100; 101]), and seems to be more or less satisfactory in many applications, including the double-slit experiment [94; 56; 101]. Nonetheless, in principle, the presence of the detector could modify the wave function evolution even before the particle is detected [102]. This is called the detector back-effect. For a more thorough investigation of the detection statistics, we should take this effect into account. However, due to some fundamental problems, such as the measurement problem and the quantum Zeno effect [35], a complete treatment of detector effects is problematic at the fundamental level, and it is not obvious how to model an ideal detector [102; 37; 38]. Nonetheless, several non-equivalent phenomenological models have been proposed, such as the generalized Feynman path integral approach in the presence of an absorbing boundary [103; 104; 105; 106], the Schrödinger equation with a complex potential [107], the Schrödinger equation with an absorbing (or complex Robin) boundary condition [107; 108; 109; 52; 110], and so on. In this section, we consider only the absorbing boundary rule (ABR), which is compatible with the Bohmian picture and has recently been developed for multi-particle entangled systems [52]. The results of other approaches may not be the same [4] (see also Section IV), so a detailed study of the differences is an interesting topic, which is left for future works.
_Absorbing Boundary Rule_--According to the ABR, the particle wave function \(\psi\) evolves according to the free Schrödinger equation, while the presence of a detection screen is modeled by imposing the following boundary condition on the detection screen, \(\mathbf{r}\in\mathbb{S}\),
\[\mathbf{n}\cdot\nabla\psi=i\kappa\psi, \tag{8}\]
where \(\kappa\!>\!0\) is a constant characterizing the type of detector; \(\hbar\kappa/m\) represents the momentum to which the detector is most sensitive. This boundary condition ensures that waves with wave number \(\kappa\) are completely absorbed, while waves with other wave numbers are partly absorbed and partly reflected [109; 111]. Note that the Hille-Yosida theorem implies that the Schrödinger equation with the above boundary condition has a unique solution for every initial wave function defined on one side of the boundary.
Figure 5: Arrival time distributions of sodium atom pairs for different distances of the left screen. The left and right scatter plots in each panel show the joint distributions of arrival times to the horizontal screens for the cases without (dark cyan) and with (black) collapse, respectively. The upper and right histograms in each panel show the marginal distributions of arrival times to the right and left screens, respectively. The right screen is fixed at \(Y_{R}=8\) cm, and \(\eta=-1\).
The boundary condition (8) implies that Bohmian trajectories can cross the boundary \(\mathbb{S}\) only outwards, so there are no multi-crossing trajectories. In the Bohmian picture, a detector clicks when and where the Bohmian particle reaches the detection surface \(\mathbb{S}\). This is a description of a "hard" detector, i.e., one that detects a particle immediately when it arrives at the surface \(\mathbb{S}\). Nonetheless, it should be noted that the boundary absorbs the particle but not completely the wave: the wave packet moving towards the detector may not be entirely absorbed, but rather partially reflected [109].
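A minimal one-dimensional sketch of how this rule can be implemented numerically is given below. It uses a Crank-Nicolson scheme with the Robin condition of Eq. (8) built into the last row of the discretized Hamiltonian and records the norm lost to the detector; all parameters are illustrative assumptions, and the sketch does not include gravity or the two-particle structure of Fig. 6.

```python
import numpy as np

# Illustrative 1D sketch of the absorbing boundary rule: a free Gaussian packet
# moves towards a detector at x = L, modeled by the Robin condition
# psi'(L) = i*kappa*psi(L) of Eq. (8), built into the last row of the
# discretized Hamiltonian.  The norm lost per unit time is the arrival time
# density.  All parameters are assumptions; gravity is omitted.

hbar, m = 1.0, 1.0
L, J = 60.0, 600
h = L / J
x = (np.arange(J) + 0.5) * h

x0, sigma, v0 = 15.0, 2.0, 2.0
k0 = m * v0 / hbar
kappa = k0                                   # detector tuned to the incoming momentum
psi = np.exp(-(x - x0) ** 2 / (4 * sigma ** 2) + 1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * h)

coeff = hbar ** 2 / (2 * m * h ** 2)
H = np.zeros((J, J), dtype=complex)
for j in range(J):
    H[j, j] = 2 * coeff
    if j > 0:
        H[j, j - 1] = -coeff
    if j < J - 1:
        H[j, j + 1] = -coeff
# ghost-point elimination of psi'(L) = i*kappa*psi(L) in the last row:
H[J - 1, J - 2] = -2 * coeff
H[J - 1, J - 1] = 2 * coeff * (1 - 1j * kappa * h)

dt, steps = 0.01, 4000
A = np.eye(J) + 1j * dt / (2 * hbar) * H
B = np.eye(J) - 1j * dt / (2 * hbar) * H
U = np.linalg.solve(A, B)                    # Crank-Nicolson one-step propagator

survival = np.empty(steps + 1)
survival[0] = np.sum(np.abs(psi) ** 2) * h
for s in range(steps):
    psi = U @ psi
    survival[s + 1] = np.sum(np.abs(psi) ** 2) * h

arrival_density = -np.gradient(survival, dt)  # Pi(t) = -d/dt survival probability
print("norm absorbed by the detector:", round(1.0 - survival[-1], 3))
```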
The application of the absorbing boundary condition to the arrival time problem was first proposed by Werner [108], and it has recently been re-derived and generalized by other authors using various methods [52; 107; 109; 110]. In particular, it has recently been shown that, in a suitable (non-obvious) limit, the imaginary potential approach yields the distribution of detection time and position in agreement with the absorbing boundary rule [107]. Moreover, Dubey, Bernardin, and Dhar [110] have shown that the ABR can be obtained in a limit similar, but not identical, to that considered in the quantum Zeno effect, involving repeated quantum measurements. The natural extension of the absorbing boundary rule to the \(n\)-particle case has recently been discussed by Tumulka [52]. The key element of this extension is that, upon a detection event, the wave function gets collapsed by inserting the detected position, at the time of detection, into the wave function, thus yielding a wave function of \((n-1)\) particles. We use this formalism to investigate the detector back-effect in our double-double-slit setup. The corresponding Bohmian trajectories and arrival time distributions are presented in Fig. 6.
In our experimental setup, due to the influence of gravity, the reflected portions of the wave packets return to the detector screen, where some of them are absorbed and some are reflected again. This cycle of absorption and reflection repeats continuously. The associated survival probabilities are plotted in Fig. 7 for several values of the detector parameter, \(\kappa=\kappa_{0}\), \(2\kappa_{0}\), \(3\kappa_{0}\), \(\kappa_{0}/3\), where \(\kappa_{0}\) is defined using a classical estimate of the particle momentum at the screen as \(\kappa_{0}=m\sqrt{2gY_{R}}/\hbar\). As one can see in Figs. 6 and 7, when \(\kappa=\kappa_{0}\), most of the trajectories are absorbed and almost none of them are reflected, which is similar to the case in which the detector back-effect is ignored. These results show that, at least for the chosen parameters, when a proper detector with \(\kappa=\kappa_{0}\) is used, the ideal arrival time distribution computed in the previous section, without the detector back-effect, produces acceptable results. However, in general, Figs. 6 and 7 show that the detector back-effect cannot be ignored and that it leads to new phenomena, namely a "fractal" structure in the interference pattern.
Figure 6: Bohmian trajectories and joint arrival time distributions of helium atoms in the two double-slit setup, with different detector characterizing constants \(\kappa\). The detectors are placed at \(Y_{R}=Y_{L}=40\,\mu\)m and are modeled as absorbing boundaries. The slit distances are \(2l_{y}=20\)\(\mu\)m, the initial wave packets' dispersions are \(\sigma_{x}=\sigma_{y}=1\)\(\mu\)m, and the initial velocities are \(u_{x}=20\) m/s and \(u_{y}=0\). Panels (a), (b), and (c) show the trajectories for \(\kappa=3\kappa_{0}\), \(\kappa=\kappa_{0}/3\), and \(\kappa=\kappa_{0}\), respectively. Panel (d) shows the trajectories without the detector back-effect, which are only truncated at the detector position. The trajectories are colored according to their arrival times at the screen, as indicated by the color bar in units of ms. Panels (e), (f), (g), and (h) show the joint arrival time distributions corresponding to the trajectories in panels (a), (b), (c), and (d), respectively.
## IV Comparison with semiclassical analysis
Despite the absence of an agreed-upon fundamental approach for arrival time computation, a semiclassical analysis is routinely used to analyze observed data. This approach is often sufficient, especially when particle detection is done in the far-field regime [3; 91]. In this approach, it is assumed that particles move along classical trajectories, and the arrival time distribution is computed using the quantum initial momentum distribution [112; 113; 97]. It is important to compare the semiclassical analysis with our Bohmian result. To this end, we need to extend the semiclassical approximation to multi-particle systems in the presence of gravity, which is done as follows:
Using the classical trajectory of a free falling particle, the arrival time is given by
\[T(y_{0},p_{y_{0}};Y)=p_{y_{0}}/|m\mathbf{g}|+\sqrt{(p_{y_{0}}/|m\mathbf{g}|)^{2}+2|Y-y_ {0}|/|\mathbf{g}|}\]
where \(y_{0}\) and \(p_{y_{0}}\) are initial particle position and momentum in \(y\)-direction, respectively, and \(Y\) is the position of the horizontal screen. Therefore, the joint semiclassical arrival time distribution is given by
\[\Pi(t_{L},t_{R}) = \int f(y_{0}^{L},y_{0}^{R},p_{y_{0}}^{L},p_{y_{0}}^{R})\delta \left(t_{L}-T(y_{0}^{L},p_{y_{0}}^{L};Y_{L})\right)\] \[\times \delta\left(t_{R}-T(y_{0}^{R},p_{y_{0}}^{R};Y_{R})\right)\,dy_{0 }^{L}\,dy_{0}^{R}\,dp_{y_{0}}^{L}\,dp_{y_{0}}^{R}\]
where \(f(y_{0}^{L},y_{0}^{R},p_{y_{0}}^{L},p_{y_{0}}^{R})\) is the joint initial phase-space distribution in the \(y\)-direction. However, in standard quantum mechanics, the joint phase-space distribution in a given direction is not a well-defined concept. Nonetheless, it is routinely expected that, in the far-field regime, the arrival time distribution is independent of the initial position distribution and can be calculated from the momentum distribution alone; in fact, this conjecture has been established for one-particle systems in some arrival time approaches [3; 41]. In this regard, we first take the initial positions of all the particles to be at \(y=0\); the resulting semiclassical arrival time distributions are represented in panels (c) and (d) of Fig. 8, for the near- and far-field regimes, respectively. As expected, although in the far-field regime this approximation is more or less in agreement with the Bohmian results, in the near-field regime the semiclassical joint and marginal arrival distributions are very different from their Bohmian counterparts. To avoid this issue, as a more accurate approximation, one may ignore the initial position-momentum correlation in the \(y\)-direction and suggest the following initial phase-space distribution:
\[f(y_{0}^{L},y_{0}^{R},p_{y_{0}}^{L},p_{y_{0}}^{R})=f_{Y}(y_{0}^{L},y_{0}^{R}) f_{P}(p_{y_{0}}^{L},p_{y_{0}}^{R}), \tag{9}\]
where \(f_{Y}\) and \(f_{P}\) are the initial position and momentum distributions of the left-right particle pairs in the \(y\)-direction, respectively, which can be computed from the initial wave function as follows
\[f_{Y}(y_{0}^{L},y_{0}^{R})=\int|\Psi_{0}(x_{1},y_{1},x_{2},y_{2})|^{2}\,\,\xi _{Y}\,\,dx_{1}dx_{2}\]
\[f_{P}(p_{y_{0}}^{L},p_{y_{0}}^{R})=\int|\tilde{\Psi}_{0}(x_{1},p_{1y},x_{2},p _{2y})|^{2}\,\,\xi_{P}\,\,dx_{1}dx_{2}\]
where
\[\xi_{Y}=\theta(x_{1})\delta(y_{0}^{L}-y_{1})\delta(y_{0}^{R}-y_{2})+\theta(-x _{1})\delta(y_{0}^{L}-y_{2})\delta(y_{0}^{R}-y_{1}),\]
\[\xi_{P}=\theta(x_{1})\delta(p_{y_{0}}^{L}-p_{1y})\delta(p_{y_{0}}^{R}-p_{2y})+\theta(-x_{1})\delta(p_{y_{0}}^{L}-p_{2y})\delta(p_{y_{0}}^{R}-p_{1y}),\]
Figure 7: Survival probability of helium atoms in the two double-slit setup, with different detector characterizing constants \(\kappa\). The atoms are subject to gravity and are absorbed by the detectors with an absorbing boundary. The setup parameters are the same as in Fig. 6.
and \(\tilde{\Psi}_{0}(x_{1},p_{1y},x_{2},p_{2y})\) is the Fourier transform of the initial wave function, \(\Psi_{0}(x_{1},y_{1},x_{2},y_{2})\), with respect to the variables \(y_{1}\) and \(y_{2}\). Note that this joint phase-space distribution reproduces exactly the quantum initial position and momentum marginal distributions. In panels (e) and (f) of Fig. 8, the semiclassical distributions resulting from the phase-space distribution (9) are plotted in the near- and far-field regimes, respectively. Surprisingly, although in the near-field regime the results of this semiclassical analysis are more similar to the Bohmian ones, in the far-field regime this semiclassical arrival time distribution deviates significantly from the Bohmian result: the central interference fringe of the Bohmian joint distribution does not exist in this semiclassical approximation (see the red dashed lines in panels (b) and (f)).
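For concreteness, the semiclassical construction used above can be organized as a small Monte Carlo computation: draw initial data from a factorized phase-space density, propagate each sample with the classical free-fall arrival time \(T(y_{0},p_{y_{0}};Y)\), and histogram the resulting pairs \((t_{L},t_{R})\). In the sketch below the independent Gaussian marginals and the helper name `fall_time` are illustrative choices, not the \(f_{Y}\), \(f_{P}\) of the actual setup.

```python
import numpy as np

# Monte Carlo sketch of a semiclassical joint arrival time distribution:
# classical free fall from (y0, p_y0) to a screen at Y via T(y0, p_y0; Y),
# with the initial data drawn from a factorized phase-space density in the
# spirit of Eq. (9).  Gaussian marginals are stand-ins, not the actual f_Y, f_P.

g = 9.8                                    # m/s^2
m = 3.82e-26                               # sodium-23 mass in kg (approximate)
hbar = 1.054571817e-34

def fall_time(y0, p0, Y):
    # T(y0, p0; Y) = p0/(m g) + sqrt((p0/(m g))^2 + 2|Y - y0|/g),
    # with p0 the momentum component pointing away from the screen
    a = p0 / (m * g)
    return a + np.sqrt(a ** 2 + 2 * np.abs(Y - y0) / g)

rng = np.random.default_rng(1)
n = 10 ** 5
sigma_y = 1e-6                             # assumed position spread (m)
sigma_p = hbar / (2 * sigma_y)             # minimum-uncertainty momentum spread
YL, YR = -4e-3, -8e-2                      # screens 4 mm and 8 cm below the slits

y0 = rng.normal(0.0, sigma_y, size=(2, n))     # (left, right) initial positions
p0 = rng.normal(0.0, sigma_p, size=(2, n))     # (left, right) initial momenta

tL = fall_time(y0[0], p0[0], YL)
tR = fall_time(y0[1], p0[1], YR)
joint, _, _ = np.histogram2d(tL, tR, bins=100, density=True)   # Pi(t_L, t_R)
print("mean arrival times (s): t_L = %.4f, t_R = %.4f" % (tL.mean(), tR.mean()))
```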
In fact, there are various correlated initial phase-space distributions which are consistent with quantum initial position and momentum marginal distributions, however, they lead to different joint arrival time distributions. Merely as an example, see panels (g) and (h) of Fig. 8, which are generated from another initial phase-space distribution defined as
\[\int|\Psi_{0}(\mathbf{r}_{1}^{\prime},\mathbf{r}_{2}^{\prime})|^{2}\prod_{i=1,2}\delta(\mathbf{r}_{i}-\mathbf{r}_{i}^{\prime})\delta(\mathbf{p}_{i}-\mathbf{p}_{i}^{\infty}(\mathbf{r}_{1}^{\prime},\mathbf{r}_{2}^{\prime}))d^{3}\mathbf{r}_{i}^{\prime}, \tag{10}\]
where \(\mathbf{p}_{1}^{\infty}(\mathbf{r}_{1}^{\prime},\mathbf{r}_{2}^{\prime})\) and \(\mathbf{p}_{2}^{\infty}(\mathbf{r}_{1}^{\prime},\mathbf{r}_{2}^{\prime})\) are the asymptotic momenta of two _free_ Bohmian particles with initial positions \((\mathbf{r}_{1}^{\prime},\mathbf{r}_{2}^{\prime})\) and initial wave function \(\Psi_{0}\). It is easy to show that the above phase-space distribution is consistent with the quantum initial position and momentum distributions [114; 115].
Note that, even in the far-field regime, although all semiclassical _marginal_ arrival time distributions are more or less in agreement with the Bohmian results, the _joint_ semiclassical distributions are very sensitive to the assumed initial phase-space distribution. These facts suggest that multi-particle joint arrival time distributions, for example in our suggested double-double-slit setup, can be used to probe the Bohmian arrival time prediction, and that they provide a more sensitive probe than the previously proposed one-particle experiments [51; 56; 4]. Furthermore, it is important to remark that this deviation from the semiclassical analysis is predicted without the presence of the challenging effects that have typically been suggested to distinguish arrival time proposals, namely the back-flow effect [116; 117], multi-crossing Bohmian trajectories [4; 65], and the detector back-effect, as discussed in the previous section.
## V Summary and outlook
In this work, we have proposed a double-double-slit setup to observe non-local interference in the arrival times of entangled particle pairs. Our numerical study shows a complementarity between the one-particle and two-particle visibilities of the arrival time interference pattern, which is very similar to the complementarity observed for the arrival position interference pattern [6]. Moreover, our results indicate that the two-particle interference visibility in the arrival time distribution can serve as an entanglement witness, thereby suggesting the potential use of temporal observables for tasks related to quantum information processing [118; 119].
Figure 8: Comparison of the semiclassical and Bohmian joint arrival time distributions for sodium atoms, for different screen positions. The right and left columns correspond to the screen positions \(Y_{L,R}=1\) cm (far-field) and \(Y_{L,R}=0.5\) mm (near-field), respectively. Panels (a) and (b) are scatter plots of the Bohmian joint spatio-temporal distribution, including the wave function collapse effect. Panels (c) and (d) show the semiclassical joint distribution calculated from classical free fall and the quantum initial momentum distribution, \(f_{P}\), with the initial positions of all the particles taken at \(y=0\). Panels (e) and (f) are the semiclassical joint distributions obtained from the initial phase-space distribution given in Eq. (9). Panels (g) and (h) are the semiclassical joint distributions obtained from the initial phase-space distribution given in Eq. (10). The upper panels represent the corresponding marginal arrival time distributions.
As noted in the introduction, the theoretical analysis of the proposed experiment is more complex than that of a typical double-slit experiment due to several connected fundamental problems, including the arrival time problem and the detector back-effect problem. We use a Bohmian treatment to circumvent these problems. This approach can be used for a more accurate investigation of various experiments beyond the double-double-slit experiment, such as atomic ghost imaging [30], interferometric gravitometry [54], atomic Hong-Ou-Mandel experiments [120], and so on [55; 121], which are usually analyzed in a semiclassical approximation. In such situations, the semiclassical analysis may not lead to a unique and unambiguous prediction, as discussed in section IV.
It is worth noting that, based on other interpretations of quantum theory, there are other non-equivalent approaches that, in principle, can be used to investigate the proposed experiment [4; 51]. However, these approaches need to be extended for entangled particle systems first. Comparing the results obtained by these various approaches can be used to test the foundations of quantum theory. Specifically, it appears that measuring the arrival time correlations in entangled particle systems can sharply distinguish between different approaches to the arrival time problem [119]. A more detailed investigation of this subject is left for future works.
## Appendix A Estimation of the interference visibility
A complementarity relationship between the one-particle and two-particle interference patterns, \(V^{2}+W^{2}\leq 1\), is discussed in the literature [96; 6]. The one-particle interference visibility \(V\) can be obtained as
\[V=(I_{max}-I_{min})/(I_{max}+I_{min}), \tag{10}\]
where \(I_{max}\) and \(I_{min}\) represent the zero-order maximum intensity and the first-order minimum intensity, respectively [69]. In the case of two-particle interference, there are some complexities in the definition of the two-particle interference visibility, and in fact there is no agreed-upon definition for the general case. However, some proposals have been put forward [122; 6]. In the present work, we use the definition given in Ref. [6], which is particularly suitable for studying the complementarity relationship in a double-double-slit arrangement. This definition of the two-particle visibility is briefly described in the following. For a symmetric setup, in which the distances between the slits are equal for the left and right sides, the joint arrival time distribution \(\Pi(t_{L},t_{R})\) exhibits grooves and unit visibility in two diagonal directions aligned with the \(t^{\pm}=(t_{L}\pm t_{R})/\sqrt{2}\) axes in the case of separable states. The rotated marginal distributions projected on \(t^{\pm}\) read as
\[\Pi^{\pm}(t^{\prime})=\int\Pi(t_{L},t_{R})\delta(t^{\prime}-t^{\pm})\ dt_{R}dt_{L}\]
Using the visibilities of the above marginal distributions, \(V^{\pm}\), which can be estimated in the same way as the one-particle visibility via Eq. (10), the two-particle visibility is given by [6]:
\[W=|V^{+}-V^{-}|. \tag{11}\]
The one- and two-particle visibilities are depicted in Fig. 4 for various values of \(\eta\). In this figure, the central values and error bars are calculated from the average and standard deviation of a set of 10 joint arrival time distributions, each of which is generated from \(10^{4}\) Bohmian trajectories.
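A sketch of how these visibilities can be extracted from simulated arrival time samples is given below; the peak/minimum search is a simple heuristic and the synthetic fringe data are only a stand-in for the Bohmian samples used in Fig. 4.

```python
import numpy as np

# Sketch of the visibility estimation: rotate the joint arrival times to
# t_pm = (t_L +/- t_R)/sqrt(2), histogram each marginal, read off the
# zero-order maximum and the neighbouring first-order minimum, and form
# V = (I_max - I_min)/(I_max + I_min), W = |V_plus - V_minus| as in the
# equations above.  The synthetic data below are only a demonstration.

def visibility(samples, bin_width, fringe_period):
    lo, hi = samples.min(), samples.max()
    counts, _ = np.histogram(samples, bins=max(int((hi - lo) / bin_width), 10))
    counts = counts.astype(float)
    i_max = int(np.argmax(counts))                        # zero-order maximum
    half = max(int(fringe_period / (2 * bin_width)), 1)   # search window to the next minimum
    I_max, I_min = counts[i_max], counts[i_max: i_max + half + 1].min()
    return (I_max - I_min) / (I_max + I_min)

def sample_fringes(rng, n, contrast, period, sigma=2.0):
    """Rejection-sample a Gaussian envelope modulated by (1 + contrast*cos)."""
    out = np.empty(0)
    while out.size < n:
        t = rng.normal(0.0, sigma, 4 * n)
        accept = rng.random(4 * n) < 0.5 * (1 + contrast * np.cos(2 * np.pi * t / period))
        out = np.concatenate([out, t[accept]])
    return out[:n]

rng = np.random.default_rng(3)
n, period = 400_000, 1.0
t_plus = sample_fringes(rng, n, contrast=0.9, period=period)    # strong fringes
t_minus = sample_fringes(rng, n, contrast=0.2, period=period)   # weak fringes
t_L = (t_plus + t_minus) / np.sqrt(2.0)
t_R = (t_plus - t_minus) / np.sqrt(2.0)

tp, tm = (t_L + t_R) / np.sqrt(2.0), (t_L - t_R) / np.sqrt(2.0)
V_plus = visibility(tp, bin_width=0.025, fringe_period=period)
V_minus = visibility(tm, bin_width=0.025, fringe_period=period)
print("V+ = %.2f, V- = %.2f, W = |V+ - V-| = %.2f" % (V_plus, V_minus, abs(V_plus - V_minus)))
```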
|
2310.10273 | Tensor glueball decay into nucleon-antinucleon | In the context of a chiral hadronic model, we compute the decay ratio of a
tensor glueball decaying into a nucleon and antinucleon compared to the decay
into 2 pions. Tensor meson dominance is assumed to also hold for the tensor
glueball in order to relate the coupling constants of the different decay
channels. We find that the decay width to nucleons is slightly larger than the
decay width to pions, but still in the same order of magnitude. | Arthur Vereijken | 2023-10-16T10:58:29Z | http://arxiv.org/abs/2310.10273v2 | # Tensor glueball decay into nucleon-antinucleon
###### Abstract
In the context of a chiral hadronic model, we compute the decay ratio of a tensor glueball decaying into a nucleon and antinucleon compared to the decay into 2 pions. Tensor meson dominance is assumed to also hold for the tensor glueball in order to relate the coupling constants of the different decay channels. We find that the decay width to nucleons is slightly larger than the decay width to pions, but still in the same order of magnitude.
## 1 Introduction
Glueballs, bound states made of only gluons, are one of the oldest predictions of Quantum Chromodynamics (QCD) [1]. Various theoretical (e.g. [2, 3, 4]) and experimental [5] works have made progress, yet the experimental status of glueballs is not resolved [7, 6, 8, 9]. Different theoretical methods agree on the mass hierarchy of the lowest lying glueball states, with the scalar (\(J^{PC}=0^{++}\)) being the lightest and the tensor (\(J^{PC}=2^{++}\)) the second lightest glueball. In this work, in the context of a chiral model described in [10, 11], we will calculate the decay of the tensor glueball into a nucleon-antinucleon pair, based on arguments used in tensor meson dominance models. Different glueballs have been studied before in hadronic models, such as the scalar glueball in [12] and the pseudoscalar glueball in [13].
## 2 Decay Amplitude
The decays of the tensor glueball were studied in the extended Linear Sigma Model in [10], where the \(\rho\rho\) and \(K^{*}\bar{K}^{*}\) channels were found to be the dominant decays. Here, we extend that model by coupling the tensor glueball to nucleons with the following interaction term [14, 17]
\[{\cal L}_{GNN}=\frac{g_{NN}}{m}G_{\mu\nu}\bar{N}(\gamma^{\mu}\stackrel{{ \leftrightarrow}}{{\partial^{\nu}}}+\gamma^{\nu}\stackrel{{ \leftrightarrow}}{{\partial^{\mu}}})N, \tag{1}\]
where \(G_{\mu\nu}\) is the tensor glueball, \(\bar{N},N\) are the (anti-)nucleon fields containing the proton and neutron, i.e. \(N^{T}=(p,n)\), \(m\) is the nucleon mass, and \(\stackrel{{\leftrightarrow}}{{\partial^{\mu}}}=\stackrel{{\rightarrow}}{{\partial^{\mu}}}-\stackrel{{\leftarrow}}{{\partial^{\mu}}}\). Writing it out explicitly and using the symmetry of \(G_{\mu\nu}\), we have the Feynman diagram in Figure 1 and the associated matrix element:
\[{\cal M}(\alpha,r,s)=2\frac{g_{NN}}{m}\epsilon_{\mu\nu}(\vec{p},\alpha)\bar{u} (\vec{k}_{1},r)\gamma^{\mu}q^{\nu}v(\vec{k}_{2},s), \tag{2}\]
with \(\epsilon_{\mu\nu}(p,\alpha)\) the spin 2 polarization tensors, \(\bar{u}(\vec{k}_{1},r),v(\vec{k}_{2},s)\) Dirac spinors, and \(q=k_{1}-k_{2}\) is the difference of outgoing momenta. Using the Casimir trick, the spin-averaged modulus squared matrix element is
\[\left|\bar{\cal M}\right|^{2}=\frac{4}{5}\left(\frac{g_{NN}}{m}\right)^{2} \sum_{\alpha}\epsilon_{\mu\nu}(p,\alpha)\epsilon_{\mu^{\prime}\nu^{\prime}}(p, \alpha)q^{\nu}q^{\nu^{\prime}}{\rm Tr}\left[\gamma^{\mu}(\not{k}_{2}-m)\gamma ^{\mu^{\prime}}(\not{k}_{1}+m)\right]. \tag{3}\]
The polarization tensors fulfill the completeness relation [15]
\[\sum_{\alpha}\epsilon_{\mu\nu}(p,\alpha)\epsilon_{\mu^{\prime}\nu^{\prime}}(p,\alpha)=\frac{1}{2}(A_{\mu\mu^{\prime}}A_{\nu\nu^{\prime}}+A_{\mu\nu^{\prime} }A_{\mu^{\prime}\nu})-\frac{1}{3}A_{\mu\nu}A_{\mu^{\prime}\nu^{\prime}}, \tag{4}\]
with the tensor \(A_{\mu\nu}\) defined as
\[A_{\mu\nu}=g_{\mu\nu}-\frac{p_{\mu}p_{\nu}}{M^{2}}, \tag{5}\]
with \(M\) being the mass of the tensor glueball, which we take from lattice QCD to be 2369 MeV [4]. It is useful to note early on that \(p\cdot q=(k_{1}+k_{2})\cdot(k_{1}-k_{2})=k_{1}^{2}-k_{2}^{2}=0\), since the two daughter particles have equal masses and this difference of squared masses vanishes.
Figure 1: Feynman diagram for tensor glueball decay
This simplifies things because any \(p_{\nu},p_{\nu^{\prime}}\) from the completeness relation automatically gives 0. The amplitude squared then becomes
\[\left|\bar{\mathcal{M}}\right|^{2}=\frac{4}{5}\left(\frac{g_{NN}}{ m}\right)^{2}\left[\frac{1}{2}(g_{\mu\mu^{\prime}}-\frac{p_{\mu}p_{\mu^{\prime}}}{M^{ 2}})g_{\nu\nu^{\prime}}+\frac{1}{2}g_{\mu\nu^{\prime}}g_{\mu^{\prime}\nu}- \frac{1}{3}g_{\mu\nu}g_{\mu^{\prime}\nu^{\prime}}\right]q^{\nu}q^{\nu^{\prime}}\] \[\left(k_{2\alpha}k_{1\beta}\text{Tr}[\gamma^{\mu}\gamma^{\alpha} \gamma^{\mu^{\prime}}\gamma^{\beta}]-m^{2}\text{Tr}[\gamma^{\mu}\gamma^{\mu^{ \prime}}]\right)\]
Using well-known trace identities for gamma matrices and contracting with \(q\) we find
\[\left|\bar{\mathcal{M}}\right|^{2}= \frac{16}{5}\left(\frac{g_{NN}}{m}\right)^{2}\left[\frac{1}{2}(g _{\mu\mu^{\prime}}-\frac{p_{\mu}p_{\mu^{\prime}}}{M^{2}})q^{2}+\frac{1}{6}q_ {\nu}q_{\nu^{\prime}}\right]\] \[\left(k_{1}^{\mu}k_{2}^{\mu^{\prime}}+k_{1}^{\mu^{\prime}}k_{2}^ {\mu}-(k_{1}\cdot k_{2})g^{\mu\mu^{\prime}}-m^{2}g^{\mu\mu^{\prime}}\right),\]
which simplifies to
\[\left|\bar{\mathcal{M}}\right|^{2}=\frac{16}{5}\left(\frac{g_{NN} }{m}\right)^{2}\left[\frac{q^{2}}{2}\left(-(k_{1}\cdot k_{2})-3m^{2}-2\frac{( p\cdot k_{1})(p\cdot k_{2})}{M^{2}}\right)\right.\] \[\left.\qquad\qquad\qquad\qquad\qquad+\frac{1}{6}\Big{(}2(k_{1} \cdot q)(k_{2}\cdot q)-(k_{1}\cdot k_{2})q^{2}-m^{2}q^{2}\Big{)}\right]\]
Evaluating all dot products in the rest frame of the tensor glueball, the amplitude takes the form
\[\frac{8}{15}\left(\frac{g_{NN}}{m}\right)^{2}\left(3M^{4}-4m^{2}M^{2}-32m^{4} \right). \tag{9}\]
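The algebra leading from the previous expression to Eq. (9) can be checked symbolically. In the rest frame of the glueball the invariants are \(q^{2}=4m^{2}-M^{2}\), \(k_{1}\cdot k_{2}=M^{2}/2-m^{2}\), \(p\cdot k_{1}=p\cdot k_{2}=M^{2}/2\) and \(k_{1}\cdot q=-k_{2}\cdot q=2m^{2}-M^{2}/2\); the short sympy sketch below substitutes these values and confirms the result.

```python
import sympy as sp

# Symbolic check, in the glueball rest frame, that the bracketed expression
# above reduces to Eq. (9).  The invariants below follow from p = k1 + k2,
# q = k1 - k2 and both daughter particles being on shell with mass m.
M, m, g_NN = sp.symbols("M m g_NN", positive=True)

q2   = 4 * m**2 - M**2            # q . q
k1k2 = M**2 / 2 - m**2            # k1 . k2
pk1  = M**2 / 2                   # p . k1 = p . k2
k1q  = 2 * m**2 - M**2 / 2        # k1 . q
k2q  = -k1q                       # k2 . q

bracket = (q2 / 2 * (-k1k2 - 3 * m**2 - 2 * pk1 * pk1 / M**2)
           + sp.Rational(1, 6) * (2 * k1q * k2q - k1k2 * q2 - m**2 * q2))
amp2 = sp.Rational(16, 5) * (g_NN / m) ** 2 * bracket

target = sp.Rational(8, 15) * (g_NN / m) ** 2 * (3 * M**4 - 4 * m**2 * M**2 - 32 * m**4)
print(sp.simplify(amp2 - target))   # prints 0
```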
The decay width for a boson decaying into two nucleons is given by [16]
\[\Gamma_{GN\bar{N}}=2\frac{\sqrt{\frac{M^{2}}{4}-m^{2}}}{8\pi M^{2}}\left|\bar {\mathcal{M}}\right|^{2}=\frac{1}{30}\left(\frac{2g_{NN}}{m}\right)^{2}\frac{ \sqrt{\frac{M^{2}}{4}-m^{2}}}{\pi M^{2}}\left(3M^{4}-4m^{2}M^{2}-32m^{4}\right),\]
where the factor 2 counts the \(p\bar{p}\) and \(n\bar{n}\) modes.
## 3 Tensor Meson Dominance and decay ratio
We cannot compute the decay width without knowing the value of \(g_{NN}\). Although this coupling constant is not known experimentally, assuming tensor meson dominance [17] yields certain relations between the couplings of different channels of tensor meson decays. We will assume these relations for the tensor glueball as well. The Lagrangian for the decay of the tensor glueball into 2 pions is of the form [10, 11]
\[\mathcal{L}_{G\pi\pi}=\frac{g_{\pi\pi}}{M}G_{\mu\nu}\partial^{\mu}\vec{\pi} \partial^{\nu}\vec{\pi}, \tag{11}\]
where \(\vec{\pi}=(\pi^{1},\pi^{2},\pi^{3})\) refers to the isospin triplet. This Lagrangian leads to a decay width of the form
\[\Gamma_{G\pi\pi}=6\left(\frac{g_{\pi\pi}}{M}\right)^{2}\frac{\left(\frac{M^{2}}{ 4}-m^{2}\right)^{5/2}}{60\pi M^{2}}, \tag{12}\]
where the factor 6 counts the isospin and identical-particle factors, and \(m\) in this expression denotes the pion mass. Tensor meson dominance (TMD) states that the dominant contribution to the hadron energy-momentum tensor \(\Theta^{\mu\nu}\) comes from the tensor mesons \(T^{\mu\nu}\). Assuming tensor meson dominance then leads to the following identity [17]
\[\frac{2g_{NN}}{m}=\frac{g_{\pi\pi}}{M}, \tag{13}\]
which we will assume is also a valid approximation for the tensor glueball. This allows us to calculate the decay ratio of \(G\to N\bar{N}/G\to\pi\pi\) as
\[\frac{\Gamma_{GN\bar{N}}}{\Gamma_{G\pi\pi}}\approx 5.3. \tag{14}\]
The decay ratio is large enough to be a relevant factor in the search for the tensor glueball. Compared to the chiral hadronic model [10] and a holographic model [18], this decay width is smaller than the 2-vector channel widths, but larger than those of the other channels. For example, compared to the dominant \(\rho\rho\) channel found in [10], the ratio is \(\Gamma_{G\rho\rho}/\Gamma_{GN\bar{N}}\approx 9.6\). The applicability of tensor meson dominance to the tensor glueball is not completely clear, so this is at best an approximate result. However, as an order-of-magnitude estimate, the outcome can be useful.
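The numbers quoted above can be reproduced directly from the two decay width formulas and the TMD relation (13), which cancels the unknown couplings. The glueball mass is the lattice value used in the text; the nucleon and pion masses in the sketch below are assumed, approximate input values.

```python
import math

# Numerical cross-check of the ratio above: the TMD relation (13) sets
# (2 g_NN / m)^2 = (g_pipi / M)^2, so the unknown coupling cancels in the ratio.
M = 2369.0       # tensor glueball mass (MeV), lattice QCD value used in the text
mN = 939.0       # nucleon mass (MeV), assumed
mpi = 139.6      # pion mass (MeV), assumed

def gamma_NN(c):         # c stands for (2 g_NN / m)^2 = (g_pipi / M)^2
    return (c / 30.0) * math.sqrt(M**2 / 4 - mN**2) / (math.pi * M**2) \
        * (3 * M**4 - 4 * mN**2 * M**2 - 32 * mN**4)

def gamma_pipi(c):
    return 6 * c * (M**2 / 4 - mpi**2) ** 2.5 / (60 * math.pi * M**2)

print("Gamma(G -> N Nbar) / Gamma(G -> pi pi) =",
      round(gamma_NN(1.0) / gamma_pipi(1.0), 2))   # approximately 5.3
```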
## 4 Conclusion
In this note we have computed the ratio of the tensor glueball decay widths into the nucleon-antinucleon and 2-pion channels, under the speculative assumption that tensor meson dominance relations apply to the tensor glueball as well. The approximate result we obtained shows that the decay width into nucleon-antinucleon is larger than that into 2 pions, but the ratio is still of order 1. Compared to previous works, it is not the largest decay channel found. Nevertheless, it could still be a fruitful process to investigate in glueball searches.
* * *
The author acknowledges Francesco Giacosa and Shahriyar Jafarzade for useful discussions. The author also acknowledges financial support from the Polish National Science Centre (NCN) via the OPUS project 2019/33/B/ST2/00613.
|
2303.08504 | Equidistribution of continued fraction convergents in
$\mathrm{SL}(2,\mathbb{Z}_m)$ with an application to local discrepancy | We study the distribution of the sequence of continued fraction convergents
$p_n/q_n$ to a random irrational number in the group
$\mathrm{SL}(2,\mathbb{Z}_m)$, and in particular the distribution of $p_n
\pmod{m}$ and $q_n \pmod{m}$ with a fixed modulus $m$. Improving the strong law
of large numbers due to Sz\"usz, Moeckel, Jager and Liardet, we establish the
central limit theorem and the law of the iterated logarithm, as well as the
weak and the almost sure invariance principles. As an application, we find the
limit distribution of the maximum and the minimum of the Birkhoff sum for the
irrational rotation with the indicator of an interval as test function. We also
compute the normalizing constant in a classical limit law for the same Birkhoff
sum due to Kesten, and dispel a misconception about its dependence on the test
interval. | Bence Borda | 2023-03-15T10:28:44Z | http://arxiv.org/abs/2303.08504v1 | ###### Abstract
###### Abstract
We study the distribution of the sequence of continued fraction convergents \(p_{n}/q_{n}\) to a random irrational number in the group \(\mathrm{SL}(2,\mathbb{Z}_{m})\), and in particular the distribution of \(p_{n}\pmod{m}\) and \(q_{n}\pmod{m}\) with a fixed modulus \(m\). Improving the strong law of large numbers due to Szusz, Moeckel, Jager and Liardet, we establish the central limit theorem and the law of the iterated logarithm, as well as the weak and the almost sure invariance principles. As an application, we find the limit distribution of the maximum and the minimum of the Birkhoff sum for the irrational rotation with the indicator of an interval as test function. We also compute the normalizing constant in a classical limit law for the same Birkhoff sum due to Kesten, and dispel a misconception about its dependence on the test interval.
**Equidistribution of continued fraction convergents in \(\mathrm{SL}(2,\mathbb{Z}_{m})\) with an application to local discrepancy**
**Bence Borda**
Graz University of Technology
Steyrergasse 30, 8010 Graz, Austria
Email: [email protected]
**Keywords:** convergent denominators mod \(m\), Gauss-Kuzmin problem,
limit law, invariance principle, irrational rotation
**Mathematics Subject Classification (2020):** 11K50, 37A50, 37E10
## 1 Introduction
The statistical properties of the continued fraction expansion \(\alpha=[0;a_{1},a_{2},\ldots]\) of a random real number \(\alpha\in[0,1]\) are a classical topic in metric number theory. The partial quotients \(a_{n}\) form a weakly dependent sequence of random variables; consequently, the sum \(\sum_{n=1}^{N}f(a_{n})\) satisfies various probabilistic limit theorems depending on the growth rate of the function \(f:\mathbb{N}\to\mathbb{R}\). The asymptotic behavior of the convergents \(p_{n}/q_{n}=[0;a_{1},a_{2},\ldots,a_{n}]\) is also well known. For instance, we have \(\log q_{n}\sim\frac{\pi^{2}}{12\log 2}n\) for a.e. \(\alpha\), and \(\log q_{n}\) even satisfies the central limit theorem (CLT) and the law of the iterated logarithm (LIL). We refer to the monograph [14] for a comprehensive survey.
The main subject of this paper is the distribution of the sequences \(p_{n}\pmod{m}\) and \(q_{n}\pmod{m}\) in \(\mathbb{Z}_{m}\) with a fixed integer \(m\geq 2\). One of the first results in the area is due to Szusz [26, Satz 3.3], who showed that for any \(a\in\mathbb{Z}_{m}\),
\[\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}\mathds{1}_{\{q_{n}\equiv a\pmod{ m}\}}=\frac{\prod_{p|\gcd(a,m)}\left(1-\frac{1}{p}\right)}{m\prod_{p|m}\left(1- \frac{1}{p^{2}}\right)}\quad\text{for a.e. }\alpha, \tag{1}\]
where the products are over prime divisors. Contrary to what one might naively expect, certain residue classes are thus attained more frequently than others. For instance, only one third of all convergent denominators are even, while two thirds of them are odd.
The origin of the limit in (1) becomes transparent when we consider the joint distribution of the quadruple \((p_{n-1},p_{n},q_{n-1},q_{n})\pmod{m}\), and work in the group \(\mathrm{SL}(2,\mathbb{Z}_{m})\). The recursions for \(p_{n}\) and \(q_{n}\), with the usual convention \(p_{0}=0\), \(q_{0}=1\), can be written in matrix form as
\[\left(\begin{array}{cc}0&1\\ 1&a_{1}\end{array}\right)\left(\begin{array}{cc}0&1\\ 1&a_{2}\end{array}\right)\cdots\left(\begin{array}{cc}0&1\\ 1&a_{n}\end{array}\right)=\left(\begin{array}{cc}p_{n-1}&p_{n}\\ q_{n-1}&q_{n}\end{array}\right),\]
where the right-hand side has determinant \((-1)^{n}\). Taking the previous formula mod \(m\) entrywise leads to the \(2\times 2\) matrix with entries in \(\mathbb{Z}_{m}\)
\[P_{n}=\left(\begin{array}{cc}p_{n-1}&p_{n}\\ q_{n-1}&q_{n}\end{array}\right)\pmod{m}.\]
For the rest of the paper, let
\[G_{D}=\left\{\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)\,:\,a,b,c,d\in\mathbb{Z}_{m},\ ad-bc=D\right\},\qquad D\in \mathbb{Z}_{m}^{*}=\{a\in\mathbb{Z}_{m}\,:\,\gcd(a,m)=1\},\]
and let \(G=G_{1}\cup G_{-1}\). Note that \(G_{1}=\operatorname{SL}(2,\mathbb{Z}_{m})\) is a normal subgroup of \(\cup_{D\in\mathbb{Z}_{m}^{*}}G_{D}=\operatorname{GL}(2,\mathbb{Z}_{m})\), and \(G_{D}\) is a coset of \(G_{1}\). If \(m=2\), then \(G_{1}=G_{-1}=\operatorname{SL}(2,\mathbb{Z}_{2})=\operatorname{GL}(2,\mathbb{ Z}_{2})\), otherwise \(G_{1}\neq G_{-1}\).
It turns out that the sequence \(P_{n}\) equidistributes in \(G\), that is, for any \(g\in G\),
\[\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}\mathds{1}_{\{P_{n}=g\}}=\frac{1}{ |G|}\quad\text{for a.e. }\alpha. \tag{2}\]
The set of possible rows or columns of matrices in \(G_{D}\) is
\[V=\{(a,b)\in\mathbb{Z}_{m}^{2}\,:\,\gcd(a,b,m)=1\}.\]
In fact, both rows and both columns of a matrix which is uniformly distributed on \(G_{D}\) with some \(D\in\mathbb{Z}_{m}^{*}\) are uniformly distributed on \(V\), see Lemma 4 below. Relation (2) thus immediately implies that for any \((a,b)\in V\),
\[\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}\mathds{1}_{\{(p_{n},q_{n})\equiv( a,b)\pmod{m}\}}=\frac{1}{|V|}\quad\text{for a.e. }\alpha, \tag{3}\]
and the same holds for the pairs \((q_{n-1},q_{n})\) and \((p_{n-1},p_{n})\). For instance, the "parity type" of the fraction \(p_{n}/q_{n}\) can be even/odd, odd/even or odd/odd; note that the type even/even is impossible as \(p_{n}\) and \(q_{n}\) are coprime. The parity type of \(p_{n}/q_{n}\) thus equidistributes in the set \(\{\text{even/odd, odd/even, odd/odd}\}\).
The four entries of a matrix which is uniformly distributed on \(G_{D}\), however, are not uniformly distributed on \(\mathbb{Z}_{m}\). Instead, the distribution of all four entries is the probability measure \(\nu\) on \(\mathbb{Z}_{m}\) which assigns the measure
\[\nu_{a}=\frac{\prod_{p|\gcd(a,m)}\left(1-\frac{1}{p}\right)}{m\prod_{p|m} \left(1-\frac{1}{p^{2}}\right)},\quad a\in\mathbb{Z}_{m} \tag{4}\]
to the singleton \(\{a\}\), see Lemma 4 below. This explains the value of the limit in (1), and shows that the same relation holds for \(p_{n}\) as well.
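Relation (1) and the measure \(\nu\) of (4) are easy to probe empirically. The sketch below emulates a random \(\alpha\) by a random rational with a 256-bit denominator (whose first few dozen partial quotients behave like those of a uniformly chosen real), computes the convergent denominators modulo \(m\) by the standard recursion, and compares the observed frequencies with \(\nu_{a}\).

```python
import random
from fractions import Fraction
from math import gcd, prod

# Empirical check of (1): the frequency of q_n = a (mod m) among the
# convergent denominators of a "random" alpha should approach nu_a from (4).

def prime_divisors(n):
    out, d = [], 2
    while d * d <= n:
        if n % d == 0:
            out.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        out.append(n)
    return out

def nu(m):
    Z = m * prod(1 - 1 / p ** 2 for p in prime_divisors(m))
    return [prod(1 - 1 / p for p in prime_divisors(gcd(a, m))) / Z for a in range(m)]

def partial_quotients(x, n_max):
    digits = []
    while x and len(digits) < n_max:
        x = 1 / x
        a = int(x)            # exact floor, x is a Fraction
        digits.append(a)
        x -= a
    return digits

m = 6
counts, total = [0] * m, 0
random.seed(0)
for _ in range(2000):
    alpha = Fraction(random.getrandbits(256) | 1, 1 << 256)
    q_prev, q = 0, 1          # q_{-1} = 0, q_0 = 1
    for a in partial_quotients(alpha, 40):
        q_prev, q = q, (a * q + q_prev) % m
        counts[q] += 1
        total += 1

dist = nu(m)
print("residue  empirical    nu_a")
for a in range(m):
    print(f"{a:7d}  {counts[a] / total:9.4f}  {dist[a]:6.4f}")
```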
Relation (3) for the pair \((q_{n-1},q_{n})\) was first proved by Szusz [26, Satz 3.2] by following Levy's approach to the Gauss-Kuzmin problem about the mixing properties of the Gauss map, but his work seems to have gone mostly unnoticed by the ergodic theory community. Moeckel [19] used the close relationship between continued fractions and the geodesic flow on the modular surface, and almost proved (3); we say almost, as Moeckel worked with \(\operatorname{PSL}(2,\mathbb{Z}_{m})\) instead of \(\operatorname{SL}(2,\mathbb{Z}_{m})\), consequently he only showed equidistribution in the factor \(V/\sim\) with the equivalence relation \((a,b)\sim(-a,-b)\). See [12] for a more recent account and generalizations of Moeckel's approach. Finally, relation (2) and consequently also (3) were proved by Jager and Liardet [15] using the ergodicity of a certain skew product over the Gauss map.
In the terminology of probability theory, relations (1), (2) and (3) correspond to the strong law of large numbers. The main goal of the present paper is to extend these results to more precise limit
theorems, such as the CLT and the LIL. In fact, we will even establish the weak and the almost sure invariance principles. We refer to Billingsley [4] for a general introduction to invariance principles and their relation to the ordinary and the functional CLT and LIL.
Throughout, \(\alpha\) is a random variable with distribution \(\mu\), a Borel probability measure on \([0,1]\) which is absolutely continuous with respect to the Lebesgue measure \(\lambda\). In some of our results we assume that \(\mu\) has a Lipschitz density, that is,
\[\left|\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}(x)-\frac{\mathrm{d}\mu}{ \mathrm{d}\lambda}(y)\right|\leq L|x-y|\quad\text{for all }x,y\in[0,1] \tag{5}\]
with some constant \(L\geq 0\). The most important example is the Gauss measure \(\mu_{\mathrm{Gauss}}(A)=\frac{1}{\log 2}\int_{A}\frac{1}{1+x}\,\mathrm{d}x\), \(A\subseteq[0,1]\) Borel. The uniform probability measure on a finite set \(S\) is denoted by \(\mathrm{Unif}(S)\). We write \(X\sim\vartheta\) if the random variable \(X\) has distribution \(\vartheta\), and \(\overset{d}{\to}\) denotes convergence in distribution.
Given an arbitrary function \(f:G\to\mathbb{R}\), we define the constants \(E_{f}\) and \(\sigma_{f}\geq 0\) as follows. Let \(U\sim\mathrm{Unif}(G)\), and define \(E_{f}=\mathbb{E}f(U)\). Let \(\alpha\sim\mu_{\mathrm{Gauss}}\) and \(U_{\pm 1}\sim\mathrm{Unif}(G_{\pm 1})\) be independent random variables, let \(\bar{f}(x)=f(x)-\mathbb{E}f(U_{\pm 1})\) for \(x\in G_{\pm 1}\), and define
\[\sigma_{f}^{2}=\frac{1}{2}\mathbb{E}\bar{f}(U_{1})^{2}+\sum_{n=1}^{\infty} \mathbb{E}\bar{f}(U_{1})\bar{f}(U_{1}P_{n})+\frac{1}{2}\mathbb{E}\bar{f}(U_{-1 })^{2}+\sum_{n=1}^{\infty}\mathbb{E}\bar{f}(U_{-1})\bar{f}(U_{-1}P_{n}). \tag{6}\]
**Theorem 1**.: _Fix an integer \(m\geq 2\), and let \(f:G\to\mathbb{R}\) be arbitrary._
1. _The right-hand side of (_6_) is finite and nonnegative._
2. _Let_ \(\alpha\sim\mu\) _with_ \(\mu\ll\lambda\)_. Then the process_ \(\sum_{1\leq n\leq tN}f(P_{n})\)_,_ \(t\in[0,1]\) _satisfies the functional CLT_ \[\frac{\sum_{1\leq n\leq tN}f(P_{n})-E_{f}tN}{\sqrt{N}}\overset{d}{\to}\sigma_{ f}W(t)\] _in the Skorokhod space_ \(\mathcal{D}[0,1]\)_, where_ \(W(t)\)_,_ \(t\in[0,1]\) _is a standard Wiener process._
3. _Let_ \(\alpha\sim\mu\) _with_ \(\mu\ll\lambda\)_, and assume (_5_). Without changing its distribution, the process_ \(\sum_{1\leq n\leq t}f(P_{n})\)_,_ \(t\geq 0\) _can be redefined on a richer probability space so that_ \[\sum_{1\leq n\leq t}f(P_{n})-E_{f}t=\sigma_{f}W(t)+O(t^{1/2-\eta})\quad\text{a.s.}\] _with a universal constant_ \(\eta>0\)_, where_ \(W(t)\)_,_ \(t\geq 0\) _is a standard Wiener process._
Theorem 1 (ii) immediately implies that the sum \(\sum_{n=1}^{N}f(P_{n})\) satisfies the CLT
\[\frac{\sum_{n=1}^{N}f(P_{n})-E_{f}N}{\sqrt{N}}\overset{d}{\to}\mathcal{N}(0, \sigma_{f}^{2}),\]
where \(\mathcal{N}(a,\sigma^{2})\) denotes the normal distribution with mean \(a\) and variance \(\sigma^{2}\) (or the constant \(a\) in case \(\sigma=0\)), whereas Theorem 1 (iii) implies the LIL
\[\limsup_{N\to\infty}\frac{\sum_{n=1}^{N}f(P_{n})-E_{f}N}{\sqrt{2N\log\log N}}= \sigma_{f}\quad\text{for a.e. }\alpha.\]
Theorem 1 applied to functions \(f\) supported on \(G_{1}\) resp. \(G_{-1}\) describes the quantitative equidistribution of \(P_{2n}\) resp. \(P_{2n-1}\) in \(G_{1}\) resp. \(G_{-1}\).
Given an arbitrary function \(f:V\to\mathbb{R}\), applying Theorem 1 to the map \(\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)\mapsto f(b,d)\) shows that the sum \(\sum_{n=1}^{N}f((p_{n},q_{n})\pmod{m})\) satisfies the CLT
\[\frac{\sum_{n=1}^{N}f((p_{n},q_{n})\pmod{m})-EN}{\sqrt{N}}\stackrel{{ d}}{{\to}}\mathcal{N}(0,\sigma^{2})\]
and the LIL
\[\limsup_{N\to\infty}\frac{\sum_{n=1}^{N}f((p_{n},q_{n})\pmod{m})-EN}{\sqrt{2N \log\log N}}=\sigma\quad\text{for a.e. }\alpha,\]
where \(E=\mathbb{E}f(U)\) with \(U\sim\mathrm{Unif}(V)\), and the constant \(\sigma\geq 0\) depends only on \(f\). The same holds for the pairs \((q_{n-1},q_{n})\) and \((p_{n-1},p_{n})\).
Given an arbitrary function \(f:\mathbb{Z}_{m}\to\mathbb{R}\), the correct centering term is defined using an auxiliary random variable \(U\sim\nu\), that is, \(E=\mathbb{E}f(U)=\sum_{a\in\mathbb{Z}_{m}}\nu_{a}f(a)\). Then the sum \(\sum_{n=1}^{N}f(q_{n}\pmod{m}))\) satisfies the CLT
\[\frac{\sum_{n=1}^{N}f(q_{n}\pmod{m})-EN}{\sqrt{N}}\stackrel{{ d}}{{\to}}\mathcal{N}(0,\sigma^{2})\]
and the LIL
\[\limsup_{N\to\infty}\frac{\sum_{n=1}^{N}f(q_{n}\pmod{m})-EN}{\sqrt{2N\log \log N}}=\sigma\quad\text{for a.e. }\alpha\]
with a suitable constant \(\sigma\geq 0\) depending only on \(f\). The same holds for \(p_{n}\).
The key ingredient in the proof of Theorem 1 is a version of the Gauss-Kuzmin-Levy theorem, see Theorem 8 below. As a corollary, in Lemma 14 we show that the sequence \(P_{n}\) is \(\psi\)-mixing with exponential rate. The weak and the almost sure invariance principles then follow from general results of Philipp and Stout [22] on the partial sums of weakly dependent random variables.
In fact, we also prove that the sequence \((a_{n},P_{n})\) is \(\psi\)-mixing with exponential rate, from which various probabilistic limit theorems follow for the sum \(\sum_{n=1}^{N}f(a_{n},P_{n})\) depending on the growth rate of \(f\) in its first variable. As an application, in Section 2 we find the limit distribution of the maximum and the minimum of certain Birkhoff sums for the circle rotation. In Section 3 we gather the necessary facts about the group \(\mathrm{SL}(2,\mathbb{Z}_{m})\). The proofs are given in Sections 4, 5 and 6.
## 2 An application to local discrepancy
The circle rotation \(x\mapsto x+\alpha\pmod{\mathbb{Z}}\) on \(\mathbb{R}/\mathbb{Z}\) with a given irrational \(\alpha\) is perhaps the simplest discrete time dynamical system. The system is uniquely ergodic, consequently Birkhoff sums satisfy \(\sum_{n=1}^{N}f(n\alpha+\beta)=N\int_{0}^{1}f(x)\,\mathrm{d}x+o(N)\) for any starting point \(\beta\) and any \(1\)-periodic function \(f\) which is Riemann integrable on \([0,1]\). The remainder term \(o(N)\) depends sensitively on the continued fraction expansion of \(\alpha\) and the function \(f\), and satisfies various probabilistic limit theorems, see [11] for a survey.
A classical example is \(f(x)=\mathds{1}_{[0,r]}(\{x\})-r\) with a fixed \(r\in(0,1)\), the centered indicator function of the interval \([0,r]\) extended with period \(1\), where \(\{\cdot\}\) denotes the fractional part. The first limit theorem for the corresponding Birkhoff sum is due to Kesten [17, 18], who proved that if \((\alpha,\beta)\) is a random variable uniformly distributed on the unit square, then
\[\frac{\sum_{n=1}^{N}\mathds{1}_{[0,r]}(\{n\alpha+\beta\})-rN}{\sigma\log N} \stackrel{{ d}}{{\to}}\text{Cauchy}. \tag{7}\]
Here "Cauchy" denotes the standard Cauchy distribution, with density function \(1/(\pi(1+x^{2}))\). Because of the random starting point \(\beta\), the same holds for any subinterval of \([0,1]\) of length \(r\).
Kesten gave a complicated but explicit formula for the normalizing constant \(\sigma>0\), see Section 4. He showed that the value of \(\sigma\) is the same for all irrational \(r\), but his formula involves \(r\) if \(r\) is rational. This apparent dependence of \(\sigma\) on \(r\) in the rational case has been cited by several authors. Disproving this long-held view, we show that the dependence is illusory, and (7) holds with the same value of \(\sigma\) for both rational and irrational \(r\).
**Theorem 2**.: _Kesten's limit law (7) holds with \(\sigma=1/(3\pi)\) for all \(r\in(0,1)\)._
The function \(f(x)=\{x\}-1/2\) also leads to the same limit law [17]: if \((\alpha,\beta)\) is a random variable uniformly distributed on the unit square, then
\[\frac{\sum_{n=1}^{N}(\{n\alpha+\beta\}-1/2)}{\sigma^{\prime}\log N}\stackrel{{ d}}{{\to}}\text{Cauchy}. \tag{8}\]
In Section 4 we show that (8) holds with \(\sigma^{\prime}=1/(4\pi)\). The question whether (7) and (8) hold with a fixed starting point \(\beta\) was already raised by Kesten, and remains open.
Remarkable random behavior of the two Birkhoff sums mentioned above with a fixed quadratic irrational \(\alpha\) was first proved by Beck [3], and later generalized by Bromberg and Ulcigrai [7]. The special case \(r=1/2\) is sometimes called the deterministic random walk [1, 2]. Higher dimensional analogues of (7) are due to Dolgopyat and Fayad [9, 10].
In this paper, we consider the Birkhoff sum \(S_{N,r}(\alpha)=\sum_{n=1}^{N}\mathds{1}_{[0,r]}(\{n\alpha\})-rN\) with a rational \(r\), that is, when the starting point is \(\beta=0\), and we find the limit law of its maximum and minimum. Let \(\text{Stab}(1,\pm 1)\) denote the standard stable law of index \(1\) and skewness parameter \(\pm 1\), whose characteristic function is \(\exp(-|x|(1\pm(2i/\pi)\text{sgn}(x)\log|x|))\), and let \(\otimes\) denote the product of two measures.
**Theorem 3**.: _Let \(\alpha\sim\mu\) with \(\mu\ll\lambda\), and let \(r\in(0,1)\) be a fixed rational. Then_
\[\left(\frac{\max_{0\leq N<M}S_{N,r}(\alpha)-E_{M}}{\frac{1}{2\pi}\log M},\frac {\min_{0\leq N<M}S_{N,r}(\alpha)+E_{M}}{\frac{1}{2\pi}\log M}\right) \stackrel{{ d}}{{\to}}\text{Stab}(1,1)\otimes\text{Stab}(1,-1) \quad\text{as }M\to\infty,\]
_where \(E_{M}=(1/\pi^{2})\log M\log\log M-c(r)\log M\) with an explicit constant \(c(r)\) depending only on \(r\)._
In particular,
\[\frac{\max_{0\leq N<M}S_{N,r}(\alpha)-E_{M}}{\frac{1}{2\pi}\log M}\stackrel{{ d}}{{\to}}\text{Stab}(1,1)\qquad\text{and}\qquad\frac{\min_{0\leq N<M}S_{N,r}( \alpha)+E_{M}}{\frac{1}{2\pi}\log M}\stackrel{{ d}}{{\to}}\text{ Stab}(1,-1).\]
The main motivation for deducing the joint limit law in Theorem 3 is that it immediately implies
\[\frac{\max_{0\leq N<M}|S_{N,r}(\alpha)|-E_{M}}{\frac{1}{2\pi}\log M}\stackrel{{ d}}{{\to}}\vartheta,\]
where \(\vartheta\) is the distribution of \(\max\{X,Y\}\), with \(X,Y\sim\text{Stab}(1,1)\) independent. Note that the cumulative distribution function of \(\vartheta\) is the square of that of \(\text{Stab}(1,1)\).
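The quantities appearing in Theorem 3 are straightforward to compute for a sampled \(\alpha\); the sketch below evaluates \(S_{N,r}(\alpha)\) and its running extrema for one random \(\alpha\) and a fixed rational \(r\). It illustrates the objects involved, not the limit law itself, which would require many independent samples of \(\alpha\) and a large \(M\).

```python
import numpy as np

# Direct computation of S_{N,r}(alpha) and of its running maximum and minimum
# for a single sampled alpha and a fixed rational r (illustration only).

rng = np.random.default_rng(4)
alpha = rng.random()                       # alpha drawn from Lebesgue measure
r, M = 1 / 3, 200_000                      # rational test interval [0, r]

n = np.arange(1, M)
S = np.cumsum(((n * alpha) % 1.0 <= r).astype(float) - r)   # S_{N,r}, N >= 1
S = np.concatenate([[0.0], S])                              # include N = 0

print("alpha                      =", alpha)
print("max_{0 <= N < M} S_{N,r}   =", S.max())
print("min_{0 <= N < M} S_{N,r}   =", S.min())
print("(1/pi^2) log M log log M   =", np.log(M) * np.log(np.log(M)) / np.pi ** 2)
```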
The constant \(c(r)\) in Theorem 3 depends only on the denominator of \(r\) in its reduced form. If \(r=l/m\) with some coprime integers \(l\) and \(m\), then \(c(r)=(6\theta_{m}+\gamma+\log(2\pi))/\pi^{2}\), where \(\gamma\) is the Euler-Mascheroni constant, and1
Footnote 1: Throughout the paper, we use the convention \(0\log 0=0\).
\[\theta_{m}=\sum_{a\in\mathbb{Z}_{m}}\nu_{a}\left\{\frac{a}{m}\right\}\left(1- \left\{\frac{a}{m}\right\}\right)\log\left(\left\{\frac{a}{m}\right\}\left(1- \left\{\frac{a}{m}\right\}\right)\right). \tag{9}\]
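For reference, \(\theta_{m}\) and the resulting constant \(c(r)\) can be evaluated numerically from (4) and (9); a short sketch follows, with the convention \(0\log 0=0\) applied to the \(a=0\) term.

```python
from math import gcd, log, pi, prod

import numpy as np

# Numerical evaluation of theta_m from (9) and of
# c(r) = (6*theta_m + gamma + log(2*pi))/pi^2 for a few moduli m.

def prime_divisors(n):
    out, d = [], 2
    while d * d <= n:
        if n % d == 0:
            out.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        out.append(n)
    return out

def nu(m):
    Z = m * prod(1 - 1 / p ** 2 for p in prime_divisors(m))
    return [prod(1 - 1 / p for p in prime_divisors(gcd(a, m))) / Z for a in range(m)]

def theta(m):
    total = 0.0
    for a, weight in enumerate(nu(m)):
        u = (a / m) * (1 - a / m)
        if u > 0:                          # 0*log(0) = 0 by convention
            total += weight * u * log(u)
    return total

for m in (2, 3, 6, 10):
    c = (6 * theta(m) + np.euler_gamma + log(2 * pi)) / pi ** 2
    print(f"m = {m:2d}:  theta_m = {theta(m):+.5f},  c(l/m) = {c:+.5f}")
```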
We rely on an explicit formula for the maximum and the minimum of \(S_{N,r}(\alpha)\) in terms of the sequences \(q_{n}\pmod{m}\) and \(a_{n}\) due to Rocadas and Schoissengeier [24], see (20) in Section 6. Theorem 3 then follows from the fact that the sequence \((a_{n},P_{n})\) is \(\psi\)-mixing with exponential rate, as established in Lemma 14. It would be interesting to see whether Theorem 3 holds with a fixed irrational \(r\) as well. The maximum and the minimum of the Birkhoff sum \(\sum_{n=1}^{N}(\{n\alpha\}-1/2)\) satisfies the same limit law as in Theorem 3 with suitable normalizing constants, but the proof is based on the theory of quantum modular forms instead [5].
Relation (7) and Theorem 3 concern \(|S_{N,r}(\alpha)|\), which is sometimes called the local discrepancy of the Kronecker sequence \(\{n\alpha\}\) at \(r\in(0,1)\). Taking the supremum over all subintervals \(I\subseteq[0,1]\) leads to the notion of discrepancy:
\[D_{N}(\alpha)=\sup_{I\subseteq[0,1]}\left|\sum_{n=1}^{N}\mathds{1}_{I}(\{n \alpha\})-\lambda(I)N\right|.\]
Kesten [16] proved that if \(\alpha\sim\lambda\), then
\[\frac{D_{N}(\alpha)}{\log N\log\log N}\to\frac{2}{\pi^{2}} \quad\text{in measure, as }N\to\infty,\] \[\frac{\max_{0\leq N<M}D_{N}(\alpha)}{\log M\log\log M}\to\frac{3} {\pi^{2}} \quad\text{in measure, as }M\to\infty.\]
See also [25]. It remains a challenging open problem to show whether
\[\frac{D_{N}(\alpha)-(2/\pi^{2})\log N\log\log N}{\log N}\quad\text{and}\quad \frac{\max_{0\leq N<M}D_{N}(\alpha)-(3/\pi^{2})\log M\log\log M}{\log M}\]
have nondegenerate limit distributions.
## 3 The group \(\operatorname{SL}(2,\mathbb{Z}_{m})\)
In this section, we recall some basic facts about the group \(\operatorname{SL}(2,\mathbb{Z}_{m})\). Fix an integer \(m\geq 2\), let \(G_{D}\) be the set of \(2\times 2\) matrices with entries in \(\mathbb{Z}_{m}\) of determinant \(D\in\mathbb{Z}_{m}^{*}\), and let \(V=\{(a,b)\in\mathbb{Z}_{m}\,:\,\gcd(a,b,m)=1\}\), as in the Introduction. Let \(\nu\) be the probability measure on \(\mathbb{Z}_{m}\) defined in (4).
**Lemma 4**.: _Let \(D\in\mathbb{Z}_{m}^{*}\). We have \(|G_{D}|=m|V|=m^{3}\prod_{p|m}\Big{(}1-\frac{1}{p^{2}}\Big{)}\). If \(U=\left(\begin{array}{cc}u_{1}&u_{2}\\ u_{3}&u_{4}\end{array}\right)\sim\operatorname{Unif}(G_{D})\), then \((u_{1},u_{2}),(u_{3},u_{4}),(u_{1},u_{3}),(u_{2},u_{4})\sim\operatorname{Unif }(V)\), and \(u_{1},u_{2},u_{3},u_{4}\sim\nu\)._
**Proof.** One readily checks that the value of \(\gcd(a,b,m)\) is invariant under multiplying the row vector \((a,b)\in\mathbb{Z}_{m}^{2}\) by a matrix in \(\operatorname{GL}(2,\mathbb{Z}_{m})\) from the right. In particular, the group \(\operatorname{GL}(2,\mathbb{Z}_{m})\) acts on the set \(V\) by multiplication.
First, let \(D=1\). Recall that given \(a,b,e\in\mathbb{Z}_{m}\), the linear congruence \(ax+by=e\) has a solution \(x,y\in\mathbb{Z}_{m}\) if and only if \(\gcd(a,b,m)\mid e\). In particular, given any \((a,b)\in V\), there exist \(c,d\in\mathbb{Z}_{m}\) such that \(ad-bc=1\). Therefore we can obtain any \((a,b)\in V\) by multiplying the row vector \((1,0)\in V\) by a suitable matrix in \(G_{1}\), showing that \(G_{1}\) acts transitively on \(V\). It immediately follows that if \(U\sim\operatorname{Unif}(G_{1})\), then its two rows are uniformly distributed on \(V\). Since transposition is a bijection of \(G_{1}\), the two columns of \(U\) are also uniformly distributed on \(V\).
The stabilizer of the row vector \((1,0)\in V\) under the action of \(G_{1}\) is \(\left\{\left(\begin{array}{cc}1&0\\ a&1\end{array}\right)\,:\,a\in\mathbb{Z}_{m}\right\}\), which has size \(m\). Hence \(|G_{1}|=m|V|\).
Now fix \(a\in\mathbb{Z}_{m}\). Since \(\gcd(a,b,m)=\gcd(b,\gcd(a,m))\), we have
\[|\{b\in\mathbb{Z}_{m}\,:\,(a,b)\in V\}|=\frac{m}{\gcd(a,m)}\varphi(\gcd(a,m))=m \prod_{p\mid\gcd(a,m)}\left(1-\frac{1}{p}\right),\]
where \(\varphi\) is the Euler totient function. Given a divisor \(d\mid m\), we have \(|\{a\in\mathbb{Z}_{m}\,:\,\gcd(a,m)=d\}|=\varphi(m/d)\), hence
\[|V|=\sum_{a\in\mathbb{Z}_{m}}\frac{m}{\gcd(a,m)}\varphi(\gcd(a,m))=\sum_{d \mid m}\frac{m}{d}\varphi(d)\varphi\left(\frac{m}{d}\right)=m^{2}\prod_{p\mid m }\left(1-\frac{1}{p^{2}}\right).\]
The last step can be seen using the prime factorization of \(m\). Since the rows and columns of \(U\) are uniformly distributed on \(V\), the previous two formulas show that each entry \(u_{i}\) of \(U\) attains \(a\in\mathbb{Z}_{m}\) with probability \(\nu_{a}\). This finishes the proof for the case \(D=1\). The claims for general \(D\in\mathbb{Z}_{m}^{*}\) follow immediately from the fact that \(G_{D}\) is a coset of \(G_{1}\) in \(\operatorname{GL}(2,\mathbb{Z}_{m})\).
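Lemma 4 can also be verified by brute force for a small modulus; the sketch below enumerates \(G_{1}\) for \(m=6\), checks the counting formula, and compares the distribution of a single matrix entry with \(\nu\).

```python
from collections import Counter
from itertools import product
from math import gcd, prod

# Brute-force verification of Lemma 4 for a small modulus m: enumerate
# G_1 = SL(2, Z_m), check |G_1| = m^3 * prod_{p | m} (1 - 1/p^2), and compare
# the distribution of a single entry of a uniform element of G_1 with nu.

def prime_divisors(n):
    out, d = [], 2
    while d * d <= n:
        if n % d == 0:
            out.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        out.append(n)
    return out

m = 6
G1 = [(a, b, c, d) for a, b, c, d in product(range(m), repeat=4)
      if (a * d - b * c) % m == 1]
formula = m ** 3 * prod(1 - 1 / p ** 2 for p in prime_divisors(m))
print("|G_1| =", len(G1), "  formula:", round(formula))

Z = m * prod(1 - 1 / p ** 2 for p in prime_divisors(m))
freq = Counter(g[0] for g in G1)            # top-left entry u_1
print(" a   empirical   nu_a")
for a in range(m):
    nu_a = prod(1 - 1 / p for p in prime_divisors(gcd(a, m))) / Z
    print(f"{a:2d}   {freq[a] / len(G1):9.4f}   {nu_a:6.4f}")
```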
**Lemma 5**.: _We have_
\[\left\{\prod_{i=1}^{4}\left(\begin{array}{cc}0&1\\ 1&x_{i}\end{array}\right)\,:\,x_{1},x_{2},x_{3},x_{4}\in\mathbb{Z}_{m}\right\} =G_{1}.\]
_Let \(m^{\prime}=\prod_{p\mid m}p\) denote the greatest square-free divisor of \(m\). More precisely, for any \(g\in G_{1}\) there exist at least \(\varphi(m^{\prime})\) elements \(y\in\mathbb{Z}_{m^{\prime}}\) such that whenever \(x_{4}\in\mathbb{Z}_{m}\) satisfies \(x_{4}\equiv y\pmod{m^{\prime}}\), the equation \(\prod_{i=1}^{4}\left(\begin{array}{cc}0&1\\ 1&x_{i}\end{array}\right)=g\) has a solution in the variables \(x_{1},x_{2},x_{3}\in\mathbb{Z}_{m}\)._
Proof.: Fix \(a,b,c,d\in\mathbb{Z}_{m}\), \(ad-bc=1\), and consider the equation \(\prod_{i=1}^{4}\left(\begin{array}{cc}0&1\\ 1&x_{i}\end{array}\right)=\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)\). Multiplying the four matrices, we find that this matrix equation is equivalent to the system of four equations
\[1+x_{2}x_{3} =a, x_{2}+x_{4}+x_{2}x_{3}x_{4} =b,\] \[x_{1}+x_{3}+x_{1}x_{2}x_{3} =c, 1+x_{1}x_{2}+x_{1}x_{4}+x_{3}x_{4}+x_{1}x_{2}x_{3}x_{4} =d.\]
Substituting \(x_{2}x_{3}=a-1\) into the second and third equation, we see that \(x_{2}=b-ax_{4}\) and \(x_{3}=c-ax_{1}\). We are left with two variables \(x_{1},x_{4}\), and using the assumption \(ad-bc=1\), the system turns out to be equivalent to the single equation
\[ax_{1}x_{4}-bx_{1}-cx_{4}+d-1=0.\]
It will thus be enough to prove that there exist at least \(\varphi(m^{\prime})\) elements \(y\in\mathbb{Z}_{m^{\prime}}\) such that whenever \(x_{4}\in\mathbb{Z}_{m}\) satisfies \(x_{4}\equiv y\pmod{m^{\prime}}\), then \(ax_{4}-b\in\mathbb{Z}_{m}^{*}\). Indeed, then the remaining linear congruence \((ax_{4}-b)x_{1}=cx_{4}-d+1\) has a solution in the variable \(x_{1}\in\mathbb{Z}_{m}\).
Given any \(p\mid m\), the congruence \(ax-b\equiv 0\pmod{p}\) has at most one solution in the variable \(x\in\mathbb{Z}_{p}\). Indeed, if \(p\nmid a\), then this follows from the fact that \(\mathbb{Z}_{p}\) is a field. If \(p\mid a\), then the assumption \(ad-bc=1\) implies that \(p\nmid b\), and there are no solutions. By the Chinese remainder theorem, there are at least \(\prod_{p\mid m}(p-1)=\varphi(m^{\prime})\) elements \(y\in\mathbb{Z}_{m^{\prime}}\) such that \(ay-b\not\equiv 0\pmod{p}\) for all \(p\mid m\). Whenever \(x_{4}\in\mathbb{Z}_{m}\) satisfies \(x_{4}\equiv y\pmod{m^{\prime}}\), we have \(ax_{4}-b\not\equiv 0\pmod{p}\) for all \(p\mid m\), and in particular, \(ax_{4}-b\in\mathbb{Z}_{m}^{*}\).
**Remark**.: It follows that
\[\left\{\prod_{i=1}^{5}\left(\begin{array}{cc}0&1\\ 1&x_{i}\end{array}\right)\,:\,x_{1},x_{2},x_{3},x_{4},x_{5}\in\mathbb{Z}_{m} \right\}=G_{-1}.\]
In fact, we could even prescribe the value of \(x_{5}\). The number of factors \(4\) resp. \(5\) needed to generate \(G_{1}\) resp. \(G_{-1}\) is sharp: it is easy to check that whenever \(m>2\),
\[\left\{\prod_{i=1}^{2}\left(\begin{array}{cc}0&1\\ 1&x_{i}\end{array}\right)\,:\,x_{1},x_{2}\in\mathbb{Z}_{m}\right\}\neq G_{1} \quad\text{and}\quad\left\{\prod_{i=1}^{3}\left(\begin{array}{cc}0&1\\ 1&x_{i}\end{array}\right)\,:\,x_{1},x_{2},x_{3}\in\mathbb{Z}_{m}\right\}\neq G _{-1}.\]
For the sake of completeness, we mention that in the case \(m=2\) the group \(G_{1}=G_{-1}\) can be generated using \(3\) factors.
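The following brute-force computation illustrates Lemma 5 and the remark for the arbitrarily chosen small modulus \(m=5\): products of four factors already exhaust \(G_{1}\), while products of two factors do not.

```python
# Brute-force illustration of Lemma 5 and the remark (small m only; m = 5 is
# an arbitrary choice): products of four matrices [[0,1],[1,x]] exhaust
# G_1 = SL(2, Z_m), whereas products of two such matrices do not for m > 2.
from itertools import product

m = 5

def mat(x):
    return ((0, 1), (1, x % m))

def mul(A, B):
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) % m for j in range(2))
        for i in range(2)
    )

def products(n):
    out = set()
    for xs in product(range(m), repeat=n):
        M = ((1, 0), (0, 1))
        for x in xs:
            M = mul(M, mat(x))
        out.add(M)
    return out

G1 = {((a, b), (c, d))
      for a, b, c, d in product(range(m), repeat=4)
      if (a * d - b * c) % m == 1}

assert products(4) == G1           # four factors generate all of G_1
assert len(products(2)) < len(G1)  # two factors are not enough (m > 2)
print("Lemma 5 verified by brute force for m =", m)
```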
## 4 The normalizing constants in Kesten's limit laws
Kesten [18, Section 4] gave the following explicit formula for the value of \(\sigma\) in (7). Let
\[V(x,u,y)=\frac{2}{\pi^{2}}\sum_{k=1}^{\infty}\frac{\sin(2\pi kx)\sin(\pi ku) \cos(2\pi ky)}{k^{2}},\]
and let \(F(u)=\int_{0}^{1}\int_{0}^{1}|V(x,u,y)|\,\mathrm{d}x\,\mathrm{d}y\). Then
\[\sigma=\left\{\begin{array}{ll}(6/\pi)\sum_{a\in\mathbb{Z}_{m}}\nu_{a}F(al /m)&\text{if $r=l/m$ with some coprime integers $l$ and $m$},\\ (6/\pi)\int_{0}^{1}F(u)\,\mathrm{d}u&\text{if $r$ is irrational}.\end{array}\right.\]
Kesten even gave the hint to use the classical Fourier series
\[\sum_{k=1}^{\infty}\frac{\cos(2\pi kx)}{2\pi^{2}k^{2}}=\frac{1}{2}\{x\}^{2}- \frac{1}{2}\{x\}+\frac{1}{12} \tag{10}\]
in order to compute \(F(u)\), although he did not follow through on his own advice. This is exactly what we shall do.
**Proof of Theorem 2.** Trigonometric identities show that
\[4\sin(2\pi kx)\sin(\pi ku)\cos(2\pi ky)=\] \[\cos(2\pi k(x+y-u/2))+\cos(2\pi k(x-y-u/2))-\cos(2\pi k(x+y+u/2))- \cos(2\pi k(x-y+u/2)).\]
Letting \(B(x)=\{x\}^{2}/2-\{x\}/2+1/12\) denote one half of the second Bernoulli polynomial evaluated at the fractional part of \(x\), (10) thus gives
\[V(x,u,y)=B(x+y-u/2)+B(x-y-u/2)-B(x+y+u/2)-B(x-y+u/2).\]
Using the fact that \((x,y)\mapsto(x+y,x-y)\) is a measure preserving map of \(\mathbb{R}^{2}/\mathbb{Z}^{2}\), we obtain
\[F(u) =\int_{0}^{1}\int_{0}^{1}|B(x-u/2)+B(y-u/2)-B(x+u/2)-B(y+u/2)|\, \mathrm{d}x\,\mathrm{d}y\] \[=\int_{0}^{1}\int_{0}^{1}|B(x)+B(y)-B(x+u)-B(y+u)|\,\mathrm{d}x\, \mathrm{d}y.\]
In particular, \(F(u)\) is \(1\)-periodic. Assuming now \(x,y,u\in[0,1)\), we have
\[B(x)-B(x+u)=\frac{1}{2}(\{x\}-\{x+u\})(\{x\}+\{x+u\}-1)=\left\{\begin{array}[ ]{ll}-\frac{u}{2}(2x+u-1)&\text{if $0\leq x<1-u$},\\ \frac{1-u}{2}(2x+u-2)&\text{if $1-u\leq x<1$},\end{array}\right.\]
and a similar formula holds for \(B(y)-B(y+u)\). Elementary calculations yield
\[\int_{0}^{1-u}\int_{0}^{1-u}|B(x)+B(y)-B(x+u)-B(y+u)|\,\mathrm{d} x\,\mathrm{d}y =u\int_{0}^{1-u}\int_{0}^{1-u}|x+y+u-1|\,\mathrm{d}x\,\mathrm{d}y\] \[=\frac{u(1-u)^{3}}{3},\]
and
\[\int_{1-u}^{1}\int_{1-u}^{1}\left|B(x)+B(y)-B(x+u)-B(y+u)\right| \mathrm{d}x\,\mathrm{d}y =(1-u)\int_{1-u}^{1}\int_{1-u}^{1}\left|x+y+u-2\right|\mathrm{d}x\, \mathrm{d}y\] \[=\frac{u^{3}(1-u)}{3}.\]
Further,
\[\int_{0}^{1-u}\int_{1-u}^{1}\left|B(x)+B(y)-B(x+u)-B(y+u)\right| \mathrm{d}x\,\mathrm{d}y =\int_{0}^{1-u}\int_{1-u}^{1}\left|(1-u)x-uy-(1-u)^{2}\right| \mathrm{d}x\,\mathrm{d}y\] \[=\frac{u^{2}(1-u)^{2}}{3},\]
and by symmetry the integral on \([0,1-u]\times[1-u,1]\) is the same. Therefore
\[F(u)=\frac{u(1-u)^{3}}{3}+\frac{u^{3}(1-u)}{3}+\frac{2u^{2}(1-u)^{2}}{3}=\frac {u(1-u)}{3},\quad u\in[0,1),\]
and by periodicity, \(F(u)=\{u\}(1-\{u\})/3\), \(u\in\mathbb{R}\). For irrational \(r\in(0,1)\) we thus have
\[\sigma=\frac{6}{\pi}\int_{0}^{1}\frac{u(1-u)}{3}\,\mathrm{d}u=\frac{1}{3\pi},\]
as claimed.
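As an aside (no part of the proof), the closed form \(F(u)=\{u\}(1-\{u\})/3\) obtained above is easy to test numerically against the double-integral definition of \(F\); the following sketch uses a midpoint rule with an arbitrary grid size and arbitrary test values of \(u\).

```python
# Numerical cross-check of F(u) = {u}(1-{u})/3 against the double-integral
# definition of F, using a midpoint rule on an N x N grid (N arbitrary).
import math

def B(x):
    x = x % 1.0
    return x * x / 2 - x / 2 + 1.0 / 12

def F_numeric(u, N=400):
    h = 1.0 / N
    s = 0.0
    for i in range(N):
        x = (i + 0.5) * h
        for j in range(N):
            y = (j + 0.5) * h
            s += abs(B(x) + B(y) - B(x + u) - B(y + u))
    return s * h * h

for u in (0.1, 0.25, 0.5, 1 / math.sqrt(2)):
    print(u, F_numeric(u), (u % 1) * (1 - u % 1) / 3)
```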
Now let \(r=l/m\) with some coprime integers \(l\) and \(m\). Since \(\nu_{a}\) is invariant under multiplication by \(l\in\mathbb{Z}_{m}^{*}\), we have
\[\sigma=\frac{2}{\pi}\sum_{a\in\mathbb{Z}_{m}}\nu_{a}\left\{\frac{a}{m}\right\} \left(1-\left\{\frac{a}{m}\right\}\right).\]
It remains to show that \(\sigma\) does not depend on the denominator \(m\) either.
Let \(\mathcal{P}\) denote the set of prime divisors of \(m\). For any \(I\subseteq\mathcal{P}\), let
\[w(I)=\prod_{p\in I}\left(1-\frac{1}{p}\right)=\sum_{J\subseteq I}\frac{(-1)^{ \left|J\right|}}{\prod_{p\in J}p},\quad\text{and}\quad s(I)=\sum_{\begin{subarray} {c}a\in\mathbb{Z}_{m}\\ \prod_{p\in I}p\mid a\end{subarray}}\left\{\frac{a}{m}\right\}\left(1-\left\{ \frac{a}{m}\right\}\right).\]
In particular, \(\nu_{a}=w(I)/(m\prod_{p\mid m}(1-1/p^{2}))\) with \(I=\{p\in\mathcal{P}\,:\,p\mid a\}\). By inclusion-exclusion,
\[\sum_{a\in\mathbb{Z}_{m}}\nu_{a}\left\{\frac{a}{m}\right\}\left(1-\left\{\frac{a}{m}\right\}\right)=\frac{1}{m\prod_{p\mid m}\left(1-\frac{1}{p^{2}}\right)}\sum_{I\subseteq\mathcal{P}}s(I)\sum_{J\subseteq I}(-1)^{\left|I\setminus J\right|}w(J). \tag{11}\]
Observe that given any divisor \(d\mid m\),
\[\sum_{\begin{subarray}{c}a\in\mathbb{Z}_{m}\\ d\mid a\end{subarray}}\left\{\frac{a}{m}\right\}\left(1-\left\{\frac{a}{m} \right\}\right)=\sum_{k=1}^{m/d-1}\frac{kd}{m}\left(1-\frac{kd}{m}\right)= \frac{1}{6}\left(\frac{m}{d}-\frac{d}{m}\right).\]
In particular,
\[s(I)=\frac{1}{6}\left(\frac{m}{\prod_{p\in I}p}-\frac{\prod_{p\in I}p}{m} \right).\]
Another application of inclusion-exclusion shows that
\[\sum_{J\subseteq I}(-1)^{\left|I\setminus J\right|}w(J)=\frac{(-1)^{\left|I\right|}}{\prod_{p\in I}p}.\]
The previous two formulas simplify (11) to
\[\begin{split}\sum_{a\in\mathbb{Z}_{m}}\nu_{a}\left\{\frac{a}{m} \right\}\left(1-\left\{\frac{a}{m}\right\}\right)&=\frac{1}{m\prod_ {p\mid m}\left(1-\frac{1}{p^{2}}\right)}\sum_{I\subseteq\mathcal{P}}\frac{1}{ 6}\left(\frac{m}{\prod_{p\in I}p}-\frac{\prod_{p\in I}p}{m}\right)\frac{(-1)^{ \mid I\mid}}{\prod_{p\in I}p}\\ &=\frac{1}{6\prod_{p\mid m}\left(1-\frac{1}{p^{2}}\right)}\sum_{I \subseteq\mathcal{P}}\frac{(-1)^{\mid I\mid}}{\prod_{p\in I}p^{2}}=\frac{1}{6}. \end{split} \tag{12}\]
Note that we used \(\sum_{I\subseteq\mathcal{P}}(-1)^{\mid I\mid}=0\). Therefore \(\sigma=1/(3\pi)\) for rational \(r\) as well, as claimed.
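The identity (12) can also be confirmed by exact rational arithmetic for any concrete modulus; the moduli tested in the following snippet are arbitrary.

```python
# Exact verification of (12) for a few moduli: sum_a nu_a {a/m}(1-{a/m}) = 1/6.
from fractions import Fraction

def prime_divisors(n):
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

def check(m):
    primes = prime_divisors(m)
    norm = Fraction(m)
    for p in primes:
        norm *= Fraction(p * p - 1, p * p)
    total = Fraction(0)
    for a in range(m):
        nu_a = Fraction(1)
        for p in primes:
            if a % p == 0:
                nu_a *= Fraction(p - 1, p)
        nu_a /= norm
        frac = Fraction(a, m)
        total += nu_a * frac * (1 - frac)
    return total

assert all(check(m) == Fraction(1, 6) for m in (2, 3, 4, 6, 12, 30, 36))
print("sum equals 1/6 for all tested m")
```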
The explicit formula for \(\sigma^{\prime}\) in (8) is [17]
\[\sigma^{\prime}=\frac{6}{\pi^{3}}\int_{0}^{1}\int_{0}^{1}\left|\sum_{k=1}^{ \infty}\frac{\sin(2\pi kx)\sin(2\pi ky)}{k^{2}}\right|\,\mathrm{d}x\,\mathrm{ d}y.\]
A trigonometric identity and the Fourier series (10) lead to
\[\sum_{k=1}^{\infty}\frac{\sin(2\pi kx)\sin(2\pi ky)}{k^{2}}=\sum_{k=1}^{ \infty}\frac{\cos(2\pi k(x-y))-\cos(2\pi k(x+y))}{2k^{2}}=\pi^{2}(B(x-y)-B(x+y )),\]
hence
\[\begin{split}\sigma^{\prime}&=\frac{6}{\pi}\int_{0 }^{1}\int_{0}^{1}\left|B(x-y)-B(x+y)\right|\mathrm{d}x\,\mathrm{d}y=\frac{6}{ \pi}\int_{0}^{1}\int_{0}^{1}\left|B(x)-B(y)\right|\mathrm{d}x\,\mathrm{d}y\\ &=\frac{3}{\pi}\int_{0}^{1}\int_{0}^{1}\left|x^{2}-x-y^{2}+y \right|\mathrm{d}x\,\mathrm{d}y.\end{split}\]
The two diagonals partition the unit square into four right triangles. The sign of \(x^{2}-x-y^{2}+y=(x-y)(x+y-1)\) is constant on these triangles, making them convenient domains for integrating the function \(\left|x^{2}-x-y^{2}+y\right|\). Elementary calculations then yield \(\sigma^{\prime}=1/(4\pi)\), as claimed.
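As a final numerical cross-check (again no part of the proof), the last double integral can be evaluated on a grid of arbitrary size; it should be close to \(1/12\), giving \(\sigma^{\prime}=(3/\pi)\cdot(1/12)=1/(4\pi)\).

```python
# Numerical check that \int_0^1\int_0^1 |x^2 - x - y^2 + y| dx dy = 1/12.
N = 1000
h = 1.0 / N
I = 0.0
for i in range(N):
    x = (i + 0.5) * h
    for j in range(N):
        y = (j + 0.5) * h
        I += abs(x * x - x - y * y + y)
print(I * h * h, 1 / 12)   # both approximately 0.08333
```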
## 5 The sequence \(P_{n}\)
### Preliminaries
Let \(\mathcal{B}\) denote the family of Borel subsets of \([0,1]\). Writing each \(\alpha\in[0,1]\) in continued fraction form \(\alpha=[0;a_{1},a_{2},\ldots]\), the partial quotients \(a_{k}\) are thus measurable functions on \(([0,1],\mathcal{B})\). Let \(\mathcal{A}_{k}^{\ell}\) denote the \(\sigma\)-algebra generated by \(a_{i}\), \(k\leq i\leq\ell\), and similarly let \(\mathcal{A}_{k}^{\infty}\) be the \(\sigma\)-algebra generated by \(a_{i}\), \(i\geq k\). By convention, \(\mathcal{A}_{1}^{k}\) with \(k=0\) is the trivial \(\sigma\)-algebra. Let \(T:[0,1)\to[0,1)\), \(Tx=\{1/x\}\) if \(x\neq 0\), \(T0=0\) denote the Gauss map. Note that \(\mathcal{A}_{k+1}^{\infty}=\{T^{-k}B\,:\,B\in\mathcal{B}\}\), where \(T^{-k}\) denotes the preimage with respect to the \(k\)th iterate of \(T\).
Given a Borel probability measure \(\mu\) on \([0,1]\), the partial quotients \(a_{k}\) become random variables, and we interpret, say, \(\mu(a_{k}=5)=\mu(\{\alpha\in[0,1]\,:\,a_{k}=5\})\) as the probability of the event \(a_{k}=5\). We also use the conditional probability notation \(\mu(B\mid A)=\mu(A\cap B)/\mu(A)\). Let
\[P_{k}=\prod_{i=1}^{k}\left(\begin{array}{cc}0&1\\ 1&a_{i}\end{array}\right)\quad(\text{mod }m)\qquad\text{and}\qquad P_{k,\ell}=\prod_{i=k}^{\ell} \left(\begin{array}{cc}0&1\\ 1&a_{i}\end{array}\right)\quad(\text{mod }m),\]
with the convention that \(P_{k+1,k}\) is the identity element of the group \(G_{1}\). Let \(p_{k}/q_{k}=[0;a_{1},a_{2},\ldots,a_{k}]\) be the convergents. Recall the identity \(q_{k-1}/q_{k}=[0;a_{k},a_{k-1},\ldots,a_{1}]\).
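For a concrete (rational) \(\alpha\), the matrices \(P_{k}\) are easy to compute: by the classical identity \(\prod_{i=1}^{k}\left(\begin{array}{cc}0&1\\ 1&a_{i}\end{array}\right)=\left(\begin{array}{cc}p_{k-1}&p_{k}\\ q_{k-1}&q_{k}\end{array}\right)\), the matrix \(P_{k}\) is the matrix of convergents reduced mod \(m\). The following snippet checks this identity step by step; the modulus \(m=7\) and the rational test value are arbitrary choices, and the snippet is only an illustration.

```python
# Illustration of P_k for a concrete alpha: partial quotients via the Gauss
# map, with a step-by-step check of the classical identity
#   prod_{i<=k} [[0,1],[1,a_i]] = [[p_{k-1}, p_k], [q_{k-1}, q_k]]   (mod m).
from fractions import Fraction

m = 7
alpha = Fraction(1900087, 2718281)   # an arbitrary rational stand-in for alpha

a, x = [], alpha
while x != 0:                        # Gauss map; terminates for rational alpha
    x = 1 / x
    a.append(int(x))
    x -= int(x)

p_prev, p = 1, 0                     # p_{-1} = 1, p_0 = 0
q_prev, q = 0, 1                     # q_{-1} = 0, q_0 = 1
P = ((1, 0), (0, 1))                 # P_0 = identity
for ak in a:
    p_prev, p = p, ak * p + p_prev
    q_prev, q = q, ak * q + q_prev
    # right-multiply P by [[0,1],[1,a_k]] and reduce mod m
    P = ((P[0][1] % m, (P[0][0] + ak * P[0][1]) % m),
         (P[1][1] % m, (P[1][0] + ak * P[1][1]) % m))
    assert P == ((p_prev % m, p % m), (q_prev % m, q % m))

print("checked the identity for all k up to", len(a))
```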
We start with two preparatory lemmas.
**Lemma 6**.: _For any \(A,A^{\prime}\in\mathcal{A}_{1}^{k}\) and \(B\in\mathcal{A}_{k+1}^{\infty}\) such that \(\lambda(A),\lambda(A^{\prime}),\lambda(B)>0\), we have_
\[\frac{1}{2}\leq\frac{\lambda(B\mid A)}{\lambda(B\mid A^{\prime})}\leq 2.\]
_In particular, \(\frac{1}{2}\lambda(A)\lambda(B)\leq\lambda(A\cap B)\leq 2\lambda(A)\lambda(B)\)._
**Proof.** Let \(b_{1},b_{2},\ldots,b_{k+n}\in\mathbb{N}\) be arbitrary, and let \(p_{\ell}/q_{\ell}=[0;b_{1},\ldots,b_{\ell}]\), \(\ell=k-1,k\). The set \(\{a_{1}=b_{1},a_{2}=b_{2},\ldots,a_{k+n}=b_{k+n}\}\) is an interval with endpoints \(\frac{p_{k}r+p_{k-1}}{q_{k}r+q_{k-1}}\) and \(\frac{p_{k}r^{*}+p_{k-1}}{q_{k}r^{*}+q_{k-1}}\), where \(r=[b_{k+1};b_{k+2},\ldots,b_{k+n}]\) and \(r^{*}=[b_{k+1};b_{k+2},\ldots,b_{k+n}+1]\). Using the identity \(q_{k}p_{k-1}-q_{k-1}p_{k}=(-1)^{k}\), the Lebesgue measure of this interval simplifies to
\[\lambda(a_{1}=b_{1},a_{2}=b_{2},\ldots,a_{k+n}=b_{k+n})=\frac{|r-r^{*}|}{(q_{k }r+q_{k-1})(q_{k}r^{*}+q_{k-1})}.\]
The set \(\{a_{1}=b_{1},a_{2}=b_{2},\ldots,a_{k}=b_{k}\}\) is also an interval, with endpoints \(\frac{p_{k}+p_{k-1}}{q_{k}+q_{k-1}}\) and \(\frac{p_{k}}{q_{k}}\), and has Lebesgue measure
\[\lambda(a_{1}=b_{1},a_{2}=b_{2},\ldots,a_{k}=b_{k})=\frac{1}{q_{k}(q_{k}+q_{k -1})}.\]
Hence
\[\lambda(a_{k+1}=b_{k+1},\ldots,a_{k+n}=b_{k+n}\mid a_{1}=b_{1},\ldots,a_{k}=b_ {k})=|r-r^{*}|\frac{1+\frac{q_{k-1}}{q_{k}}}{\left(r+\frac{q_{k-1}}{q_{k}} \right)\left(r^{*}+\frac{q_{k-1}}{q_{k}}\right)}.\]
Here \(r,r^{*}\) do not depend on \(b_{1},\ldots,b_{k}\). Therefore for any \(b_{1}^{\prime},\ldots,b_{k}^{\prime}\in\mathbb{N}\), we have
\[\frac{\lambda(a_{k+1}=b_{k+1},\ldots,a_{k+n}=b_{k+n}\mid a_{1}=b_{1},\ldots,a _{k}=b_{k})}{\lambda(a_{k+1}=b_{k+1},\ldots,a_{k+n}=b_{k+n}\mid a_{1}=b_{1}^{ \prime},\ldots,a_{k}=b_{k}^{\prime})}=\frac{\frac{1+\frac{q_{k-1}}{q_{k}}}{ \left(r+\frac{q_{k-1}}{q_{k}}\right)\left(r^{*}+\frac{q_{k-1}}{q_{k}}\right)} }{\frac{1+\frac{q_{k-1}^{\prime}}{q_{k}^{\prime}}}{\left(r+\frac{q_{k-1}^{ \prime}}{q_{k}^{\prime}}\right)\left(r^{*}+\frac{q_{k-1}^{\prime}}{q_{k}^{ \prime}}\right)}},\]
where \(p_{\ell}^{\prime}/q_{\ell}^{\prime}=[0;b_{1}^{\prime},\ldots,b_{\ell}^{\prime}]\), \(\ell=k-1,k\).
We claim that the right hand side of the previous formula lies in the interval \([1/2,2]\). Letting
\[g(x)=\frac{1+x}{\left(r+x\right)\left(r^{*}+x\right)},\qquad x\in[0,1],\]
it will be enough to show that \(g(x)/g(y)\leq 2\) for all \(x,y\in[0,1]\). Note that \(r,r^{*}\geq 1\), and that \(g^{\prime}(x)\geq 0\Leftrightarrow-1-\sqrt{(r-1)(r^{*}-1)}\leq x\leq-1+\sqrt{(r-1) (r^{*}-1)}\).
**Case 1.** Assume that \((r-1)(r^{*}-1)\leq 1\). Then \(g(x)\) is decreasing on \([0,1]\), hence
\[\frac{g(x)}{g(y)}\leq\frac{g(0)}{g(1)}=\frac{(r+1)(r^{*}+1)}{2rr^{*}}\leq 2.\]
**Case 2.** Assume that \(1<(r-1)(r^{*}-1)\leq 2\). Then \(g(x)\) attains its minimum at \(x=1\) and its maximum at \(x=-1+\sqrt{(r-1)(r^{*}-1)}\). Hence
\[\frac{g(x)}{g(y)}\leq\frac{g(-1+\sqrt{(r-1)(r^{*}-1)})}{g(1)}=\frac{(r+1)(r^{*} +1)}{2\left(\sqrt{r-1}+\sqrt{r^{*}-1}\right)^{2}}\leq\frac{2+2(r-1)+2(r^{*}-1)+4 }{2(r-1+r^{*}-1+2)}\leq 2.\]
**Case 3.** Assume that \(2<(r-1)(r^{*}-1)\leq 4\). Then \(g(x)\) attains its minimum at \(x=0\) and its maximum at \(x=-1+\sqrt{(r-1)(r^{*}-1)}\). Hence
\[\frac{g(x)}{g(y)}\leq\frac{g(-1+\sqrt{(r-1)(r^{*}-1)})}{g(0)}=\frac{rr^{*}}{ \left(\sqrt{r-1}+\sqrt{r^{*}-1}\right)^{2}}\leq\frac{4+r-1+r^{*}-1+1}{r-1+r^{*} -1+2\sqrt{2}}\leq 2.\]
**Case 4.** Assume that \((r-1)(r^{*}-1)>4\). Then \(g(x)\) is increasing on \([0,1]\), hence
\[\frac{g(x)}{g(y)}\leq\frac{g(1)}{g(0)}=\frac{2rr^{*}}{(r+1)(r^{*}+1)}\leq 2.\]
This finishes the proof of \(g(x)/g(y)\leq 2\) for all \(x,y\in[0,1]\). In particular,
\[\frac{1}{2}\leq\frac{\lambda(a_{k+1}=b_{k+1},\ldots,a_{k+n}=b_{k+n}\mid a_{1}= b_{1},\ldots,a_{k}=b_{k})}{\lambda(a_{k+1}=b_{k+1},\ldots,a_{k+n}=b_{k+n} \mid a_{1}=b_{1}^{\prime},\ldots,a_{k}=b_{k}^{\prime})}\leq 2.\]
By \(\sigma\)-additivity, for all \(A,A^{\prime}\in\mathcal{A}_{1}^{k}\) and all \(B\in\mathcal{A}_{k+1}^{k+n}\) of positive Lebesgue measure,
\[\frac{1}{2}\leq\frac{\lambda(B\mid A)}{\lambda(B\mid A^{\prime})}\leq 2.\]
Since \(\mathcal{A}_{k+1}^{k+n}\), \(n\in\mathbb{N}\) generate \(\mathcal{A}_{k+1}^{\infty}\), the same holds for all \(B\in\mathcal{A}_{k+1}^{\infty}\) of positive Lebesgue measure.
**Lemma 7**.:
1. _For any_ \(0\leq n\leq 3\) _and_ \(g\in G_{(-1)^{n}}\)_, we have either_ \(\lambda(P_{n}=g)=0\) _or_ \(\lambda(P_{n}=g)\geq 1/(m+1)^{6}\)_._
2. _For any_ \(g\in G_{1}\)_, we have_ \[\lambda(P_{4}=g)\geq\frac{\pi^{6}}{216}\cdot\frac{\varphi(m^{\prime})}{(m+1)^ {6}(m^{\prime}+1)(m^{\prime}-\varphi(m^{\prime})+1)}.\]
3. _For any_ \(n\geq 5\) _and_ \(g\in G_{(-1)^{n}}\)_, we have_ \[\lambda(P_{n}=g)\geq\frac{\pi^{6}}{432}\cdot\frac{\varphi(m^{\prime})}{(m+1)^ {6}(m^{\prime}+1)(m^{\prime}-\varphi(m^{\prime})+1)}.\]
**Proof.****(i)** Let \(0\leq n\leq 3\). Either \(\lambda(P_{n}=g)=0\), or there exist positive integers \(b_{1},b_{2},b_{3}\leq m\) such that
\[\lambda(P_{n}=g)\geq\lambda(a_{1}=b_{1},a_{2}=b_{2},a_{3}=b_{3})\geq\frac{1}{ (m+1)^{6}}.\]
**(ii)** Let \(g\in G_{1}\), and consider the equation
\[\prod_{i=1}^{4}\left(\begin{array}{cc}0&1\\ 1&b_{i}\end{array}\right)\pmod{m}=g \tag{13}\]
in the variables \(b_{1},b_{2},b_{3},b_{4}\in\mathbb{N}\). We have
\[\lambda(P_{4}=g)=\sum_{b_{1},b_{2},b_{3},b_{4}}\lambda(a_{1}=b_{1},a_{2}=b_{2},a_{3}=b_{3},a_{4}=b_{4})=\sum_{b_{1},b_{2},b_{3},b_{4}}\frac{1}{q_{4}(q_{4}+q _{3})},\]
where the sums are over the set of solutions of the equation (13), and \(q_{4}\) resp. \(q_{3}\) is the denominator of \([0;b_{1},b_{2},b_{3},b_{4}]\) resp. \([0;b_{1},b_{2},b_{3}]\). Let \(\ell_{1},\ell_{2},\ell_{3}\in\mathbb{N}\). Lemma 5 implies that there exist at least \(\varphi(m^{\prime})\) integers \(1\leq b_{4}\leq m^{\prime}\) for which the equation (13) has an integer solution \((\ell_{i}-1)m<b_{i}\leq\ell_{i}m\)
\(i=1,2,3\). One readily checks that for all such solutions of (13), we have \(q_{3}\leq\ell_{1}\ell_{2}\ell_{3}(m+1)^{3}\) and \(q_{4}\leq\ell_{1}\ell_{2}\ell_{3}(m+1)^{3}b_{4}\). Hence
\[\lambda(P_{4}=g) \geq\sum_{b_{4}}\sum_{\ell_{1},\ell_{2},\ell_{3}=1}^{\infty} \frac{1}{\ell_{1}^{2}\ell_{2}^{2}\ell_{3}^{2}(m+1)^{6}b_{4}(b_{4}+1)}\geq\left( \frac{\pi^{2}}{6}\right)^{3}\frac{1}{(m+1)^{6}}\sum_{j=m^{\prime}-\varphi(m^{ \prime})+1}^{m^{\prime}}\frac{1}{j(j+1)}\] \[=\frac{\pi^{6}}{216}\cdot\frac{\varphi(m^{\prime})}{(m+1)^{6}(m^ {\prime}+1)(m^{\prime}-\varphi(m^{\prime})+1)}.\]
**(iii)** Let \(n\geq 5\) and \(g\in G_{(-1)^{n}}\). Lemma 6 and part (ii) show that
\[\lambda(P_{n}=g) =\sum_{h\in G_{(-1)^{n}}}\lambda(P_{4}=gh^{-1},P_{5,n}=h)\geq \sum_{h\in G_{(-1)^{n}}}\frac{1}{2}\lambda(P_{4}=gh^{-1})\lambda(P_{5,n}=h)\] \[\geq\frac{\pi^{6}}{432}\cdot\frac{\varphi(m^{\prime})}{(m+1)^{6} (m^{\prime}+1)(m^{\prime}-\varphi(m^{\prime})+1)}.\]
### A Gauss-Kuzmin-Levy theorem
In this section, we prove a version of the Gauss-Kuzmin-Levy theorem, which will serve as the main tool of this paper. We refer to [14, Chapter 2] for a comprehensive account of the Gauss-Kuzmin problem. Theorem 8 below generalizes results of Kesten [17, Lemma 2.5] and Szusz [26]. They both considered the pair \((q_{k-1},q_{k})\pmod{m}\) and \(\alpha\sim\lambda\), whereas we work with the matrix \(P_{k}\), and allow the distribution of \(\alpha\) to have a Lipschitz density.
**Theorem 8**.: _Let \(\mu\ll\lambda\), and assume (5). For any \(A\in\mathcal{A}_{1}^{k}\), any \(g\in G_{(-1)^{n}}\) and any \(B\in\mathcal{A}_{k+n+1}^{\infty}\),_
\[\left|\mu\left(A\cap\{P_{k+1,k+n}=g\}\cap B\right)-\frac{\mu(A)\mu_{\rm Gauss} (B)}{|G_{1}|}\right|\leq C\lambda(A)\lambda(B)e^{-\tau n},\]
_where, with \(m^{\prime}=\prod_{p\mid m}p\) denoting the greatest square-free divisor of \(m\),_
\[C=4L+3\quad\text{and}\quad\tau=\frac{\varphi(m^{\prime})}{12(m+1)^{6}(m^{ \prime}+1)(m^{\prime}-\varphi(m^{\prime})+1)}\geq\frac{1}{12(m+1)^{8}}. \tag{14}\]
**Proof.** Throughout the proof we fix \(k\geq 0\) and a set of the form \(A=\{a_{1}=b_{1},\ldots,a_{k}=b_{k}\}\), with the convention \(A=[0,1]\) if \(k=0\). It will be enough to prove the theorem for this set \(A\), as the claim for a general \(A\in\mathcal{A}_{1}^{k}\) then follows by \(\sigma\)-additivity.
Given a Lipschitz function \(F:[0,1]\to\mathbb{R}\), let
\[\|F\|_{\rm Lip}=\sup_{\begin{subarray}{c}x,y\in[0,1]\\ x\neq y\end{subarray}}\frac{|F(x)-F(y)|}{|x-y|}=\operatorname*{ess\,sup}_{x \in[0,1]}|F^{\prime}(x)|\]
denote the Lipschitz constant. We implicitly use the fact that Lipschitz functions are a.e. differentiable, and satisfy the fundamental theorem of calculus.
For any \(n\geq 0\) and \(g\in G_{(-1)^{n}}\), the measure \(B\mapsto\mu(A\cap\{P_{k+1,k+n}=g\}\cap T^{-(k+n)}B)\), \(B\in\mathcal{B}\) is absolutely continuous. Let \(f_{n,g}\) denote its density with respect to \(\lambda\). It will be enough to prove that
\[\sup_{\begin{subarray}{c}g\in G_{(-1)^{n}}\\ x\in[0,1]\end{subarray}}\left|f_{n,g}(x)-\frac{\mu(A)}{|G_{1}|(\log 2)(1+x)} \right|\leq C\lambda(A)e^{-\tau n}. \tag{15}\]
Let \(F_{n,g}(x)=(\log 2)(1+x)f_{n,g}(x)\), \(x\in[0,1]\). We start with the case \(n=0\).
**Lemma 9**.: _For any \(g\in G_{1}\), the functions \(f_{0,g}\) and \(F_{0,g}\) are Lipschitz, and we have_
\[\|F_{0,g}\|_{\mathrm{Lip}}\leq(\log 2)(3L+2)\lambda(A)\quad\text{and}\quad 0 \leq F_{0,g}(x)\leq(\log 2)(L+2)\lambda(A).\]
**Proof.** Assumption (5) implies that for all \(x\in[0,1]\),
\[\left|\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}(x)-1\right|=\left|\int_{0}^{1} \left(\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}(x)-\frac{\mathrm{d}\mu}{ \mathrm{d}\lambda}(y)\right)\,\mathrm{d}y\right|\leq\int_{0}^{1}L|x-y|\, \mathrm{d}y\leq\frac{L}{2}.\]
In particular, \(\max_{[0,1]}\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}\leq\frac{L}{2}+1\).
By construction,
\[\int_{0}^{x}f_{0,g}(t)\,\mathrm{d}t=\left\{\begin{array}{ll}\mu(a_{1}=b_{1}, \ldots,a_{k}=b_{k},[0;a_{k+1},a_{k+2},\ldots]\leq x)&\text{if $g=1\in G_{1}$},\\ 0&\text{if $g\neq 1\in G_{1}$}.\end{array}\right.\]
The set \(\{a_{1}=b_{1},\ldots,a_{k}=b_{k},[0;a_{k+1},a_{k+2},\ldots]\leq x\}\) is an interval with endpoints \(p_{k}/q_{k}\) and \((p_{k}+p_{k-1}x)/(q_{k}+q_{k-1}x)\). Differentiating the previous formula with respect to \(x\) thus gives
\[f_{0,g}(x)=\left\{\begin{array}{ll}\frac{\mathrm{d}\mu}{\mathrm{d}\lambda} \left(\frac{p_{k}+p_{k-1}x}{q_{k}+q_{k-1}x}\right)\frac{1}{(q_{k}+q_{k-1}x)^{ 2}}&\text{if $g=1\in G_{1}$},\\ 0&\text{if $g\neq 1\in G_{1}$}.\end{array}\right.\]
Strictly speaking, the density function \(f_{0,g}\) is only defined up to a.e. equivalence, and Lebesgue's differentiation theorem yields the previous formula for a.e. \(x\). However, the right-hand side is a Lipschitz function by assumption, thus \(f_{0,g}\) can be chosen to be Lipschitz. The previous formula thus holds for all \(x\in[0,1]\), \(F_{0,g}\) is Lipschitz, and for a.e. \(x\) we have
\[|F_{0,g}^{\prime}(x)| =(\log 2)\left|\left(\frac{\mathrm{d}\mu}{\mathrm{d}\lambda} \right)^{\prime}\left(\frac{p_{k}+p_{k-1}x}{q_{k}+q_{k-1}x}\right)\frac{(-1)^ {k}(1+x)}{(q_{k}+q_{k-1}x)^{4}}+\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}\left( \frac{p_{k}+p_{k-1}x}{q_{k}+q_{k-1}x}\right)\frac{q_{k}-(2+x)q_{k-1}}{(q_{k}+q _{k-1}x)^{3}}\right|\] \[\leq\frac{\log 2}{(q_{k}+q_{k-1}x)^{2}}\left(L\frac{1+x}{q_{k}+q_{ k-1}x}+\left(\frac{L}{2}+1\right)\frac{|q_{k}-(2+x)q_{k-1}|}{q_{k}+q_{k-1}x}\right)\] \[\leq\frac{\log 2}{q_{k}^{2}}\left(L\frac{2}{q_{k}+q_{k-1}}+\frac{L }{2}+1\right)\leq\frac{(\log 2)(3L+2)}{q_{k}(q_{k}+q_{k-1})}=(\log 2)(3L+2) \lambda(A).\]
In particular, \(\|F_{0,g}\|_{\mathrm{Lip}}\leq(\log 2)(3L+2)\lambda(A)\), as claimed. Further,
\[0\leq F_{0,g}(x)\leq\max_{[0,1]}\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}\cdot \frac{2\log 2}{q_{k}(q_{k}+q_{k-1})}\leq(\log 2)(L+2)\lambda(A).\]
This finishes the proof of Lemma 9.
We now prove that \(f_{n,g}\) can be chosen to be Lipschitz by induction on \(n\), the base case \(n=0\) having been established in Lemma 9. Let \(n\geq 1\) and \(b\in\mathbb{N}\), and note that (ignoring endpoints)
\[[0;a_{k+n+1},a_{k+n+2},\ldots]\leq x\text{ and }a_{k+n}=b\iff\left[0;a_{k+n},a_{ k+n+1},\ldots\right]\in\left[\frac{1}{b+x},\frac{1}{b}\right].\]
Using the partition \(\{a_{k+n}=b\}\), \(b\in\mathbb{N}\) thus leads to
\[\mu(A\cap\{P_{k+1,k+n}=g\}\cap T^{-(k+n)}[0,x])=\sum_{b=1}^{\infty}\mu\left(A \cap\{P_{k+1,k+n-1}=gh(b)^{-1}\}\cap T^{-(k+n-1)}\left[\frac{1}{b+x},\frac{1}{ b}\right]\right),\]
where \(h(b)=\left(\begin{array}{cc}0&1\\ 1&b\end{array}\right)\ (\mathrm{mod}\ m)\). Equivalently,
\[\int_{0}^{x}f_{n,g}(t)\,\mathrm{d}t=\sum_{b=1}^{\infty}\int_{\frac{1}{b+x}}^{ \frac{1}{b}}f_{n-1,gh(b)^{-1}}(t)\,\mathrm{d}t.\]
One readily checks that the series on the right-hand side can be differentiated term by term using the inductive hypothesis that \(f_{n-1,gh(b)^{-1}}\) is Lipschitz, and we obtain
\[f_{n,g}(x)=\sum_{b=1}^{\infty}f_{n-1,gh(b)^{-1}}\left(\frac{1}{b+x}\right)\frac{1 }{(b+x)^{2}}.\]
The right-hand side is easily seen to be Lipschitz, hence \(f_{n,g}\) can be chosen to be Lipschitz, and the previous formula holds for all \(x\in[0,1]\). This finishes the induction. The recursion above should be compared to the Perron-Frobenius operator of the Gauss map [14, Chapter 2].
The recursion can be written in terms of \(F_{n,g}\) as
\[F_{n,g}(x)=\sum_{b=1}^{\infty}F_{n-1,gh(b)^{-1}}\left(\frac{1}{b+x}\right) \frac{1+x}{(b+x)(b+1+x)}\qquad\text{for all }x\in[0,1]. \tag{16}\]
Taking the derivative leads to
\[\begin{split} F_{n,g}^{\prime}(x)=\sum_{b=1}^{\infty}\left(F_{n -1,gh(b)^{-1}}^{\prime}\left(\frac{1}{b+x}\right)\frac{-(1+x)}{(b+x)^{3}(b+1+ x)}\right.\\ \left.\qquad\qquad+F_{n-1,gh(b)^{-1}}\left(\frac{1}{b+x}\right) \frac{b(b-1)-(1+x)^{2}}{(b+x)^{2}(b+1+x)^{2}}\right)\qquad\text{for a.e. }x\in[0,1].\end{split} \tag{17}\]
For comparison, note the identities
\[\sum_{b=1}^{\infty}\frac{1+x}{(b+x)(b+1+x)}=1\qquad\text{and}\qquad\sum_{b=1} ^{\infty}\frac{b(b-1)-(1+x)^{2}}{(b+x)^{2}(b+1+x)^{2}}=0. \tag{18}\]
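The two identities in (18) are elementary: the first series telescopes, and the second is obtained from the first by differentiating in \(x\), hence vanishes. They can also be checked numerically, truncating the series at an arbitrary cutoff.

```python
# Quick numerical check of the two identities in (18), truncating at N terms.
N = 100000

def s1(x):
    return sum((1 + x) / ((b + x) * (b + 1 + x)) for b in range(1, N))

def s2(x):
    return sum((b * (b - 1) - (1 + x) ** 2) / ((b + x) ** 2 * (b + 1 + x) ** 2)
               for b in range(1, N))

for x in (0.0, 0.3, 0.7, 1.0):
    print(x, s1(x), s2(x))   # approximately 1 and 0, up to truncation error
```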
Define \(L_{n}=\max_{g\in G_{(-1)^{n}}}\|F_{n,g}\|_{\text{Lip}}\), and
\[W_{n}^{-}=\min_{\begin{subarray}{c}g\in G_{(-1)^{n}}\\ x\in[0,1]\end{subarray}}F_{n,g}(x),\qquad W_{n}^{+}=\max_{\begin{subarray}{c }g\in G_{(-1)^{n}}\\ x\in[0,1]\end{subarray}}F_{n,g}(x),\qquad\delta_{n}=W_{n}^{+}-W_{n}^{-}.\]
The recursion (16) and the first identity in (18) immediately show that \(W_{n-1}^{-}\leq W_{n}^{-}\) and \(W_{n}^{+}\leq W_{n-1}^{+}\), hence \(\delta_{n}\leq\delta_{n-1}\).
**Lemma 10**.: _We have \(L_{n}\leq(1-\zeta(2)+\zeta(3))L_{n-1}+(1/4)\delta_{n-1}\), where \(\zeta\) is the Riemann zeta function. The sum of the coefficients \(1-\zeta(2)+\zeta(3)=0.5571\ldots\) and \(1/4\) is less than \(1\)._
**Proof.** The recursion (17) and the second identity in (18) show that
\[|F_{n,g}^{\prime}(x)|\leq L_{n-1}\sum_{b=1}^{\infty}\frac{1+x}{(b+x)^{3}(b+1+ x)}+\sum_{b=1}^{\infty}\left|F_{n-1,gh(b)^{-1}}\left(\frac{1}{b+x}\right)-c \right|\frac{|b(b-1)-(1+x)^{2}|}{(b+x)^{2}(b+1+x)^{2}}\]
with any \(c=c(n,g,x)\) which does not depend on \(b\). Choosing \(c=F_{n-1,gh(1)^{-1}}(1/(1+x))\), the \(b=1\) term cancels, and by the definition of \(\delta_{n-1}\) we obtain
\[|F_{n,g}^{\prime}(x)|\leq L_{n-1}\sum_{b=1}^{\infty}\frac{1+x}{(b+x)^{3}(b+1+ x)}+\delta_{n-1}\sum_{b=2}^{\infty}\frac{|b(b-1)-(1+x)^{2}|}{(b+x)^{2}(b+1+x)^{2}}.\]
The derivative of the first series satisfies
\[\left(\sum_{b=1}^{\infty}\frac{1+x}{(b+x)^{3}(b+1+x)}\right)^{\prime} =-\frac{3x+5}{(1+x)^{3}(2+x)^{2}}+\sum_{b=2}^{\infty}\frac{b^{2}-3 b-3-(2b+6)x-3x^{2}}{(b+x)^{4}(b+1+x)^{2}}\] \[\leq-\frac{1}{9}+\sum_{b=4}^{\infty}\frac{b^{2}-3b-3}{b^{4}(b+1)^ {2}}=-0.1098\ldots<0.\]
Therefore the maximum is attained at \(x=0\), and
\[\sum_{b=1}^{\infty}\frac{1+x}{(b+x)^{3}(b+1+x)}\leq\sum_{b=1}^{\infty}\frac{1}{b^ {3}(b+1)}=\sum_{b=1}^{\infty}\left(\frac{1}{b(b+1)}-\frac{1}{b^{2}}+\frac{1}{b^{ 3}}\right)=1-\zeta(2)+\zeta(3).\]
Considering \(b=2\) and \(b\geq 3\) separately, we check that each term in the second series attains its maximum at \(x=0\) as well, hence
\[\sum_{b=2}^{\infty}\frac{|b(b-1)-(1+x)^{2}|}{(b+x)^{2}(b+1+x)^{2}}\leq\sum_{b=2 }^{\infty}\frac{b^{2}-b-1}{b^{2}(b+1)^{2}}=\sum_{b=2}^{\infty}\left(\frac{1}{b (b+1)}-\frac{1}{b^{2}}+\frac{1}{(b+1)^{2}}\right)=\frac{1}{4}.\]
This finishes the proof of Lemma 10.
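The two series evaluations above, and in particular the numerical value of \(1-\zeta(2)+\zeta(3)\), can be double-checked by direct summation; the truncation point in the following snippet is arbitrary.

```python
# Numerical check of the two series evaluations used in the proof of Lemma 10.
import math

N = 100000
s1 = sum(1.0 / (b ** 3 * (b + 1)) for b in range(1, N))
s2 = sum((b * b - b - 1.0) / (b * b * (b + 1) ** 2) for b in range(2, N))

zeta2 = math.pi ** 2 / 6
zeta3 = sum(1.0 / b ** 3 for b in range(1, N))
print(s1, 1 - zeta2 + zeta3)   # both approximately 0.55712
print(s2, 0.25)                # both approximately 1/4
```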
For the sake of readability, let \(\kappa_{m}=\varphi(m^{\prime})/((m+1)^{6}(m^{\prime}+1)(m^{\prime}-\varphi(m ^{\prime})+1))\).
**Lemma 11**.: _We have_
\[\delta_{n}\leq\left(1-\frac{\pi^{6}}{432}\kappa_{m}\right)\delta_{n-4}+\frac{ \pi^{6}}{432}\kappa_{m}L_{n-4}.\]
**Proof.** If \(L_{n-4}\geq\delta_{n-4}\), then the desired upper bound is greater than or equal to \(\delta_{n-4}\), and the claim follows from the fact that \(\delta_{n}\) is nonincreasing. We may thus assume that \(L_{n-4}<\delta_{n-4}\).
Let \(g\in G_{1}\) and \(B\in\mathcal{B}\) be arbitrary. The partition \(\{P_{k+n-3,k+n}=h\}\), \(h\in G_{1}\) leads to
\[\mu(A\cap\{P_{k+1,k+n}=g\}\cap T^{-(k+n)}B)=\sum_{h\in G_{1}}\mu(A\cap\{P_{k+1,k+n-4}=gh^{-1}\}\cap T^{-(k+n-4)}(\{P_{4}=h\}\cap T^{-4}B)).\]
By the definition of \(f_{n,g}\) and \(F_{n,g}\), this is equivalent to
\[\int_{B}F_{n,g}(x)\,\mathrm{d}\mu_{\mathrm{Gauss}}(x)=\sum_{h\in G_{1}}\int_{ \{P_{4}=h\}\cap T^{-4}B}F_{n-4,gh^{-1}}(x)\,\mathrm{d}\mu_{\mathrm{Gauss}}(x).\]
Fix a pair \((g_{0},x_{0})\in G_{1}\times[0,1]\) at which the minimum \(F_{n-4,g_{0}}(x_{0})=W_{n-4}^{-}\) is attained. Using the bound \(F_{n-4,gh^{-1}}(x)\leq W_{n-4}^{+}\) for all \(h\neq g_{0}^{-1}g\) yields
\[\int_{B}F_{n,g}(x)\,\mathrm{d}\mu_{\mathrm{Gauss}}(x)\leq W_{n-4}^{+}\mu_{ \mathrm{Gauss}}(B)+\int_{\{P_{4}=g_{0}^{-1}g\}\cap T^{-4}B}\left(F_{n-4,g_{0}}( x)-W_{n-4}^{+}\right)\,\mathrm{d}\mu_{\mathrm{Gauss}}(x).\]
Here \(F_{n-4,g_{0}}(x)\leq F_{n-4,g_{0}}(x_{0})+L_{n-4}|x-x_{0}|\leq W_{n-4}^{-}+L_ {n-4}\), thus
\[\int_{B}F_{n,g}(x)\,\mathrm{d}\mu_{\mathrm{Gauss}}(x)\leq W_{n-4}^{+}\mu_{ \mathrm{Gauss}}(B)+(L_{n-4}-\delta_{n-4})\mu_{\mathrm{Gauss}}(\{P_{4}=g_{0}^{ -1}g\}\cap T^{-4}B).\]
Note that \(L_{n-4}-\delta_{n-4}<0\) by assumption. Lemmas 6 and 7 show that
\[\lambda(\{P_{4}=g_{0}^{-1}g\}\cap T^{-4}B)\geq\frac{1}{2}\lambda(P_{4}=g_{0}^{ -1}g)\lambda(T^{-4}B)\geq\frac{\pi^{6}}{432}\kappa_{m}\lambda(T^{-4}B),\]
and as the density of \(\mu_{\mathrm{Gauss}}\) lies between \(1/(2\log 2)\) and \(1/\log 2\),
\[\mu_{\mathrm{Gauss}}(\{P_{4}=g_{0}^{-1}g\}\cap T^{-4}B)\geq\frac{\pi^{6}}{864 }\kappa_{m}\mu_{\mathrm{Gauss}}(B).\]
Therefore
\[\int_{B}F_{n,g}(x)\,\mathrm{d}\mu_{\mathrm{Gauss}}(x)\leq W_{n-4}^{+}\mu_{ \mathrm{Gauss}}(B)+(L_{n-4}-\delta_{n-4})\frac{\pi^{6}}{864}\kappa_{m}\mu_{ \mathrm{Gauss}}(B).\]
A similar proof shows the lower bound
\[\int_{B}F_{n,g}(x)\,\mathrm{d}\mu_{\mathrm{Gauss}}(x)\geq W_{n-4}^{-}\mu_{ \mathrm{Gauss}}(B)-(L_{n-4}-\delta_{n-4})\frac{\pi^{6}}{864}\kappa_{m}\mu_{ \mathrm{Gauss}}(B).\]
As the previous two formulas hold for all \(B\in\mathcal{B}\), we have
\[W_{n-4}^{-}-(L_{n-4}-\delta_{n-4})\frac{\pi^{6}}{864}\kappa_{m}\leq F_{n,g}(x) \leq W_{n-4}^{+}+(L_{n-4}-\delta_{n-4})\frac{\pi^{6}}{864}\kappa_{m},\]
hence
\[\delta_{n}\leq\delta_{n-4}+(L_{n-4}-\delta_{n-4})\frac{\pi^{6}}{432}\kappa_{m}.\]
This finishes the proof of Lemma 11.
Let \(z=1-\zeta(2)+\zeta(3)\). Iterating Lemma 10 five times and using the fact that \(\delta_{n}\) is nonincreasing shows that
\[L_{n}\leq z^{5}L_{n-5}+(z^{4}+z^{3}+z^{2}+z+1)\frac{1}{4}\delta_{n-5},\]
where the sum of the coefficients is \(z^{5}+(z^{4}+z^{3}+z^{2}+z+1)/4=0.5878\ldots\). Lemmas 11 and 10 yield
\[\delta_{n}\leq\left(1-\frac{\pi^{6}}{432}\kappa_{m}\right)\delta_{n-5}+\frac{ \pi^{6}}{432}\kappa_{m}\left(zL_{n-5}+\frac{1}{4}\delta_{n-5}\right),\]
where the sum of the coefficients is
\[1-\frac{\pi^{6}}{432}\kappa_{m}+\frac{\pi^{6}}{432}\kappa_{m}\left(z+\frac{1}{ 4}\right)=1-0.08584\ldots\cdot(5\kappa_{m})>0.5879.\]
We thus have the recursive upper bound \(\max\{L_{n},\delta_{n}\}\leq(1-0.08584\cdot(5\kappa_{m}))\max\{L_{n-5},\delta_{n-5}\}\). Lemma 9 implies that \(\max\{L_{i},\delta_{i}\}\leq(\log 2)(3L+2)\lambda(A)\) for \(i=0\), and four applications of Lemma 10 show that the same holds for \(i=1,2,3,4\). Iterating the recursive upper bound \(\lfloor n/5\rfloor\) times thus leads to
\[\max\{L_{n},\delta_{n}\}\leq(1-0.08584\cdot(5\kappa_{m}))^{\lfloor n/5\rfloor}\,(\log 2)(3L+2)\lambda(A)\leq C_{0}\lambda(A)e^{-0.08584\kappa_{m}n}\]
with \(C_{0}=e^{0.08584\cdot(5\kappa_{m})}(\log 2)(3L+2)\). By the definition of \(\delta_{n}\), this means that \(F_{n,g}(x)\) lies in a given interval of length \(C_{0}\lambda(A)e^{-0.08584\kappa_{m}n}\) for all \(g\in G_{(-1)^{n}}\) and \(x\in[0,1]\). Then the average value
\[\frac{1}{|G_{1}|}\sum_{g\in G_{(-1)^{n}}}\int_{0}^{1}F_{n,g}(x)\,\mathrm{d}\mu _{\mathrm{Gauss}}(x)=\frac{\mu(A)}{|G_{1}|}\]
lies in the same interval, hence \(|F_{n,g}(x)-\mu(A)/|G_{1}||\leq C_{0}\lambda(A)e^{-0.08584\kappa_{m}n}\). Formula (15) follows with
\[C=\frac{C_{0}}{\log 2}\leq 4L+3\quad\text{and}\quad\tau=0.08584\kappa_{m}> \frac{\kappa_{m}}{12}.\]
This finishes the proof of Theorem 8.
We now show that the limit relation in Theorem 8 without the exponential rate remains true for an arbitrary absolutely continuous measure, without assuming that the density is Lipschitz.
**Corollary 12**.: _Let \(\mu\ll\lambda\). Then_
\[\lim_{n\to\infty}\sup_{k\geq 0}\sup_{\begin{subarray}{c}A\in\mathcal{A}_{1}^{ \kappa},\;B\in\mathcal{A}_{k+n+1}^{\infty}\\ g\in G_{(-1)^{n}}\end{subarray}}\left|\mu\left(A\cap\{P_{k+1,k+n}=g\}\cap B \right)-\frac{\mu(A)\mu_{\mathrm{Gauss}}(B)}{|G_{1}|}\right|=0.\]
**Proof.** Let \(f(x)=\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}(x)\) denote the density function. Using a positive mollifier on the circle group \(\mathbb{R}/\mathbb{Z}\), we deduce that for any \(\varepsilon>0\) there exists a smooth function \(f_{\varepsilon}\) on \([0,1]\) such that
\[\int_{0}^{1}f_{\varepsilon}(x)\,\mathrm{d}x=1,\qquad\int_{0}^{1}|f(x)-f_{ \varepsilon}(x)|\,\mathrm{d}x<\varepsilon,\qquad\min_{x\in[0,1]}f_{\varepsilon }(x)>0.\]
Let \(\mu_{\varepsilon}\) be the Borel probability measure on \([0,1]\) with density \(f_{\varepsilon}\). In particular, \(|\mu(A)-\mu_{\varepsilon}(A)|<\varepsilon\) for all Borel sets \(A\subseteq[0,1]\). Since \(\mu_{\varepsilon}\) has a positive smooth density, the claim holds for \(\mu_{\varepsilon}\) by Theorem 8. As \(\varepsilon\) was arbitrary, the claim holds also for \(\mu\).
### Weak convergence and mixing properties of \(P_{n}\)
Theorem 8 and Corollary 12 immediately imply that \(P_{2n}\) resp. \(P_{2n-1}\) converges to the uniform distribution on \(G_{1}\) resp. \(G_{-1}\).
**Corollary 13**.: _Let \(\alpha\sim\mu\) with \(\mu\ll\lambda\). Then \(P_{2n}\stackrel{{ d}}{{\to}}\mathrm{Unif}(G_{1})\) and \(P_{2n-1}\stackrel{{ d}}{{\to}}\mathrm{Unif}(G_{-1})\). Under the assumption (5), we also have \(\max_{g\in G_{(-1)^{n}}}|\mu(P_{n}=g)-1/|G_{1}||\leq Ce^{-\tau n}\), where \(C\) and \(\tau\) are as in (14)._
By Lemma 4 we thus have \((p_{n},q_{n})\pmod{m}\stackrel{{ d}}{{\to}}\mathrm{Unif}(V)\), and the same holds for \((q_{n-1},q_{n})\) and \((p_{n-1},p_{n})\). Further, \(q_{n}\pmod{m}\stackrel{{ d}}{{\to}}\nu\) and \(p_{n}\pmod{m}\stackrel{{ d}}{{\to}}\nu\). If the density function \(\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}\) is Lipschitz, the same statements hold with exponential rate. See also [8] for the special case \(\mu=\lambda\) of Corollary 13.
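Corollary 13 is also easy to observe empirically. In the following sketch, random \(64\)-bit rationals serve as a stand-in for a Lebesgue-random \(\alpha\) (the partial quotients are read off from the Euclidean algorithm), and the empirical distribution of \(q_{n}\pmod{m}\) is compared with \(\nu\). The modulus \(m=4\), the index \(n=12\) and the sample size are arbitrary choices; in practice the convergence is far faster than the rate guaranteed by (14).

```python
# Empirical illustration of Corollary 13 (not a proof): for alpha ~ uniform,
# the residue q_n (mod m) is already close to nu-distributed for moderate n.
import random

m, n, samples = 4, 12, 10000
random.seed(0)

counts = [0] * m
used = 0
for _ in range(samples):
    num, den = random.getrandbits(64), 1 << 64   # alpha = num / 2**64
    q_prev, q = 0, 1
    k = 0
    while num and k < n:
        a, rem = divmod(den, num)   # partial quotient a_k and next remainder
        den, num = num, rem
        q_prev, q = q, a * q + q_prev
        k += 1
    if k == n:
        counts[q % m] += 1
        used += 1

# For m = 4 the limit distribution is nu = (1/6, 1/3, 1/6, 1/3).
print([round(c / used, 3) for c in counts])
```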
Theorem 8 and Corollary 12 also imply certain mixing properties of \(P_{n}\). Let us first recall two classical ways of quantifying mixing, see [6] for more context. Let \(X_{n}\), \(n\in\mathbb{N}\) be a sequence of random variables on a probability space \((\Omega,\mathcal{F},P)\) taking values from a measurable space. Let \(\mathcal{F}_{k}^{\ell}\) denote the \(\sigma\)-algebra generated by \(X_{i}\), \(k\leq i\leq\ell\), and similarly let \(\mathcal{F}_{k}^{\infty}\) be the \(\sigma\)-algebra generated by \(X_{i}\), \(i\geq k\). The \(\alpha\)-mixing (or strong mixing) coefficients of the sequence \(X_{n}\) are defined as2
Footnote 2: The term \(\alpha\)-mixing and the notation \(\alpha(\ell)\) are unrelated to the random real number \(\alpha\in[0,1]\).
\[\alpha(\ell)=\sup_{k\in\mathbb{N}}\sup_{\begin{subarray}{c}A\in\mathcal{F}_{k }^{k}\\ B\in\mathcal{F}_{k+\ell}^{\infty}\end{subarray}}\left|P(A\cap B)-P(A)P(B) \right|,\qquad\ell\in\mathbb{N},\]
whereas the \(\psi\)-mixing coefficients are
\[\psi(\ell)=\sup_{k\in\mathbb{N}}\sup_{\begin{subarray}{c}A\in\mathcal{F}_{k}^ {k},\ P(A)>0\\ B\in\mathcal{F}_{k+\ell}^{\infty},\ P(B)>0\end{subarray}}\left|\frac{P(A\cap B )}{P(A)P(B)}-1\right|,\qquad\ell\in\mathbb{N}.\]
**Lemma 14**.: _Let \(\alpha\sim\mu\) with \(\mu\ll\lambda\), and assume (5). Then the \(\alpha\)-mixing coefficients of the sequence \(X_{n}=(a_{n},P_{n})\) satisfy \(\alpha(\ell)\ll e^{-\tau\ell}\), \(\ell\in\mathbb{N}\), where \(\tau\) is as in (14), and the implied constant depends only on \(L\) and \(m\). Under the additional assumption \(\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}(x)\geq K>0\) for all \(x\in[0,1]\) with some constant \(K>0\), we also have \(\psi(\ell)\ll e^{-\tau\ell}\), \(\ell\in\mathbb{N}\), with an implied constant depending also on \(K\)._
**Remark**.: Exactly as in the proof of Corollary 12, we can prove that under the sole assumption \(\mu\ll\lambda\), we have \(\lim_{\ell\to\infty}\alpha(\ell)=0\) without an estimate for the rate.
**Proof of Lemma 14.** Fix \(k,\ell\geq 1\). Observe that \((X_{1},X_{2},\ldots,X_{k})\) is a function of \((a_{1},a_{2},\ldots,a_{k})\), and that \((X_{k+\ell},X_{k+\ell+1},\ldots)\) is a function of \((P_{k+\ell-1},a_{k+\ell},a_{k+\ell+1},\ldots)\). In particular, \(\mathcal{F}_{1}^{k}\subseteq\mathcal{A}_{1}^{k}\), and any \(B\in\mathcal{F}_{k+\ell}^{\infty}\) is of the form \(B=\cup_{g\in G_{(-1)^{k+\ell-1}}}\{P_{k+\ell-1}=g\}\cap B_{g}\) with \(B_{g}\in\mathcal{A}_{k+\ell}^{\infty}\).
Let \(A=\{a_{1}=b_{1},\ldots,a_{k}=b_{k}\}\), and let \(h=\prod_{i=1}^{k}\left(\begin{array}{cc}0&1\\ 1&b_{i}\end{array}\right)\pmod{m}\). An application of Theorem 8 shows that
\[\mu(A\cap\{P_{k+\ell-1}=g\}\cap B_{g})=\mu(A\cap\{P_{k+1,k+\ell-1}=h^{-1}g\} \cap B_{g})=\frac{\mu(A)\mu_{\mathrm{Gauss}}(B_{g})}{|G_{1}|}+O(\lambda(A) \lambda(B_{g})e^{-\tau\ell}).\]
By \(\sigma\)-additivity, the same holds with any \(A\in\mathcal{A}_{1}^{k}\). In particular,
\[\mu(\{P_{k+\ell-1}=g\}\cap B_{g})=\frac{\mu_{\text{Gauss}}(B_{g})}{|G_{1}|}+O( \lambda(B_{g})e^{-\tau\ell}).\]
The previous two formulas and the fact that \(\mu(A)\ll\lambda(A)\) yield
\[|\mu(A\cap\{P_{k+\ell-1}=g\}\cap B_{g})-\mu(A)\mu(\{P_{k+\ell-1}=g\}\cap B_{g} )|\ll\lambda(A)\lambda(B_{g})e^{-\tau\ell}.\]
It is enough to sum over those \(g\) for which \(\lambda(P_{k+\ell-1}=g)>0\), as otherwise \(\mu(P_{k+\ell-1}=g)=0\). Thus
\[|\mu(A\cap B)-\mu(A)\mu(B)|\ll\lambda(A)e^{-\tau\ell}\sum_{\begin{subarray}{c}g\in G_{(-1)^{k+\ell-1}}\\ \lambda(P_{k+\ell-1}=g)>0\end{subarray}}\lambda(B_{g}).\]
Lemmas 6 and 7 show that each term on the right-hand side satisfies
\[\lambda(B_{g})\leq 2\frac{\lambda(\{P_{k+\ell-1}=g\}\cap B_{g})}{\lambda(P_{k+ \ell-1}=g)}\ll\lambda(\{P_{k+\ell-1}=g\}\cap B_{g}).\]
Hence
\[\sum_{\begin{subarray}{c}g\in G_{(-1)^{k+\ell-1}}\\ \lambda(P_{k+\ell-1}=g)>0\end{subarray}}\lambda(B_{g})\ll\lambda(B),\]
and we obtain \(|\mu(A\cap B)-\mu(A)\mu(B)|\ll\lambda(A)\lambda(B)e^{-\tau\ell}\). In particular, \(\alpha(\ell)\ll e^{-\tau\ell}\). Under the additional assumption \(\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}(x)\geq K>0\) we have \(\lambda(A)\lambda(B)\ll\mu(A)\mu(B)\), and \(\psi(\ell)\ll e^{-\tau\ell}\) follows.
### Invariance principles for \(P_{n}\)
We now find the variance of the sum \(\sum_{n=M+1}^{M+N}f(P_{n})\), and then prove Theorem 1.
**Lemma 15**.: _Fix an integer \(m\geq 2\), and let \(f:G\to\mathbb{R}\) be arbitrary._
1. _The right-hand side of (_6_) is finite and nonnegative._
2. _Let_ \(\alpha\sim\mu\) _with_ \(\mu\ll\lambda\)_, and assume (_5_). For any integers_ \(M\geq 0\) _and_ \(N\geq 1\)_,_ \[\mathbb{E}\left(\sum_{n=M+1}^{M+N}f(P_{n})-E_{f}N\right)^{2}=\sigma_{f}^{2}N+ O(\log(N+1))\] _with an implied constant depending only on_ \(L\) _and_ \(f\)_._
**Proof.** **(i)** Let \(\alpha\sim\mu_{\text{Gauss}}\) and \(U_{\pm 1}\sim\text{Unif}(G_{\pm 1})\) be independent random variables. By Corollary 13,
\[\left|\mathbb{E}\bar{f}(U_{\pm 1})\bar{f}(U_{\pm 1}P_{n})\right|=\left|\sum_{ \begin{subarray}{c}g\in G_{\pm 1}\\ h\in G_{(-1)n}\end{subarray}}\bar{f}(g)\bar{f}(gh)\frac{\mu_{\text{Gauss}}(P_{n} =h)}{|G_{1}|}\right|=\left|\sum_{\begin{subarray}{c}g\in G_{\pm 1}\\ h\in G_{(-1)n}\end{subarray}}\bar{f}(g)\bar{f}(gh)\frac{1}{|G_{1}|^{2}}\right|+O( e^{-\tau n}).\]
The remaining sum is zero since \(\sum_{g\in G_{\pm 1}}\bar{f}(g)=0\). Hence \(|\mathbb{E}\bar{f}(U_{\pm 1})\bar{f}(U_{\pm 1}P_{n})|\ll e^{-\tau n}\), so both series in (6) are absolutely convergent. The fact that the right-hand side of (6) is nonnegative will follow from (ii).
**(ii)** We may assume that \(\mathbb{E}f(U_{\pm 1})=0\). Expanding the square leads to
\[\mathbb{E}\left(\sum_{n=M+1}^{M+N}f(P_{n})\right)^{2}=\sum_{n=M+1}^{M+N}\mathbb{ E}f(P_{n})^{2}+2\sum_{\ell=1}^{N-1}\sum_{n=M+1}^{M+N-\ell}\mathbb{E}f(P_{n})f(P_{n+ \ell}). \tag{19}\]
Corollary 13 shows that \(\mathbb{E}f(P_{n})^{2}=\mathbb{E}f(U_{(-1)^{n}})^{2}+O(e^{-\tau n})\), hence the first sum in (19) is
\[\sum_{n=M+1}^{M+N}\mathbb{E}f(P_{n})^{2}=\left(\frac{1}{2}\mathbb{E}f(U_{1})^{ 2}+\frac{1}{2}\mathbb{E}f(U_{-1})^{2}\right)N+O(1).\]
Let \(1\leq R\leq N-1\) be a parameter to be chosen, and consider the second sum in (19). Lemma 14 implies that the \(\alpha\)-mixing coefficients of the sequence \(P_{n}\) satisfy \(\alpha(\ell)\ll e^{-\tau\ell}\), thus
\[|\mathbb{E}f(P_{n})f(P_{n+\ell})-\mathbb{E}f(P_{n})\mathbb{E}f(P_{n+\ell})| \ll e^{-\tau\ell}.\]
Here \(|\mathbb{E}f(P_{n+\ell})|\ll e^{-\tau(n+\ell)}\) and \(|\mathbb{E}f(P_{n})|\ll 1\). Hence \(|\mathbb{E}f(P_{n})f(P_{n+\ell})|\ll e^{-\tau\ell}\), and
\[\left|\sum_{\ell=R}^{N-1}\sum_{n=M+1}^{M+N-\ell}\mathbb{E}f(P_{n})f(P_{n+\ell} )\right|\ll\sum_{\ell=R}^{N-1}Ne^{-\tau\ell}\ll Ne^{-\tau R}.\]
Now let \(1\leq\ell\leq R\), and consider
\[\mathbb{E}f(P_{n})f(P_{n+\ell})=\sum_{\begin{subarray}{c}g\in G_{(-1)^{n}}\\ h\in G_{(-1)^{\ell}}\end{subarray}}f(g)f(gh)\mu\left(\{P_{n}=g\}\cap\{P_{n+1,n+ \ell}=h\}\right).\]
Theorem 8 shows that here \(\mu(\{P_{n}=g\}\cap\{P_{n+1,n+\ell}=h\})=\mu_{\text{Gauss}}(P_{\ell}=h)/|G_{1} |+O(e^{-\tau n})\). Therefore
\[\mathbb{E}f(P_{n})f(P_{n+\ell})=\sum_{\begin{subarray}{c}g\in G_{(-1)^{n}}\\ h\in G_{(-1)^{\ell}}\end{subarray}}f(g)f(gh)\frac{\mu_{\text{Gauss}}(P_{\ell}=h )}{|G_{1}|}+O(e^{-\tau n})=\mathbb{E}f(U_{(-1)^{n}})f(U_{(-1)^{n}}P_{\ell})+O( e^{-\tau n}),\]
and
\[\sum_{\ell=1}^{R}\sum_{n=M+1}^{M+N-\ell}\mathbb{E}f(P_{n})f(P_{n+ \ell}) =\sum_{\ell=1}^{R}\frac{N-\ell}{2}\left(\mathbb{E}f(U_{1})f(U_{1} P_{\ell})+\mathbb{E}f(U_{-1})f(U_{-1}P_{\ell})\right)+O(R)\] \[=\frac{N}{2}\sum_{\ell=1}^{\infty}\left(\mathbb{E}f(U_{1})f(U_{1} P_{\ell})+\mathbb{E}f(U_{-1})f(U_{-1}P_{\ell})\right)+O(R+Ne^{-\tau R}).\]
The previous estimates for the right-hand side of (19) and the definition (6) of \(\sigma_{f}\) lead to
\[\mathbb{E}\left(\sum_{n=M+1}^{M+N}f(P_{n})\right)^{2}=\sigma_{f}^{2}N+O(R+Ne^ {-\tau R}),\]
and the claim follows by choosing \(R\approx\log(N+1)\).
**Proof of Theorem 1.** Claim (i) was proved in Lemma 15. Now let \(\alpha\sim\mu\) with \(\mu\ll\lambda\), and assume (5). Lemmas 14 and 15 show that the sequence of random variables \(f(P_{n})\) is \(\alpha\)-mixing with exponential rate, and satisfies \(\mathbb{E}(\sum_{n=M+1}^{M+N}f(P_{n})-E_{f}N)^{2}=\sigma_{f}^{2}N+O(\log(N+1))\) uniformly in \(M\geq 0\). Assuming \(\sigma_{f}>0\), a general result of Philipp and Stout [22, Theorem 7.1] on partial
sums of nonstationary \(\alpha\)-mixing random variables shows that without changing its distribution, the process \(\sum_{1\leq n\leq t}f(P_{n})\) can be redefined on a richer probability space so that \(\sum_{1\leq n\leq t}f(P_{n})-E_{f}t=\sigma_{f}W(t)+O(t^{1/2-\eta})\) a.s. with a universal constant \(\eta>0\). This proves (iii) in the case \(\sigma_{f}>0\).
As Philipp and Stout assume \(\sigma_{f}>0\) throughout their treatise, for the sake of completeness we include a proof of (iii) in the case \(\sigma_{f}=0\), i.e. when \(\mathbb{E}(\sum_{n=M+1}^{M+N}f(P_{n})-E_{f}N)^{2}\ll\log(N+1)\) uniformly in \(M\geq 0\). Fix a small \(\varepsilon>0\). An application of the Chebyshev inequality gives
\[\mu\left(\left|\sum_{n=1}^{N^{2}}f(P_{n})-E_{f}N^{2}\right|\geq N^{1/2+ \varepsilon}\right)\ll\frac{\log(N+1)}{N^{1+2\varepsilon}}.\]
Since \(f(P_{n})\) is bounded and \(\alpha\)-mixing with exponential rate, for all \(p>2\) we have \(\mathbb{E}|\sum_{n=M+1}^{M+N}f(P_{n})-E_{f}N|^{p}\ll N^{p/2}\) uniformly in \(M\geq 0\) and \(N\geq 1\)[23]. The Erdos-Stechkin inequality [20] strengthens this to \(\mathbb{E}\max_{1\leq k\leq N}|\sum_{n=M+1}^{M+k}f(P_{n})-E_{f}k|^{p}\ll N^{ p/2}\). Choosing a suitably large \(p>2\) thus leads to
\[\mu\left(\max_{1\leq k\leq 2N}\left|\sum_{n=N^{2}+1}^{N^{2}+k}f(P_{n})-E_{f}k \right|\geq N^{1/2+\varepsilon}\right)\ll\frac{N^{p/2}}{N^{p(1/2+\varepsilon) }}\ll\frac{1}{N^{2}}.\]
An application of the Borel-Cantelli lemma then shows that
\[\left|\sum_{n=1}^{N^{2}}f(P_{n})-E_{f}N^{2}\right|\ll N^{1/2+ \varepsilon}\quad\text{and}\quad\max_{1\leq k\leq 2N}\left|\sum_{n=N^{2}+1}^{N^{2}+ k}f(P_{n})-E_{f}k\right|\ll N^{1/2+\varepsilon}\quad\text{for a.e. }\alpha.\]
In particular, \(|\sum_{n=1}^{N}f(P_{n})-E_{f}N|\ll N^{1/4+\varepsilon}\) for a.e. \(\alpha\). This proves (iii) in the case \(\sigma_{f}=0\).
The almost sure approximation by a Wiener process in part (iii) immediately implies the functional CLT under the assumption (5). See Peligrad [21] for a direct proof of the functional CLT under even weaker mixing assumptions. Exactly as in the proof of Corollary 12, we can easily remove assumption (5) on the density from the functional CLT. This proves (ii).
## 6 Limit laws for the local discrepancy
We rely on an explicit formula of Rocadas and Schoissengeier [24], who showed that for any irrational \(\alpha\in[0,1]\), any \(r\in(0,1)\) and any \(k\geq 0\),
\[\max_{0\leq N<q_{k+1}}S_{N,r}(\alpha) =\sum_{\begin{subarray}{c}j=0\\ j\text{ even}\end{subarray}}^{k}\left\{q_{j}r\right\}((1-\left\{q_{j}r\right\} )\,a_{j+1}+\left\{q_{j+1}r\right\}-\left\{q_{j-1}r\right\})+O(1), \tag{20}\] \[\min_{0\leq N<q_{k+1}}S_{N,r}(\alpha) =-\sum_{\begin{subarray}{c}j=0\\ j\text{ odd}\end{subarray}}^{k}\left\{q_{j}r\right\}((1-\left\{q_{j}r\right\} )\,a_{j+1}+\left\{q_{j+1}r\right\}-\left\{q_{j-1}r\right\})+O(1)\]
with universal implied constants. We give the proof of Theorem 3 after a preparatory lemma. Let \(\theta_{m}\) be as in (9), and recall that \(\gamma\) denotes the Euler-Mascheroni constant.
**Lemma 16**.: _Let \(\alpha\sim\mu\) with \(\mu\ll\lambda\), and assume that \(\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}\) is positive and Lipschitz. Let \(l/m\in(0,1)\) be a reduced fraction, and let_
\[X_{j}=\left\{q_{j-1}\frac{l}{m}\right\}\left(1-\left\{q_{j-1}\frac{l}{m} \right\}\right)a_{j}.\]
_Then_
\[\left(\frac{\sum_{j=1}^{k}X_{2j-1}-A_{k}}{s_{k}},\frac{\sum_{j=1}^{k}X_{2j}-A_{k}} {s_{k}}\right)\stackrel{{ d}}{{\rightarrow}}\mathrm{Stab}(1,1) \otimes\mathrm{Stab}(1,1),\]
_where \(A_{k}=\frac{1}{6\log 2}k\log k-\frac{1}{6\log 2}\left(6\theta_{m}+\gamma+\log\frac{12 \log 2}{\pi}\right)k\) and \(s_{k}=\frac{\pi}{12\log 2}k\)._
**Proof.** Fix real numbers \(x_{1},x_{2}\) such that \((x_{1},x_{2})\neq(0,0)\), and let \(Y_{j}=(x_{1}/k)X_{2j-1}+(x_{2}/k)X_{2j}\), \(1\leq j\leq k\). Throughout, implied constants are allowed to depend on \(x_{1},x_{2}\). The random variable \(Y_{j}\) is a function of \((a_{2j-1},a_{2j},P_{2j-1},P_{2j})\), therefore by Lemma 14 the sequence \(Y_{j}\) is \(\psi\)-mixing with exponential rate. One readily checks that if \(x_{1}\neq 0\), then with a suitable constant \(B>0\),
\[\mathbb{E}(1-\cos Y_{j}) \geq\mu\left(\frac{\pi}{2}\leq|Y_{j}|\leq\frac{3\pi}{2}\right)\] \[\gg\mu\left(\{q_{2j-2}\equiv 1\pmod{m}\}\cap\left\{\frac{B}{2}k \leq a_{2j-1}\leq Bk\right\}\cap\{a_{2j}=1\}\right)\] \[\gg\mu_{\mathrm{Gauss}}\left(\frac{B}{2}k\leq a_{2j-1}\leq Bk \right)\gg\frac{1}{k}.\]
A similar argument shows that \(\mathbb{E}(1-\cos Y_{j})\gg 1/k\) holds in the case \(x_{1}=0\), \(x_{2}\neq 0\) as well. Further, we have
\[\mathbb{E}|e^{iY_{j}}-1|\leq\mathbb{E}\min\{|Y_{j}|,2\}\ll\mathbb{E}\min\left\{ \frac{a_{2j-1}+a_{2j}}{k},1\right\}\ll\frac{\log k}{k}.\]
An application of [13, Lemma 1] thus leads to
\[\mathbb{E}\exp\left(i\sum_{j=1}^{k}Y_{j}\right)=\exp\left(\sum_{j=1}^{k} \mathbb{E}(e^{iY_{j}}-1)\right)+O\left(\frac{(\log k)^{2}}{k}\right). \tag{21}\]
Here \(\sum_{j\ll\log k}\mathbb{E}(e^{iY_{j}}-1)=O((\log k)^{2}/k)\), hence it will be enough to consider the terms \(j\gg\log k\).
We can express \(Y_{j}\) as \(Y_{j}=F(P_{2j-2},a_{2j-1},a_{2j})\) with the function
\[F\left(\left(\begin{array}{cc}a&b\\ c&d\end{array}\right),b_{1},b_{2}\right)=\frac{x_{1}}{k}\left\{d\frac{l}{m} \right\}\left(1-\left\{d\frac{l}{m}\right\}\right)b_{1}+\frac{x_{2}}{k}\left\{ (b_{1}d+c)\frac{l}{m}\right\}\left(1-\left\{(b_{1}d+c)\frac{l}{m}\right\} \right)b_{2}.\]
Theorem 8 yields
\[\mathbb{E}(e^{iY_{j}}-1) =\sum_{\begin{subarray}{c}g\in G_{1}\\ b_{1},b_{2}\in\mathbb{N}\end{subarray}}(e^{iF(g,b_{1},b_{2})}-1)\mu\left(P_{2j -2}=g,a_{2j-1}=b_{1},a_{2j}=b_{2}\right)\] \[=\sum_{\begin{subarray}{c}g\in G_{1}\\ b_{1},b_{2}\in\mathbb{N}\end{subarray}}(e^{iF(g,b_{1},b_{2})}-1)\frac{\mu_{ \mathrm{Gauss}}(a_{1}=b_{1},a_{2}=b_{2})}{|G_{1}|}+O(e^{-\tau j}).\]
For a fixed \(g=\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)\in G_{1}\), we have \(F(g,b_{1},b_{2})=t_{1}b_{1}+t_{2}(b_{1})b_{2}\) with
\[t_{1}=\frac{x_{1}}{k}\left\{d\frac{l}{m}\right\}\left(1-\left\{d\frac{l}{m} \right\}\right),\qquad t_{2}(b_{1})=\frac{x_{2}}{k}\left\{(b_{1}d+c)\frac{l}{m }\right\}\left(1-\left\{(b_{1}d+c)\frac{l}{m}\right\}\right).\]
In [5, Lemma 20] it was shown that for any \(t_{1},t_{2}\in(-1/2,1/2)\), we have
\[\sum_{b_{1},b_{2}\in\mathbb{N}}\left(e^{i(t_{1}b_{1}+t_{2}b_{2})}-1\right) \mu_{\mathrm{Gauss}}(a_{1}=b_{1},a_{2}=b_{2})\] \[= -\frac{1}{\log 2}\left(i\gamma t_{1}+\frac{\pi}{2}|t_{1}|+it_{1}\log|t_ {1}|+i\gamma t_{2}+\frac{\pi}{2}|t_{2}|+it_{2}\log|t_{2}|\right)\] \[+O\left(t_{1}^{2}\log\frac{1}{|t_{1}|}+t_{2}^{2}\log\frac{1}{|t_{2 }|}+|t_{1}t_{2}|\log\frac{1}{|t_{1}|}\log\frac{1}{|t_{2}|}\right).\]
The proof actually gives that more generally, for any constant \(t_{1}\) and any sequence \(t_{2}(n)\), \(n\in\mathbb{N}\) with \(t_{1},t_{2}(n)\in(-1/2,1/2)\), we have
\[\sum_{b_{1},b_{2}\in\mathbb{N}}\left(e^{i(t_{1}b_{1}+t_{2}(b_{1}) b_{2})}-1\right)\mu_{\text{Gauss}}(a_{1}=b_{1},a_{2}=b_{2})\] \[= -\frac{1}{\log 2}\left(i\gamma t_{1}+\frac{\pi}{2}|t_{1}|+it_{1} \log|t_{1}|-i\sum_{b_{1},b_{2}\in\mathbb{N}}t_{2}(b_{1})b_{2}R(b_{1},b_{2})+ \sum_{b_{1}\in\mathbb{N}}\frac{\frac{\pi}{2}|t_{2}(b_{1})|+it_{2}(b_{1})\log| t_{2}(b_{1})|}{b_{1}(b_{1}+1)}\right)\] \[+O\left(t_{1}^{2}\log\frac{1}{|t_{1}|}+T_{2}^{2}\log\frac{1}{T_{ 2}}+|t_{1}|T_{2}\log\frac{1}{|t_{1}|}\log\frac{1}{T_{2}}\right),\]
where \(T_{2}=\sup_{b_{1}\in\mathbb{N}}|t_{2}(b_{1})|\), and
\[R(b_{1},b_{2})=\mu_{\text{Gauss}}(a_{1}=b_{1},a_{2}=b_{2})-\frac{1}{b_{1}(b_{ 1}+1)b_{2}(b_{2}+2)}.\]
As we observed in [5, Lemma 20] using telescoping sums, \(\sum_{b_{1},b_{2}\in\mathbb{N}}b_{2}R(b_{1},b_{2})=-\gamma\). In particular, we obtain a formula for \(\mathbb{E}(e^{iY_{j}}-1)\) in the form of an average over \(G_{1}\). Lemma 4 and formula (12) show that
\[\frac{1}{|G_{1}|}\sum_{g\in G_{1}}\left\{d\frac{l}{m}\right\} \left(1-\left\{d\frac{l}{m}\right\}\right)=\sum_{a\in\mathbb{Z}_{m}}\nu_{a} \left\{\frac{a}{m}\right\}\left(1-\left\{\frac{a}{m}\right\}\right)=\frac{1}{6},\] \[\frac{1}{|G_{1}|}\sum_{g\in G_{1}}\left\{d\frac{l}{m}\right\} \left(1-\left\{d\frac{l}{m}\right\}\right)\log\left(\left\{d\frac{l}{m}\right\} \left(1-\left\{d\frac{l}{m}\right\}\right)\right)=\theta_{m}.\]
Note that the previous two formulas do not depend on \(l\) since \(\nu_{a}\) is invariant under multiplication by elements of \(\mathbb{Z}_{m}^{*}\). We saw in the proof of Lemma 4 that \(G_{1}\) acts transitively on the set of row vectors \(V\), and that transposition is a bijection of \(G_{1}\). Consequently for any fixed \(b_{1}\in\mathbb{N}\), the row vector \((1,b_{1}\pmod{m})\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)^{\top}\) is uniformly distributed on \(V\), and its second coordinate, \(b_{1}d+c\pmod{m}\) has distribution \(\nu\). In particular, for any fixed \(b_{1}\in\mathbb{N}\) we have the same averages
\[\frac{1}{|G_{1}|}\sum_{g\in G_{1}}\left\{(b_{1}d+c)\frac{l}{m} \right\}\left(1-\left\{(b_{1}d+c)\frac{l}{m}\right\}\right)=\sum_{a\in\mathbb{ Z}_{m}}\nu_{a}\left\{\frac{a}{m}\right\}\left(1-\left\{\frac{a}{m}\right\} \right)=\frac{1}{6},\] \[\frac{1}{|G_{1}|}\sum_{g\in G_{1}}\left\{(b_{1}d+c)\frac{l}{m} \right\}\left(1-\left\{(b_{1}d+c)\frac{l}{m}\right\}\right)\log\left(\left\{ (b_{1}d+c)\frac{l}{m}\right\}\left(1-\left\{(b_{1}d+c)\frac{l}{m}\right\} \right)\right)=\theta_{m}.\]
After some simplification, we arrive at
\[\mathbb{E}(e^{iY_{j}}-1)= i\frac{\log k-6\theta_{m}-\gamma}{6\log 2}\cdot\frac{x_{1}}{k}-\frac{ \pi}{(12\log 2)k}\left(|x_{1}|+\frac{2i}{\pi}x_{1}\log|x_{1}|\right)\] \[+i\frac{\log k-6\theta_{m}-\gamma}{6\log 2}\cdot\frac{x_{2}}{k}- \frac{\pi}{(12\log 2)k}\left(|x_{2}|+\frac{2i}{\pi}x_{2}\log|x_{2}|\right)+O \left(\frac{(\log k)^{2}}{k^{2}}+e^{-\tau j}\right).\]
We can compute the right-hand side of (21) by summing over \(\log k\ll j\leq k\), thus
\[\mathbb{E}\exp\left(i\sum_{j=1}^{k}Y_{j}\right)= \exp\left(i\frac{\log k-6\theta_{m}-\gamma}{6\log 2}x_{1}-\frac{\pi}{12 \log 2}\left(|x_{1}|+\frac{2i}{\pi}x_{1}\log|x_{1}|\right)\right.\] \[\left.\hskip 28.452756pt+i\frac{\log k-6\theta_{m}-\gamma}{6\log 2}x_{2}- \frac{\pi}{12\log 2}\left(|x_{2}|+\frac{2i}{\pi}x_{2}\log|x_{2}|\right)+O\left(\frac{( \log k)^{2}}{k}\right)\right)\] \[+O\left(\frac{(\log k)^{2}}{k}\right).\]
Letting \(A_{k}\) and \(s_{k}\) be as in the claim and replacing \(x_{n}\) by \(\frac{12\log 2}{\pi}x_{n}\), \(n=1,2\), we obtain
\[\mathbb{E}\exp\left(i\frac{\sum_{j=1}^{k}X_{2j-1}-A_{k}}{s_{k}}x_{1 }+i\frac{\sum_{j=1}^{k}X_{2j}-A_{k}}{s_{k}}x_{2}\right)\] \[\qquad\qquad=\exp\left(-\left(|x_{1}|+\frac{2i}{\pi}x_{1}\log|x_{ 1}|\right)\right)\exp\left(-\left(|x_{2}|+\frac{2i}{\pi}x_{2}\log|x_{2}| \right)\right)+O\left(\frac{(\log k)^{2}}{k}\right).\]
This implies the pointwise convergence of the characteristic function of the random vector in the claim to that of \(\operatorname{Stab}(1,1)\otimes\operatorname{Stab}(1,1)\), which proves the desired convergence in distribution.
Proof of Theorem 3.: Fix a reduced fraction \(r=l/m\in(0,1)\). We may assume that the density \(\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}\) is positive and Lipschitz. This assumption can be removed exactly as in the proof of Corollary 12.
For any \(M\geq 1\), let \(k_{M}^{*}=k_{M}^{*}(\alpha)\) denote the random index for which \(q_{k_{M}^{*}}\leq M<q_{k_{M}^{*}+1}\), and let \(k_{M}\) be the odd integer closest to \(\frac{12\log 2}{\pi^{2}}\log M\). Using the fact that \(\log q_{k}\) satisfies the CLT with centering term \(\frac{\pi^{2}}{12\log 2}k\) and scaling term \(k^{1/2}\)[14, p. 194], we immediately obtain \(\mu(|\log q_{k_{M}}-\log M|\geq(\log M)^{1/2+\varepsilon})\to 0\) with any \(\varepsilon>0\). Consequently, \(\mu(|k_{M}^{*}-k_{M}|\geq(\log M)^{1/2+\varepsilon})\to 0\). The explicit formula (20) thus yields
\[\max_{0\leq N<M}S_{N,r}(\alpha)= \sum_{j=0}^{\frac{k_{M}-1}{2}}\{q_{2j}r\}\left((1-\{q_{2j}r\})a_{ 2j+1}+\{q_{2j+1}r\}-\{q_{2j-1}r\}\right)+\xi_{M}(\alpha), \tag{22}\] \[\min_{0\leq N<M}S_{N,r}(\alpha)= -\sum_{j=0}^{\frac{k_{M}-1}{2}}\{q_{2j+1}r\}\left((1-\{q_{2j+1}r \})a_{2j+2}+\{q_{2j+2}r\}-\{q_{2j}r\}\right)+\xi_{M}^{\prime}(\alpha)\]
with error terms \(\xi_{M}(\alpha)\), \(\xi_{M}^{\prime}(\alpha)\) which, outside a set of \(\mu\)-measure \(o(1)\), satisfy
\[|\xi_{M}(\alpha)|,|\xi_{M}^{\prime}(\alpha)|\leq\sum_{j=k_{M}-(\log M)^{1/2+ \varepsilon}}^{k_{M}+(\log M)^{1/2+\varepsilon}}a_{j}+O(1).\]
A classical result of Khintchine [14, p. 204] states that \(\sum_{j=1}^{k}a_{j}/(k\log k)\to 1/\log 2\) in measure. Hence
\[\mu\left(\sum_{j=k_{M}-(\log M)^{1/2+\varepsilon}}^{k_{M}+(\log M)^{1/2+ \varepsilon}}a_{j}\geq(\log M)^{1/2+2\varepsilon}\right)\ll\mu_{\text{Gauss}} \left(\sum_{1\leq j\ll(\log M)^{1/2+\varepsilon}}a_{j}\geq(\log M)^{1/2+2 \varepsilon}\right)\to 0.\]
In particular, \(\xi_{M}(\alpha),\xi_{M}^{\prime}(\alpha)=o(\log M)\) in \(\mu\)-measure, and are thus negligible.
Relation (2) shows that
\[\sum_{j=0}^{\frac{k_{M}-1}{2}}\{q_{2j}r\}\cdot\{q_{2j-1}r\}=\rho k_{M}+o(k_{M })\quad\text{and}\quad\sum_{j=0}^{\frac{k_{M}-1}{2}}\{q_{2j}r\}\cdot\{q_{2j+1} r\}=\rho k_{M}+o(k_{M})\]
hold for a.e. \(\alpha\) with the same constant \(\rho\). In particular, the error terms in the previous formula are \(o(\log M)\) in \(\mu\)-measure, and (22) simplifies to
\[\max_{0\leq N<M}S_{N,r}(\alpha)= \sum_{j=0}^{\frac{k_{M}-1}{2}}\{q_{2j}r\}(1-\{q_{2j}r\})a_{2j+1} +o(\log M)\quad\text{in $\mu$-measure},\] \[\min_{0\leq N<M}S_{N,r}(\alpha)= -\sum_{j=0}^{\frac{k_{M}-1}{2}}\{q_{2j+1}r\}(1-\{q_{2j+1}r\})a_{2j +2}+o(\log M)\quad\text{in $\mu$-measure}.\]
Lemma 16 yields the desired limit law with centering term
\[\frac{1}{6\log 2}\cdot\frac{k_{M}}{2}\log\frac{k_{M}}{2}-\frac{6\theta_{m}+\gamma+\log\frac{12\log 2}{\pi}}{6\log 2}\cdot\frac{k_{M}}{2}=\frac{1}{\pi^{2}}\log M\log\log M-\frac{6\theta_{m}+\gamma+\log(2\pi)}{\pi^{2}}\log M+O(1)\]
and scaling term \(\frac{\pi}{12\log 2}\cdot\frac{k_{M}}{2}=\frac{1}{2\pi}\log M+O(1)\).
## Acknowledgments
The author is supported by the Austrian Science Fund (FWF) project M 3260-N.
|
2306.12382 | Phase structure of the on-shell parametrized 2+1 flavor Polyakov
quark-meson model | Augmenting the improved chiral effective potential of the on-shell
renormalized 2+1 flavour quark-meson (RQM) model with the Polyakov-loop
potential that accounts for the deconfinement transition,~we get the Quantum
Chromodynamics (QCD) like framework of the renormalized Polyakov quark-meson
(RPQM) model.~When the divergent quark one-loop vacuum term is included in the
effective potential of the quark-meson (QM) model,~its tree level parameters or
the parameters fixed by the use of meson curvature masses,~become inconsistent
as the curvature masses involve the self energy evaluations at zero
momentum.~Using the modified minimal subtraction method,~the consistent chiral
effective potential for the RQM model has been calculated after relating the
counterterms in the on-shell (OS) scheme to those in the $\overline{\text{MS}}$
scheme and finding the relations between the renormalized parameters of both
the schemes where the physical (pole) masses of the $\pi, K, \eta$ and
$\eta^{\prime}$ pseudo-scalar mesons and the scalar $\sigma$ meson,~the pion
and kaon decay constants,~have been put into the relation of the running
couplings and mass parameter.~Using the RPQM model and the PQM Model with
different forms for the Polyakov-loop potentials in the presence or the absence
of the quark back-reaction,~we have computed and compared the effect of the
consistent quark one-loop correction and the quark back-reaction on the scaled
chiral order parameter,~the QCD phase diagrams and the different thermodynamic
quantities.~The results have been compared with the 2+1 flavor lattice QCD data
from the Wuppertal-Budapest collaboration \{JHEP 09,73(2010); PLB
730,99(2014)\} and the HotQCD collaboration \{PRD 90,094503(2014)\}. | Suraj Kumar Rai, Vivek Kumar Tiwari | 2023-06-21T17:02:01Z | http://arxiv.org/abs/2306.12382v1 | # Phase structure of the on-shell parametrized 2+1 flavor Polyakov quark-meson model
###### Abstract
Augmenting the improved chiral effective potential of the on-shell renormalized 2+1 flavour quark-meson (RQM) model with the Polyakov-loop potential that accounts for the deconfinement transition, we get the Quantum Chromodynamics (QCD) like framework of the renormalized Polyakov quark-meson (RPQM) model. When the divergent quark one-loop vacuum term is included in the effective potential of the quark-meson (QM) model, its tree level parameters or the parameters fixed by the use of meson curvature masses, become inconsistent as the curvature masses involve the self energy evaluations at zero momentum. Using the modified minimal subtraction method, the consistent chiral effective potential for the RQM model has been calculated after relating the counterterms in the on-shell (OS) scheme to those in the \(\overline{\text{MS}}\) scheme and finding the relations between the renormalized parameters of both the schemes where the physical (pole) masses of the \(\pi,K,\eta\) and \(\eta^{\prime}\) pseudo-scalar mesons and the scalar \(\sigma\) meson, the pion and kaon decay constants, have been put into the relation of the running couplings and mass parameter. Using the RPQM model and the PQM Model with different forms for the Polyakov-loop potentials in the presence or the absence of the quark back-reaction, we have computed and compared the effect of the consistent quark one-loop correction and the quark back-reaction on the scaled chiral order parameter, the QCD phase diagrams and the different thermodynamic quantities. The results have been compared with the 2+1 flavor lattice QCD data from the Wuppertal-Budapest collaboration {JHEP 09,73(2010); PLB 730,99(2014)} and the HotQCD collaboration {PRD 90,094503(2014)}.
## I Introduction
The hadronic matter under the extreme conditions of high temperatures and/or densities gets dissolved into its quark and gluon constituents and the Quark Gluon Plasma (QGP) [1; 2; 3; 4; 5] is formed, as predicted by the strong interaction theory quantum chromodynamics (QCD). The QCD phase diagram [1] and the general properties of QGP are the subject matter of intensive investigation in ultra-relativistic heavy ion collision experiments like the RHIC (BNL), the LHC (CERN) and the upcoming CBM experiments at the FAIR facility (GSI-Darmstadt). The first-principle lattice QCD simulations [6; 7; 8; 9; 10; 11; 12; 13; 14; 15] give us important information and insights into the QCD phase transition that occurs on the temperature axis, but when the baryon density is nonzero these calculations get severely hampered as the QCD action becomes complex due to the fermion sign problem [8]. The QCD-like effective theory models [16; 17] built upon the symmetries of QCD give us the much needed framework in which the QCD phase structure and its thermodynamics can be explored in great detail.
The QCD Lagrangian has the global \(SU_{L+R}(3)\times SU_{L-R}(3)\) symmetry for the three massless quarks. The chiral (axial \(A=L-R\)) symmetry gets spontaneously broken in the low energy vacuum of the QCD and one gets the non-strange and the strange chiral condensates as the order parameters with eight massless pseudoscalar bosons as Goldstone modes. Since the small masses of the \(u\) and \(d\) quarks cause a small explicit breaking while a relatively large mass of the \(s\) quark generates a larger explicit breaking of the chiral symmetry, the three pions are light while the four kaons and one eta are heavier in nature. Due to the instanton effects, the \(U_{A}(1)\) axial symmetry also gets explicitly broken to the \(Z_{A}(N_{f})\) at the quantum level [18]. The \(\eta^{\prime}\) meson does not remain a massless Goldstone boson even when the quarks are massless as it acquires a mass of about 1 GeV due to the \(U_{A}(1)\) axial anomaly. Coupling the nine scalar and nine pseudo-scalar mesons of the three flavor linear sigma model [19] with the two light quarks \(u,d\) and the one heavier s quark, one gets the effective theory framework of the 2+1 flavor quark-meson (QM) model [20].
Several investigations of the QCD phase structure have already been done in the chiral models [21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34] and the two and three flavor QM models [35; 36; 37; 38; 20]. When the effect of the Dirac sea is neglected in the standard mean field approximation (s-MFA), the QM model studies look inconsistent because, in the chiral limit, the chiral phase transition at zero baryon density turns first-order, which is ruled out by general theoretical arguments [39; 40]. The inclusion of the quark one-loop vacuum fluctuation in the QM model [41] removes the above inconsistency. In several of the QCD phase structure studies carried out in the ambit of the quark-meson model with the quark one-loop vacuum term [42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55], the model parameters are fixed by using the curvature masses of the mesons while the pion and kaon decay constants are fixed by the vacuum expectation values of the non-strange and strange chiral condensates. This parameter fixing turns out to be inconsistent because the effective potential generates the n-point functions of the theory at vanishing |
2302.00271 | CATFL: Certificateless Authentication-based Trustworthy Federated
Learning for 6G Semantic Communications | Federated learning (FL) provides an emerging approach for collaboratively
training semantic encoder/decoder models of semantic communication systems,
without private user data leaving the devices. Most existing studies on
trustworthy FL aim to eliminate data poisoning threats that are produced by
malicious clients, but in many cases, eliminating model poisoning attacks
brought by fake servers is also an important objective. In this paper, a
certificateless authentication-based trustworthy federated learning (CATFL)
framework is proposed, which mutually authenticates the identity of clients and
server. In CATFL, each client verifies the server's signature information
before accepting the delivered global model to ensure that the global model is
not delivered by false servers. Conversely, the server also verifies the
clients' signature information before accepting the delivered model updates to
ensure that they are submitted by authorized clients. Compared to PKI-based
methods, the CATFL can avoid too high certificate management overheads.
Meanwhile, the anonymity of clients can conceal data poisoning attacks, while
real-name registration may suffer from user-specific privacy leakage risks.
Therefore, a pseudonym generation strategy is also presented in CATFL to
achieve a trade-off between identity traceability and user anonymity, which is
essential to conditionally prevent user-specific privacy leakage.
Theoretical security analysis and evaluation results validate the superiority
of CATFL. | Gaolei Li, Yuanyuan Zhao, Yi Li | 2023-02-01T06:26:44Z | http://arxiv.org/abs/2302.00271v1 | CATFL: Certificateless Authentication-based Trustworthy Federated Learning for 6G Semantic Communications
###### Abstract
Federated learning (FL) provides an emerging approach for collaboratively training semantic encoder/decoder models of semantic communication systems, without private user data leaving the devices. Most existing studies on trustworthy FL aim to eliminate data poisoning threats that are produced by malicious clients, but in many cases, eliminating model poisoning attacks brought by fake servers is also an important objective. In this paper, a certificateless authentication-based trustworthy federated learning (CATFL) framework is proposed, which mutually authenticates the identity of clients and server. In CATFL, each client verifies the server's signature information before accepting the delivered global model to ensure that the global model is not delivered by false servers. Conversely, the server also verifies the clients' signature information before accepting the delivered model updates to ensure that they are submitted by authorized clients. Compared to PKI-based methods, the CATFL can avoid too high certificate management overheads. Meanwhile, the anonymity of clients can conceal data poisoning attacks, while real-name registration may suffer from user-specific privacy leakage risks. Therefore, a pseudonym generation strategy is also presented in CATFL to achieve a trade-off between identity traceability and user anonymity, which is essential to conditionally prevent user-specific privacy leakage. Theoretical security analysis and evaluation results validate the superiority of CATFL.
6G semantic communication, Federated learning, Certificateless authentication, Pseudonym generation, Privacy-enhancing.
## I Introduction
6G communication goes beyond the mobile internet to embrace omnipresent Internet of Everything applications and artificial intelligence (AI) services. This will evolve wireless communication networks from "connected things" to "connected intelligence". 6G communications will enable interconnections among many intelligent agents within a hyper-connected cyber-physical system [1]. However, 6G communications face many challenges arising from constrained network resources, high energy consumption, and new attack surfaces (e.g., privacy leakage, data poisoning attacks), which continue to significantly hinder their world-wide realization and deployment. Semantic communication is a brand-new communication paradigm that meets the interaction requirements of intelligent agents in the 6G era [2, 3, 4]. Meanwhile, federated learning (FL) provides a promising collaborative mode to train joint learning models for connected intelligence at the network edge [5]. Compared to the centralized learning framework, FL allows mobile/edge devices to collaboratively extract distributed knowledge from data that does not leave the devices [6, 7]. The local training and joint learning characteristics of FL help to protect the privacy of information senders and receivers in 6G semantic communications [6].
However, FL is still exposed to many security threats. Firstly, poisoning attacks seriously threaten its usability [8, 9]. On one hand, since the users' original data and the local training process are not visible to the FL server, malicious FL clients may submit poisoned parameters to induce encoder/decoder errors at the model testing phase. Notably, a poisoned parameter may be diffused across all FL clients by model aggregation and distribution, thereby leaving the final global learning model backdoored. For example, a purposeful attacker could submit maliciously crafted inputs to misguide autonomous vehicles at a stop sign so that they move forward, leading to a serious traffic accident [10]. Therefore, all clients and the parameters they submit in the FL training process need to be authenticated and traceable. On the other hand, FL is also vulnerable to model poisoning attacks because it is extremely hard to guarantee that the FL server is trustworthy and robust, especially when deployed in 6G edge computing scenarios.
Fig. 1: Security threats of federated learning in 6G semantic communications. Before participating in the joint training of semantic encoder/decoder, the client needs to obtain a certificate issued by the certification center, and only the client with the legal certificate can join the FL model aggregation. Meanwhile, before receiving the encoded semantic information sent by the sender, the receiver also needs to verify if the sender’s identity is valid.
In [11], the attacker ambitiously substitutes the aggregated global model with a malicious model to strengthen the poisoning effect. That is why the FL server also needs to be authenticated by each client, so that a false global model can be rejected. Secondly, possible privacy leakage from FL-based semantic communication systems should also be a concern. On one hand, a semi-honest FL server can reconstruct sensitive training samples (e.g. facial images, financial records, and medical images) of the targeted FL clients from the shared gradients or parameters [12, 13]. On the other hand, since 6G semantic communication transmits compressed semantic information between senders and receivers, the training data can be reconstructed from the semantic representation vectors if the attacker establishes an eavesdropping mechanism on the semantic communication channels. Therefore, it is still necessary to propose an additional privacy-enhancing mechanism against such privacy leakage risks in FL-based semantic communication systems.
Motivated by the aforementioned issues, a trustworthy FL framework should address two fundamental questions: 1) how to guarantee that users' privacy is not leaked during the model training process, and 2) how to guarantee the model's robustness to malicious manipulations. Therefore, resisting poisoning attacks while protecting users' privacy has become a meaningful and urgent demand. Cryptography-based methods (e.g., homomorphic encryption and secret sharing) are essential to guarantee the confidentiality of shared gradients or parameters between FL clients. However, cryptography-based methods are often impractical when deployed on devices with limited computational and communication resources. For example, high certificate management overheads reduce the practicality of PKI-based schemes in 6G semantic communication systems.
In this paper, we propose a Certificateless Authentication-based Trustworthy Federated Learning (CATFL) framework for 6G semantic communications, which can efficiently defend against poisoning attacks without leaking users' privacy. In CATFL, the FL server has two types of private key, 1) a partial private key and 2) a full private key, which are generated by the trusted key generation center (KGC) and by the server itself, respectively. Therefore, even an attacker who colludes with either the CATFL server or the KGC cannot obtain the full secret key needed for impersonation, preventing a poisoning attacker from substituting the global learning model with a maliciously-modified model. Meanwhile, CATFL is able to guarantee the trustworthiness of the local clients' gradients because it provides mutual authentication for each client. We also design a pseudonym generation strategy to hide each client's real identity. This strategy allows trusted third parties to trace the original real identity of each CATFL client, enabling malicious mobile/edge devices to be identified. In summary, the main contributions of our work are listed as follows:
* We propose a Certificateless Authentication-based Trustworthy Federated Learning (CATFL) framework as the underlying technology of 6G semantic communications to provide higher security. Each entity in CATFL has two types of private key, which are independently generated by the KGC and by the entity itself. We prove that CATFL can resist two types of security threats: 1) poisoning attacks (including server-side and client-side); 2) privacy leakage (including gradient leakage and semantic representation leakage).
* A pseudonym generation strategy is presented to achieve a trade-off between user anonymity and identity traceability in CATFL. Hence, a powerful deterrent against malicious threats to FL-based 6G semantic communication systems is achieved.
* We provide a comprehensive theoretic proof for the security and trustworthiness of proposed CATFL by comparison to existing PKI-based methods. It demonstrates that the CATFL is more applicable for practical 6G semantic communication systems.
## II Related Work
We comprehensively overview the security challenges of FL in 6G semantic communications, mainly caused by poisoning attacks and privacy leakage. Then, the disadvantages of existing secure authentication methods for trustworthy FL in 6G semantic communications are discussed in detail.
### _Security Challenges of FL in 6G Semantic Communications_
Different from existing advanced channel encoding/decoding and modulation techniques, semantic communication attempts to extract the "meanings" of sent information at the transmitter using artificial intelligence and then transmits these "meanings" to the receivers. With the assistance of a shared knowledge base (KB), the receiver can accurately interpret the received "meanings" [14]. In 6G semantic communication systems, the semantic encoder/decoder could be jointly trained using the FL framework to ensure a low loss rate of shared semantic knowledge and high transmission accuracy. However, many security risks of FL have not yet been identified in detail, and effective countermeasures are lacking.
#### Ii-A1 Poisoning Attacks
Many practical poisoning attacks on FL have been proposed, which can be divided into two aspects: 1) data poisoning attacks and 2) model poisoning attacks. Usually, the data poisoning attack uses crafted data to maliciously substitute normal samples secretly. To generate crafted data, Zhang et al. [15] introduced generative adversarial networks to inversely reconstruct training data from the given models, which enables a high attack efficiency. By model replacement and adjusting the scaling factor, Eugene et al. [11] enhanced the persistence of backdoor in FL-based systems.
#### Ii-A2 Privacy Leakage Threats
Nowadays, although FL is designed to enhance privacy preservation, privacy leakage threats have plagued the development of FL technologies. On one hand, an untrusted parameter server may reconstruct the training data from the gradients submitted by each mobile/edge device using an adversarial algorithm such as the one proposed in [16]. This is mainly because the updated model parameters in the training process may memorize sensitive information and leak it to malicious adversaries [17]. On the other hand, the attacker can reconstruct the training data by exploiting query-feedback information on the targeted learning
model, i.e., membership inference and model inversion [18]. The common feature of this kind of privacy leakage threat is to generate dummy samples that can approximate the real gradients, predictions, and weight parameters submitted by local FL clients.
### _Secure Authentication for Trustworthy Federated Learning_
Secure authentication is widely deployed as an emerging technique for exchanging model parameters and certificates, providing security and trustworthiness for FL-based systems against privacy leakage threats and poisoning attacks. The authors of [19] proposed a verifiable federated learning (VerifyNet) framework, enabling each client to verify whether the FL server works correctly. In VerifyNet, the adversary cannot deceive users by forging the identity proof unless the formulated NP-hard problem is resolved. Salvia et al. [20] provided an implementation of secure authentication on the open-source FL platform named "Flower". Since there are two types of malicious behaviors, i) unauthorized manipulation of the submitted model updates and ii) sending the same model update to the FL server multiple times, the authors of [21] introduced data signatures to prevent malicious behavior i). Meanwhile, this work also proposes to exploit blind signatures to sign local model updates only once to avoid malicious behavior ii). However, all existing methods use public key infrastructure (PKI)-based authentication, a certificate-based mechanism whose certificate management cost and communication overheads must be borne by the devices. As a result, many unmanned mobile/edge devices in semantic communication systems cannot be equipped with such complex PKI-based authentication mechanisms. To this end, we present a novel, fundamental and efficient certificateless authentication countermeasure for deploying FL in 6G semantic communication systems. To the best of our knowledge, this is the first work to introduce certificateless authentication into the FL-based 6G semantic communication field.
## III Proposed CATFL framework
In this section, we will introduce the proposed CATFL framework for 6G semantic communications in detail, including the preliminaries, security requirements, adversarial model and overview of proposed CATFL.
### _Preliminaries_
#### Iii-A1 Certificateless Cryptography
Certificateless cryptography was first proposed by Al-Riyami et al. [22] to deal with the key escrow limitation of traditional PKI-based cryptography. In certificateless cryptography, a trusted third party named the KGC is responsible for generating a partial private key (PSK) for the users. The user obtains the full private key by combining the PSK with a secret value, which is unknown to the KGC. Under this setting, the KGC cannot obtain the user's private keys. For FL-based 6G semantic communication systems, intelligent agents are often deployed on mobile/edge devices that have limited computation and communication resources. To reduce authentication overheads and enhance the FL trustworthiness, using certificateless cryptography to establish a novel 6G communication architecture has become an urgent and important demand. In this article, we will discuss its feasibility in detail (refer to Section III.D).
#### Iii-A2 Elliptic Curve Cryptography
In Elliptic Curve Cryptography (ECC), there are two types of components: 1) an elliptic curve, and 2) pre-defined operation rules. Given a base point on the elliptic curve, it is very hard to calculate the discrete logarithm of a random elliptic curve element. To save computation resources, we will use ECC to implement the certificateless authentication mechanism in CATFL, in which the elliptic curve is formulated as the following equation:
\[\mathbb{E}=\{(x,y)\,|\,y^{2}=x^{3}+ax+b,\ 4a^{3}+27b^{2}\neq 0\} \tag{1}\]
where \(\mathbb{E}\) denotes an elliptic curve and \(G\) is a generator of order \(r\). Given a point \(Q=kG\) with \(k\in Z_{r}\), it is computationally infeasible to find the integer \(k\) in polynomial time. In FL-based 6G semantic communication systems, we can apply ECC to implement the certificateless authentication mechanism among the server and clients as well as between the sender and the receiver.
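To make these operations concrete, the following toy Python sketch (ours, not part of the original scheme) implements point addition and double-and-add scalar multiplication on a small Weierstrass curve. The parameters are illustrative and far too small to be secure, but they show why computing \(Q=kG\) is cheap while recovering \(k\) from \(Q\) amounts to solving the discrete logarithm problem.

```python
# Toy Weierstrass curve y^2 = x^3 + a*x + b over F_p (illustrative parameters only).
p, a, b = 97, 2, 3
G = (3, 6)        # a point on the curve: 3^3 + 2*3 + 3 = 36 = 6^2 (mod 97)
O = None          # point at infinity (the group identity)

def add(P, Q):
    """Affine point addition on the toy curve."""
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                            # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p    # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p           # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):
    """Double-and-add scalar multiplication k*P for k >= 0."""
    R = O
    while k:
        if k & 1:
            R = add(R, P)
        P, k = add(P, P), k >> 1
    return R

k = 45               # secret scalar
Q = mul(k, G)        # the public point Q = k*G is easy to compute...
# ...but recovering k from (G, Q) is the elliptic curve discrete logarithm problem,
# which is computationally infeasible on curves of cryptographic size.
print(Q)
```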
### _Security Requirements_
To bring out the motivation of our work, we summarize the security requirements of FL in 6G semantic communication systems as follows.
* Message authentication: The receiver of semantic information needs to verify the semantic encodes' validity. Any tamper on the semantic encodes shall be easily detected.
* Non-repudiation: In CATFL, all authenticated messages could not be repudiated, that means no CATFL entities can deny a valid signature.
* Anonymity: Each participator needs to generate a pseudonym and its real identity has to be hidden during the communication process.
* Un-linkability: No CATFL entities can link multiple different messages to the same user.
* Resistance against attacks: The CATFL framework also needs to resist some typical attacks, including data modification attacks and replay attacks.
* Conditional traceability: Only a trusted third party can know the real identity of each participator in CATFL.
### _Adversarial Model_
According to listed security requirements of CATFL for 6G semantic communications, we present two types of adversaries, which are denoted as follows: 1) \(\mathcal{A}_{1}\), and 2) \(\mathcal{A}_{2}\). Therein, \(\mathcal{A}_{1}\) has the ability of changing the public key of each participator using a selected constant value. However, it fails to obtain the master private key of KGC. Different from \(\mathcal{A}_{1}\), \(\mathcal{A}_{2}\) can get the master private key of KGC, but it fails to modify public keys of any participator.
If a valid signature satisfying \(Verify(P_{pub},m^{*},AID^{*},\Theta^{*})=1\) is forged by an adversary \(\mathcal{A}\) (\(\mathcal{A}_{1}\) or \(\mathcal{A}_{2}\)), it can be considered that \(\mathcal{A}\) has launched a successful attack. The proposed CATFL framework can be validated as secure if the attack success rate of any attacker is negligible.
### _Overview of Proposed CATFL_
The proposed CATFL framework consists of four main entities, i.e., TRA, KGC, CS and User, which connect with each other over wireless communication channels. The proposed CATFL framework has two different communication levels: 1) upper level, and 2) lower level. The upper level contains the communications between the TRA and KGC, as well as Server-to-User (S2U) communications via a secret channel, while the lower level consists of User-to-User (U2U) semantic communications via a public channel. The system entities of CATFL are shown in Fig. 2 and explained below in detail.
* Tracing Authority (TRA): The TRA in CATFL is a trusted entity with enough resources. Functions of this entity contain pseudonym generation for participants (FL server and clients) and the corresponding tracing strategy configuration (if needed).
* Key Generation Center (KGC): The KGC in CATFL is an independent and trusted third party, which is responsible to distribute all the private and public keys of each participator in the 6G communication systems.
* Cloud Server (CS): The CS in CATFL is responsible to receive all model updates submitted by FL clients and aggregate them (usually by averaging) to achieve an optimized global model.
* Users: All message senders and receivers (also referred to as clients) in 6G communications collaboratively train a uniform encoder-decoder model using the federated paradigm. Each CATFL entity trains the model for several epochs over the private local data on each mobile/edge device, and then uploads the signed model updates to the CS.
PKI-based methods usually involve certificate management issues, in which a troublesome certificate revocation list must be maintained. Meanwhile, ID-based schemes may suffer from key escrow problems. We utilize certificateless cryptography to overcome these disadvantages. The concrete key generation steps in CATFL and the corresponding authentication procedures are described in detail as follows. The CATFL framework consists of eight steps in total, which are 1) System setup, 2) Identity anonymization, 3) Get PSK, 4) Extract USK, 5) CS configuration, 6) Signature/verification, 7) Aggregation/distribution, 8) Secure semantic transmission. Fig. 2 illustrates the workflow of the proposed CATFL framework.
#### Iv-D1 System setup
Firstly, this algorithm specifies a security parameter \(\kappa\) as the input, and then it generates a cyclic additive group \((\mathbb{G},q,P)\) and four hash functions \(H_{0}:\mathbb{G}\times\mathbb{G}\times\{0,1\}^{n}\rightarrow\{0,1\}^{n}\), \(H_{1}:\mathbb{G}\times\mathbb{G}\times\mathbb{G}\to Z_{q}^{*}\), \(H_{2}:\{0,1\}^{n}\times\mathbb{G}\times\mathbb{G}\times\{0,1\}^{n}\to Z_{q}^{*}\) and \(H_{3}:\{0,1\}^{n}\times\mathbb{G}\times\mathbb{G}\times\{0,1\}^{n}\to Z_{q}^{*}\), and then sends \(\{\mathbb{G},q,P,H_{0},H_{1},H_{2},H_{3}\}\) to the TRA and KGC, respectively. After receiving this information, the TRA and KGC in CATFL will initialize the system parameters according to the following three steps.
* TRA in CATFL randomly selects \(\alpha\in Z_{q}^{*}\), \(T_{pub}=\alpha P\).
* The KGC in CATFL randomly selects a value \(\beta\in Z_{q}^{*}\), and configures \(P_{pub}=\beta P\).
* The information \(params=\{\mathbb{G},q,\)\(P,H_{0},H_{1},H_{2},H_{3},\)\(T_{pub}\), \(P_{pub}\}\) is published.
#### Iv-D2 Identity anonymization
The TRA invokes this step to initialize the anonymous identities of the \(CS\) and each user \(U_{i}\) in CATFL; the corresponding real identity information is denoted as \(RID_{i}\). A toy code sketch of these steps is given after the list below.
* \(CS\) or \(U_{i}\) in CATFL randomly selects \(r_{i}\in Z_{q}^{*}\).
* \(CS\) or \(U_{i}\) in CATFL computes \(AID_{i,1}=r_{i}\times P\) and sends \(RID_{i},AID_{i,1}\) to the TRA where \(i=1,2,...,n\) through a secure wireless channel.
* The TRA checks the validity of \(RID_{i}\). If it is invalid, the TRA rejects the request; otherwise, it records the system timestamp \(T_{i}\) and calculates \(AID_{i,2}=RID_{i}\oplus H_{0}(\alpha AID_{i,1},T_{pub},T_{i})\). The pseudonym of each CATFL entity is generated as \(AID_{i}=\{AID_{i,1},AID_{i,2},T_{i}\}\).
* The TRA stores the computed \(AID_{i}\) and transmits them to the other entities in CATFL.
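As an illustration only (reusing the toy curve and the `add`/`mul` helpers from the ECC sketch above, with SHA-256 standing in for \(H_{0}\); the identities, key sizes and encodings are assumptions made for the example, not the paper's implementation), the identity anonymization step can be sketched as follows.

```python
import hashlib, secrets, time

def H0(point, T_pub, T_i):
    """Stand-in for H0: hash curve points and a timestamp to an integer mask."""
    data = repr((point, T_pub, T_i)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

# TRA master key alpha and public key T_pub = alpha*G (from the system setup step).
alpha = secrets.randbelow(p - 1) + 1
T_pub = mul(alpha, G)

# Client side: choose r_i and send AID_{i,1} = r_i*G together with the real identity RID_i.
RID_i = int.from_bytes(b"client-42", "big")   # hypothetical real identity, encoded as an integer
r_i = secrets.randbelow(p - 1) + 1
AID_i1 = mul(r_i, G)

# TRA side: mask RID_i with H0(alpha*AID_{i,1}, T_pub, T_i) to build the pseudonym.
T_i = int(time.time())
AID_i2 = RID_i ^ H0(mul(alpha, AID_i1), T_pub, T_i)
AID_i = (AID_i1, AID_i2, T_i)

# Conditional traceability: only a holder of alpha (or of r_i) can recompute the mask,
# since alpha*AID_{i,1} = r_i*T_pub, so the TRA can recover RID_i from the pseudonym.
assert AID_i2 ^ H0(mul(alpha, AID_i1), T_pub, T_i) == RID_i
```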
#### Iv-D3 Get the PSK
The KGC in CATFL generates the PSK and sends it back to the requester secretly. After that, the requester will generate its full private key and corresponding public key for the proposed certificateless authentication mechanism. In detail, the requester \(CS\) or \(U_{i}\) sends \(AID_{i}\) to the KGC. The KGC retrieves \(AID_{i}\) from the real identity list; if \(AID_{i}\) exists in the list, the KGC executes the following operations to compute the requester's PSK:
* Randomly produces a value \(k_{i}\in Z_{q}^{*}\) as the system input.
* Calculates \(U_{i}=k_{i}P\), generates \(\theta_{i}=H_{1}(AID_{i},U_{i},\)\(P_{pub})\) and then produces \(\lambda_{i}=(k_{i}+\theta_{i}\beta)(mod\ q)\).
* Sends \(\{\lambda_{i},U_{i}\}\) to \(CS\) or \(U_{i}\) over a secret channel.
#### Iv-D4 Extract the USK
When \(CS\) or \(U_{i}\) receives \(\{\lambda_{i},U_{i}\}\), the participator produces its key pair according to the following operations (a combined sketch of the PSK and key extraction steps is given after this list):
* Calculate \(\theta_{i}^{*}\) = \(H_{1}(AID_{i},U_{i},P_{pub})\).
Fig. 2: The workflow of proposed CATFL. 1) System setup, 2) Identity anonymization, 3) Get the PSK, 4) Extract the USK, 5) CS configuration, 6) Signature/verification, 7) Aggregation/distribution, 8) Secure semantic transmission.
* Checks whether the equation \(\lambda_{i}P=U_{i}+\theta_{i}^{*}P_{pub}\) holds. If not, closes the current session; otherwise, proceeds to the next step.
* Randomly selects a secret value \(\mu_{i}\in Z_{q}^{*}\), computes \(X_{i}=\mu_{i}P\) and configures the public key as \(PK_{i}=\{X_{i},U_{i}\}\). Subsequently, these public keys will be shared with other CATFL entities.
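Continuing the same toy example (SHA-256 stands in for \(H_{1}\), and the scalar arithmetic is kept over the integers rather than modulo the group order \(q\), which does not affect the verification identity), the PSK generation and key extraction steps can be sketched as follows; this is an illustrative sketch rather than the authors' implementation.

```python
import hashlib, secrets

def H1(AID, U, Ppub):
    """Stand-in for H1: hash the pseudonym and curve points to a small integer."""
    digest = hashlib.sha256(repr((AID, U, Ppub)).encode()).digest()
    return int.from_bytes(digest, "big") % p   # reduced only to keep the toy numbers small

# KGC master key beta and public key P_pub = beta*G (from the system setup step).
beta = secrets.randbelow(p - 1) + 1
P_pub = mul(beta, G)

# KGC side: partial private key (PSK) for the pseudonym AID_i.
k_i = secrets.randbelow(p - 1) + 1
U_i = mul(k_i, G)
theta_i = H1(AID_i, U_i, P_pub)
lam_i = k_i + theta_i * beta                   # real scheme: (k_i + theta_i*beta) mod q

# Requester side: validate the PSK, then complete the key pair with a secret value mu_i.
assert mul(lam_i, G) == add(U_i, mul(theta_i, P_pub))   # lambda_i*P == U_i + theta_i*P_pub
mu_i = secrets.randbelow(p - 1) + 1
X_i = mul(mu_i, G)
PK_i = (X_i, U_i)                              # public key; the full private key is (mu_i, lam_i)
```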
In CATFL, a batch of \(AID_{i}\) and PSKs \(\{\lambda_{i},U_{i}\}\) will be pre-loaded into the mobile/edge devices and stored securely. Each user can utilize a unique \(AID_{i}\) and a PSK \(\{\lambda_{i},U_{i}\}\) to validate the identities of other entities. If the user exhausts its \(AID_{i}\) values, it can re-establish a connection with the TRA and replenish its stock of \(AID_{i}\) and \(\{\lambda_{i},U_{i}\}\) over a secure communication channel.
#### Iii-B5 CS configuration
The CS in CATFL will serve as a parameter server to provide model aggregation services. First, the server needs to initialize the weights of the global model and required model parameters (e.g., the total number of FL rounds, the total number of FL clients, and the participation rate of FL clients in each training round). It then activates all selected FL clients and broadcasts the initialized global model for local training.
#### Iii-B6 Signature/verification
The signature process is invoked by any \(CS\) or \(U_{i}\) to compute message/signature pairs; the signer executes the following operations:
* Randomly chooses \(a_{i}\in Z_{q}^{*}\), and then generates the message signature by computing \(A_{i}=a_{i}P\), \(h_{1,i}=H_{2}(m_{i},AID_{i},PK_{i},A_{i},P_{pub},t_{i})\) and \(h_{2,i}=H_{3}(m_{i},AID_{i},PK_{i},A_{i},P_{pub},h_{1,i})\). Therein, \(t_{i}\) denotes the current system timestamp.
* Calculates \(\eta_{i}=(a_{i}-h_{1,i}\mu_{i}-h_{2,i}\lambda_{i})(mod\ q)\).
* Configures the message signature as \(\Theta_{i}=\{\eta_{i},A_{i}\}\) and broadcasts \((m_{i},AID_{i},\theta_{i},PK_{i},\Theta_{i},t_{i})\) to other relational \(CS\) or \(U_{i}\).
The identity verification process is also invoked by any \(CS\) or \(U_{i}\), which aims to verify the validity of each CATFL entity. If yes, the receiver in CATFL can accept the semantic information and perform further actions. The identity verification process is shown below:
* Verifies the freshness of the timestamps \(T_{i}\) and \(t_{i}\). If either of them is not fresh, the received semantic information is discarded.
* Checks whether \(\theta_{i}\) is equal to \(H_{1}(AID_{i},U_{i},P_{pub})\). If not, discards the message; otherwise, continues to execute the further operations.
* Computes the hash values one by one \(h_{1,i}^{*}\) = \(H_{2}(m_{i},\)\(AID_{i},PK_{i},\)\(A_{i},P_{pub},t_{i})\), and \(h_{2,i}^{*}\) = \(H_{3}(m_{i},AID_{i},\)\(PK_{i},A_{i},P_{pub},h_{1,i}^{*})\) and \(A_{i}^{*}\) = \(\eta_{i}P+\)\(h_{1,i}^{*}X_{i}+h_{2,i}^{*}\)\(U_{i}+(h_{2,i}^{*}\)\(\theta_{i})P_{pub}\) respectively.
* Validates whether the value of \(A_{i}\) is equal to \(A_{i}^{*}\). If not, the semantic information is discarded; otherwise, the message is accepted.
Since CATFL is derived from [23], the proof of correctness can be found in that article.
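The signing and verification equations above can be checked numerically by continuing the toy example (SHA-256 stands in for \(H_{2}\) and \(H_{3}\), and a small helper handles multiplication by the possibly negative integer \(\eta_{i}\), which in the real scheme would simply be reduced modulo \(q\)); again, this is an illustrative sketch, not a reference implementation.

```python
import hashlib, secrets, time

def smul(k, P):
    """Scalar multiplication that also accepts negative integers."""
    if k < 0:
        Q = mul(-k, P)
        return None if Q is None else (Q[0], (-Q[1]) % p)
    return mul(k, P)

def H23(tag, *args):
    """Stand-in for H2/H3: domain-separated SHA-256 reduced to a small integer."""
    return int.from_bytes(hashlib.sha256((tag + repr(args)).encode()).digest(), "big") % p

# Signing (by CS or U_i) of a model update / semantic message m_i.
m_i, t_i = b"signed model update", int(time.time())
a_i = secrets.randbelow(p - 1) + 1
A_i = mul(a_i, G)
h1 = H23("H2", m_i, AID_i, PK_i, A_i, P_pub, t_i)
h2 = H23("H3", m_i, AID_i, PK_i, A_i, P_pub, h1)
eta_i = a_i - h1 * mu_i - h2 * lam_i           # real scheme: reduced mod q
Theta_i = (eta_i, A_i)                         # broadcast with (m_i, AID_i, theta_i, PK_i, t_i)

# Verification (by the receiver).
assert theta_i == H1(AID_i, U_i, P_pub)        # check the received theta_i
h1_v = H23("H2", m_i, AID_i, PK_i, A_i, P_pub, t_i)
h2_v = H23("H3", m_i, AID_i, PK_i, A_i, P_pub, h1_v)
A_star = add(add(smul(eta_i, G), smul(h1_v, X_i)),
             add(smul(h2_v, U_i), smul(h2_v * theta_i, P_pub)))
assert A_star == A_i     # eta_i*P + h1*X_i + h2*U_i + (h2*theta_i)*P_pub == A_i
```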
#### Iii-B7 Aggregation/distribution
The CATFL server first aggregates the model updates sent by each FL client and then sends back the aggregated global model to the FL clients for the next training round.
## IV Security Analysis and Evaluation
The CATFL can be proved to meet the aforementioned security requirements. Specifically, to prove the security of proposed CATFL, we design two types of games: 1) playing between a challenger \(\mathcal{C}\) and the adversaries \(\mathcal{A}_{1}\), 2) playing between a challenger \(\mathcal{C}\) and the adversaries \(\mathcal{A}_{2}\).
The detailed security analysis is shown as follows:
1. Message Authentication/Integrity: According to the features of ECC, the CATFL is proven secure under two types of adversarial models so that the FL model updates and transmitted semantics can not be forged.
2. Anonymity: The anonymous identity is used to hide the real identity of CATFL entities because the attacker cannot compute \(\alpha AID_{i,1}\).
3. Non-Repudiation: The TRA could trace the real identity \(RID_{i}\) of each CATFL entity through the pseudonym identity \(AID_{i}\). Therefore, in the proposed CATFL, no entity can deny the validity of the received signature.
4. Conditional Traceability: The real identity \(RID_{i}\) can be reconstructed only by the TRA via its master secret key \(\alpha\). If poisoned model updates are submitted by a malicious client, the TRA can trace this client.
5. Server impersonation attack: To make the backdoor persist in the FL encoder/decoder model of 6G semantic communications, the attacker would have to generate a message/signature pair \(\{m_{i},AID_{i},\theta_{i},PK_{i},\Theta_{i},t_{i}\}\) satisfying the equations: \[\theta_{i}=H_{1}(AID_{i},U_{i},P_{pub})\] (2) \[A_{i}=\eta_{i}P+h_{1,i}X_{i}+h_{2,i}(U_{i}+\theta_{i}P_{pub})\] (3) However, \(\mathcal{PPT}\) attackers cannot generate a valid message/signature pair because none of them can solve the underlying hard problem in ECC. To this end, CATFL can successfully resist the server impersonation attack.
6. Un-Linkability: In the identity anonymization phase, the TRA randomly picks \(r_{i}\) to generate a pseudonym for the requester. In addition, the requester randomly selects \(a_{i}\) to generate the message signature. It is not possible for \(\mathcal{PPT}\) attackers to connect any two pseudonym identities or two signatures to a specific user.
7. Modification attack: If a \(\mathcal{PPT}\) attacker modifies the message/signature pair \(\{m_{i},AID_{i},\theta_{i},PK_{i},\Theta_{i}\}\), the modification can be discovered by checking the equations \(\theta_{i}=H_{1}(AID_{i},U_{i},P_{pub})\) and \(A_{i}=\eta_{i}P+h_{1,i}X_{i}+h_{2,i}(U_{i}+\theta_{i}P_{pub})\). Therefore, our CATFL framework can resist the modification attack.
To bring out the efficiency of the proposed CATFL, we compare its communication latency with that of existing methods. Since CATFL contains two stages, 1) a training stage and 2) a testing stage, the communication latency is computed for both stages. The signature latency for each training round is denoted as \(T_{sign}\), and the corresponding
verification latency is denoted as \(T_{veri}\). Thus, for \(N\) training rounds, the communication cost of CATFL is \(N*(T_{sign}+T_{veri})\). For \(M\) messages exchanged between the sender and the receiver, the communication cost is \(M*(T_{sign}+T_{veri})\). As an instance, Fig. 3 shows the comparison of communication costs against certificate-based authentication. Another factor affecting the communication cost is the number of CATFL entities \(K=2*P+1\), where \(P\) is the number of sender/receiver pairs. Besides, since the FL-based system should obey the synchronous principle, there exists a waiting latency that obeys the Poisson distribution: \(\Delta T_{ca}\sim\pi(\lambda)\).
## V Conclusion
In this paper, we propose a Certificateless Authentication-based Trustworthy Federated Learning (CATFL) scheme for 6G semantic communications. With the certificateless authentication technique, the CATFL entities have two types of private key (i.e., partial and full), which are generated independently by the trusted authority and by the entity itself. Therefore, even an attacker who colludes with the KGC cannot obtain a participant's full secret key. On this basis, the proposed CATFL can prevent semi-honest servers from inferring the users' private data. The security analysis and evaluation show that the proposed CATFL offers higher security and lower communication cost against poisoning attacks and privacy leakage threats. In the future, we can also introduce more emerging techniques (e.g., blockchain) to construct secure and efficient 6G semantic communication systems.
## Acknowledgment
This research work is funded by Shanghai Sailing Program under Grant No. 21YF1421700, Defence Industrial Technology Development Program Grant No. JCKY2020604B004, and National Nature Science Foundation of China under Grant No. 62202303 and U20B2048.
|
2308.15141 | Uncertainty Aware Training to Improve Deep Learning Model Calibration
for Classification of Cardiac MR Images | Quantifying uncertainty of predictions has been identified as one way to
develop more trustworthy artificial intelligence (AI) models beyond
conventional reporting of performance metrics. When considering their role in a
clinical decision support setting, AI classification models should ideally
avoid confident wrong predictions and maximise the confidence of correct
predictions. Models that do this are said to be well-calibrated with regard to
confidence. However, relatively little attention has been paid to how to
improve calibration when training these models, i.e., to make the training
strategy uncertainty-aware. In this work we evaluate three novel
uncertainty-aware training strategies comparing against two state-of-the-art
approaches. We analyse performance on two different clinical applications:
cardiac resynchronisation therapy (CRT) response prediction and coronary artery
disease (CAD) diagnosis from cardiac magnetic resonance (CMR) images. The
best-performing model in terms of both classification accuracy and the most
common calibration measure, expected calibration error (ECE) was the Confidence
Weight method, a novel approach that weights the loss of samples to explicitly
penalise confident incorrect predictions. The method reduced the ECE by 17% for
CRT response prediction and by 22% for CAD diagnosis when compared to a
baseline classifier in which no uncertainty-aware strategy was included. In
both applications, as well as reducing the ECE there was a slight increase in
accuracy from 69% to 70% and 70% to 72% for CRT response prediction and CAD
diagnosis respectively. However, our analysis showed a lack of consistency in
terms of optimal models when using different calibration measures. This
indicates the need for careful consideration of performance metrics when
training and selecting models for complex high-risk applications in healthcare. | Tareen Dawood, Chen Chen, Baldeep S. Sidhua, Bram Ruijsink, Justin Goulda, Bradley Porter, Mark K. Elliott, Vishal Mehta, Christopher A. Rinaldi, Esther Puyol-Anton, Reza Razavi, Andrew P. King | 2023-08-29T09:19:49Z | http://arxiv.org/abs/2308.15141v1 | Uncertainty aware training to improve deep learning model calibration for classification of cardiac MR images
###### Abstract
Quantifying uncertainty of predictions has been identified as one way to develop more trustworthy artificial intelligence (AI) models beyond conventional reporting of performance metrics. When considering their role in a clinical decision support setting, AI classification models should ideally avoid confident wrong predictions and maximise the confidence of correct predictions. Models that do this are said to be well _calibrated_ with regard to confidence. However, relatively little attention has been paid to how to improve calibration when training these models, i.e. to make the training strategy _uncertainty-aware_. In this work we: (i) evaluate three novel uncertainty-aware training strategies with regard to a range of accuracy and calibration performance measures, comparing against two state-of-the-art approaches, (ii) quantify the data (aleatoric) and model (epistemic) uncertainty of all models and (iii) evaluate the impact of using a model calibration measure for model selection in uncertainty-aware training, in contrast to the normal accuracy-based measures. We perform our analysis using two different clinical applications cardiac resynchronisation therapy (CRT) response prediction and coronary artery disease (CAD) diagnosis from cardiac magnetic resonance (CMR) images. The best-performing model in terms of both classification accuracy and the most common calibration measure, expected calibration error (ECE) was the Confidence Weight method, a novel approach that weights the loss of samples to explicitly penalise confident incorrect predictions. The method reduced the ECE by 17% for CRT response prediction and by 22% for CAD diagnosis when compared to a baseline classifier in which no uncertainty-aware strategy was included. In both applications, as well as reducing the ECE there was a slight increase in accuracy from 69% to 70% and 70% to 72% for CRT response prediction and CAD diagnosis respectively. However, our analysis showed a lack of consistency in terms of optimal models when using different calibration measures. This indicates the need for careful consideration of performance metrics when training and selecting models for complex high risk applications in healthcare.
## 1 Introduction
Artificial intelligence (AI) techniques have the potential to be used as decision support tools in medicine, for example in applications such as diagnosing disease or predicting response to treatment. However, in recent years even though AI models have dominated medical research they are often developed without consideration of how the models will be used in clinical practice. Specifically, the lack of trust in automated predictions for clinical applications is a major barrier preventing clinical adoption (Linardatos et al., 2021). One way to provide a level of trust in AI predictions is to estimate uncertainty or classification confidence and provide end-users with the confidence score as well as the prediction.
Ideally, models used in a decision support setting should avoid confident wrong predictions and maximise the confidence of correct predictions. The concept of _model calibration_ refers to the relationship between the accuracy of predictions and their confidence: a well-calibrated model will be less confident when making wrong predictions and more confident when making correct predictions. With this in mind measures of model calibration have been proposed that provide a more complete understanding of the performance of a predictive
model by estimating how closely the predictive confidence matches its accuracy (Nixon et al., 2019). However, relatively little attention has been paid to how to optimise AI models with respect to model calibration. If training could be performed in such a way as to maximise both accuracy and calibration this would have the potential to provide a level of trust and reliability in model outputs (Gawlikowski et al., 2021; Sensoy et al., 2021).
In this paper we investigate training schemes that aim to improve model calibration as well as accuracy, with a specific focus on deep learning (DL) models. These schemes are collectively known as _uncertainty-aware_ training methods. We utilise recent advances in uncertainty estimation and uncertainty-aware training to investigate multiple methodologies to identify the best performing strategy with respect to accuracy and calibration. Specifically, we investigate two applications from cardiology: prediction of response to cardiac resynchronisation therapy (CRT) from pre-treatment cardiac magnetic resonance (CMR) images, and diagnosis of coronary artery disease (CAD), again from CMR images. In this introduction, we first focus on techniques utilised to estimate uncertainty, followed by a discussion on model calibration and then we move on to methods to develop uncertainty-aware AI models. Finally, we provide an overview of the contributions of our research.
### Uncertainty estimation
Two commonly identified sources of uncertainty are aleatoric uncertainty, which is caused by noisy data inputs and epistemic uncertainty, which is the uncertainty inherent in the model itself (Hullermeier and Waegeman, 2021). Aleatoric uncertainty is irreducible as the 'noise' present in the input data cannot be altered. Epistemic uncertainty, however, may be improved by providing more knowledge through larger and more varied datasets (Abdar et al., 2021; Gawlikowski et al., 2021).
Epistemic and aleatoric uncertainty estimates for DL models have predominantly been made using Bayesian approximation, ensemble methods and test-time augmentation (Abdar et al., 2021; Gawlikowski et al., 2021). Bayesian DL aims to model a distribution over the model's weights and is a favoured method for uncertainty estimation, as the modelling of an approximated posterior distribution provides the ability to produce more representative epistemic uncertainty estimations (Abdar et al., 2021). However, approximation methods are needed to compute the estimates, requiring more computational effort for both training and inference (Gawlikowski et al., 2021; Alizadehsani et al., 2021). Ensemble methods seek to train multiple models, each with different parameters, which are then used to generate multiple predictions from which the variance in predicted classes can be considered a measure of the epistemic uncertainty (Gawlikowski et al., 2021). For example, Mehrtash et al. (2020) demonstrated the use of ensembles to quantify a model's predictive uncertainty for medical image segmentation, using MR images of the brain, heart and prostate. Aleatoric uncertainty is often estimated by augmenting test data to generate multiple test samples and measuring the variance in the predictions whilst keeping the model architecture intact (Shorten and Khoshgoftaar, 2019). An example of this type of approach is Wang et al. (2018), who investigated test-time uncertainty estimation to improve automatic brain tumour segmentation tasks using random flipping and rotation, later expanding their research to epistemic uncertainty (Wang et al., 2019).
Currently, these approaches have predominantly been applied to medical image segmentation applications and less so to classification applications such as predicting diagnosis or treatment response (Abdar et al., 2021; Gawlikowski et al., 2021). Therefore, actively researching and improving uncertainty estimation techniques to identify better calibrated and easily scalable estimates will aid the development of trustworthy decision support tools for clinicians (Gawlikowski et al., 2021).
### Model calibration
Quantifying uncertainty of DL models has highlighted underlying problems of DL architectures. In particular, the Softmax probability function, often used as the final layer of a DL classification model has been shown to provide over-confident predictions for both in and out of distribution data (Kompa et al., 2021; Gawlikowski et al., 2021). Additionally, the hard label binary classification approach has been shown to have a negative impact by overestimating confidence in predictions, indicating that a softer approach may provide a more reliable method mimicking real world behaviour (Thulasidasan et al., 2019). Guo et al. (2017) highlighted that while developments have been made to produce a variety of architectures and uncertainty estimations for DL models, evaluating the calibration of models is necessary to understand and interpret probability estimates. To this end, Guo et al. (2017) proposed the Expected Calibration Error (ECE) metric, which partitions or bins confidences and utilises the accuracy and confidence estimates over all sets of samples in all bins to provide a measure of model calibration. Interestingly, Nixon et al. (2019) investigated the shortfalls of the ECE, noting that the choice of the number of bins has the potential to skew results. This influence is noticeable when visualised on an illustrative representation of the ECE referred to as a reliability diagram. Responding to this weakness, an alternate measure called the Adaptive ECE (AECE) has been suggested based on an adaptive binning strategy (Ding et al., 2020). The authors argue that AECE provides a robust approach to handle non-uniform confidence calibration and enables enhanced visual illustrations in reliability diagrams. Often calibration errors are also evaluated with the Maximum Calibration Error (MCE), which quantifies the largest deviation across the confidence bins (Guo et al., 2017). Overconfidence Error (OE) is an additional calibration performance metric which penalises predictions by the weight of the confidence but only when confidence exceeds accuracy (Thulasidasan et al., 2019). OE has been proposed as an appropriate calibration metric for high risk applications such as healthcare where it is important to avoid confident wrong predictions (Thulasidasan et al., 2019). Alternate metrics such as the Brier Score (BS) have been utilised in the literature and considered as a proper scoring rule, computed using uncertainty, resolution and reliability. However, the measure has the potential to under-penalise predictions with lower probabilities.
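As a concrete illustration of how the binned calibration measures discussed above are computed (this sketch is ours, not taken from the cited works), the following Python function estimates ECE and MCE from predicted confidences and correctness indicators using equal-width bins; the fixed bin count is exactly the design choice that adaptive binning (AECE) is intended to address.

```python
import numpy as np

def ece_mce(confidences, correct, n_bins=10):
    """Expected and Maximum Calibration Error with equal-width confidence bins.

    confidences: predicted probability of the predicted class, shape (N,)
    correct:     1 if the prediction was right, 0 otherwise, shape (N,)
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, mce = 0.0, 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        acc = correct[in_bin].mean()         # accuracy of the samples in this bin
        conf = confidences[in_bin].mean()    # mean confidence in this bin
        gap = abs(acc - conf)
        ece += in_bin.mean() * gap           # weighted by the fraction of samples in the bin
        mce = max(mce, gap)
    return ece, mce

# Toy example of an over-confident classifier.
conf = [0.95, 0.90, 0.85, 0.80, 0.70, 0.65, 0.60, 0.55]
corr = [1, 1, 0, 1, 0, 1, 0, 1]
print(ece_mce(conf, corr, n_bins=5))
```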
Despite the range of metrics presented, studies continue to investigate alternate, standardised and improved methods to understand and evaluate the calibration of DL models (Ashukha et al., 2020; Ovadia et al., 2019). To date, most of this research has focused on computer vision problems, and little work has evaluated the utility of these measures on real-world medical applications.
### Uncertainty-aware training
Uncertainty-aware training refers to methods that incorporate uncertainty information into the training of a DL model with the aim of improving its model calibration. Yu et al. (2019) provide an example of this type of approach, demonstrating how a DL model can learn to gradually exploit uncertainty information to provide more reliable predictions for a 3D left atrium segmentation application. Alternate approaches aim to directly target confident cases based on an acceptable user-risk level, for example the 'selective' image classification method proposed in Geifman and El-Yaniv (2017). Uncertainty estimates have been directly incorporated into the loss function of the model, as proposed by Ding et al. (2020) for a segmentation task. The outcomes demonstrated the ability to maximise performance on confident outputs and reduce overconfident wrong predictions. In our previous work (Dawood et al., 2021), we used a similar approach to Ding et al. (2020) and proposed, for the first time, an uncertainty-aware DL model for CRT response prediction, as a preliminary investigation to evaluate changes in predictive confidence. We used confidence bands estimated
at test time to highlight an improvement in the confidence of correct predictions and a reduction in confidence of incorrect predictions. Another group of methods has attempted to define differentiable loss terms that directly quantify model calibration (Krishnan and Tickoo, 2020; Karandikar et al., 2021). Alternately, building on recent work on Evidential DL (Sensoy et al., 2018, 2020), a Bayesian methodology was incorporated into Evidential DL by Sensoy et al. (2021). They utilised probability distributions to obtain uncertainty in predictions for each category/class and introduced new methods to handle the risk associated with incorrect predictions. The continued research and improvements within the field therefore highlight the need to incorporate uncertainty estimation when training DL models as it will likely become a vital component in high-risk applications such as diagnostic predictions in healthcare (Gawlikowski et al., 2021).
### Contributions
In this paper, we seek to perform a thorough investigation of uncertainty-aware DL training methods and evaluate them on two real-world clinical applications. Our contributions are:
1. We propose three novel uncertainty-aware training strategies (including the one proposed in our preliminary work (Dawood et al., 2021)), and compare them to two state-of-the-art methods from the literature.
2. We evaluate all models on two realistic medical imaging applications: CRT response prediction and CAD diagnosis, both from CMR images.
3. We use a wide range of calibration performance measures proposed in the literature, combined with a reliability diagram based on adaptive binning to understand the effects of different uncertainty-aware training methods.
4. We further quantify the performance of all models in terms of aleatoric and epistemic uncertainty.
5. We evaluate the impact of using a calibration-based model selection criterion on accuracy and calibration performance.
The paper is structured as follows. In Section 2 we describe our uncertainty-aware strategies and the comparative approaches. In Section 4 we present all experiments performed to evaluate and compare the different approaches with all results found in Section 5. Section 6 then discusses the findings, evaluates the outcomes and recommends future work towards cultivating trustworthy and calibrated predictive DL classification models.
## 2 Methods
In this Section we introduce the different uncertainty-aware and comparative strategies used and evaluated in the paper.
### Notation
Before presenting our novel and comparative approaches for uncertainty-aware training, we first define a common notation that will be used in the subsequent descriptions. Throughout, \(\mathbb{P}\) represents the probability of an event, \(A\cap B\) represents the intersection of set \(A\) and set \(B\), \(A\cup B\) represents their union, \(|A|\) denotes the cardinality of the set \(A\) and \(\tilde{A}\) its complement. Parameters of the network are denoted by lowercase letters: \(\theta\) denotes the trainable parameters, whereas \(\lambda\), \(w\) and \(\mu\) are hyperparameters. We denote by \(B\) the set of samples in a training batch. For a binary classifier \(f_{\theta}\) with trainable parameters \(\theta\), we define:
* \(G\subset B\) the samples labelled as ground truth positive (\(G\cup\tilde{G}=B\))
* \(P_{\theta}\subset B\) the samples classified by the model as positive (\(P_{\theta}\cup\tilde{P}_{\theta}=B\))
* \(\gamma_{\theta}=(P_{\theta}\cap G)\cup(\tilde{P}_{\theta}\cap\tilde{G})\) the samples correctly classified (\(\gamma_{\theta}\cup\tilde{\gamma}_{\theta}=B\))
* \(\nu_{\theta}\subset B\) the samples with an "uncertain" classification (based on their classification confidence) (\(\nu_{\theta}\cup\tilde{\nu}_{\theta}=B\))
* \(f_{\theta}:x\rightarrow[\mathbb{P}(x\in P_{\theta}),\mathbb{P}(x\in\tilde{P}_ {\theta})]\) (for a sample \(x\) from \(B\))
* \(r_{i}\) is the confidence (probability) of the model-predicted class for sample \(i\), i.e. \(r_{i}=max[\mathbb{P}(x_{i}\in P_{\theta}),\mathbb{P}(x_{i}\in\tilde{P}_{ \theta})]\)
* \(\varepsilon_{i}\) is the ground truth label of sample \(i\) (i.e. 1 for positive and 0 for negative)
### Baseline model
The diagram in Fig. 1 illustrates the architecture of the baseline classification model developed by Puyol-Anton et al. (2020) and Dawood et al. (2021), which was used as the framework to perform the experiments. The baseline model utilises CMR short axis (SA) image segmentations produced by a pre-trained U-net (Chen et al., 2020). These segmentations are used as input into a variational autoencoder (VAE), which during the training phase is tasked with reconstructing the segmentations frame-by-frame from the learned latent representations. Subsequently, a classifier is trained to make predictions from the concatenated VAE latent spaces of the time series of CMR SA segmentations.
The points at which aleatoric uncertainty and epistemic uncertainty are estimated (see Section 5.3) are shown in the dotted blocks in Fig. 1. In the literature, quantifying aleatoric uncertainty has often been performed using data augmentation at test time (Ayhan and Berens, 2018). In our work, we produced realistic augmentations for this purpose by using the U-net segmentation model to generate multiple segmentations which were inputted into the VAE/classifier to estimate aleatoric uncertainty. To quantify epistemic uncertainty we drew multiple samples from the learned VAE latent space.
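A minimal numpy sketch of this latent-space sampling is given below; the encoder output and the classifier head are hypothetical stand-ins, and the proportion of positive predictions (or their variance) across the drawn samples serves as the epistemic uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def epistemic_uncertainty(mu, sigma, classifier, n_samples=20):
    """Sample the VAE latent posterior and measure disagreement between predictions.

    mu, sigma : latent mean / standard deviation from the encoder, shape (latent_dim,)
    classifier: maps a latent vector to the probability of the positive class
    Returns the proportion of positive predictions across samples and its variance.
    """
    z = mu + sigma * rng.standard_normal((n_samples, mu.shape[0]))
    preds = np.array([classifier(z_k) > 0.5 for z_k in z], dtype=float)
    return preds.mean(), preds.var()

# Hypothetical stand-ins for a trained encoder output and classifier head.
mu, sigma = np.zeros(8), 0.5 * np.ones(8)
toy_classifier = lambda z: 1.0 / (1.0 + np.exp(-z.sum()))
proportion_positive, disagreement = epistemic_uncertainty(mu, sigma, toy_classifier)
```

The same sampling pattern, applied to the multiple U-net segmentations of an image rather than to latent samples, gives the aleatoric estimate described above.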
Formally, we define the loss function of the baseline model as comprising three terms:
\[\mathcal{L}_{B}(\theta)=\mathcal{L}_{R}(\theta)+\mathcal{L}_{KL}(\theta)+\mathcal{L}_{C}(\theta) \tag{1}\]
where \(\mathcal{L}_{R}\) is the VAE reconstruction loss, \(\mathcal{L}_{KL}\) is the Kullback-Leibler divergence term of the VAE and \(\mathcal{L}_{C}\) is the classification (cross-entropy) loss. Each uncertainty-aware training strategy augments this baseline loss with an additional term \(\alpha(\theta)\), weighted by the hyperparameter \(\lambda\):
\[\mathcal{L}(\theta)=\mathcal{L}_{B}(\theta)+\lambda\,\alpha(\theta) \tag{2}\]
For the first of our novel strategies, the added term compares pairs of correct and incorrect predictions of the same predicted class and penalises incorrect predictions whose confidence is not exceeded by that of correct predictions by at least a margin \(\mu\):
\[\alpha(\theta)=\sum_{S_{\theta}\in\{P_{\theta},\,\tilde{P}_{\theta}\}}\ \sum_{(x_{i},x_{j})\in(S_{\theta}\cap\tilde{\gamma}_{\theta})\times(S_{\theta}\cap\gamma_{\theta})}\max\left(0,\ r_{i}-r_{j}+\mu\right) \tag{3}\]
Here, the first sum is over the two prediction groups in a batch, samples classified as positive (\(P_{b}\)) or negative (\(\bar{P}_{b}\)), and in the second sum the (\(x_{i},x_{j}\)) are pairs of false positive (or negative) and true positive (or negative) samples.
Intuitively, Eq. (3) will evaluate all pairs of correct/incorrect positive/negative predictions in a training batch, and the terms will be positive when the incorrect prediction (\(i\)) has higher confidence than the correct one (\(j\)). If the correct one has higher confidence than the incorrect one by a margin of the hyperparameter \(\mu\) or more the terms will be zero. Note that in the _max_ term of Eq. (3), the probability of a correct prediction is subtracted from the probability of an _incorrect_ prediction.
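For illustration, a minimal PyTorch-style sketch of this pairwise term is given below, summing over all (incorrect, correct) pairs as in Eq. (3); the function name, tensor conventions and binary (negative/positive) class ordering are our own assumptions rather than a reference implementation.

```python
import torch

def paired_confidence_loss(probs, labels, margin=0.6):
    """Pairwise margin penalty between incorrect and correct predictions (cf. Eq. (3)).

    probs:  (N, 2) Softmax outputs (column 0: negative class, column 1: positive class).
    labels: (N,) ground-truth labels in {0, 1}.
    margin: the hyperparameter mu.
    """
    preds = probs.argmax(dim=1)
    loss = probs.new_zeros(())
    for cls in (0, 1):                               # predicted-negative and predicted-positive groups
        conf = probs[:, cls]                         # confidence in the predicted class `cls`
        wrong = (preds == cls) & (labels != cls)     # false positives (or false negatives)
        right = (preds == cls) & (labels == cls)     # true positives (or true negatives)
        if wrong.any() and right.any():
            # max(0, mu + P(incorrect) - P(correct)) over all (incorrect, correct) pairs
            diff = margin + conf[wrong].unsqueeze(1) - conf[right].unsqueeze(0)
            loss = loss + torch.clamp(diff, min=0).sum()
    return loss
```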
#### 2.3.2 Probability Loss
In our second novel method, we again adapted the baseline model loss function to more heavily penalise incorrect predictions with high confidence. We note that the standard cross entropy loss already penalises such cases. However, it is well-known that models trained with cross-entropy loss are prone to poor calibration (Guo et al., 2017), and this motivated the formulation of the Probability Loss approach:
\[\mathcal{L}_{P}(\theta)=\frac{1}{|G_{P}|}\sum_{x_{i}\in G_{P}}\mathbb{P}(x_{i}\in\bar{P}_{b})+\frac{1}{|G_{N}|}\sum_{x_{i}\in G_{N}}\mathbb{P}(x_{i}\in P_{b}) \tag{4}\]

where \(G_{P}\) and \(G_{N}\) denote the ground-truth positive and negative samples in the training batch.
As before, the developed loss term is added into the loss function of the model to follow the form in Eq. (2). The Probability Loss function differs from the approach described in Section 2.3.1 as the \(\mathbb{P}\) terms represent the class probabilities of the classifier (after the Softmax layer) for positive and negative _ground truth_ samples. Intuitively, this loss term penalises ground truth positive (negative) samples with high confidence in negative (positive) prediction. The terms are normalised by the number of samples for the positive and negative classes in the training batch.
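A corresponding sketch of Eq. (4), again with hypothetical names and assuming binary Softmax outputs, could look as follows.

```python
import torch

def probability_loss(probs, labels):
    """Mean probability assigned to the wrong class for each ground-truth class (cf. Eq. (4)).

    probs:  (N, 2) Softmax outputs (column 1 is the positive-class probability).
    labels: (N,) ground-truth labels in {0, 1}.
    """
    pos, neg = labels == 1, labels == 0
    loss = probs.new_zeros(())
    if pos.any():
        loss = loss + probs[pos, 0].mean()   # ground-truth positives predicted as negative
    if neg.any():
        loss = loss + probs[neg, 1].mean()   # ground-truth negatives predicted as positive
    return loss
```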
#### 2.3.3 Confidence Weight
An alternative solution to defining a new loss term is to add a weighting term to the existing classifier loss \(\mathcal{L}_{C}\) to penalise training samples with highly confident incorrect predictions. The weighting term is determined by first estimating the epistemic uncertainty of each prediction in the batch by sampling in the latent space of the VAE. Specifically, we randomly sampled 20 points from the VAE latent space and computed predictions for each one.
The prediction confidence was calculated as the proportion of positive predictions from these samples and we denote this by \(C_{i}\in[0,1]\) for sample \(i\). See Section 5.3 for further details of the epistemic uncertainty estimation. The weighting term for each sample in the batch was computed as follows:
\[\alpha_{i}^{\prime}=g_{i}\cdot(1-C_{i})+(1-g_{i})\cdot C_{i} \tag{5}\]
Here, \(\cdot\) denotes scalar multiplication and \(g_{i}\in\{0,1\}\) is the ground-truth label of sample \(i\). Intuitively, these weights will be high when making a confident wrong prediction, thus encouraging the model training to focus on minimising such cases. \(\alpha_{i}^{\prime}\) was then rescaled to produce the final sample weight \(\alpha_{i}\), ensuring that the weights would not drop below a pre-defined value \(w\):

\[\alpha_{i}=(1-w)\cdot\alpha_{i}^{\prime}+w \tag{6}\]
Note that \(w\) is a hyperparameter that is optimised during the training of the classifier.
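The weighting in Eqs. (5)-(6) can be sketched as below, assuming the per-sample confidences \(C_{i}\) have already been estimated from latent-space samples (see Section 5.3); the resulting weights would then multiply the per-sample classifier loss before averaging. Names are illustrative.

```python
import torch

def confidence_weights(latent_votes, labels, w=1.0):
    """Per-sample weights for the classifier loss (cf. Eqs. (5)-(6)).

    latent_votes: (S, N) hard predictions in {0, 1} from S random draws in the VAE latent space.
    labels:       (N,) ground-truth labels in {0, 1}.
    w:            weight hyperparameter (lower bound of the rescaled weights).
    """
    labels = labels.float()
    C = latent_votes.float().mean(dim=0)             # fraction of positive predictions per sample
    alpha = labels * (1 - C) + (1 - labels) * C      # Eq. (5): large for confident wrong predictions
    return (1 - w) * alpha + w                       # Eq. (6): rescaled weights
```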
### Comparative approaches
We now present three existing methods proposed in the recent literature as our comparative uncertainty-aware training strategies.
#### 2.4.1 Accuracy versus Uncertainty Loss
Recent work by Krishnan and Tickoo (2020) utilised the relationship between accuracy and uncertainty to develop a loss function aimed at improving model calibration. A differentiable Accuracy versus Uncertainty (AvUC) loss function was developed by placing each prediction into one of four categories; accurate and certain, accurate and uncertain, inaccurate and certain and lastly inaccurate and uncertain. Utilising these four categories, a differentiable loss term was defined as follows.
\[\mathcal{L}_{\text{AvUC}}(\theta)=\log\left(1+\frac{n_{AU}+n_{IC}}{n_{AC}+n_{IU}}\right) \tag{7}\]

Here, \(n_{AC}\), \(n_{AU}\), \(n_{IC}\) and \(n_{IU}\) denote the (soft) numbers of predictions in the batch that are accurate and certain, accurate and uncertain, inaccurate and certain, and inaccurate and uncertain, respectively.
Similar to the methods proposed in Sections 2.3.1 and 2.3.2, the final loss function follows the same structure as the baseline model and the AvUC loss is added to the total loss, weighted by the hyperparameter \(\lambda_{N}\) as presented in Eq. (2).
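For intuition, the ratio inside Eq. (7) can be illustrated with hard counts as below; note that the published loss replaces these counts with differentiable, confidence- and uncertainty-weighted proxies, so this sketch (with names of our choosing) is not the reference implementation.

```python
import torch

def avuc_ratio_loss(probs, labels, uncert, u_th=0.5, eps=1e-8):
    """Hard-count illustration of the accuracy-versus-uncertainty loss (cf. Eq. (7)).

    probs:  (N, 2) Softmax outputs; labels: (N,) ground-truth labels.
    uncert: (N,) per-sample uncertainty (e.g. predictive entropy).
    u_th:   threshold separating certain from uncertain predictions.
    """
    acc = probs.argmax(dim=1) == labels
    cert = uncert < u_th
    n_ac = (acc & cert).sum().float()      # accurate and certain
    n_au = (acc & ~cert).sum().float()     # accurate and uncertain
    n_ic = (~acc & cert).sum().float()     # inaccurate and certain
    n_iu = (~acc & ~cert).sum().float()    # inaccurate and uncertain
    return torch.log(1 + (n_au + n_ic) / (n_ac + n_iu + eps))
```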
#### 2.4.2 Soft ECE loss function
Karandikar et al. (2021) extended research performed by Krishnan and Tickoo (2020), leveraging on the approach of a differentiable loss function to improve calibration. However, here they investigated the ECE measure as a differentiable loss. To implement the loss they
Figure 1: Diagram showing the architecture of the baseline VAE/classification model developed by Puyol-Antón et al. (2020) and Dawood et al. (2021) together with the stages at which uncertainty estimates are made. The CMR SA images are segmented using a DL-based model, and these segmentations act as the inputs to the VAE. The classification is performed in the latent space of the VAE and the points at which aleatoric uncertainty and epistemic uncertainty were estimated are shown using dotted blocks.
introduced a soft binning function scaled with a _soft binning temperature_, \(T\) (see Section 4.3). Below we define this loss function using our notation, we refer the reader to Karandikar et al. (2021) for a full explanation using the original notation.
\[\mathcal{L}_{\text{SECE}}(\theta,M)=\left(\sum_{j=1}^{M}\frac{|B_{j}|}{|B|}\,\big{|}A_{j}-R_{j}\big{|}^{p}\right)^{1/p} \tag{8}\]
Here, we use \(B_{j}\) to denote the (soft) set of samples that fall into confidence bin \(j\), \(M\) represents the number of bins and \(|B|\) the total number of samples; the soft bin memberships are computed using the soft binning function of Karandikar et al. (2021), scaled by the temperature \(T\). \(A_{j}\) represents the average accuracy within bin \(j\) (i.e. the proportion of \(B_{j}\) that are correctly classified) and \(R_{j}\) represents the average confidence within bin \(j\) (i.e. the average of the sample confidences \(r_{i}\) within \(B_{j}\)). The term \(p\) is the order of the soft binning function.
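A simplified differentiable sketch of Eq. (8) is shown below; the Gaussian-kernel membership used here is a stand-in for the exact soft binning function of Karandikar et al. (2021), and all names are ours.

```python
import torch

def soft_ece_loss(conf, correct, n_bins=15, T=0.1, p=1, eps=1e-8):
    """Soft-binned ECE-style loss (cf. Eq. (8)).

    conf:    (N,) predicted-class confidences (differentiable).
    correct: (N,) 1.0 for correct predictions, 0.0 otherwise.
    T:       soft binning temperature; p: order of the norm.
    """
    centres = (torch.arange(n_bins, dtype=conf.dtype) + 0.5) / n_bins
    # soft membership of each sample to each bin; smaller T gives sharper bins
    member = torch.softmax(-((conf.unsqueeze(1) - centres.unsqueeze(0)) ** 2) / T, dim=1)  # (N, M)
    mass = member.sum(dim=0) + eps                                # soft |B_j|
    A = (member * correct.unsqueeze(1)).sum(dim=0) / mass         # soft per-bin accuracy A_j
    R = (member * conf.unsqueeze(1)).sum(dim=0) / mass            # soft per-bin confidence R_j
    weights = mass / conf.shape[0]                                # |B_j| / |B|
    return ((weights * (A - R).abs() ** p).sum()) ** (1.0 / p)
```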
#### 2.4.3 Maximum Mean Calibration Error loss function
Kumar et al. (2018) utilised a reproducing kernel Hilbert space (RKHS) approach with a differentiable loss function to improve calibration, which they termed the Maximum Mean Calibration Error (MMCE).
\[\begin{split}\mathcal{L}_{\text{MMCE}}(\theta)&=\sum_{(x_{i},x_{j})\in\mathcal{I}\times\mathcal{I}}\frac{r_{i}\,r_{j}\,k(r_{i},r_{j})}{\left(|B|-|\mathcal{C}|\right)^{2}}\\ &+\sum_{(x_{i},x_{j})\in\mathcal{C}\times\mathcal{C}}\frac{(1-r_{i})\,(1-r_{j})\,k(r_{i},r_{j})}{|\mathcal{C}|^{2}}\\ &-2\sum_{(x_{i},x_{j})\in\mathcal{I}\times\mathcal{C}}\frac{r_{i}\,(1-r_{j})\,k(r_{i},r_{j})}{\left(|B|-|\mathcal{C}|\right)\,|\mathcal{C}|}\end{split} \tag{9}\]
Here, \(k\) represents the reproducing kernel of the Hilbert space, \(r_{i}\) the confidence of the prediction for sample \(x_{i}\), and \(\mathcal{C}\) and \(\mathcal{I}\) the sets of correctly and incorrectly classified samples in the batch \(B\); all other terms are as defined in Section 2.1. Further detailed derivations and explanations using the original notation are provided in Kumar et al. (2018).
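Below is a sketch of how the (squared) MMCE of Eq. (9) could be computed with a Laplacian kernel over confidences; the kernel bandwidth of 0.4 follows the value suggested by Kumar et al. (2018), while the function and variable names are our own.

```python
import torch

def mmce_loss(probs, labels, width=0.4):
    """Kernel-based calibration penalty (cf. Eq. (9))."""
    conf, preds = probs.max(dim=1)                # confidence and predicted class per sample
    correct = (preds == labels).float()
    wrong = 1 - correct
    k = torch.exp(-torch.abs(conf.unsqueeze(1) - conf.unsqueeze(0)) / width)   # Laplacian kernel
    n_c = correct.sum().clamp(min=1.0)            # number of correct predictions
    n_i = wrong.sum().clamp(min=1.0)              # number of incorrect predictions
    t_ii = ((conf * wrong).unsqueeze(1) * (conf * wrong).unsqueeze(0) * k).sum()
    t_cc = (((1 - conf) * correct).unsqueeze(1) * ((1 - conf) * correct).unsqueeze(0) * k).sum()
    t_ic = ((conf * wrong).unsqueeze(1) * ((1 - conf) * correct).unsqueeze(0) * k).sum()
    return t_ii / n_i ** 2 + t_cc / n_c ** 2 - 2 * t_ic / (n_i * n_c)
```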
## 3 Materials
We performed two experiments utilising different materials for each. The first experiment focused on response prediction for CRT patients, and the second on diagnosis of CAD. See Table 1 for a summary of the data used in each experiment, which are further described below. Both experiments utilised CMR images as model inputs. In both, we train and evaluate the baseline model featuring a segmentation network followed by a VAE and classifier, and compare this with six different uncertainty-aware versions of the same model.
### CRT response prediction model
We used two databases to train and evaluate our baseline and uncertainty-aware CRT response prediction models: (i) CMR SA stacks of 10,000 subjects (a mix of healthy and cardiovascular disease patients) from the UK Biobank (UKBB) dataset (Petersen et al., 2015) and (ii) a database from the clinical imaging system of Guy's and St Thomas' NHS Foundation Trust (GSTFT) consisting of 20 heart failure (HF) patients and 73 CRT patients. The UKBB database was utilised to train the VAE, the HF patients for fine-tuning the segmentation model and the VAE and the CRT patients were used to train and evaluate the VAE and classifier. Further details are provided in Section 4.1.
Details of the UKBB database are provided in Section 3.2. For the GSTFT database, all 73 CRT patients met the conventional criteria for CRT patient selection, chosen using current clinical guidelines based on New York Heart Association classification, left ventricular ejection fraction, QRS duration, the type of bundle branch block and etiology of cardiomyopathy and atrial rhythm (Members et al., 2013). CMR imaging was performed prior to CRT and the CMR multi-slice SA stack was used in this study. The Siemens Aera 1.5T, Siemens Biograph mMR 3T, Philips 1.5T Ingenia and Philips 1.5T and 3T Achieva scanners were used to perform CMR imaging. The typical slice thickness was 8-10 mm, in-plane resolution was between \(0.94\times 0.94\) mm\({}^{2}\) and \(1.5\times 1.5\) mm\({}^{2}\) and the temporal resolution was 13-31 ms/frame. Using post-CRT echocardiography images (at 6 month follow up), a positive response was defined as a 15% reduction in left ventricular (LV) end-systolic volume. The HF patients had similar CMR imaging details to the CRT patients.
For this experiment, for all datasets the top three slices of the SA stack were employed as the input to the models described in Section 2. Ideally all slices should be utilised but for computational efficiency we only used three slices for prediction. We chose the basal to mid slices as these slices exhibit most myocardial deformation throughout contraction (Jung et al., 2006). All slices were resampled in-plane to a voxel size of \(1.25\times 1.25\) mm, cropped to \(80\times 80\) pixels, and temporally resampled to \(T=25\) time samples as per the same process utilised by Puyol-Anton et al. (2020), before being used for training/evaluation of the models.
### CAD diagnosis model
For the CAD diagnosis model all images were extracted from the UKBB. Images were obtained on a 1.5 T MRI scanner (MAGNETOM Aera, Siemens Healthcare, Erlangen, Germany). A typical CMR dataset consists of 10 SA image slices with a matrix size of \(208\times 187\) and a slice thickness of 8 mm, covering both ventricles from the base to the apex. The in-plane image resolution is \(1.8\times 1.8\) mm\({}^{2}\), the slice gap is 2 mm, with a repetition time of 2.6 ms and an echo time of 1.10 ms. Each cardiac cycle consists of \(T=50\) frames, with further details on the image acquisition protocol described in Petersen et al. (2015). For the CAD diagnosis experiment, we utilised 16022 UKBB subjects (14384 healthy and 1638 CAD subjects). As coronary occlusions can occur throughout the coronary tree we chose the middle three slices of the SA stack to cover the base, mid and apical portions of the heart for all subjects, similar to Clough et al. (2019). All slices were cropped to \(80\times 80\) pixels and did not require any re-sampling. To follow a similar approach to the CRT experiment, only 25 time frames were utilised for the training of the models.
### Ethics
Institutional ethics approval was obtained for use of the clinical data and all patients consented to the research and for the use of their data. All relevant protocols were adhered to in order to retrieve and store the patient data and related images.
## 4 Experiments
Below we describe the details of our two experiments. Please refer to Table 1 for summaries of the data used in each.
### Experiment 1 - CRT response prediction
In the first experiment the task was to predict the binary response to CRT (positive/negative) using the pre-treatment CMR data. In order to train the framework for this task the following steps were performed:
* _Fine-tune the pre-trained segmentation model_: The segmentation model (Chen et al., 2020) was pre-trained using UKBB CMR data so to make it robust to the clinical GSTFT data it was fine-tuned using CMR data from the 20 GSTFT HF patients. The fine-tuning was carried out using 300 manually segmented CMR SA slices (multiple slices/time points from the 20 CMR scans).
* _Segment the UKBB and GSTFT CRT CMR data:_ The fine-tuned segmentation model was used to automatically segment all frames of the 10,000 UKBB subjects as well as the 73 GSTFT CRT subjects. (Note that this cohort of 10,000 UKBB subjects was separate from the UKBB data used to initially train the segmentation model.)
* _Train the VAE_: The VAE was pre-trained using the U-net segmented UKBB data and fine-tuned using the ground truth segmentations of the GSTFT HF data.
* _Train the VAE_ and _classifier together_: We then used the U-net segmented CRT data to train the VAE and CRT classifier for 300 epochs similar to Puyol-Anton et al. (2020). For training each uncertainty-aware method, the fine-tuned VAE model was used, the uncertainty-aware loss function or weighting introduced and then both the VAE and CRT classifier trained for 300 epochs using the U-net segmentations of the 73 CRT patients.
In this experiment, the framework was trained using a faster learning rate for the VAE (\(10^{-2}\)) and a slower rate for the CRT classifier (\(10^{-8}\)), with a batch size of 8. For all approaches, the final model was selected as the one with the highest validation balanced accuracy (BACC) over the classifier training epochs.
Both the CRT baseline and uncertainty-aware models were trained and evaluated using a 5-fold nested cross validation. For each of the 5 outer folds, an inner 2-fold cross validation was performed with grid search hyperparameter optimisation over a range of values. In these inner folds, the set of hyperparameters yielding the highest validation BACC was selected. The optimal hyperparameters were used to train a model (using all training data for that outer fold) and then applied to the held-out (outer) fold. This process was repeated for all outer folds. In this way, hyperparameter optimisation was performed using training data and the model was always applied to completely unseen data. Note also that the CRT data had not been used in pre-training either the segmentation model or the VAE. The hyperparameters optimised using grid search for the CRT response prediction model are presented (on the left) in Table 2. The hidden layer size in the classifier was also optimised as a hyperparameter but all methods found an optimal size of 32.
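Schematically, the nested cross-validation protocol can be summarised as follows, where `train_fn` and `eval_fn` are hypothetical callables standing in for training the VAE/classifier with a given hyperparameter set and computing validation BACC.

```python
from sklearn.model_selection import KFold, ParameterGrid

def nested_cv(X, y, param_grid, train_fn, eval_fn, outer_k=5, inner_k=2, seed=0):
    """5-fold outer / 2-fold inner nested cross-validation with grid search on BACC."""
    outer_scores = []
    for tr, te in KFold(outer_k, shuffle=True, random_state=seed).split(X):
        best_params, best_bacc = None, -1.0
        for params in ParameterGrid(param_grid):
            inner_baccs = []
            for in_tr, in_val in KFold(inner_k, shuffle=True, random_state=seed).split(X[tr]):
                model = train_fn(params, X[tr][in_tr], y[tr][in_tr])
                inner_baccs.append(eval_fn(model, X[tr][in_val], y[tr][in_val]))
            mean_bacc = sum(inner_baccs) / len(inner_baccs)
            if mean_bacc > best_bacc:
                best_bacc, best_params = mean_bacc, params
        # retrain on all outer-fold training data with the selected hyperparameters
        model = train_fn(best_params, X[tr], y[tr])
        outer_scores.append(eval_fn(model, X[te], y[te]))   # evaluate on the unseen outer fold
    return outer_scores
```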
### Experiment 2 - CAD diagnosis
In this experiment the task was to diagnose (positive/negative) CAD from CMR images. A similar training procedure was followed as in Experiment 1, i.e.
* _Segment the UKBB CMR data_: First, the U-net segmentation model (Chen et al., 2020) (pre-trained on the separate UKBB cohort as in Experiment 1) was used to segment the 16,022 UKBB CMR stacks. Note that no fine-tuning was necessary for this experiment as it used only UKBB data.
* _Train the VAE_: The VAE was pre-trained using the segmented UKBB CMR data for 60 epochs.
* _Train the VAE and classifier together_: The classifier was introduced and trained for a further 35 epochs. For training the uncertainty-aware methods, the trained VAE was used and the classifier trained for an additional 35 epochs.
In this experiment the framework was trained using the same learning rate for the VAE and classifier (\(10^{-5}\)) with a batch size of 25. As for the CRT experiment, the highest validation BACC was used for model selection. For validation a single training/validation/test split of 11535/1282/3205 subjects was employed (i.e. 16022 subjects in total, comprising 14384 healthy and 1638 CAD subjects, as detailed in Section 3.2). The same hyperparameters from the CRT application were optimised for the CAD diagnosis model using grid search. The final hyperparameters are presented (on the right) in Table 2. Similar to the CRT classifier all methods had an optimal hidden layer of size 32 across all strategies.
### Additional hyperparameters for comparative approaches
In addition to the hyperparameters in Table 2, the AvUC loss function utilised hyperparameters stated in the paper by Krishnan and Tickoo (2020). Specifically, a warm up strategy was employed, starting with the uncertainty threshold set to 1 and then updated every epoch after the first 3 epochs. The additional parameters utilised for the Soft ECE loss function were the same as those stated in Karandikar et al. (2021). We fixed the number of bins \(M\) to keep the search space manageable at a value of 15 and varied the _soft binning temperature_ or \(T\) value to obtain an optimal outcome at \(T=0.1\) for CRT response prediction and \(T=0.01\) for CAD diagnosis. The parameter \(T\) is described in detail in the original paper, Karandikar et al. (2021) and is utilised as a parameter to scale the bins or soften them.
### Implementation details
All models were trained on a NVIDIA A6000 48 GB GPU using an Adam optimiser. All data for both experiments were augmented with random flipping and rotations. The code and implementation details are available for download and use.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline Method & \multicolumn{5}{c}{CRT} & \multicolumn{5}{c}{CAD} \\ \cline{2-11} & \(\lambda_{KL}\) & \(\lambda_{C}\) & \(\lambda_{N}\) & \(\mu\) & \(w\) & \(\lambda_{KL}\) & \(\lambda_{C}\) & \(\lambda_{N}\) & \(\mu\) & \(w\) \\ \hline
1. Baseline model & 0.001 & 3 & – & – & – & 0.1 & 1.3 & – & – & – \\
2. Paired Confidence Loss & 0.001 & 1.8 & 1 & 0.6 & – & 0.1 & 1.5 & 0.4 & 0.8 & – \\
3. Probability Loss & 0.1 & 2 & 0.5 & – & – & 0.1 & 0.6 & 1.2 & – & – \\
4. Confidence Weight & 0.001 & 2 & – & – & 1 & 0.001 & 1.5 & – & – & 2 \\
5. Accuracy versus Uncertainty Loss & 0.001 & 2 & 2 & – & – & 0.001 & 2 & 3 & – & – \\
6. Soft ECE Loss & 0.001 & 1.5 & 1 & – & – & 0.001 & 0.6 & 1.5 & – & – \\
7. MMCE Loss & 0.001 & 3 & 10 & – & – & 0.001 & 3 & 2 & – & – \\ \hline \end{tabular}
\end{table}
Table 2: Summary of the optimal hyperparameters achieved for the baseline and each uncertainty-aware strategy for the task of CRT response prediction (left) and for the task of CAD diagnosis (right). We refer the reader to Section 2, where all parameter descriptions are presented.
\begin{table}
\begin{tabular}{l c c c c c c} \hline Dataset & \multicolumn{3}{c}{CRT} & \multicolumn{3}{c}{CAD} \\ \cline{2-7} & Segmentation\({}^{\star}\) & VAE & Classifier & Segmentation\({}^{\star}\) & VAE & Classifier \\ \hline
1. UKBB 10,000 subjects & & ✓ & & & & \\
2. UKBB 16,022 subjects (CAD experiment) & & & & & ✓ & ✓ \\
3. GSTFT 73 CRT subjects & & ✓ & ✓ & & & \\
4. GSTFT 20 HF subjects & ✓ (fine-tuned) & ✓ (fine-tuned) & & & & \\ \hline \end{tabular}
\end{table}
Table 1: Summary of datasets used in training/evaluating the different models for the task of CRT response prediction (left) and for the task of CAD diagnosis (right).
## 5 Experimental results
### Evaluation metrics
In our work we present a number of performance metrics to evaluate our uncertainty-aware strategies. First, we utilise the conventional classification performance measures: sensitivity, specificity and BACC (Carrington et al., 2021). Second, we include the ECE value (Guo et al., 2017) as a measure of model calibration. The confidence used when calculating ECE was the predicted probability after the Softmax layer. A set number of confidence bins was chosen and the average accuracy achieved by the model for all samples that fall into each confidence bin was computed. We then calculate the ECE as follows:
\[ECE=\sum_{m=1}^{M}\frac{|\beta_{m}|}{N}\bigg{|}acc(\beta_{m})-conf(\beta_{m})\bigg{|} \tag{10}\]

In Eq. (10), the confidences are grouped into \(M=15\) bins, \(\beta_{m}\) is the set of samples whose predictions fall into bin \(m\), and \(N\) is the total number of samples across all the bins, with corresponding accuracies (\(acc\)) and confidences (\(conf\)) (Guo et al., 2017).
Our next calibration measure is the Overconfidence Error (OE), which aims to quantify confident wrong predictions and is computed as follows:
\[OE=\sum_{m=1}^{M}\frac{|\beta_{m}|}{N}\Big{[}conf(\beta_{m})\cdot\max\big{(}conf(\beta_{m})-acc(\beta_{m}),0\big{)}\Big{]} \tag{11}\]

Once again the Softmax confidences are grouped into \(M=15\) bins, \(\beta_{m}\) is the set of samples whose predictions fall into bin \(m\), and \(N\) is the total number of samples across all the bins, with corresponding accuracies (\(acc\)) and confidences (\(conf\)) (Thulasidasan et al., 2019).
Our third calibration measure is the Maximum Calibration Error (MCE), which is based on the ECE equation but finds the maximum calibration error across the bins (Guo et al., 2017).
\[MCE=\max_{m=(1,...,M)}\bigg{|}acc(\beta_{m})-conf(\beta_{m})\bigg{|}\]
Our final calibration measure is the Brier Score (BS), which is a cost function that evaluates the accuracy of probabilistic predictions, using the prediction probability from the Softmax layer, as presented in Eq. (12).
\[BS=\frac{1}{N}\sum_{j=1}^{N}(p_{j}-o_{j})^{2} \tag{12}\]
Here, \(N\) represents the number of samples, \(p_{j}\) the predicted probability and \(o_{j}\) the ground truth one-hot encoded label for sample \(j\). A low BS indicates a well calibrated model.
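For reference, the metrics in Eqs. (10)-(12) can be computed from held-out predictions as sketched below, assuming equal-width confidence bins; the function and argument names are ours.

```python
import numpy as np

def calibration_metrics(conf, correct, probs=None, onehot=None, n_bins=15):
    """ECE, OE and MCE (Eqs. (10)-(11) and the MCE definition), plus the Brier score (Eq. (12)).

    conf:    (N,) predicted-class confidences (max Softmax probability).
    correct: (N,) 1 for correct predictions, 0 otherwise.
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, oe, mce = 0.0, 0.0, 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if not in_bin.any():
            continue
        acc_b, conf_b = correct[in_bin].mean(), conf[in_bin].mean()
        gap = abs(acc_b - conf_b)
        ece += in_bin.mean() * gap                               # Eq. (10)
        oe += in_bin.mean() * conf_b * max(conf_b - acc_b, 0.0)  # Eq. (11)
        mce = max(mce, gap)                                      # maximum calibration error
    bs = None
    if probs is not None and onehot is not None:
        bs = np.mean(np.sum((probs - onehot) ** 2, axis=1))      # Eq. (12), summed over classes
    return ece, oe, mce, bs
```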
### Evaluation of uncertainty-aware models
#### 5.2.1 Accuracy
The performances in terms of classification accuracy of each uncertainty-aware model on both the CRT response prediction and CAD diagnosis tasks are presented in Table 3. Analysing the results we can see that the Confidence Weight term produced the highest test BACC for both tasks. McNemar's non-parametric test was used to test if the baseline classifier versus each of the uncertainty-aware classifiers had statistically significantly different classification performances at a significance level of 0.05. For the CAD model there was a significant difference across all strategies, indicated with asterisks, but for the CRT response models (which had a smaller test set) the tests indicated no statistically significant differences.
#### 5.2.2 Calibration
All calibration measures computed are presented in Table 4 for the CRT response prediction and CAD diagnosis models respectively, with all best performing metrics indicated in bold. The experiments were run three times with different random weight initialisations and the mean and standard deviation of all metrics are shown.
For all metrics, a lower score implies a better calibrated model. The most widely used calibration metric in the literature has been the ECE. The results indicate that the Confidence Weight term reduced the ECE measure the most on both the CRT and CAD predictive models. However, this conclusion is not as clear when considering the other calibration metrics, with all tested models (including the baseline) performing best according to at least one metric for one experiment. However, we note that the results for the CRT experiment might be less reliable due to the smaller test set size.
To visualise the calibration performance of the different models, we present reliability diagrams in Figs. 2 and 3 for our larger cohort of CAD subjects, using adaptive binning. The reliability diagram plots accuracy against confidence and a perfectly calibrated model would have a line close to the identity. We can see that the Confidence Weight model (Fig. 3d) shows the most improvement across the confidence bands, however improvement is still lacking in the high confidence bands.
### Uncertainty quantification
To further understand the effect of uncertainty-aware training on model calibration we now estimate the aleatoric and epistemic uncertainties of our different models. The specific points at which uncertainty was estimated are illustrated in Fig. 1. To estimate the aleatoric uncertainty we generated multiple plausible segmentation inputs to the VAE using inference-time dropout in the segmentation model with probability=0.2, similar to Dawood et al. (2021). Aleatoric uncertainty was then estimated using the prediction of the original data's segmentations and those from 19 additional segmentation sets generated in this way, i.e. the original and 19 additional segmentations were propagated through the VAE and classifier. We note that using dropout
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Method & \multicolumn{3}{c}{CRT} & \multicolumn{3}{c}{CAD} \\ \cline{2-7} & SEN (\%) & SPEC (\%) & BACC (\%) & SEN (\%) & SPEC (\%) & BACC (\%) \\ \hline
1. Baseline model & 73.3 & 64.3 & 68.8 & 75.0 & 65.3 & 70.0 \\
2. Paired Confidence Loss & 57.8 & 78.6 & 68.2 & 61.9 & 78.3 & 70.1\({}^{*}\) \\
3. Probability Loss & 68.9 & 64.3 & 66.6 & 79.3 & 58.1 & 68.7\({}^{*}\) \\
4. Confidence Weight & 62.2 & 78.6 & 70.4 & 73.5 & 70.2 & 71.9\({}^{*}\) \\
5. AvUC Loss & 68.9 & 71.4 & 70.2 & 66.2 & 68.4 & 67.3\({}^{*}\) \\
6. Soft ECE Loss & 75.6 & 57.1 & 66.3 & 58.8 & 79.2 & 69.0\({}^{*}\) \\
7. MMCE Loss & 71.1 & 67.9 & 69.5 & 64.3 & 75.4 & 70.0 \\ \hline \hline \end{tabular}
\end{table}
Table 3: The Sensitivity (SEN), Specificity (SPEC) and Balanced Accuracy (BACC) for the baseline classifier and the six uncertainty-aware methods for CRT response prediction (left) and CAD diagnosis (right). The best-performing CRT classifier for each model was chosen using the best validation BACC achieved over 300 epochs. The best-performing CAD classifier for each model was chosen using the best validation BACC achieved over 35 epochs. The strategy(s) with the highest BACC is indicated in bold. Those with an asterisk indicate statistical significance when compared to the baseline (McNemar’s test, 0.05 significance level), which was found only in the CAD experiment.
in the segmentation model approximates the epistemic uncertainty of the segmentation model. However, the multiple segmentations generated using this approximation are passed as inputs into the VAE and classification model, in this way they can be used to approximate the aleatoric uncertainty of the VAE/classifier.
The epistemic uncertainty of the baseline and uncertainty-aware model was estimated using random sampling in the latent space of the VAE. Again, the original embedding together with 19 additional random samples were used for estimating epistemic uncertainty. Increasing the number of samples from the latent space did not have a statistically significant difference on the estimate but did adversely affect execution time, therefore just 20 samples were used for epistemic uncertainty estimation. For both types of uncertainty, the outputs of the Softmax layer were used to compute prediction confidence/uncertainty as a percentage of positive predictions out of the 20 samples. The values for all metrics for the epistemic and aleatoric uncertainty for both CRT response and CAD diagnosis models are presented in Tables 5 and 6 respectively, with the lowest and optimal metric highlighted in bold.
Interestingly, we see different outcomes for CRT and CAD in the presence of epistemic and aleatoric uncertainty. The results indicate that the Confidence Weight model has a lower ECE than the baseline model for epistemic uncertainty, as can be seen in Table 5, but a similar and consistent outcome was not seen in the presence of aleatoric uncertainty for CRT. The CAD results in Table 6 highlight the same pattern, which may imply that the modelling of aleatoric uncertainty needs further refinement. However, one may also argue that the ECE value might not be an optimal metric to assess calibration performance as our application is in a high risk setting, and therefore the OE measure could be a more appropriate metric. Analysing the OE, a consistent outcome across both applications was not seen, although the results did match the behaviour shown in the reliability diagrams. One noticeable outcome for the CRT application was that the Confidence Weight model did seem to handle uncertainty with noticeably lower OE values.
### Comparison of validation accuracy and calibration metric-based model selection
In this section we continue to analyse our uncertainty-aware training methods by investigating two different approaches for model selection. We use only the CAD diagnosis application for this analysis due to its larger training and test set sizes.
Most current research utilises the highest validation accuracy to identify the best/optimal performing model (up until this point we have used BACC). However, in our work we aim to provide more evidence of the optimal uncertainty-aware model by investigating if different optimal models would be obtained if we instead used lowest validation ECE as the criterion for model selection. We chose ECE as it is still the most common and widely utilised calibration measure, even with its weaknesses (Roelofs et al., 2022). We illustrate how the use of ECE and BACC as model selection criteria can affect the optimal performing model by indicating the test ECE and test BACC in Figs. 4 and 5 respectively. Here, the orange bars indicate the result when using validation ECE as the model selection criterion and the blue bars are the results when using validation BACC as the selection criterion.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline Method & \multicolumn{5}{c}{CAD Epistemic} & \multicolumn{5}{c}{CAD Aleatoric} \\ \cline{2-11} & ECE \(\Downarrow\) & ACE \(\Downarrow\) & OE \(\Downarrow\) & MCE \(\Downarrow\) & BS \(\Downarrow\) & ECE \(\Downarrow\) & ACE \(\Downarrow\) & OE \(\Downarrow\) & MCE \(\Downarrow\) & BS \(\Downarrow\) \\ \hline
1. Baseline model & 0.46 & 0.61 & 0.17 & 0.76 & **0.21** & 0.51 & 0.58 & 0.30 & 0.83 & 0.43 \\
2. Paired Confidence Loss & 0.48 & 0.59 & 0.12 & 0.87 & 0.24 & 0.49 & 0.63 & 0.17 & 0.83 & 0.21 \\
3. Probability Loss & 0.48 & 0.60 & 0.21 & 0.72 & 0.23 & 0.51 & 0.58 & 0.29 & 0.79 & 0.40 \\
4. Confidence Weight & **0.41** & **0.50** & 0.29 & **0.71** & **0.21** & 0.51 & 0.62 & 0.42 & **0.77** & **0.13** \\
5. AvUC Loss & 0.46 & 0.56 & 0.13 & 0.78 & 0.23 & **0.47** & 0.58 & 0.19 & 0.79 & 0.3 \\
6. Soft ECE Loss & 0.45 & 0.53 & **0.09** & 0.84 & 0.25 & 0.48 & 0.62 & 0.15 & 0.87 & 0.23 \\
7. MMCE Loss & 0.45 & 0.55 & 0.15 & 0.95 & 0.25 & 0.48 & 0.58 & 0.19 & 0.83 & 0.34 \\ \hline \end{tabular}
\end{table}
Table 6: All calibration metrics (Expected Calibration Error (ECE), Adaptive ECE (ACE), Overconfidence Error (OE), Maximum Calibration Error (MCE) and Brier Score (BS)) computed for the CAD models for epistemic (left) and aleatoric uncertainty (right). The lowest and optimal metric across strategies is highlighted in bold. All metrics should ideally move to zero as models become more calibrated, as indicated by the \(\Downarrow\).
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline Method & \multicolumn{5}{c}{CRT} & \multicolumn{5}{c}{CAD} \\ \cline{2-11} & ECE \(\Downarrow\) & ACE \(\Downarrow\) & OE \(\Downarrow\) & MCE \(\Downarrow\) & BS \(\Downarrow\) & ECE \(\Downarrow\) & ACE \(\Downarrow\) & OE \(\Downarrow\) & MCE \(\Downarrow\) & BS \(\Downarrow\) \\ \hline
1. Baseline model & 0.22 \(\pm\) 0.03 & 0.19 \(\pm\) 0.03 & **0.007** \(\pm\) 0.004 & **0.73** \(\pm\) 0.05 & 0.26 \(\pm\) 0.02 & 0.52 \(\pm\) 0.02 & 0.52 \(\pm\) 0.02 & 0.19 \(\pm\) 0.02 & 0.74 \(\pm\) 0.03 & **0.40**\(\pm\) 0.02 \\
2. Paired Confidence Loss & 0.21 \(\pm\) 0.02 & 0.18 \(\pm\) 0.01 & 0.07 \(\pm\) 0.03 & 0.91 \(\pm\) 0.26 & 0.26 \(\pm\) 0.02 & 0.46 \(\pm\) 0.03 & 0.47 \(\pm\) 0.02 & **0.15**\(\pm\) 0.02 & **0.72**\(\pm\) 0.01 & **0.38**\(\pm\) 0.03 \\
3. Probability Loss & 0.21 \(\pm\) 0.03 & 0.18 \(\pm\) 0.03 & 0.03 \(\pm\) 0.02 & 0.90 \(\pm\) 0.09 & 0.26 \(\pm\) 0.01 & 0.55 \(\pm\) 0.05 & 0.56 \(\pm\) 0.02 & 0.22 \(\pm\) 0.01 & 0.76 \(\pm\) 0.04 & **0.40**\(\pm\) 0.06 \\
4. Confidence Weight & **0.19**\(\pm\) 0.01 & **0.17**\(\pm\) 0.01 & 0.02 \(\pm\) 0.01 & 0.81 \(\pm\) 0.09 & 0.23 \(\pm\) 0.01 & **0.39**\(\pm\) 0.01 & **0.40**\(\pm\) 0.01 & 0.21 \(\pm\) 0.03 & 0.82 \(\pm\) 0.02 & 0.59 \(\pm\) 0.02 \\
5. AvUC Loss & 0.18 \(\pm\) 0.02 & **0.13**\(\pm\) 0.04 & **0.003**\(\pm\) 0.001 & 0.96 \(\pm\) 0.01 & **0.24**\(\pm\) 0.03 & 0.50 \(\pm\) 0.04 & 0.50 \(\pm\) 0.04 & 0.21 \(\pm\) 0.02 & 0.75 \(\pm\) 0.01 & 0.43 \(\pm\) 0.02 \\
6. Soft ECE Loss & 0.20 \(\pm\) 0.02 & **0.16**\(\pm\) 0.02 & 0.03 \(\pm\) 0.01 & 0.97 \(\pm\) 0.01 & 0.25 \(\pm\) 0.01 & 0.43 \(\pm\) 0.03 & **0.42**\(\pm\) 0.03 & **0.13**\(\pm\) 0.02 & 0.73 \(\pm\) 0.03 & 0.43 \(\pm\) 0.03 \\
7. MMCE Loss & 0.20 \(\pm\) 0.01 & **0.16**\(\pm\) 0.01 & 0.02 \(\pm\) 0.00 & 0.95 \(\pm\) 0.02 & 0.25 \(\pm\) 0.02 & 0.47 \(\pm\) 0.02 & 0.47 \(\pm\) 0.03 & 0.15 \(\pm\) 0.02 & 0.75 \(\pm\) 0.01 & 0.40 \(\pm\) 0.01 \\ \hline \end{tabular}
\end{table}
Table 4: All calibration metrics (Expected Calibration Error (ECE), Adaptive ECE (ACE), Overconfidence Error (OE), Maximum Calibration Error (MCE) and Brier Score (BS)) computed for the CRT response prediction (left) and CAD diagnosis (right) models, reported as mean \(\pm\) standard deviation over three random weight initialisations. The lowest and optimal metric across strategies is highlighted in bold. All metrics should ideally move to zero as models become more calibrated, as indicated by the \(\Downarrow\).
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline Method & \multicolumn{5}{c}{CRT Epistemic} & \multicolumn{5}{c}{CRT Aleatoric} \\ \cline{2-11} & ECE \(\Downarrow\) & ACE \(\Downarrow\) & OE \(\Downarrow\) & MCE \(\Downarrow\) & BS \(\Downarrow\) & ECE \(\Downarrow\) & ACE \(\Downarrow\) & OE \(\Downarrow\) & MCE \(\Downarrow\) & BS \(\Downarrow\) \\ \hline
1. Baseline model & 0.28 & 0.22 & 0.10 & 0.92 & 0.29 & 0.33 & 0.36 & 0.17 & **0.70** & **0.29** \\
2. Paired Confidence Loss & 0.32 & 0.27 & **0.05** & 0.6 & 0.
## 6 Discussion
In this paper we have proposed three novel uncertainty-aware training approaches, our Paired Confidence Loss from our preliminary investigation (Dawood et al., 2021), a Probability Loss function and a Confidence Weight term. Three comparative state-of-the-art approaches were also evaluated, Accuracy versus Uncertainty Loss, Soft ECE and MMCE Loss. All six strategies were evaluated for two clinically realistic CMR-based classification problems with the aim of finding a preferred
Figure 4: Comparison of test BACC for the best-performing model within each uncertainty-aware method for different model selection criteria (validation BACC vs. validation ECE).
Figure 3: Reliability diagrams using adaptive binning to illustrate the performance of all strategies against the baseline model, indicating movement in samples and the relative accuracy and confidence relationship (Ding et al., 2020).
Figure 2: Reliability diagrams using adaptive binning to illustrate the performance of all strategies against the baseline model, indicating movement in samples and the relative accuracy and confidence relationship.
uncertainty-aware strategy that can promote clinical trust in a decision support setting. Specifically, we want to reduce confident incorrect predictions and improve confidence in correct predictions. In our work we utilised both accuracy and calibration measures to identify the best performing model and also investigated different approaches for model selection, using the highest validation BACC versus the lowest validation ECE.
### Model performance
Overall, according to the most commonly used calibration metric (ECE), our novel Confidence Weight strategy performed the best across both the CRT and CAD applications. However, for the CAD diagnosis model, the MCE for one of the bins in the Confidence Weight strategy indicated a high calibration error of 0.84, which may be attributed to the large deviation away from ideal calibration for lower confidence samples. However, considering that our goal for a high risk application is to identify and reduce overconfident wrong predictions, these low confidence bins might be less important. In our setting, after analysing our results, we argue that the overconfidence error may be a better measure to evaluate uncertainty-aware training methods, focusing as it does on overconfident wrong predictions. By this measure, the best-performing models for the CAD diagnosis task are the Paired Confidence Loss and the Soft ECE Loss.
However, our results highlight a fundamental difficulty with assessing model calibration using a single metric such as ECE. Specifically, our results tend to indicate that the calibration metrics do not completely agree. As an example, for the CAD diagnosis problem the 'best' model according to ECE actually increases the overconfidence error, maximum calibration error and Brier score. Likewise, on the same task the best-performing model according to Brier score is the Paired Confidence Loss, however this does not reduce ECE significantly. Analysis of the reliability diagram does allow us to explain some of the differences between ECE and Brier score, as Brier score is known to be insensitive to lower probabilities if fewer and infrequent samples lie within these bands (Ovadia et al., 2019). Analysing the overconfidence error, which we believe has the potential to be more useful for high risk applications, we see that the Confidence Weight model is no longer the best-performing model when analysed as a stand-alone calibration metric.
### Model selection
Interestingly, we found that for the Soft ECE loss and the Confidence Weight strategies the optimal performing model was not affected by the model selection criterion. When analysing the baseline and other uncertainty-aware strategies, a surprising result can be observed: choosing the model based on validation BACC yielded better ECE values but the accuracies achieved were lower. However, some of these differences were relatively small and so require further investigation. We also note that the AvUC method had an optimal model when utilising the validation ECE but had poorer performance if the best validation BACC was utilised.
Our analysis suggests that the choice of model selection criterion may be important for uncertainty-aware training methods, a point that we do not believe has been highlighted before in the literature. However, it appears that there is no single correct model selection measure that will consistently achieve good model calibration outcomes.
Overall, we argue that the best approach may be to look at a range of model selection metrics and choose the model that maximises both accuracy and calibration, with the calibration metric(s) being chosen to suit the context of the intended application.
### Limitations and future work
In our work we made use of Softmax probabilities, which are widely utilised and accepted but are known to be less calibrated estimates of uncertainty (Gupta et al., 2020). Additionally, our VAE architecture using multiple time-based image stacks may have prevented robust estimates of uncertainty and limited calibration performance. In future work we will aim to incorporate alternative direct methods of uncertainty estimation during training of DL models, to reduce over-estimation and underestimation of confidence, which is known to be an ongoing research problem within the field of uncertainty estimation and model calibration.
Future work will also focus on more extensive investigation and analysis of uncertainty-aware training methods for a wider range of clinical problems. We will investigate the development of alternate calibration metrics which are more tuned to specific (clinical) contexts and/or are less biased and more applicable to the healthcare setting. Furthermore we will investigate alternate architectures for quantifying uncertainty in a robust manner as well as alternate strategies for improving calibration such as focal loss (Kumar and Sarawagi, 2019). Additionally, we plan to investigate the impact of label smoothing (Carse et al., 2022) on our uncertainty-aware approaches. In this paper we chose to focus on uncertainty-aware training methods, rather than approaches that alter the training labels, but we note that label smoothing approaches could be combined with any uncertainty-aware training method, and the interaction of these two approaches should be thoroughly investigated. We will also investigate the possibility of using other calibration metrics, such as overconfidence error, for model selection, rather than BACC and ECE as we have investigated in this paper. In addition, we believe that it is important to evaluate the impact of AI on clinical workflows in a decision support setting, and the importance of model calibration on this impact. Future work will also focus on this area.
## 7 Conclusion
In summary, we have investigated a range of different calibration metrics to assess our uncertainty-aware training methods. In terms of the most commonly used calibration metric (ECE), the Confidence Weight approach resulted in the best-calibrated models. However, we highlighted that the choice of best model would vary depending on the metric used. We have argued that overconfidence error might be the most appropriate metric for high risk medical applications, and in terms of overconfidence error the best-performing models were the Paired Confidence Loss term and the Soft ECE loss.
Overall our analysis indicated that the goal of trying to improve deep learning model calibration for cardiac MR applications was achieved but only in terms of some calibration metrics. The results further highlighted the potential weakness of current measures and indicated the need to continue to investigate and identify robust metrics for high risk healthcare applications rather than simply using ECE and BACC (Gupta et al., 2020), bearing in mind that the most relevant metrics may not be the same for different applications. Further research into uncertainty-aware training for optimising different (combinations of) metrics is also recommended.
Figure 5: Comparison of test ECE for the best-performing model within each uncertainty-aware method for different model selection criteria (validation BACC vs. validation ECE).
### Declaration of competing interest
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Tareen Dawood reports financial support was provided by NIHR Biomedical Research Centre at Guy's and Saint Thomas' NHS Foundation Trust and King's College. Esther Puyol-Antón reports financial support was provided by Wellcome Trust.
## Data statement
The UKBB datasets presented in this study are publicly available and can be found in online repositories under approved research projects from [https://www.ukbiobank.ac.uk/](https://www.ukbiobank.ac.uk/). The GSTFT dataset cannot be made publicly available due to restricted access under hospital ethics and because informed consent from participants did not cover public deposition of data.
## Acknowledgements
This work was supported by the Kings DRIVE Health CDT for Data-Driven Health and further funded/supported by the National Institute for Health Research (NIHR) Biomedical Research Centre at Guy's and St Thomas' NHS Foundation Trust and King's College London, United Kingdom. Additionally this research was funded in whole, or in part, by the Wellcome Trust, United Kingdom WT203148/Z/16/Z. For the purpose of open access, the author has applied a CC BY public copyright licence to any author accepted manuscript version arising from this submission. The work was also supported by the EPSRC, United Kingdom through the SmartHeart Programme Grant (EP/P001009/1). This research has been conducted using the UK Biobank Resource under Application Number 17806. The views expressed in this paper are those of the authors and not necessarily those of the NHS, EPSRC, the NIHR or the Department of Health and Social Care.
|
2309.02714 | Atomic-scale observation of localized phonons at FeSe/SrTiO3 interface | In single unit-cell FeSe grown on SrTiO3, the superconductivity transition
temperature features a significant enhancement. Local phonon modes at the
interface associated with electron-phonon coupling may play an important role
in the interface-induced enhancement. However, such phonon modes have eluded
direct experimental observations. Indeed, the complicated atomic structure of
the interface brings challenges to obtain the accurate structure-phonon
relation knowledge from either experiment or theory, thus hindering our
understanding of the enhancement mechanism. Here, we achieve direct
characterizations of atomic structure and phonon modes at the FeSe/SrTiO3
interface with atomically resolved imaging and electron energy loss
spectroscopy in a scanning transmission electron microscope. We find several
phonon modes highly localized (~1.3 nm) at the unique double layer Ti-O
termination at the interface, one of which (~ 83 meV) engages in strong
interactions with the electrons in FeSe based on ab initio calculations. The
electron-phonon coupling strength for such a localized interface phonon with
short-range interactions is comparable to that of Fuchs-Kliewer (FK) phonon
mode with long-rang interactions. Thus, our atomic-scale study provides new
insights into understanding the origin of superconductivity enhancement at the
FeSe/SrTiO3 interface. | Ruochen Sh, Qize Li, Xiaofeng Xu, Bo Han, Ruixue Zhu, Fachen Liu, Ruishi Qi, Xiaowen Zhang, Jinlong Du, Ji Chen, Dapeng Yu, Xuetao Zhu, Jiandong Guo, Peng Gao | 2023-09-06T04:57:52Z | http://arxiv.org/abs/2309.02714v1 | # Atomic-scale observation of localized phonons at FeSe/SrTiO\({}_{3}\) interface
###### Abstract
In single unit-cell FeSe grown on SrTiO\({}_{3}\), the superconductivity transition temperature features a significant enhancement. Local phonon modes at the interface associated with electron-phonon coupling may play an important role in the interface-induced enhancement. However, such phonon modes have eluded direct experimental observations. Indeed, the complicated atomic structure of the interface brings challenges to obtain the accurate structure-phonon relation knowledge from either experiment or theory, thus hindering our understanding of the enhancement mechanism. Here, we achieve direct characterizations of atomic structure and phonon modes at the FeSe/SrTiO\({}_{3}\) interface with atomically resolved imaging and electron energy loss spectroscopy in a scanning transmission electron microscope. We find several phonon modes highly localized (\(\sim\)1.3 nm) at the unique double layer Ti-O termination at the interface, one of which (\(\sim\) 83 meV) engages in strong interactions with the electrons in FeSe based on _ab initio_ calculations. The electron-phonon coupling strength for such a localized interface phonon with short-range interactions is comparable to that of Fuchs-Kliewer (FK) phonon mode with long-rang interactions. Thus, our atomic-scale study provides new insights into understanding the origin of superconductivity enhancement at the FeSe/SrTiO\({}_{3}\) interface.
Single unit-cell (UC) FeSe grown on SrTiO\({}_{3}\) substrate has attracted strong research interest for its remarkably high superconductivity transition temperature, which is about an order of magnitude higher compared to that of bulk FeSe [1, 2, 3]. The anomalously large superconducting gap occurring only in the first UC of FeSe indicates that superconductivity is substantially enhanced by the existence of interface [1, 4, 5]. Replica bands, with an energy shift of approximately 90-100 meV between the replica and main bands, were first observed in angle-resolved photoemission spectroscopy (ARPES) experiments and, despite some debate [6], were identified as the signature of electron-phonon coupling [7, 8, 9, 10]. Recent high resolution electron energy loss spectroscopy (HREELS) experiments suggest that the phonons involved in electron-phonon coupling are likely to be the Fuchs-Kliewer (F-K) phonons of SrTiO\({}_{3}\)[11, 12, 13]. The similarity between energy of the F-K phonon and energy shift of replica bands indicates that this phonon could contribute predominately to the electron-phonon coupling.
The emergence of localized phonons at the interface is widely understood to be a consequence of the breakdown of translational symmetry, which alters the local bonds and, subsequently, the lattice vibrations at the interface. However, to precisely probe the highly localized phonons across the FeSe/SrTiO\({}_{3}\) interface is challenging for the surface analysis techniques such as ARPES and HREELS due to their large probe size and their unique setup configurations. On the other hand, the lack of accurate knowledge on the atomic structure of interface in previous studies blurs our understanding of the structure-property relation. It can be expected that the interface properties, including the electronic structures, phonon modes and electron-phonon coupling strongly depend on the atomic structure. In fact, the atomic structure of this interface is sensitive to sample history such as annealing and surface treatment during sample preparation [14, 15]. Theoretically, previous investigations either approximated the phonon structure without providing an accurate depiction of the atomic structure [16, 17, 18] (relying solely on the normal monolayer Ti-O termination, which however does not align well with experimental observations [14, 19, 20]), or they concentrated primarily on the electron bands at the interface [21, 22, 23]. The variable and complicated atomic structure of interface poses challenges for _ab initio_ calculations aiming to meticulously reproduce or predict interfacial properties such as the phonon structure or electron-phonon coupling. To date, the localized phonons across the FeSe/SrTiO\({}_{3}\) interface are still
largely unknown, not to mention precisely correlating with their atomic arrangements or superconductivity properties, which motivates our present study.
In this work, we study FeSe/SrTiO\({}_{3}\) interface by using scanning transmission electron microscopy-electron energy loss spectroscopy (STEM-EELS). The cutting-edge developments in STEM-EELS have made it possible to directly image phonon excitations at the nanoscale credit to its high spatial and energy resolution [24, 25, 26, 27, 28], which is very suitable for the study of interfacial phonons in heterostructures [29, 30, 31, 32, 33]. Meanwhile, the ability to reveal the atomic structure, electronic state, and phonon mode allows us to understand their interrelationships and thus the underlying mechanism.
We first combine high-angle annular dark-field (HAADF) images with atomically resolved core-loss spectra to recognize the atomic structure of double layer Ti-O termination at FeSe/SrTiO\({}_{3}\) interface. The structure reconstruction exists in the top layer Ti-O (Top layer is defined as layer adjacent to FeSe). From the atomically resolved phonon spectra across the interface, highly localized interfacial phonon modes can be observed. The _ab initio_ calculations of interfacial phonon by taking subsistent double Ti-O termination layer into consideration further confirm that the interfacial phonons originated from the double layer Ti-O termination, i.e., phonons with energy \(\sim\)18 meV and \(\sim\)81 meV are centered at the top layer and phonons with energy \(\sim\)51 meV and \(\sim\)80 meV are centrally enhanced at the bottom layer. Particularly, we identify a strongly coupling interfacial (SCI) phonon mode at the double layer Ti-O termination. This mode can promote pronounced electron-phonon coupling at the interface, whose strength is even comparable to that of the previously reported Fuchs-Kliewer (FK) phonon mode. Thus, such a localized interface phonon likely plays a pivotal role in the enhancement of interface superconductivity. Our atomic-scale measurements of phonons across the FeSe/SrTiO\({}_{3}\) interface and studies of correlation among the atomic structure, phonon structure and quantum properties help us to understand the past experiments and provide new insights for the mechanism of the enhanced interfacial superconductivity.
In our study, the substrate treatment, sample growth and annealing procedure are exactly the same as previously reported conditions that are optimal for superconductivity [34]. We first study the atomic structure of FeSe/SrTiO\({}_{3}\) interface by atomically resolved STEM-HAADF and core-level EELS. Fig. 1a is a HAADF image of FeSe/SrTiO\({}_{3}\) interface viewed along [100] zone axis. The double layer Ti-O
termination is observed, in accordance with former researches [14, 19, 20]. The bottom/top layers of double layer Ti-O termination are marked in the Fig. 1a with purple/orange arrows, and named as Ti-O(B)/Ti-O(T) in the following text. The existence of double layer Ti-O termination can also be confirmed from the atomically resolved core-loss EELS of Fe-\(L_{2,3}\) edge (upper panel) and Ti-\(L_{2,3}\) edge (lower panel) in Fig. 1b. Changes of Ti-\(L_{2,3}\) edge at Ti-O(B) and Ti-O(T) layers are attributed to the distortion of TiO\({}_{6}\) octahedron and possible oxygen vacancies introduced during sample annealing [35, 36](see Extended Data Fig. 1 for better visualization). In the Ti-O(T) layer, there is an extra atom contrast between ordinary top Ti sites (red arrow in Fig. 1a), which is expected to be oxygen atom position and invisible in the HAADF image. This has been reported and explained as the reconstructed Ti-O termination layer [23], e.g., recent studies revealed \(\sqrt{13}\times\sqrt{13}\) R\(33.7^{\circ}\) reconstruction of Ti-O layer after FeSe was grown [20, 22]. In fact, various SrTiO\({}_{3}\) surface reconstructions with additional Ti-O layer have been reported [37, 38, 39]. To verify the assumption that the extra atom contrast comes from reconstruction, we performed the relaxation with density functional theory (DFT) on FeSe/SrTiO\({}_{3}\) structure without reconstruction, and with \(\sqrt{2}\times\sqrt{2}\) R\(45^{\circ}\), \(\sqrt{5}\times\sqrt{5}\) R\(26.6^{\circ}\), \(\sqrt{10}\times\sqrt{10}\) R\(18.3^{\circ}\), \(\sqrt{13}\times\sqrt{13}\) R\(33.7^{\circ}\) reconstruction, and corresponding STEM-HAADF image simulations using QSTEM [40] (see Extended Data Fig. 2 for details). From the simulation, structure without reconstruction does not show extra atom contrast on red-arrow-pointed site. Although the \(\sqrt{2}\times\sqrt{2}\) R\(45^{\circ}\) structure does show extra atom contrast, the distance between bottom Se layer and top Ti layer is significantly smaller than the experimental result. The other three structures are sub-structures of \(\sqrt{13}\times\sqrt{13}\) R\(33.7^{\circ}\) reconstruction [39], and their simulated images all agree well with the experiment. Due to the reported experimental evidence of \(\sqrt{5}\times\sqrt{5}\) R\(26.6^{\circ}\) reconstruction under similar annealing conditions [41] (see Methods for detail), we pick \(\sqrt{5}\times\sqrt{5}\) R\(26.6^{\circ}\) reconstruction as a rational structure to explain our experimental result. Its simulated HAADF image and atomistic models are shown in Fig. 1c.
We then measure phonon spectra across the FeSe/SrTiO\({}_{3}\) interface by atomically resolved STEM-EELS. The HAADF image of the acquisition region and a line profile of corresponding EEL spectra are shown in Fig. 2a and Fig. 2b respectively. The clear contrast between Sr column and Ti-O column demonstrates sufficiently high spatial resolution to distinguish interfacial phonon and the rationality of column-by-column spectra analysis. As shown in Fig. 2b, the SrTiO\({}_{3}\) transverse optical (TO) branch phonon
around 65 meV splits into two branches as approaching the interface from SrTiO\({}_{3}\) side, a transformation attributable to the emergence of the interfacial phonon. One of the phonon branches experiences a blue-shift, positioning itself within the energy gap of 65-100 meV, while the other transitions towards lower energy domains. For better visualization, the spectra extracted from bulk SrTiO\({}_{3}\) (blue), Ti-O(B) (purple), Ti-O(T) (orange), FeSe layer adjacent to the interface (FeSe\({}_{\text{int}}\), yellow) and bulk FeSe (green) are shown in Fig. 2c. Their spatial regions are labeled by dashed rectangles with corresponding color in Fig. 2a. We find that several local interfacial phonon modes emerge due to the presence of double layer Ti-O termination. Even with broadening over adjacent atomic layers, we have still successfully pinpointed the localization centers of these modes. The red arrows in Fig. 2c point to the spectral features show enhancement centered at Ti-O(T) layer, whose energies are \(\sim\)18 meV and \(\sim\)81 meV. The black arrows point to the features that are centrally located at Ti-O(B) layer whose energies are \(\sim\)51 meV and \(\sim\)80 meV.
To better separate the intrinsic spectra of the interface, we applied non-negative matrix factorization (NMF) to the whole spectrum image. NMF is a powerful tool for providing well-interpretable characteristics of data and has already been applied successfully to the identification of EEL spectra [42, 43]. Three components are found to best describe the acquired data. Their intensity maps are shown in Fig. 2d-f. Fig. 2g shows the line profile of their intensity maps across the interface. Component I shows atomic contrast consistent with the Ti column in HAADF and decays quickly on approaching the interface, indicating its origin from the vibration of the Ti-O planes in SrTiO\({}_{3}\). Component III shows the contrast of the Sr atoms and FeSe columns, and is thus attributed to the sum of the vibration signals from FeSe and the Sr atoms in SrTiO\({}_{3}\), considering their similarity in vibration energy. Intriguingly, component II is highly localized at the interface, forming a peak centered at the Ti-O(T) layer and spreading over the double-layer Ti-O termination and the first FeSe layer. This helps to explain why the large enhancement of superconductivity occurs only in the first UC of FeSe. The full-width-at-half-maximum (FWHM) of component II (w in Fig. 2g) is \(\sim\)1.3 nm. A different and independent approach to extracting the features of interfacial phonons is to find the minimum difference between the measured spectrum and a linear combination of the two bulk spectra. The result is shown in Extended Data Fig. 3, in which the width of \(\sim\)1.3 nm agrees well with the NMF result, further confirming the presence and highly localized nature of the interfacial phonons.
Notably, the spectral feature above 45 meV in the FeSe spectra remains even far from the interface, exceeding the maximum phonon frequency of pure FeSe [17]. It must therefore come from the SrTiO\({}_{3}\) substrate. This can also be found in the NMF spectrum of component III (Extended Data Fig. 4a). We ascribe these signals to the phonon polaritons (PPs), i.e., the F-K phonons, of SrTiO\({}_{3}\), which have been found to penetrate into FeSe in previous works [11, 12, 13]. To confirm this, we fitted the spectrum of NMF component III and compared it to the result acquired under the on-axis experimental setup. The similarity of the spectral shape and fitted energy between the on-axis and off-axis results supports our assignment (see Extended Data Fig. 4b-d for details).
To get further insight into the interfacial phonons, we carried out _ab initio_ phonon calculations on an interface model of FeSe on the double-layer Ti-O terminated SrTiO\({}_{3}\) (see Methods). Corresponding to the experimental spectra, the projected phonon density of states (PDOS) of bulk SrTiO\({}_{3}\), the Ti-O(B) layer, the Ti-O(T) layer, the FeSe\({}_{\text{int}}\) layer and bulk FeSe are plotted in Fig. 3a. Similarly, spectral features at \(\sim\)18 meV and \(\sim\)81 meV that are significantly enhanced at the Ti-O(T) layer (red arrows), and features at \(\sim\)51 meV and \(\sim\)80 meV that are enhanced at the Ti-O(B) layer (black arrows), are identified. The eigenvectors of these modes in side view and top view are illustrated in Extended Data Fig. 5a-d. These modes involve vibrations that are much stronger at either the Ti-O(B) or the Ti-O(T) layer, indicating their highly localized nature (more details in Extended Data Fig. 5e-h). The projected dispersions of the interface models further demonstrate the localized nature of the interfacial phonons and the dynamical stability of the reconstructed structure (Extended Data Fig. 6), i.e., significant imaginary frequencies exist only in the reconstruction-free interface model. All these characteristics agree well with the experimental findings, despite a small mismatch in energy that is within the accuracy of the DFT calculations.
To establish a direct correlation between the observed interfacial phonons and the enhanced superconductivity, we extracted the electron-phonon coupling features from the calculations. The phonon linewidth due to electron-phonon coupling mapped onto the phonon dispersion, together with the Eliashberg spectral function at the \(\Gamma\) point for the calculated structure, is shown in Fig. 3b. In particular, we find an SCI phonon mode that has the strongest coupling strength in our calculations. Firstly, it has an energy of \(\sim\)83 meV and the maximum linewidth among all modes at the \(\Gamma\) point, which corresponds to the peak at the same energy in the Eliashberg spectral function. From the Eliashberg spectral function curves, we obtain the electron-phonon coupling constant [44] of the SCI phonon mode as \(\lambda\approx 0.10\), a value comparable with the previously reported electron-phonon coupling strength from the forward-scattering model [8, 45]. Secondly, its eigenvector in side and top view is shown in Fig. 3c, suggesting that this mode is caused by the out-of-phase vibration of Ti and O ions. Extended Data Fig. 7 demonstrates the localized nature of this SCI mode. Such vibrations induce a dipole and change the electric field at the interface, leading to interactions with the electrons in FeSe [7]. Thirdly, electron-phonon coupling usually leads to phonon softening [46]. To confirm this behavior in our system, we performed calculations on a double-layer Ti-O terminated SrTiO\({}_{3}\) surface model (without FeSe). Distinctively, we find a mode analogous to the SCI mode with respect to its vibration eigenvector (see Extended Data Fig. 8). This mode has an energy of \(\sim\)93 meV at the FeSe-free SrTiO\({}_{3}\) surface, which is about 10 meV higher than the energy of the SCI mode. The pronounced softening in energy of the same vibrational mode can only be attributed to the presence of FeSe. The substantial phonon softening in this system implies strong electron-phonon coupling induced by the observed SCI mode, i.e., the phonons of SrTiO\({}_{3}\) strongly interact with the electrons in FeSe.
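For reference, the coupling constant quoted above follows the standard Eliashberg-theory conventions: writing \(\alpha^{2}F(\omega)\) for the Eliashberg spectral function, \(\gamma_{q\nu}\) for the linewidth of phonon mode \(\nu\) at wavevector \(q\), and \(N(E_{F})\) for the electronic density of states at the Fermi level, the commonly used relations (up to spin-degeneracy conventions) are

\[\lambda=2\int_{0}^{\infty}\frac{\alpha^{2}F(\omega)}{\omega}\,d\omega,\qquad\lambda_{q\nu}=\frac{\gamma_{q\nu}}{\pi N(E_{F})\,\omega_{q\nu}^{2}}.\]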
In previous studies, F-K phonons from the SrTiO\({}_{3}\) substrate have usually been regarded as the primary origin of interfacial electron-phonon coupling in this system [11, 12, 13]. The intense long-range dipolar field generated by the substrate F-K phonons has been widely believed to enhance the electron pairing in FeSe, although the exact microscopic interaction mechanism still awaits clarification. Our results show that, besides the F-K phonons with long-range interactions, the localized interfacial phonons with short-range interactions also make a unique contribution to the electron-phonon coupling, providing new insights into the mechanisms underlying the enhanced superconductivity at interfaces.
In summary, we carried out direct measurements of the phonon spectra at the FeSe/SrTiO\({}_{3}\) interface and correlated their features with the unique double-layer Ti-O termination with reconstruction. We performed _ab initio_ calculations of the interfacial phonons that take the double-layer Ti-O termination into account, and the results agree well with the experiment. We find that highly localized phonons emerge at the interface, which promote intense electron-phonon coupling. These findings take an essential step towards revealing the role of the interface in interface-enhanced superconducting systems.
## Methods
**Sample growth and TEM sample preparation.** A FeTe(20 UC) /FeSe(20 UC)/SrTiO\({}_{3}\) structure was grown by molecular beam epitaxy. Before the epitaxial growth of FeSe films, the Nb-doped (0.5 wt %) SrTiO\({}_{3}\) (001) substrates (from Shinkosha Co. Ltd) were pretreated in ultrahigh vacuum (UHV) at 1000 \({}^{\circ}\)C for 45 min to obtain a Ti-O plane terminated surface. The high-quality FeSe film was grown by co-depositing high-purity Fe (99.99%) and Se (99.99+%) with a flux ratio \(\sim\)1:10 onto treated SrTiO\({}_{3}\) held at 470 \({}^{\circ}\)C. After growing FeSe, the sample was annealed at 480\({}^{\circ}\)C for 3 hours under UHV. Afterwards, FeTe capping layer was grown by co-depositing Fe and Te with a flux ratio \(\sim\)1:5 onto FeSe held at 385 \({}^{\circ}\)C. The cross-sectional TEM sample was prepared by ThermoFisher Helios G4 UX Focused Ion Beam (FIB) system.
**TEM characterization and EELS data acquisition.** The STEM-HAADF images shown in Fig. 1a, Extended Data Fig. 1b and Extended Data Fig. 2a were recorded using an aberration-corrected FEI Titan Themis G2 operated at 300 kV. The beam convergence semi-angle was 30 mrad and the collection angle was 39-200 mrad. The HAADF image shown in Fig. 2a was recorded at 60 kV using a Nion U-HERMES200 microscope equipped with both a monochromator and aberration correctors, with a 35-mrad convergence semi-angle and an 80-200 mrad collection semi-angle. All the EELS data were acquired on the same Nion U-HERMES200 microscope operated at 60 kV, with a 35-mrad convergence semi-angle and a 24.9-mrad collection semi-angle. The EELS data shown in Fig. 1b, Extended Data Fig. 1a, Extended Data Fig. 1c and Extended Data Fig. 4c-d were collected under the on-axis experimental setup, namely with the center of the collection aperture placed at the center of the direct beam disk. The data shown in Fig. 1b and Extended Data Fig. 1a were recorded with 128\(\times\)12 pixels within a range of 6 nm across the interface. The energy dispersion was set to 0.262 eV/channel. The data shown in Extended Data Fig. 4c-d were collected with 128\(\times\)16 pixels within a range of 8 nm across the interface. The energy dispersion was 0.5 meV/channel. The data shown in Fig. 2b-g, Extended Data Fig. 3 and Extended Data Fig. 4a-b were collected under the off-axis experimental setup, in which the electron beam was displaced from the optical axis along the [010] direction of SrTiO\({}_{3}\) by 60 mrad to greatly reduce the contribution from long-range dipole scattering [28, 47]. The data were collected with 80\(\times\)10 pixels within a range of 8 nm across the interface. The energy dispersion was 0.5 meV/channel.
**EELS data processing.** All acquired EEL spectra were processed by custom-written MATLAB code. The EEL spectra were first aligned by their normalized cross-correlation. Subsequently, spatial drift correction was applied to obtain the line-scan data. The background of the core-loss EELS data shown in Fig. 1b and Extended Data Fig. 1a-c was fitted by a power law and then subtracted over the whole space. The phonon spectra shown in Fig. 2b-g, Extended Data Fig. 3 and Extended Data Fig. 4 were multiplied by the square of the energy rather than fitted with a background function, to avoid the difficulty of fitting a background function at low energy and to treat spectra in different spatial components uniformly. This method has already been applied effectively in processing EELS data acquired in SrTiO\({}_{3}\) and other analogous materials [33]. Lucy-Richardson deconvolution was then employed to ameliorate the broadening effect caused by the finite energy resolution. NMF was performed to decompose the off-axis data. NMF is a computational method that decomposes a non-negative matrix into the product of two non-negative matrices, often used for feature extraction [48]. We found that three components best describe our data. The spectra in Extended Data Fig. 4 were fitted with Gaussian peaks. In Extended Data Fig. 3, the interface component of the spectra was extracted by fitting the measured spectra with a linear combination of the SrTiO\({}_{3}\) spectrum and the FeSe spectrum. The fitting was performed by minimizing \(\left\|S(\omega)-a_{1}S_{\text{SrTiO}_{3}}(\omega)-a_{2}S_{\text{FeSe}}(\omega)\right\|\) while keeping the residual non-negative, where \(S(\omega)\) is the measured spectrum in Fig. 2b, \(S_{\text{SrTiO}_{3}}\) is the bulk SrTiO\({}_{3}\) spectrum, \(S_{\text{FeSe}}\) is the bulk FeSe spectrum, and \(a_{1}\), \(a_{2}\) are fitting coefficients.
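For illustration, the two key operations described above (the NMF factorization into three components and the constrained two-reference fit with a non-negative residual) can be sketched in Python as follows. The actual processing was performed with custom MATLAB code; the array names, file name, and solver settings below are assumptions made only for illustration.

```python
import numpy as np
from sklearn.decomposition import NMF
from scipy.optimize import minimize

# Hypothetical spectrum image, shape (n_pixels, n_energy); entries must be
# non-negative (e.g., after multiplying the spectra by the square of the energy).
S = np.load("spectrum_image.npy")

# NMF factorization into three components: spatial intensity maps and component spectra.
nmf = NMF(n_components=3, init="nndsvda", max_iter=2000)
intensity_maps = nmf.fit_transform(S)        # (n_pixels, 3)
component_spectra = nmf.components_          # (3, n_energy)

def fit_two_references(s, s_sto, s_fese):
    """Fit s ~ a1*s_sto + a2*s_fese while keeping the residual non-negative."""
    def objective(a):
        return np.linalg.norm(s - a[0] * s_sto - a[1] * s_fese)
    nonneg_residual = {"type": "ineq",
                       "fun": lambda a: s - a[0] * s_sto - a[1] * s_fese}
    res = minimize(objective, x0=np.array([0.5, 0.5]),
                   constraints=[nonneg_residual], method="SLSQP")
    a1, a2 = res.x
    return a1, a2, s - a1 * s_sto - a2 * s_fese   # residual = interface component
```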
**_Ab initio_ calculations.** Density functional theory calculations were performed using Quantum ESPRESSO [49, 50] with the Perdew-Burke-Ernzerhof for solids (PBEsol) exchange-correlation functional [51] and projector augmented wave (PAW) pseudopotentials [52]. The kinetic energy cut-off was 60 Ry for the wavefunctions and 600 Ry for the charge density and potential. The reconstruction-free, \(\sqrt{2}\times\sqrt{2}\) R45\({}^{\circ}\) reconstruction, \(\sqrt{5}\times\sqrt{5}\) R26.6\({}^{\circ}\) reconstruction, \(\sqrt{10}\times\sqrt{10}\) R18.3\({}^{\circ}\) reconstruction and \(\sqrt{13}\times\sqrt{13}\) R33.7\({}^{\circ}\) reconstruction FeSe/SrTiO\({}_{3}\) slab structures, each containing 1 UC FeSe connected to the double-layer Ti-O terminated 3 UC SrTiO\({}_{3}\), were built, for which an in-plane lattice constant of \(a_{\text{SrTiO}_{3}}\) = 3.893 Å (the optimized lattice constant of bulk SrTiO\({}_{3}\) for the used exchange-correlation functional and pseudopotentials) and an out-of-plane lattice constant of \(c\) = 40 Å were chosen. The total numbers of atoms in these structures are 22, 45, 110, 217 and 277, respectively. The lattice mismatch between SrTiO\({}_{3}\) and FeSe was ignored. All the structures were optimized while keeping the lattice constants fixed until the residual force on every atom was below 10\({}^{-5}\) Ry/Bohr and the total energy gradient was below 10\({}^{-10}\) Ry, reaching the numerical limit of the software.
For the phonon calculations, we used FeSe (1 UC)/SrTiO\({}_{3}\) (3 UC) interface models. Only the reconstruction-free and \(\sqrt{5}\times\sqrt{5}\) R26.6\({}^{\circ}\) reconstruction FeSe/SrTiO\({}_{3}\) structures were used, because the \(\sqrt{2}\times\sqrt{2}\) R45\({}^{\circ}\) reconstruction structure is not consistent with our experimental observation, and the \(\sqrt{10}\times\sqrt{10}\) R18.3\({}^{\circ}\) and \(\sqrt{13}\times\sqrt{13}\) R33.7\({}^{\circ}\) reconstruction structures contain too many atoms to be calculated within the DFT framework. The dynamical matrices and force constants were obtained using Phonopy [53]. The treatment of the non-analytical term is that implemented in the Quantum ESPRESSO package. The projected phonon DOS (PDOS) was calculated by interpolating the dynamical matrix on a 15\(\times\)15\(\times\)5 q-mesh. No evident change in the PDOS can be observed in the SrTiO\({}_{3}\) 2 UC away from the interface compared with bulk SrTiO\({}_{3}\), manifesting that our model is large enough to distinguish the interfacial SrTiO\({}_{3}\) from the bulk SrTiO\({}_{3}\). The calculated PDOS contains small imaginary frequencies occupying \(\sim\)1% of the total PDOS of the SrTiO\({}_{3}\) together with the double-layer Ti-O termination, close to the value obtained from a calculation on bulk SrTiO\({}_{3}\) performed in the same way. However, the imaginary frequencies do not appear in the region around the \(\Gamma\) point and thus do not confound the analysis of phonon modes at the \(\Gamma\) point. The eigenvectors in Fig. 3c, Extended Data Figs. 5, 7 and 8 are taken at the \(\Gamma\) point. As a comparison, a surface structure containing only 3 UC of double-layer Ti-O terminated SrTiO\({}_{3}\) was relaxed separately and then used for a phonon calculation. A higher-energy surface mode at \(\sim\)93 meV was found, whose eigenvector resembles that of the SCI mode. As another comparison, a phonon calculation for the fully relaxed reconstruction-free FeSe/SrTiO\({}_{3}\) structure was performed using both the FD method with a 2\(\times\)2\(\times\)1 supercell and the DFPT method with a 2\(\times\)2\(\times\)1 q-mesh. No difference was found between the results of the two methods. Its projected phonon dispersion is shown in Extended Data Fig. 6b. Several phonon modes with large imaginary frequencies emerge at the \(\Gamma\) point, indicating the dynamical instability of this structure.
To confirm the spatial characteristics of the SCI mode, we examined all phonon modes in the interface model containing 3 UC SrTiO\({}_{3}\). The calculation also shows the presence of this SCI mode, which involves only atoms of the top layer of SrTiO\({}_{3}\), confirming that this mode is highly localized at the interface (see Extended Data Fig. 7).
For the phonon calculations of bulk SrTiO\({}_{3}\) and bulk FeSe in Fig. 3a, we built conventional one-unit-cell SrTiO\({}_{3}\) and FeSe cells and fully relaxed them with the constraint of an in-plane lattice constant \(a=3.893\) Å. The interatomic forces were calculated by DFPT on a 4\(\times\)4\(\times\)4 q-mesh. The PDOS was calculated by diagonalizing the dynamical matrix interpolated on a 20\(\times\)20\(\times\)20 q-mesh and projecting onto all the atoms in the unit cell.
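The diagonalize-and-project step used for the PDOS above can be summarized by the following minimal numpy sketch (the actual calculations used Phonopy and Quantum ESPRESSO as described); the list of interpolated dynamical matrices, the energy grid, and the Gaussian broadening are illustrative assumptions.

```python
import numpy as np

def projected_dos(dyn_matrices, n_atoms, energies, sigma=1.0):
    """Atom-projected phonon DOS from mass-weighted dynamical matrices on a q-mesh.

    dyn_matrices : list of (3*n_atoms, 3*n_atoms) Hermitian matrices, one per q-point
    energies     : 1D grid on which to accumulate the DOS (same units as the frequencies)
    sigma        : Gaussian broadening width
    """
    pdos = np.zeros((n_atoms, energies.size))
    for D in dyn_matrices:
        w2, vecs = np.linalg.eigh(D)                 # squared frequencies and eigenvectors
        freqs = np.sign(w2) * np.sqrt(np.abs(w2))    # keep imaginary modes as negative values
        for m in range(freqs.size):
            # weight of each atom: |eigenvector|^2 summed over its 3 Cartesian components
            w = (np.abs(vecs[:, m]).reshape(n_atoms, 3) ** 2).sum(axis=1)
            g = np.exp(-((energies - freqs[m]) ** 2) / (2.0 * sigma ** 2))
            pdos += np.outer(w, g)
    return pdos / (len(dyn_matrices) * sigma * np.sqrt(2.0 * np.pi))
```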
For the electron-phonon coupling calculation, a \(\sqrt{5}\times\sqrt{5}\) R26.6\({}^{\circ}\) reconstructed FeSe/SrTiO\({}_{3}\) slab structure containing 1 UC FeSe connected to the double-layer Ti-O terminated 1 UC SrTiO\({}_{3}\) (60 atoms in total) was used with density functional perturbation theory (DFPT) as implemented in Quantum ESPRESSO. We chose 1 UC SrTiO\({}_{3}\) instead of 3 UC (110 atoms in total) for the electron-phonon coupling calculation due to the limitation of computational resources. As a comparison, the phonon structure corresponding to this model was also calculated using the finite displacement (FD) method with a combination of Phonopy and Quantum ESPRESSO using a 1\(\times\)1\(\times\)1 supercell. No evident difference in the phonon structure was observed between the DFPT and FD results for this structure. A dense mesh of 16\(\times\)16\(\times\)4 k-points was used for the sum of the electron-phonon coefficients at the Fermi energy. Other parameters were kept the same as above.
## Acknowledgements

The work was supported by the National Natural Science Foundation of China (52125307, 11974023, 52021006, T2188101), the "2011 Program" from the Peking-Tsinghua-IOP Collaborative Innovation Center of Quantum Matter, and the Youth Innovation Promotion Association, CAS. The sample growth at CAS was supported by the National Key R&D Program of China (2017YFA0303600, 2021YFA1400200), the National Natural
Science Foundation of China (11874404, 11974399, 11974402), and the Strategic Priority Research Program of Chinese Academy of Sciences (XDB33000000). X.T.Z. was partially supported by the Youth Innovation Promotion Association of Chinese Academy of Sciences. We acknowledge Prof. Peter Rez at Arizona State University for helpful discussion. We acknowledge Electron Microscopy Laboratory of Peking University for the use of electron microscopes. We acknowledge High-performance Computing Platform of Peking University for providing computational resources for the DFT and FD calculation.
## Data availability
The data that support the findings of this study are available from the corresponding author upon request.
## Code availability
Custom MATLAB codes used for data processing and DFT related post-processing are available from the corresponding author upon request.
## Author Contribution
R.C.S., Q.Z.L. and X.F.X. contributed equally to this work. P.G. and J.D.G. conceived the project; X.F.X. grew the sample with the guidance of X.T.Z. and J.D.G.; R.C.S. performed the STEM-EELS experiment and data analysis assisted by Q.Z.L., B.H., F.C.L., R.S.Q., and X.W.Z. with the guidance of P.G.; Q.Z.L. and R.C.S. performed _ab initio_ calculations with the guidance of J.C.; R.X.Z. prepared the TEM sample. B.H. acquired the atomic-resolution STEM-HAADF image. X.F.X., X.T.Z. and J.D.G. helped the data interpretation. R.C.S. wrote the manuscript with the help of Q.Z.L. under the direction of X.T.Z., J.D.G. and P.G.; All the authors contributed to this work through useful discussion and/or comments to the manuscript.
## Competing Interests
The authors declare no competing interests. |
2304.02930 | Simulation of Nonlinear Systems Trajectories: between Models and
Behaviors | In this paper, we study connections between the classical model-based
approach to nonlinear system theory, where systems are represented by
equations, and the nonlinear behavioral approach, where systems are defined as
sets of trajectories. In particular, we focus on equivalent representations of
the systems in the two frameworks for the problem of simulating a future
nonlinear system trajectory starting from a given set of noisy data. The goal
also includes extending some existing results from the deterministic to the
stochastic setting. | Antonio Fazzi, Alessandro Chiuso | 2023-04-06T08:39:18Z | http://arxiv.org/abs/2304.02930v3 | # Data-driven prediction and control for NARX systems
###### Abstract
We consider two problems related to the control of unknown nonlinear systems: finding the input which generates a given trajectory, and tracking a given trajectory starting from a predicted one. The proposed method predicts the future evolution of the system directly from the available data, hence without estimating a model of the plant. While data-driven control problems are well known and studied for linear systems, only special classes of systems have been considered in the literature for the nonlinear case.
Data-driven algorithms - Data-driven control - Nonlinear systems
## I Introduction
The exponentially increasing quantity and quality of available data [1] are stimulating an unprecedented interest in the development of data-based techniques in a variety of application domains. In the broad area of control design, also motivated by the increasing complexity of modern systems, _it is widely recognized that obtaining the process model is the single most time consuming task in the application of model-based control_ [2]. This has sparked a significant interest in developing control methods that do not explicitly exploit a mathematical model, but rather aim at extracting information directly from historical data [3].
The idea of data-driven methods is indeed to study and express system properties by working directly with the available measured data, without explicitly identifying/estimating the underlying process. This can be done, for instance, in the behavioral setting pioneered by J.C. Willems [4] (see also [5]), without explicitly exploiting a system model (e.g., state space or transfer function [6]), but rather representing systems as sets of trajectories (the _behavior_). This "behavioral" approach has close connections with subspace identification [7], where Hankel data matrices play a major role.
A milestone in the behavioral setting is a result known as the _fundamental lemma_ [8], which allows one to represent all the finite-length trajectories of a system from a single observed one, and which can also be viewed as a reinterpretation of the so-called intersection algorithms developed in the subspace identification literature for deterministic systems [9].
A leading role is played by the Hankel matrix built from the observed trajectory. The applications of this result are not limited to data-based system representation, since it also allows one to perform data-driven simulations for LTI systems [10], that is, simulations of a system's dynamics based only on the observed data (hence skipping the _system identification_ step).
The only limitation of this powerful result was its restriction to the class of controllable LTI systems. Later on, several authors started to work on the topic, and different generalizations and extensions of the _fundamental lemma_ have been proposed for data-driven simulations and control: data generated by multiple trajectories [11], uncontrollable systems [12] and several special classes of systems: second order Volterra [13], Wiener-Hammerstein [14], polynomial NARX [15], bilinear systems [16], parameter varying systems [17], generalized bilinear systems [18]. Other works [19] use the Koopman operator to deal with nonlinear systems by adding a block (associated with the Koopman operator) to the classical Hankel matrix.
A critical role was recently gained by the rank of the involved Hankel matrix, since it was shown that the classical hypotheses of the fundamental lemma (system controllability and persistency of excitation of the input variables) can be replaced by a rank constraint on the Hankel matrix [20].
Data-driven control is a well-known topic in the literature for the class of linear time-invariant systems. Control problems such as _trajectory steering_ already appeared in [21] as an alternative to the classical systems-interconnection paradigm [22]. [23, Section 5] summarizes some control problems stated in the behavioral setting and based on Willems' fundamental lemma.
While we may agree that the difference (if any) between model-based and data-driven methods is rather mild in a deterministic setting, we do believe that (direct) data-driven techniques may have an edge over model-based ones when uncertainty, undermodeling and noise come into play. Yet, before delving into the issue of properly handling noise, we believe it is fundamental to properly understand the deterministic scenario. As such, in this paper we stick to noise-free data, except for some simulation results aiming at demonstrating the robustness (w.r.t. noise) of the developed tools.
This work was inspired both by [19], which exploits a (finite-dimensional) Koopman representation of a nonlinear system to apply Willems' fundamental lemma, and by [18], whose author performs data-driven simulations for generalized bilinear systems in the behavioral setting via a linear time-invariant embedding.
Our contribution is twofold: we perform data-driven simulations for nonlinear systems (based on a nonlinear version of Willems' fundamental lemma) by proposing an iterative strategy which can deal with arbitrary nonlinear terms, and we use the computed predictions to solve some data-driven predictive control problems.
## II Data-driven simulations for LTI systems
The problem of data-driven simulation was first studied in [10] for the class of LTI systems. Its solution method was based on the Hankel matrix built from the observed data and on Willems' _fundamental lemma_ [8]. We state the problem and briefly describe its solution strategy in the following.
**Problem 1**: _Given an observed data trajectory \((u_{d},y_{d})=w_{d}=(w_{d}(1),\ldots,w_{d}(n_{d}))\) of a LTI system \(\mathcal{B}\), an initial condition \(w_{ini}=(w(-t_{ini}+1),\ldots,w(0))\) and an input \(u_{f}=(u_{f}(1),\ldots,u_{f}(t_{f}))\), find the output \(y_{f}=(y_{f}(1),\ldots,y_{f}(t_{f}))\) such that \((w_{ini}\wedge w_{f})\) is an admissible trajectory for \(\mathcal{B}\) (we denoted as \(w_{f}=(u_{f},y_{f})\) and \(a\wedge b\) as the concatenation of the trajectories \(a\) and \(b\))._
Problem 1 is stated and solved in [10] by a data-based representation of the systems based on Hankel matrices built from the corresponding trajectories. It admits an explicit solution that can be computed directly from the observed data, and it is cheap from the computational point of view:
\[U=H_{t_{ini}+t_{f}}(u_{d}),\quad Y=H_{t_{ini}+t_{f}}(y_{d}) \tag{1}\]
\[U=\begin{bmatrix}U_{p}\\ U_{f}\end{bmatrix},\quad Y=\begin{bmatrix}Y_{p}\\ Y_{f}\end{bmatrix}\]
\[g=\begin{bmatrix}U_{p}\\ Y_{p}\\ U_{f}\end{bmatrix}^{+}\begin{bmatrix}u_{ini}\\ y_{ini}\\ u_{f}\end{bmatrix}\]
\[y_{f}=Y_{f}g\]

where \(U_{p},Y_{p}\) consist of the first \(t_{ini}\) block rows and \(U_{f},Y_{f}\) of the last \(t_{f}\) block rows of \(U\) and \(Y\), respectively,
and \(A^{+}\) denotes the pseudo-inverse of the matrix \(A\). We remark that a sufficiently long initial condition guarantees the uniqueness of the solution [10, Lemma 1].
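Equation (1) amounts to a few lines of linear algebra. The following Python sketch (scalar input and output, signals stored as 1-D numpy arrays) is only illustrative; the function and variable names are our own.

```python
import numpy as np

def hankel(w, L):
    """Hankel matrix with L rows built from the 1-D signal w."""
    n_cols = len(w) - L + 1
    return np.array([w[j:j + L] for j in range(n_cols)]).T   # shape (L, n_cols)

def dd_simulate_lti(u_d, y_d, u_ini, y_ini, u_f):
    """Data-driven simulation (1) for a SISO LTI system from noiseless data."""
    t_ini, t_f = len(u_ini), len(u_f)
    U = hankel(u_d, t_ini + t_f)
    Y = hankel(y_d, t_ini + t_f)
    U_p, U_f = U[:t_ini], U[t_ini:]
    Y_p, Y_f = Y[:t_ini], Y[t_ini:]
    g = np.linalg.pinv(np.vstack([U_p, Y_p, U_f])) @ np.concatenate([u_ini, y_ini, u_f])
    return Y_f @ g        # the simulated output y_f
```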
## III Data-driven and model-based simulations for NARX systems
Discrete time nonlinear systems can be represented by nonlinear difference equations where both the input and the output depend on their past values [24]. [18] considers Single-Input Single-Output systems of the form
\[y(t+\ell)=f(\mathbf{x}), \tag{2}\] \[\mathbf{x}=(u(t),y(t),\ldots,u(t+\ell-1),y(t+\ell-1),u(t+\ell)).\]
In (2) \(u\) is an input (free variable) and \(y\) is the output (which depends on \(u\) and the initial conditions \(u_{ini},y_{ini}\)). The function \(f\) is assumed to be a (finite) linear combination of arbitrary (but known) linear and nonlinear functions, which we collect in a vector \(\mathbf{\Psi}\)1. We denote as \(x_{\mathbf{\Psi}}=\mathbf{\Psi}(x)\) the vector collecting all the terms of the form \(\Psi_{i}(x_{j})\forall i,j\).
Footnote 1: As in [18], we assume to know exactly the (finite) set of basis functions generating the system. The problem of selecting such functions from an overset is addressed, _e.g._, in [25, 26].
### _Model-based simulation_
The system (2) can be rewritten as
\[\begin{bmatrix}\theta&-1\end{bmatrix}\begin{bmatrix}x_{\mathbf{\Psi}}(1)&\cdots&x_{\mathbf{\Psi}}(T-\ell)\\ y(\ell+1)&\cdots&y(T)\end{bmatrix}=\Theta\begin{bmatrix}x_{\mathbf{\Psi}}\\ y\end{bmatrix}=0. \tag{3}\]
The model-based simulation relies on the following two-steps procedure:
1. Estimate the parameters \(\Theta\) which define the system equation (2) (this is done by computing the left kernel of the nonlinearly structured matrix \(\begin{bmatrix}x_{\mathbf{\Psi}}\\ y\end{bmatrix}\) in (3));
2. Plug the estimate \(\hat{\Theta}\) in the system equation (2).
If we work in the deterministic setting (noiseless data), this method is usually the more convenient one. In the following, however, we illustrate how to approach the problem in a data-driven fashion.
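Before moving on, the two-step model-based procedure above can be sketched in Python as follows, assuming a lag-one system and a user-supplied feature map `psi` that evaluates the basis functions; the function names are placeholders and the sketch is only illustrative.

```python
import numpy as np

def estimate_theta(X_psi, y_next):
    """Step 1: estimate theta from the left kernel of the stacked matrix in (3)."""
    M = np.vstack([X_psi, y_next])            # rows: lifted regressors, then the output
    _, _, Vt = np.linalg.svd(M.T)             # smallest right singular vector of M^T
    kernel = Vt[-1]                           # spans the (approximate) left kernel of M
    return -kernel[:-1] / kernel[-1]          # normalize so the last entry equals -1

def simulate(theta, psi, u_f, u_ini, y_ini):
    """Step 2: iterate y(t+1) = theta^T psi(u(t), y(t), u(t+1)) for a lag-one system."""
    u_prev, y_prev = u_ini[-1], y_ini[-1]
    y_f = []
    for u_next in u_f:
        y_next = theta @ psi(u_prev, y_prev, u_next)
        y_f.append(y_next)
        u_prev, y_prev = u_next, y_next
    return np.array(y_f)
```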
### _Data-driven simulation_
The Hankel matrix is no longer suitable for predicting system trajectories if the available (input/output) data come from a nonlinear system, since the classical version of the fundamental lemma does not hold. We need to replace it with a suitable structured matrix which takes into account all the possible nonlinearities in both the input and the output. Data-driven simulations for generalized bilinear systems are studied in [18] as an extension of the problem in Section II. We summarize the main points in the following, without describing all the details, and we add some relevant observations and arguments.
The data-driven simulation problem is an extension of Problem 1, where a LTI system is replaced by a nonlinear system of the form (2). The new problem formulation follows:
**Problem 2**: _We are given a nonlinear system of the form (2), an observed trajectory of \(n_{d}\) data points \(\{(u_{i},y_{i}):\text{(2) holds for }i=1,\ldots,n_{d}\}\) and a set of functions \(\mathbf{\Psi}\) which defines the system equation (2). Given an initial condition \(u_{ini},y_{ini}\) and an input \(u_{f}\), find the output \(y_{f}\) such that \((u_{f},y_{f})\) satisfies (2) and matches the initial condition._
The idea of [18] is to embed the nonlinear system into an LTI system. This is done by replacing each nonlinear term in the system equation (2) by an additional input. Denoting by \(u_{nl}\) the vector of such added nonlinear inputs, the formula for the data-driven simulation in [18] is
\[y_{f}=H_{t_{f}}(y(t+\ell))\begin{bmatrix}H_{\ell}(u_{\mathbf{\Psi}})\\ H_{\ell}(y_{\mathbf{\Psi}})\\ H_{t_{f}}(u_{\mathbf{\Psi}}(t+\ell))\end{bmatrix}^{+}\begin{bmatrix}u_{\mathbf{ \Psi}ini}\\ y_{\mathbf{\Psi}ini}\\ u_{\mathbf{\Psi}f}\end{bmatrix}. \tag{4}\]
The proposed LTI embedding does not always coincide with the original system, since it is not possible, in general, to write the additional nonlinear inputs as functions of the problem data (the nonlinear terms can, e.g., depend on the output). Therefore, (4) is not able to predict the system trajectories correctly if any nonlinear term in (2) depends on the system output.
**Remark 3**: _The trajectories of generalized bilinear systems can also be predicted exactly by an extension of (4) [18, Theorem 13]._
### _On the basis functions_
The LTI embedding of [18] is based on exact knowledge of the nonlinear terms appearing in the system equation. This requirement seems restrictive; indeed, we are going to show that it is possible to use more functions in the simulation problem than the ones appearing in the system equation. This weakens the preliminary knowledge required about the system equation, at the price of increasing the size of the involved matrices (that is, the computational cost).
**Theorem 4**: _As long as we work with noiseless data, it is possible to add functions to the vector \(\boldsymbol{\Psi}\) (linearly independent from the existing ones) without changing the result of the simulation problem._
The proof is split into two parts: the model-based and the data-driven case.
_Model-based_: the dimension of the left kernel in (3) equals \(|\boldsymbol{\Psi}|+1\) (where \(|\cdot|\) denotes the cardinality). Assume we fix an order among the elements of \(\boldsymbol{\Psi}\), and we add a new (linearly independent) function \(\hat{\psi}\) as last entry, \(\hat{\boldsymbol{\Psi}}=\{\boldsymbol{\Psi},\hat{\psi}\}\). By (2), we have
\[\begin{bmatrix}\hat{\theta}&-1\end{bmatrix}\begin{bmatrix}x_{\hat{\boldsymbol{\Psi}}}(1)&\cdots&x_{\hat{\boldsymbol{\Psi}}}(T-\ell)\\ y(\ell+1)&\cdots&y(T)\end{bmatrix}= \tag{5}\] \[\begin{bmatrix}\theta&0&-1\end{bmatrix}\begin{bmatrix}x_{\boldsymbol{\Psi}}(1)&\cdots&x_{\boldsymbol{\Psi}}(T-\ell)\\ x_{\hat{\psi}}(1)&\cdots&x_{\hat{\psi}}(T-\ell)\\ y(\ell+1)&\cdots&y(T)\end{bmatrix}=0,\]
since the function \(\hat{\psi}\) does not appear in the system equation (2). Therefore, up to the addition of zeros, the estimated coefficients in the system equation \(\theta\) are the same. The result holds true up to a permutation of the elements in \(\hat{\boldsymbol{\Psi}}\) and by replacing \(\hat{\psi}\) with a set of functions.
_Data-driven_: by [20, Corollary 19] and [18, Eq. (19)], the system (2) can be identified by the available data through the solution of Problem 2 if and only if
\[\text{rank }\begin{bmatrix}H_{L}(u_{\boldsymbol{\Psi}})\\ H_{L}(y_{\boldsymbol{\Psi}})\end{bmatrix}=(1+n_{nl})L+\ell,\]
where \(n_{nl}\) is the number of nonlinear terms. For the same set \(\hat{\boldsymbol{\Psi}}\) as in the previous case, the addition of \(\hat{\psi}\) is equivalent to adding \(L\) linearly independent rows to each Hankel matrix, hence the rank condition is still satisfied.
The trajectories predicted by (4) are error-free for a special class of nonlinear systems only. What can we do in general?
### _Prediction for general NARX systems_
If it is not possible to write the nonlinear terms in (2) as additional inputs, we cannot use (4) to predict the system trajectories exactly. What can we do in this case?

A simple but effective strategy is to exploit the link between the initial conditions and the first point of the predicted trajectory (this is a straightforward extension of [10, Lemma 1]). By iterating length-one predictions and updating the initial conditions at each step, we obtain an error-free prediction whenever (4) fails to do so. This is illustrated in Algorithm 1.
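The iteration can be sketched in Python as follows. Each step solves a length-one version of (4) built from the lifted data and then appends the predicted output to the initial-condition window before the next step. The helper `lift` (which evaluates the chosen basis functions on a signal window), the scalar signals, and the variable names are simplifying assumptions made only for illustration; this is a sketch of the update strategy, not a verbatim reproduction of Algorithm 1.

```python
import numpy as np

def one_step_data(u_d, y_d, lift, lag):
    """Columns: lifted length-`lag` past windows of u and y plus the next lifted input."""
    cols, target = [], []
    for j in range(len(u_d) - lag):
        cols.append(np.concatenate([lift(u_d[j:j + lag]),
                                    lift(y_d[j:j + lag]),
                                    lift(u_d[j + lag:j + lag + 1])]))
        target.append(y_d[j + lag])
    return np.array(cols).T, np.array(target)

def iterative_prediction(u_d, y_d, u_ini, y_ini, u_f, lift):
    """Length-one predictions with the initial condition updated at every step."""
    lag = len(u_ini)
    lhs, target = one_step_data(np.asarray(u_d), np.asarray(y_d), lift, lag)
    u_win, y_win = list(u_ini), list(y_ini)
    y_f = []
    for u_next in u_f:
        rhs = np.concatenate([lift(np.array(u_win)),
                              lift(np.array(y_win)),
                              lift(np.array([u_next]))])
        g = np.linalg.pinv(lhs) @ rhs
        y_next = target @ g
        y_f.append(y_next)
        u_win = u_win[1:] + [u_next]          # slide the initial-condition windows
        y_win = y_win[1:] + [y_next]
    return np.array(y_f)

# Example lift for the basis {x, x**2, sin(x)} applied elementwise:
lift = lambda x: np.concatenate([x, x ** 2, np.sin(x)])
```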
**Theorem 5**: _The prediction \(y_{f}\) computed by Algorithm 1 is correct (error-free) for a system of the form (2)._
The first point of the predicted output depends linearly on the problem data, hence it is predicted exactly. By updating the initial conditions with the last (exactly) predicted points, we can correctly predict the second point. By induction, the iteration with the update strategy allows us to reach any finite-horizon prediction.
**Remark 6**: _In the deterministic setting, moving step by step with the data-driven iteration given by Algorithm 1 is the same as iterating the model-based simulation. As a more interesting result, we are going to show that such an equivalence still holds true with noise-corrupted data._
**Theorem 7**: _If the output data are corrupted by noise, the iterative data-driven simulation of Algorithm 1 and the model-based simulation of Section III-A predict the same trajectories._
Noisy data are denoted with an overbar (e.g., \(\bar{y}\)). We show that the one-step predictions computed by the model-based and the data-driven simulations, starting from the same data, yield the same output.
The (extended) data matrix \(\begin{bmatrix}x_{\boldsymbol{\Psi}}\\ \bar{y}\end{bmatrix}\) built from the noisy data is full rank. The one-step model-based estimator is defined as the projection of the noisy output onto the space generated by the data. Such a projection can be obtained from an approximate LQ decomposition.
\[\begin{bmatrix}x_{\boldsymbol{\Psi}}\\ \bar{y}\end{bmatrix} =\begin{bmatrix}L_{11}&0\\ L_{21}&L_{22}\end{bmatrix}\begin{bmatrix}Q_{1}\\ Q_{2}\end{bmatrix} \tag{6}\] \[\hat{y}_{MB} =\Pi_{x_{\boldsymbol{\Psi}}}\bar{y}=L_{21}Q_{1}=L_{21}L_{11}^{-1} L_{11}Q_{1}\]
If we set \(\hat{\theta}=L_{21}L_{11}^{-1}\), and we write the data by the equation \(L_{11}Q_{1}=x_{\boldsymbol{\Psi}}\), we can define the one-step model based estimator as
\[\hat{y}_{MB}=\hat{\theta}x_{\boldsymbol{\Psi}}=\hat{\theta}x_{ini\boldsymbol{ \Psi}},\] (MB)
where the last equality holds true because the prediction length is one.
On the other hand, any data-driven prediction can be computed in the following two steps:
1. compute \(g\) of minimum norm such that \[x_{\boldsymbol{\Psi}}g\approx x_{ini\boldsymbol{\Psi}},\]
2. \(\hat{y}_{DD}=H_{t_{f}}(\bar{y})g=\bar{y}g\) because of the length-one prediction.
Since the vector \(g\) defines a linear combination of the columns of the data matrix, there exists \(\gamma\) such that \(g=Q_{1}^{T}\gamma\) (where \(Q_{1}\) is the matrix in (6)). Doing so, we get
\[x_{ini\boldsymbol{\Psi}}=x_{\boldsymbol{\Psi}}Q_{1}^{T}\gamma=L_{11}Q_{1}Q_{1} ^{T}\gamma=L_{11}\gamma. \tag{7}\]
The second block equation in the LQ decomposition (6) gives
\[\hat{y}_{DD}=\bar{y}g=(L_{21}Q_{1}+L_{22}Q_{2})g=(L_{21}Q_{1})Q_{1}^{T}\gamma=L_{21}\gamma.\]
The thesis holds true since \(\gamma=L_{11}^{-1}x_{ini\boldsymbol{\Psi}}\) (see (7)).
## IV Control for Nonlinear Systems
We propose some control methods which work directly on the available data; we set aside the model-based strategy for the moment, even though in the deterministic setting it would probably be more convenient and computationally cheaper.
We consider two different problems for nonlinear systems:
1. Output driving: given a reference trajectory \(y_{r}\), compute the input \(u_{r}\) which generates \(y_{r}\):
2. Tracking problem: given a reference trajectory \((u_{r},y_{r})\), compute the input / output pair \((u^{*},y^{*})\) that minimizes the distance from the target: \[\min_{u^{*},y^{*}}\|u^{*}-u_{r}\|_{2}^{2}+\|y^{*}-y_{r}\|_{2}^{2}\] (8)
The idea to solve these problems is to exploit the predictions computed by Algorithm 1 as starting point for the sought solution.
### _Output driving problem_
The solution of the output driving problem is simple and computationally cheap, since the problem can be solved similarly to the prediction problem illustrated in Section III.
Because of the symmetry among variables (which is a feature of the behavioral setting [5]), we can switch the roles of the input and the output in (4) to compute which input signal generates the sought reference trajectory.
```
Input: \((u_{i},y_{i}),i=1,\ldots,n_{d}\) (data points), \(y_{r}(t)\) (reference trajectory), \(\boldsymbol{\Psi}\) (set of functions in the system equation), \(u_{ini},y_{ini}\) (initial condition)
Output: \(u_{r}(t)\) (predicted input)
1. Build the block-Hankel matrix
\[\mathcal{H}=\begin{bmatrix}U_{\boldsymbol{\Psi}p}\\ Y_{\boldsymbol{\Psi}p}\\ Y_{\boldsymbol{\Psi}f}\end{bmatrix}\tag{9}\]
2. Build the vector stacking the initial conditions and the reference output:
\[v=\begin{bmatrix}u_{ini\boldsymbol{\Psi}}\\ y_{ini\boldsymbol{\Psi}}\\ y_{r\boldsymbol{\Psi}f}\end{bmatrix}\tag{10}\]
3. Solve the linear system of equations \(\mathcal{H}g=v\)
4. Compute \(u_{r\boldsymbol{\Psi}}=U_{\boldsymbol{\Psi}f}g\)
```
**Algorithm 2** Output driving problem for nonlinear systems
Since it is not possible to recover the nonlinear pattern generated by \(\boldsymbol{\Psi}\) by solving a linear system of equations, we highlight that:
1. to simulate back the sought reference trajectory, we need to plug in the whole pattern \(u_{r\boldsymbol{\Psi}}\) computed by Algorithm 2, even if such a pattern is incorrect;
2. it is possible to get a correct (error-free) estimate of the input \(u_{r}\) only when all the functions \(\psi_{j}\) in (2) having the input as argument are linear; an iterative scheme similar to Algorithm 1 would be needed otherwise.
### _Tracking problem_
The solution of (8) is harder to compute since it requires the solution of several optimization problems. By imposing that the computed solution satisfies the system equation, we can rewrite the problem as a constrained optimization problem with nonlinear constraints:
\[\min_{u,y,g}\ \|u-u_{r}\|_{2}^{2}+\|y-y_{r}\|_{2}^{2} \tag{11}\]
subject to
\[\begin{pmatrix}U_{\boldsymbol{\Psi}p}\\ Y_{\boldsymbol{\Psi}p}\\ U_{\boldsymbol{\Psi}f}\\ Y_{\boldsymbol{\Psi}f}\end{pmatrix}g=\begin{pmatrix}u_{ini\boldsymbol{\Psi}}\\ y_{ini\boldsymbol{\Psi}}\\ u_{\boldsymbol{\Psi}}\\ y_{\boldsymbol{\Psi}}\end{pmatrix}\]
A linearized version of (11) with regularization was used in [27] for a nonlinear problem; the authors added some noise to the data to deal with errors caused by the nonlinear terms. Hence, the nonlinear problem was approached through the solution of a linear one with inexact data. Indeed, it is quite challenging to satisfy all the nonlinear constraints in (11), even from the numerical point of view. Besides this, the first algorithm to predict nonlinear system trajectories by a data-driven approach is the one in [18], but it is exact only for a special class of systems.
The solution approach is based on the fact that Algorithm 1 can predict the trajectories of arbitrary nonlinear systems exactly. Given a reference trajectory \((u_{r},y_{r})\), an initial condition \((u_{ini},y_{ini})\) and the set of functions \(\boldsymbol{\Psi}\) generating the system, we solve at each time step an optimization problem over an \(N\)-step prediction and we keep the first input/output pair only. Algorithm 3 shows how to solve (11) in the case of systems whose lag equals one (the extension to lags greater than one is straightforward).
```
Input: \((u_{i},y_{i}),i=1,\ldots,n_{d}\) (data points), \((u_{r},y_{r})\) (reference trajectory), \(\boldsymbol{\Psi}\) (set of functions in the system equation), \(u_{ini},y_{ini}\) (initial condition), \(T\) (control horizon), \(N\) (trajectory length for each time step)
Output: \((u^{*},y^{*})\)
for \(i=1:T\) do
  Write \(y_{k}(i+1)\) as a function of the optimization variable \(u_{k}(i)\) using Algorithm 1: \(y_{k}(i+1)=f_{i}(u_{k}(i))\);
  Solve the optimization problem (11) and get \(\hat{u}_{k}(i)\), \(\hat{y}_{k}(i+1)=f_{i}(\hat{u}_{k}(i))\);
  Set \(u^{*}(i)=\hat{u}_{1}(i)\), \(y^{*}(i+1)=\hat{y}_{1}(i+1)\);
  Append \(u^{*}(i),y^{*}(i+1)\) to the previous input/output data;
end for
```
**Algorithm 3** Data-driven nonlinear tracking problem
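For concreteness, the receding-horizon loop of Algorithm 3 can be sketched in Python as follows. The callable `predict` (any multi-step predictor, for instance built from the iterative scheme of Algorithm 1), the derivative-free solver, and the assumption that the reference signals are at least \(T+N-1\) samples long are our own illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def track(predict, u_ini, y_ini, u_ref, y_ref, T, N):
    """At each step, optimize the next N inputs, keep only the first one, then
    append the applied input/output pair to the data windows.

    predict(u_plan, u_win, y_win) must return the N outputs generated by the
    candidate inputs u_plan starting from the current windows.
    """
    u_win, y_win = np.asarray(u_ini, float), np.asarray(y_ini, float)
    u_star, y_star = [], []
    for i in range(T):
        def cost(u_plan):
            y_plan = predict(u_plan, u_win, y_win)
            return (np.sum((u_plan - u_ref[i:i + N]) ** 2)
                    + np.sum((y_plan - y_ref[i:i + N]) ** 2))
        res = minimize(cost, x0=np.zeros(N), method="Nelder-Mead")
        u_next = res.x[0]                              # keep only the first input
        y_next = predict(res.x, u_win, y_win)[0]
        u_star.append(u_next)
        y_star.append(y_next)
        u_win = np.append(u_win[1:], u_next)           # slide the data windows
        y_win = np.append(y_win[1:], y_next)
    return np.array(u_star), np.array(y_star)
```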
**Remark 8**: _Algorithm 3 is suitable for working with noise-free data. If the data are affected by noise, the following
noise-removal approach based on an approximate LQ decomposition can be adopted.
1. Compute the LQ decomposition \[\begin{bmatrix}x_{\boldsymbol{\Psi}}\\ \bar{y}\end{bmatrix}=\begin{bmatrix}L_{11}&0\\ L_{21}&L_{22}\end{bmatrix}\begin{bmatrix}Q_{1}\\ Q_{2}\end{bmatrix}\]
2. project \(\bar{y}\) on the space generated by the data \(x_{\boldsymbol{\Psi}}\): \(\hat{y}=\Pi_{x_{\boldsymbol{\Psi}}}\bar{y}=L_{21}Q_{1}\).
3. use the projected output \(\hat{y}\) as the problem data (a sketch of this projection is given below).
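A minimal Python sketch of steps 1-2 above is given here; it obtains the LQ factors from a QR factorization of the transposed data matrix. The shape convention (\(x_{\boldsymbol{\Psi}}\) stored as features \(\times\) samples) is an assumption.

```python
import numpy as np

def denoise_output(X_psi, y_noisy):
    """Project the noisy output onto the row space of the lifted data matrix.

    Equivalent to the LQ step above: X_psi = L11 @ Q1 and y_hat = L21 @ Q1.
    """
    Q1, _ = np.linalg.qr(X_psi.T)     # X_psi.T = Q1 @ R, i.e. X_psi = R.T @ Q1.T
    Q1 = Q1.T                          # orthonormal rows spanning the row space of X_psi
    return (y_noisy @ Q1.T) @ Q1       # orthogonal projection of y onto that row space
```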
## V Numerical experiments
To conclude, we run some numerical tests to assess how the proposed method works. To check its correctness, we generate a trajectory of \(120\) points: the first \(100\) points are the available data, while the last \(20\) points need to be predicted and controlled. In this way, we can compare the computed prediction and control trajectories with the true ones to observe the proposed algorithm's performance. In all the experiments we consider a trajectory of the following nonlinear system:
\[y(t+1)=0.1y(t)^{2}+u(t)+\sin u(t), \tag{12}\]
with random initial condition and randomly generated input.
Before testing Algorithm 3 for control problems, we want to check on an experiment the correctness of Algorithm 1.
The partial knowledge about the functions generating the system tells us that \(\Psi=\{I,x^{2},\sin x\}\); we can also take the opportunity to check Theorem 4 by running a simulation with the full set of functions
\[\{u,y,u^{2},y^{2},\sin u,\sin y\}.\]
We observe that the term \(y(t)^{2}\) cannot be rewritten as an input, therefore we expect that (4) cannot predict exactly the future system output (the same holds true for \(\sin y(t)\), even if this term could be removed since it does not appear in (12)).
We then run both (4) and Algorithm 1 on the same example to observe what happens.
As expected, the prediction of the whole trajectory computed by (4) is incorrect because of the presence of \(y^{2}\) in the system equation (there is no way to turn it into an input); on the other hand, the output of Algorithm 1 matches exactly the true system trajectory. The starting points of all the trajectories are the same because of the initial conditions.
The next step is to solve a tracking problem for the same nonlinear system (12). Our goal is to approach the sinusoidal trajectory \(y_{r}(t)=0.1\sin(t),\ t=1,\ldots,20\), using the smallest possible input, that is \(u_{r}(t)=0\ \forall t\).
We run two different experiments, with noise-free and noisy data. Both the numerical solutions are computed by Algorithm 3 with \(T=20\) and \(N=5\). The results are in Figures 2 and 3, respectively.
In both experiments we observe that the input and the output get closer to the target trajectory than the starting point. However, they can never reach the target because of the constraints, since the computed points need to satisfy the (unknown) system equation hidden in the data. Indeed, it can be checked that the computed trajectories satisfy the system equation (at least approximately if the data are noisy); this explains the apparent convergence to a trajectory which is close to (but different from) the target. We do not expect to track the reference trajectory exactly; we only look for a pair \((u^{*},y^{*})\) which minimizes the distance from the target while satisfying the system equation. We expect
Fig. 1: Data-driven prediction of system (12).
Fig. 2: Data-driven control of the system (12) with noise-free data: control input and output. |
2303.00158 | HyScale-GNN: A Scalable Hybrid GNN Training System on Single-Node
Heterogeneous Architecture | Graph Neural Networks (GNNs) have shown success in many real-world
applications that involve graph-structured data. Most of the existing
single-node GNN training systems are capable of training medium-scale graphs
with tens of millions of edges; however, scaling them to large-scale graphs
with billions of edges remains challenging. In addition, it is challenging to
map GNN training algorithms onto a computation node as state-of-the-art
machines feature heterogeneous architecture consisting of multiple processors
and a variety of accelerators.
We propose HyScale-GNN, a novel system to train GNN models on a single-node
heterogeneous architecture. HyScale-GNN performs hybrid training which
utilizes both the processors and the accelerators to train a model
collaboratively. Our system design overcomes the memory size limitation of
existing works and is optimized for training GNNs on large-scale graphs. We
propose a two-stage data pre-fetching scheme to reduce the communication
overhead during GNN training. To improve task mapping efficiency, we propose a
dynamic resource management mechanism, which adjusts the workload assignment
and resource allocation during runtime. We evaluate HyScale-GNN on a CPU-GPU
and a CPU-FPGA heterogeneous architecture. Using several large-scale datasets
and two widely-used GNN models, we compare the performance of our design with a
multi-GPU baseline implemented in PyTorch-Geometric. The CPU-GPU design and the
CPU-FPGA design achieve up to 2.08x speedup and 12.6x speedup, respectively.
Compared with the state-of-the-art large-scale multi-node GNN training systems
such as P3 and DistDGL, our CPU-FPGA design achieves up to 5.27x speedup using
a single node. | Yi-Chien Lin, Viktor Prasanna | 2023-03-01T01:12:25Z | http://arxiv.org/abs/2303.00158v1 | # HyScale-GNN: A Scalable Hybrid GNN Training System on Single-Node Heterogeneous Architecture
###### Abstract
Graph Neural Networks (GNNs) have shown success in many real-world applications that involve graph-structured data. Most of the existing single-node GNN training systems are capable of training medium-scale graphs with tens of millions of edges; however, scaling them to large-scale graphs with billions of edges remains challenging. In addition, it is challenging to map GNN training algorithms onto a computation node as state-of-the-art machines feature heterogeneous architecture consisting of multiple processors and a variety of accelerators.
We propose HyScale-GNN, a novel system to train GNN models on a single-node heterogeneous architecture. HyScale-GNN performs hybrid training which utilizes both the processors and the accelerators to train a model collaboratively. Our system design overcomes the memory size limitation of existing works and is optimized for training GNNs on large-scale graphs. We propose a two-stage data pre-fetching scheme to reduce the communication overhead during GNN training. To improve task mapping efficiency, we propose a dynamic resource management mechanism, which adjusts the workload assignment and resource allocation during runtime. We evaluate HyScale-GNN on a CPU-GPU and a CPU-FPGA heterogeneous architecture. Using several large-scale datasets and two widely-used GNN models, we compare the performance of our design with a multi-GPU baseline implemented in PyTorch-Geometric. The CPU-GPU design and the CPU-FPGA design achieve up to \(2.08\times\) speedup and \(12.6\times\) speedup, respectively. Compared with the state-of-the-art large-scale multi-node GNN training systems such as \(P^{3}\) and DistDGL, our CPU-FPGA design achieves up to \(5.27\times\) speedup using a single node.
GNN training, Heterogeneous architecture, Large-scale graphs
## I Introduction
Graph Neural Networks (GNNs) have become state-of-the-art models for representation learning on graphs, facilitating many applications such as molecular property prediction [1, 2], social recommendation systems [3, 4], electronic design automation [5, 6], etc. These domains often involve large-scale graphs with over a billion edges [7]. Scaling GNN training systems to support such large graphs remains challenging. Previous works [8, 9, 10, 11] store the input graph in the device memory (e.g., GPU global memory, FPGA local DDR memory) rather than the CPU memory because accessing data from device memory via DDR channels is much faster than accessing data from CPU memory via PCIe. The drawback of this setup is that the size of the device memory is limited (usually 16 to 64 GB), so it cannot accommodate large-scale graphs such as MAG240M [7] (202 GB); storing the graph in the CPU memory can overcome this limitation, but then the data needs to be fetched via PCIe, which has lower memory bandwidth. In addition to the memory size limitation, it is also challenging to map GNN training algorithms onto a target platform because of the complex architecture of modern machines. In particular, state-of-the-art nodes adopt a heterogeneous architecture design to meet the performance requirements of various applications [12, 13]. A heterogeneous architecture consists of multiple multi-core CPUs connected to several accelerators; the connected accelerators can be GPUs, FPGAs, or AI-specific accelerators [14, 15, 16]. Most of the existing works adopt a naive and static task mapping [9, 17, 18] which treats the CPU as a preprocessor whose main purpose is to offload GNN computations to the accelerator; this straightforward task mapping overlooks the potential of utilizing the CPU resources for training. For example, on a dual-socket AMD EPYC 7763 (7.2 TFLOPS) platform equipped with a single Nvidia RTX A5000 (27.8 TFLOPS), utilizing the CPU+GPU for training can potentially provide a (7.2+27.8) / 27.8 = 1.26\(\times\) speedup compared with GPU-only training. In addition, the speed of executing GNN training tasks depends on both the training algorithm and the performance of the target platform; this makes static task mapping inefficient.
Motivated by the challenges, we propose HyScale-GNN, a _hybrid_ GNN training system that can efficiently train GNN models on a given heterogeneous architecture. We propose a general processor-accelerator training protocol, which defines how the processors and the accelerators should interact and synchronize to collaboratively train a GNN model. The protocol is generic and can be adapted to various accelerators including GPU, FPGA, or AI-specific accelerators. We propose a dynamic resource management mechanism to efficiently map GNN training tasks onto a given heterogeneous architecture. The mechanism assigns GNN training tasks to both the CPUs and the accelerators, and dynamically adjusts the workload assignment during runtime. Unlike previous works that result in CPU idling most of the time, our hybrid training system efficiently utilizes both the CPUs and the accelerators to collaboratively train a GNN model. In addition, HyScale-GNN supports GNN training on large-scale graphs with billions of edges. To accommodate large-scale graphs, our system stores the input graph in the CPU memory, which can be several terabytes on high-end nodes. To mitigate the expensive PCIe communication overhead of reading data from the CPU memory, we propose a two-stage feature prefetching scheme to pre-load the required data onto the accelerator. While we
apply various optimizations in our system, these optimizations do not alter the semantics of the GNN training algorithm; thus, the convergence rate and model accuracy remain the same as the original sequential algorithm.
We summarize the contributions of this work as follows:
* We propose HyScale-GNN, a hybrid GNN training system that efficiently utilizes both the CPUs and the accelerators to perform GNN training collaboratively. Our system achieves the same convergence rate and model accuracy as existing works [19, 20, 21] as the proposed optimizations do not alter the original training algorithm.
* We propose a general processor-accelerator training protocol that enables HyScale-GNN to work with various accelerators including GPUs, FPGAs, or AI-specific accelerators.
* To support GNN training on large-scale graphs (such as ogbn-papers100M [22] and MAG240M [7]), we propose to store the input graph in the CPU memory and perform two-stage data prefetching to hide the communication overhead.
* We propose a performance model which predicts the training performance of our system based on algorithmic parameters of the GNN training algorithm and platform metadata. HyScale-GNN utilizes the predicted performance to derive a coarse-grained task mapping onto the target platform during the design phase.
* We propose a dynamic resource management mechanism, which performs fine-grained task mapping by fine-tuning the workload assigned to the CPUs and the accelerators during runtime.
* We evaluate HyScale-GNN using several large-scale graphs and two widely used GNN models: GraphSAGE [2] and GCN [23]. On a dual-socket platform connected to 4 high-end GPUs and a dual-socket platform connected to 4 high-end FPGAs, our CPU-GPU and CPU-FPGA designs achieve up to \(2.08\times\) and \(12.6\times\) speedup, respectively, compared with our multi-GPU baseline implemented using PyTorch-Geometric [18]. Compared with the state-of-the-art distributed GNN training systems [19, 21] that use 16 to 64 GPUs on a multi-node cluster, our CPU-FPGA design achieves up to \(5.2\times\) speedup using only 4 FPGAs on a single node.
## II Background
### _Graph Neural Networks_
We define the notation related to GNNs in Table I. A GNN learns to generate low-dimensional vector representations (i.e., node embeddings) for a set of vertices (i.e., the target vertices \(\mathcal{V}^{L}\)), and the node embeddings can facilitate many downstream applications as mentioned in Section I. A GNN model can be expressed using the aggregate-update paradigm [24]:
\[\mathbf{a}_{v}^{l}=\text{AGGREGATE}(\mathbf{h}_{u}^{l-1}:u\in\mathcal{N}(v)\cup\{v\}) \tag{1}\]
\[\mathbf{h}_{v}^{l}=\phi(\text{UPDATE}(\mathbf{a}_{v}^{l},\mathbf{W}^{l})) \tag{2}\]
During feature aggregation, for each vertex \(v\), the feature vectors \(h_{u}^{l-1}\) of the neighbor vertices \(u\in\mathcal{N}(v)\) are aggregated into \(a_{v}^{l}\) using algorithm-specific operators such as mean, max, or LSTM. Since graph-structured data are non-Euclidean, accessing the feature vectors \(h_{u}^{l-1}\) of the neighbor vertices incurs a massive volume of irregular data access. The feature update phase is a multi-layer perceptron (MLP) followed by an element-wise activation function \(\phi\) (e.g., ReLU), which applies a linear transformation and a non-linear transformation to \(a_{v}^{l}\), respectively. While there exist a variety of GNN models, these models follow the aggregate-update paradigm. We list two representative models as an example:
* GCN [23]: is one of the most widely-used GNN models. The model can be specified as follows: \[\mathbf{a}_{v}^{l} =\text{Sum}(\frac{1}{\sqrt{D(v)\cdot D(u)}}\cdot\mathbf{h}_{u}^{l-1})\] (3) \[\mathbf{h}_{v}^{l} =\text{ReLU}\left(\mathbf{a}_{v}^{l}\mathbf{W}^{l}+\mathbf{b}^{l}\right)\] Where \(D(v)\) denotes the degree of vertex \(v\), and \(\mathbf{b}^{l}\) indicates the bias of the update function.
* GraphSAGE [2]: proposed a neighbor sampling algorithm for mini-batch GNN training. The model can be specified as follows: \[\mathbf{a}_{v}^{l} =\mathbf{h}_{v}^{l-1}||\text{Mean}\left(\mathbf{h}_{u}^{l-1}\right)\] (4) \[\mathbf{h}_{v}^{l} =\text{ReLU}\left(\mathbf{a}_{v}^{l}\mathbf{W}^{l}+\mathbf{b}^{l}\right)\] Where \(||\) indicates the concatenation operation.
By adopting the aggregate-update paradigm in our system design, our work is capable of training various GNN models.
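To make the aggregate-update paradigm concrete, the following minimal sketch evaluates one GraphSAGE-style layer (Equations 1, 2 and 4) in plain Python/NumPy; the toy graph, feature sizes, and random weights are illustrative assumptions, not the Trainer implementation used in HyScale-GNN.

```
import numpy as np

def sage_layer(edges, H, W, b):
    """One GraphSAGE layer: a_v = h_v || mean(h_u), h_v' = ReLU(a_v W + b)."""
    num_v, _ = H.shape
    agg = np.zeros_like(H)                     # running sum of neighbor features
    deg = np.zeros(num_v)                      # number of in-neighbors per vertex
    for u, v in edges:                         # feature aggregation over edges (u -> v)
        agg[v] += H[u]
        deg[v] += 1
    mean_nb = agg / np.maximum(deg, 1)[:, None]
    a = np.concatenate([H, mean_nb], axis=1)   # the '||' concatenation in Eq. (4)
    return np.maximum(a @ W + b, 0.0)          # feature update: MLP followed by ReLU

# Toy example: 4 vertices, 4-dim input features, 8-dim output embeddings.
rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 0)]
H = rng.standard_normal((4, 4))
W = rng.standard_normal((8, 8))                # input width is 2 * 4 after concatenation
b = np.zeros(8)
print(sage_layer(edges, H, W, b).shape)        # (4, 8)
```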
### _Mini-batch GNN Training_
We depict the workflow of mini-batch GNN training in Figure 1. In each training iteration, a sampler first extracts a mini-batch \(\{\mathcal{G}(\mathcal{V}^{l},\mathcal{E}^{l}):1\leqslant l\leqslant L\}\) from the original graph \(\mathcal{G}(\mathcal{V},\mathcal{E})\). The mini-batch serves as a computational graph to perform GNN operations, namely feature aggregation and feature update. During the forward propagation stage, the GNN operations are performed for \(L\) iterations. The output embeddings \(\{\mathbf{h}_{i}^{L}:v_{i}\in\mathcal{V}^{L}\}\) are compared with the ground truth for loss calculation. The calculated loss is then used as input for backward propagation. Backward propagation performs the same set of GNN operations as in forward propagation, but in a reverse direction [25]; backward propagation produces the
gradients for the weight matrix \(\mathbf{W}^{l}\) in each layer, which are then used to update the model weights.
Our work adopts synchronous Stochastic Gradient Descent (SGD) [26] to train GNNs on multiple devices, which follows a workflow similar to the original GNN training but with a few variations. During the first step, multiple mini-batches are sampled and then each device is assigned one mini-batch. Each device then performs forward/backward propagation as in the original GNN training algorithm. Finally, the gradients within each device are gathered and averaged. The averaged gradients are then broadcast to each device to perform a global weight update. Training in synchronous SGD on multiple devices is algorithmically equivalent to training with a larger mini-batch on a single device. For example, training on 4 GPUs with mini-batch size 1024 is equivalent to training on 1 GPU with mini-batch size 4096 [27].
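The gradient-averaging step of synchronous SGD can be summarized in a few lines; the sketch below is a NumPy illustration with simulated per-device gradients (the single weight matrix and device count are placeholders), not the distributed implementation used in our system.

```
import numpy as np

def allreduce_average(per_device_grads):
    """Gather gradients from every device, average them, and broadcast the result back."""
    avg = {name: np.mean([g[name] for g in per_device_grads], axis=0)
           for name in per_device_grads[0]}
    return [avg for _ in per_device_grads]     # every device receives identical gradients

# Toy example: 4 devices, each producing a gradient for one weight matrix W1.
rng = np.random.default_rng(1)
grads = [{"W1": rng.standard_normal((3, 3))} for _ in range(4)]
synced = allreduce_average(grads)
assert all(np.allclose(synced[0]["W1"], g["W1"]) for g in synced)
```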
### _Target Heterogeneous Architecture_
Figure 2 shows the target heterogeneous architecture. It consists of multiple CPUs and multiple accelerators. The CPU memory on the platform forms a shared address space: each CPU is able to access the CPU memory to which it is connected, and can also access CPU memory that is connected to other CPUs via processor interconnection channels such as QPI [28]. Each accelerator is connected to a processor via PCIe, and each accelerator is connected to a device memory via DDR channels.
## III System Design
In this Section, we first introduce the logical components of HyScale-GNN in Section III-A. Then, we show how the logical components run on a heterogeneous platform in Section III-B. Finally, we introduce the Processor-Accelerator Training Protocol in Section III-C, which defines how the processors and the accelerators should interact to perform hybrid training.
### _Hybrid GNN Training System_
HyScale-GNN consists of multiple logical components. We depict the logical components (grey rounded rectangles) and their input/output (green rectangles) in Figure 3, and describe each component in the following:
**Mini-batch Sampler:** At the beginning of a training iteration, the Mini-batch Sampler takes the graph topology \(\mathcal{G}(\mathcal{V},\mathcal{E})\) as input, and produces multiple mini-batches by executing a sampling algorithm [2, 29].
**Feature Loader:** Given a mini-batch, the Feature Loader extracts a mini-batch feature matrix \(\mathbf{X}^{\prime}\) from the original feature matrix \(\mathbf{X}\). The extracted feature matrix \(\mathbf{X}^{\prime}\) contains only the vertex features of the sampled vertices instead of all the vertices in the input graph.
**GNN Trainers:** The GNN Trainers perform the forward propagation and backward propagation of GNN training. They take the mini-batch topology and mini-batch feature matrix as inputs, and produce gradients for model weight update.
**Synchronizer:** After each GNN Trainer produces a set of gradients, the Synchronizer performs an all-reduce operation which gathers the gradients from each Trainer, takes the average value of the gradients, and broadcasts the averaged gradients back to each Trainer to update their model weights.
**Runtime:** The Runtime system manages the interaction and data communication between the CPUs and the accelerators based on our Processor-Accelerator Training Protocol (Section III-C). In addition, it also performs Dynamic Resource Management which fine-tunes the workload assignment on the target platform during training.
### _Task mapping and Coordination_
Our hybrid training system consists of a runtime thread, several processor threads, and several accelerator threads.
Fig. 1: Overview of mini-batch GNN training
Fig. 3: System overview
Each logical component (Section III-A) is mapped to one or multiple threads. We show the task mapping and coordination of HyScale-GNN in Figure 4. HyScale-GNN decomposes GNN training into four pipeline stages: Sampling, Feature Loading, Data Transfer, and GNN Propagation.

1. **Sampling**: mini-batch sampling can be performed on both the CPUs and the accelerators. In each training iteration, \(n\) mini-batches are produced, where \(n\) is the number of GNN Trainers in the system. After each mini-batch is produced, it is stored in the CPU memory for Feature Loading.
2. **Feature Loading**: after collecting all \(n\) mini-batches, the Feature Loader reads the feature vectors of the sampled vertices from the input feature matrix \(\mathbf{X}\) and stores the loaded features in a sub-matrix \(\mathbf{X}^{\prime}\). Feature Loading is performed only on the CPUs, because the input feature matrix \(\mathbf{X}\) of a large-scale graph is too large to fit in the device memory; the feature matrix \(\mathbf{X}\) is therefore stored in the CPU memory and accessed by the Feature Loader, which runs on the CPUs.
3. **Data Transfer**: a mini-batch can be executed either on the CPU or on the accelerator. If the mini-batch is executed on the accelerator, the mini-batch topology \(\{\mathcal{G}(\mathcal{V}^{l},\mathcal{E}^{l}):1\leqslant l\leqslant L\}\) and the mini-batch feature matrix \(\mathbf{X}^{\prime}\) are transferred to the accelerator device memory via PCIe.
4. **GNN Propagation**: in each training iteration, each device (a processor or an accelerator) is assigned a mini-batch topology and a mini-batch feature matrix; these serve as the inputs for the GNN Trainers to perform forward and backward propagation. Initially, the workload (i.e., mini-batch size) assignment is decided based on our performance model (Section V) at design time. If there is a workload imbalance among the devices at runtime, the DRM engine (Section IV-A) adjusts the workload assignment for the next training iteration.

After the propagations, each Trainer produces a set of gradients that are later used to update the model weights; each Trainer then sends a "DONE" signal to the Synchronizer once its gradients are stored or transferred to the CPU memory. Since all the accelerators are connected to the CPUs, and the CPUs are connected to each other (Figure 2), it is natural to run the Synchronizer on a CPU. After receiving the "DONE" signals from all the Trainers, the Synchronizer performs an all-reduce operation, which averages the gathered gradients and broadcasts the result back to each Trainer. The Runtime system proceeds to the next training iteration after all the Trainers update their local model weights and send an acknowledgment to the Runtime system.
### _Processor-Accelerator Training Protocol_
To perform hybrid training on a given heterogeneous architecture, we propose a general _Processor-Accelerator Training Protocol_. The protocol consists of three layers: the application layer consists of the logical components defined in
Fig. 4: Task mapping and coordination
Fig. 5: Processor-accelerator training protocol
Section III-A; the programming layer consists of libraries that are used to implement the logical components on multi-core CPUs, GPUs, FPGAs, or AI-specific accelerators; the physical layer consists of the actual hardware. HyScale-GNN can be ported to various heterogeneous architectures since the processor-accelerator interaction is defined at the application layer, which is not bound to a specific type of accelerator. We show the data exchange and handshake signals in Figure 5. Note that Figure 5 does not depict the Feature Loading stage since there is no data exchange or handshake signal in that stage. In each pipeline stage, a barrier is set at the end for synchronization. In addition, the Runtime system collects the execution time of each stage to fine-tune the workload assignment in the next iteration (Section IV-A).
## IV Optimizations
In order to achieve high GNN training throughput, we develop various optimizations to perform efficient task mapping (Section IV-A) and to reduce communication overhead (Section IV-B, IV-C). It is worth noticing that these optimizations do not alter the semantics of the original GNN training algorithm. Thus, HyScale-GNN does not trade off the model accuracy and convergence rate for higher training throughput.
### _Dynamic Resource Management_
To efficiently map GNN training tasks onto a heterogeneous architecture, we first utilize the predicted result from our performance model (Section V) to initialize the GNN training task mapping at compile time. Furthermore, we propose a _Dynamic Resource Management (DRM)_ engine that fine-tunes the resource allocation and task mapping to improve GNN training throughput during runtime. The DRM engine is a bottleneck-guided optimizer, which improves training throughput by accelerating the bottleneck stage in each iteration. The DRM engine features two functions to speed up the bottleneck stage: _balance work_ and _balance thread_. The _balance work_ function balances the workload between the CPU and the accelerator by varying the mini-batch size assigned to the Trainers. The total mini-batch size executed on the hybrid system remains the same after the re-assignment. The _balance thread_ function explores the performance trade-off between CPU tasks (e.g., CPU Sampler, CPU Trainer), and is only used when the bottleneck stage is a CPU task. It speeds up the bottleneck stage by reducing the number of threads assigned to the fastest CPU task and re-assigning those threads to the bottleneck stage.
```
Input : execution time of Sampling on Accelerator T_SA, Sampling on CPU T_SC,
        Feature Loading T_Load, Data Transfer T_Tran, Training on CPU T_TC,
        Training on Accelerator T_TA
Output: thread assignment, workload assignment

T_Accel = max(T_Tran, T_TA)
Sorted_list = sort(T_SC, T_SA, T_Load, T_TC, T_Accel)
fastest    = Sorted_list[4]
second     = Sorted_list[3]
bottleneck = Sorted_list[0]

Sorted_CPU = sort(T_SC, T_Load, T_TC)
fastest_CPU_task = Sorted_CPU[2]

switch bottleneck do                     ▷ Bottleneck-guided Optimizer
  case T_SA:
    balance_work(T_SC, T_SA)
  case T_Accel:
    balance_work(T_TC, T_Accel)
  case T_Load:
    balance_thread(fastest_CPU_task, bottleneck)
  case T_SC:
    if fastest == T_SA then
      balance_work(T_SC, T_SA)
    else
      balance_thread(fastest, bottleneck)
    end if
  case T_TC:
    if fastest == T_Accel or second == T_Accel then
      balance_work(T_TC, T_Accel)
    else
      balance_thread(fastest, bottleneck)
    end if
end switch
```
**Algorithm 1** Dynamic Resource Management
Algorithm 1 describes how the DRM engine works at a high level. First, the DRM engine bundles the Data Transfer time and the Training on Accelerator time because the execution times of the two are highly correlated. For example, if the workload assigned to the accelerator is reduced, the Data Transfer time also decreases since less data needs to be communicated. The DRM engine takes the execution time of each stage as input and identifies the bottleneck stage and the fastest stage. If the system is bottlenecked by an accelerator task, then the DRM performs _balance work_ to adjust the workload assignment between the CPUs and the accelerators in the next iteration. If the system is bottlenecked by the Feature Loader, the DRM engine performs _balance thread_, which re-allocates more threads to perform Feature Loading. If the system is bottlenecked by the CPU Sampler, the DRM engine can either perform _balance work_ or _balance thread_ to speed up the CPU Sampler. The decision depends on which stage runs the fastest. If the Accelerator Sampler runs the fastest, the DRM engine performs _balance work_, which increases the workload assigned to the accelerators; if the fastest task is the Accelerator Trainer, followed by the Accelerator Sampler, then the DRM engine also assigns more workload to the accelerators; otherwise, the DRM engine performs _balance thread_, which explores performance trade-offs between other CPU tasks. Finally, if the system is bottlenecked by the CPU Trainer, the DRM engine can also improve the performance
by performing either _balance work_ or _balance thread_; thus, the improvement strategy is similar to when bottlenecked by the CPU Sampler.
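The paper specifies what _balance work_ must preserve (the total mini-batch size) but not its exact re-assignment rule; the following sketch is one plausible proportional rule, assumed here purely for illustration and not taken from the HyScale-GNN implementation.

```
def balance_work(batch_cpu, batch_accel, t_cpu, t_accel, step=0.1):
    """Shift mini-batch size toward the faster side while preserving the total batch size."""
    total = batch_cpu + batch_accel
    if t_cpu > t_accel:        # CPU side is the bottleneck: move work to the accelerator
        delta = int(step * batch_cpu)
        batch_cpu, batch_accel = batch_cpu - delta, batch_accel + delta
    else:                      # accelerator side (including transfer) is the bottleneck
        delta = int(step * batch_accel)
        batch_cpu, batch_accel = batch_cpu + delta, batch_accel - delta
    assert batch_cpu + batch_accel == total
    return batch_cpu, batch_accel

print(balance_work(512, 512, t_cpu=0.8, t_accel=0.4))   # e.g. (461, 563)
```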
### _Two-stage Feature Prefetching_
HyScale-GNN runs GNN training on both the CPUs and the accelerators. For GNN Trainers that run on the accelerators, the data needs to be fetched from the CPU memory and then transferred to the accelerators via PCIe. To reduce the expensive communication overhead, HyScale-GNN performs Feature Prefetching. The idea is to pre-load the mini-batches for the next training iteration onto the accelerators first, so the accelerators do not have to wait for data when performing GNN operations. Observing that the Feature Prefetching stage can still bottleneck the system, we further decompose Feature Prefetching into two pipeline stages: Feature Loading, and Data Transfer. The Feature Loading stage loads the Mini-batch Feature \(\mathbf{X}^{\prime}\) from the CPU memory, and the Data Transfer stage sends the matrix \(\mathbf{X}^{\prime}\) to the accelerator via PCIe. The two stages can run concurrently because they utilize different memory channels (CPU RAM channel, and PCIe channel), and there is no data dependency between mini-batches. Figure 7 shows an example of the Two-stage Feature Prefetching: in iteration 4, while an accelerator is executing mini-batch 1, the feature matrix of mini-batch 2 is being transferred to the accelerator via PCIe, and the feature matrix of mini-batch 3 is being loaded from the CPU memory, simultaneously. Thus, when the accelerator executes mini-batch 2 in the next iteration, the mini-batch topology and mini-batch features are ready in the accelerator's device memory. Note that Figure 7 only shows a simplified version of the system pipeline. For each iteration, multiple mini-batches can be sampled, loaded, transferred, and executed. It is also worth noting that our system pipeline efficiently utilizes the various resources on the heterogeneous architecture.
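A compact way to picture the decoupled Feature Loading and Data Transfer stages is a producer-consumer pipeline; the Python sketch below only emulates the control flow of Figure 7 with threads and single-slot queues (the stage bodies, batch count, and strings are placeholders), and is not the actual runtime implementation.

```
import threading, queue

NUM_BATCHES = 6
load_q = queue.Queue(maxsize=1)   # holds one loaded feature matrix (prefetch depth 1)
xfer_q = queue.Queue(maxsize=1)   # holds one batch already resident on the device

def loader():                     # stage 1: read mini-batch features from CPU memory
    for i in range(NUM_BATCHES):
        load_q.put(f"features of mini-batch {i}")
    load_q.put(None)

def transfer():                   # stage 2: copy loaded features over PCIe
    while (item := load_q.get()) is not None:
        xfer_q.put(item + " (on device)")
    xfer_q.put(None)

threads = [threading.Thread(target=loader), threading.Thread(target=transfer)]
for t in threads:
    t.start()
while (batch := xfer_q.get()) is not None:   # stage 3: GNN propagation consumes batches
    print("training on", batch)
for t in threads:
    t.join()
```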
### _Hardware Kernel Design_
GNN training incurs a massive amount of memory access, and the expensive memory access overhead limits the training throughput. Thus, we design dedicated hardware kernels and datapath to reduce external memory access for the GNN Propagation stage as shown in Figure 6. GNN propagation consists of an aggregation stage and an update stage (Section II-A). For the update stage, we adopt a systolic-array-based design to perform Multi-Layer Perceptron (MLP); for the aggregation stage, we adopt a scatter-gather design [17, 30] to process multiple edges in parallel. Figure 6 shows an example of a kernel with four sets of scatter-gather processing elements (PEs) which can process four edges in parallel. To maximize data reuse, HyScale-GNN first sorts the edges within a mini-batch by their source vertex so that edges with the same source vertex are executed in a back-to-back manner. When a vertex feature is fetched from the external memory, the Feature Duplicator broadcasts the fetched feature to all the scatter-PEs (S-PEs). The feature is then stored in the local memory of the S-PEs and reused \(D_{out}(v)\) times where \(D_{out}(v)\) is the out-degree of vertex \(v\). For example, in Figure 6, four edges are processed. Assume \(D_{out}(v_{0})\) is 3, then the loaded feature \(X_{0}\) can be reused three times at most. Since the edges are sorted by source vertex, the first three edges have the same source vertex \(v_{0}\), and \(X_{0}\) is reused three times. The fourth S-PE remains idle until \(X_{1}\) is read from memory. This design maximizes the input data reuse since each vertex feature only needs to be read once from memory, and the input memory traffic is reduced from \(O(\mathcal{E}^{1})\) to \(O(\mathcal{V}^{0})\) (notations are defined in Table I). To reduce memory footprint, we design a customized datapath, which avoids writing the intermediate results back to the memory during GNN Propagation. As shown in Figure 6, the output of the aggregate kernel is directly sent to the update kernel, and the output of the update kernel is sent to the aggregate kernel for feature aggregation in the next layer. Therefore, only the final output is written back to the memory.
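The data-reuse argument can be checked with a small counting sketch: charging the unsorted case one external feature fetch per edge, sorting the edges by source vertex reduces the number of fetches to the number of distinct source vertices, since each fetched feature is broadcast to the scatter-PEs and reused \(D_{out}(v)\) times. The toy edge list below is an assumption for illustration only.

```
from itertools import groupby

edges = [(0, 5), (2, 7), (0, 3), (1, 4), (0, 9), (2, 8)]        # (source, destination)

# Unsorted worst case: one external feature fetch per edge -> O(|E|) reads.
fetches_unsorted = len(edges)

# Sorted by source vertex: fetch each source feature once, reuse it for all its edges -> O(|V|) reads.
edges_sorted = sorted(edges, key=lambda e: e[0])
fetches_sorted = sum(1 for _src, _grp in groupby(edges_sorted, key=lambda e: e[0]))

print(fetches_unsorted, fetches_sorted)                          # 6 vs. 3
```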
## V Performance Model
To initialize the task mapping on a heterogeneous architecture, we propose a performance model to predict the performance of HyScale-GNN. First, we define the GNN training
Fig. 6: Hardware kernel designs and datapath
Fig. 7: Two-stage Feature Prefetching
throughput in millions of traversed edges per second (MTEPS):
\[\text{Throughput}=\frac{\sum_{i=0}^{n}\sum_{l=1}^{L}|\mathcal{E}_{i}^{l}|}{T_{\text{ execution}}} \tag{5}\]
We use \(n\) to denote the number of GNN Trainers running on the system. Each Trainer executes one mini-batch in each iteration. Therefore, the numerator denotes the total number of edges traversed by all the Trainers in one iteration, and the denominator denotes the execution time of one training iteration (Section II-B). There are four pipeline stages in our system (Section III-A): Sampling, Feature Loading, Data Transfer, and GNN Propagation; thus, \(T_{\text{execution}}\) is modeled as:
\[T_{\text{execution}}=\max(T_{\text{samp}},T_{\text{load}},T_{\text{trans}},T_{ \text{prop}}) \tag{6}\]
Instead of formulating an equation, we estimate \(T_{\text{samp}}\) by running the sampling algorithm under different numbers of threads and different mini-batch sizes, and deriving their execution time during design phase. This is because the computation pattern varies in different sampling algorithms [2, 29], so it is not feasible to model the sampling time \(T_{\text{samp}}\) of various algorithms with a simple equation.
\(T_{\text{load}}\) and \(T_{\text{trans}}\) can be modeled as:
\[T_{\text{load}}=\frac{\sum_{i=0}^{n}|\mathcal{V}^{0}|\times f^{0}\times S_{ \text{feat}}}{BW_{\text{DDR}}} \tag{7}\]
\[T_{\text{trans}}=\frac{|\mathcal{V}^{0}|\times f^{0}\times S_{\text{feat}}}{BW _{\text{PCIe}}} \tag{8}\]
The numerator in Equation 7 denotes the size of the vertex features that need to be loaded from the CPU memory, and the numerator in Equation 8 denotes the size of the vertex features that need to be transferred to a single accelerator. \(S_{\text{feat}}\) denotes the data size, which is 4 bytes for a single-precision floating-point number. For \(T_{\text{load}}\), the data is loaded from the CPU memory, so the denominator is the CPU memory bandwidth; for \(T_{\text{trans}}\), the denominator is the PCIe bandwidth. We use "bandwidth" to denote the effective bandwidth of performing burst data transactions as opposed to the peak bandwidth.
Multiple GNN Trainers run in parallel, and \(T_{\text{prop}}\) can be modeled as:
\[T_{\text{prop}}=\max_{i\in n}(T_{\text{Trainer},i})+T_{\text{sync}} \tag{9}\]
The execution time of a single Trainer can be modeled as:
\[\begin{split} T_{\text{Trainer}}=t_{\text{forward\_prop}}+t_{ \text{backward\_prop}}=\\ \sum_{l=1}^{L}\oplus(t_{\text{aggregate}}^{l},t_{\text{update}}^{ l})+\\ t_{\text{update}}^{1}+\sum_{l=2}^{L}\oplus(t_{\text{aggregate}} ^{l},t_{\text{update}}^{l})\end{split} \tag{10}\]
which is the total time to perform forward propagation and backward propagation. The \(\oplus\) operator depends on the kernel design. If feature aggregation and feature update are pipelined (e.g., Trainer on FPGA), then \(\oplus\) is the \(\max\) operator; if they are not pipelined (e.g., Trainer on CPU), then \(\oplus\) is the \(\sum\) operator. \(t_{\text{aggregate}}^{l}\) and \(t_{\text{update}}^{l}\) can be modeled as:
\[t_{\text{aggregate}}^{l}=\frac{|\mathcal{E}^{l-1}|\times f^{l}\times S_{\text{ feat}}}{BW_{\text{DDR}}} \tag{11}\]
\[t_{\text{update}}^{l}=\frac{|\mathcal{V}^{l}|\times f^{l}\times f^{l+1}}{N\times freq.} \tag{12}\]
The aggregation time \(t_{\text{aggregate}}^{l}\) is modeled as the traffic size of fetching the feature vectors of the source vertices, divided by the memory bandwidth. The memory bandwidth depends on where the Trainer is located: a CPU Trainer fetches data from the CPU memory, while an Accelerator Trainer fetches data from the device memory. Since \(|\mathcal{E}^{l-1}|\) edges are processed during the \(l\)-th layer feature aggregation, the traffic size can be modeled as \(|\mathcal{E}^{l-1}|\times f^{l}\times S_{\text{feat}}\). The update time \(t_{\text{update}}^{l}\) is modeled as the number of multiply-and-add (MAC) operations performed during feature update, divided by the computing power of the GNN Trainer. We model the computing power as \(N\times freq.\), where \(N\) is the degree of computation parallelism (e.g., the number of MAC units) in each Trainer, and \(freq.\) is the operating frequency. \(T_{\text{sync}}\) can be modeled as:
\[T_{\text{sync}}=\frac{\sum_{l=1}^{L}f^{l-1}\times f^{l}\times S_{\text{feat}}}{ BW_{\text{PCIe}}}\times 2 \tag{13}\]
The numerator is the model size (i.e., total size of all of the weight matrices). The factor of 2 comes from the all-reduce operation where the model is first gathered, averaged, and then scattered back to each Trainer; thus, the data is transferred through the PCIe twice. The denominator is the PCIe bandwidth.
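The sketch below collects Equations (6)-(13) into a single-Trainer estimate in Python. It is only a sketch: the layer-size indexing is one reading of the notation, \(\oplus\) is taken as max (a pipelined accelerator kernel), the synchronization term is simply added to the propagation time, and every numerical input is a placeholder rather than a measured platform parameter.

```
def iteration_time(v_in, f, edges_per_layer, verts_per_layer,
                   bw_ddr, bw_pcie, n_mac, freq, s_feat=4.0, t_samp=1e-3):
    """Simplified single-Trainer estimate of one training iteration (Eqs. 6-13).

    f[l]               : feature dimension after layer l (f[0] is the input feature size)
    edges_per_layer[l] : edges traversed by the aggregation of GNN layer l+1
    verts_per_layer[l] : vertices updated by GNN layer l+1
    """
    L = len(edges_per_layer)
    t_load  = v_in * f[0] * s_feat / bw_ddr                                  # Eq. (7), one Trainer
    t_trans = v_in * f[0] * s_feat / bw_pcie                                 # Eq. (8)
    t_agg = [edges_per_layer[l] * f[l] * s_feat / bw_ddr for l in range(L)]  # Eq. (11)
    t_upd = [verts_per_layer[l] * f[l] * f[l + 1] / (n_mac * freq) for l in range(L)]  # Eq. (12)
    t_prop = 2 * sum(max(a, u) for a, u in zip(t_agg, t_upd))                # forward + backward
    t_sync = 2 * sum(f[l] * f[l + 1] for l in range(L)) * s_feat / bw_pcie   # Eq. (13)
    return max(t_samp, t_load, t_trans, t_prop + t_sync)                     # Eq. (6)

# Placeholder numbers: a 2-layer model, mini-batch 1024, hidden dimension 256.
print(iteration_time(v_in=26_000, f=[100, 256, 256],
                     edges_per_layer=[260_000, 10_240], verts_per_layer=[10_000, 1_024],
                     bw_ddr=150e9, bw_pcie=12e9, n_mac=2048, freq=300e6))
```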
## VI Experimental Results
### _Experimental Setup_
#### Vi-A1 Environment
We conduct our experiments on a dual-socket server, which consists of two AMD EPYC 7763 CPUs. We evaluate HyScale-GNN using two heterogeneous setups: a CPU-GPU heterogeneous architecture, and a CPU-FPGA heterogeneous architecture. For the CPU-GPU heterogeneous architecture, the dual-socket server is connected to four Nvidia A5000 GPUs; for the CPU-FPGA heterogeneous architecture, the dual-socket server is connected to four Xilinx U250 FPGAs. We list the detailed specification of the CPU, GPU, and FPGA in Table II. We implement the multi-GPU baseline, and CPU-GPU design using Python v3.8, PyTorch v1.11, CUDA v11.3, and PyTorch-Geometric v2.0.3. We develop our FPGA kernels using Xilinx Vitis HLS v2021.2 [31].
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline
**Platforms** & CPU & GPU & FPGA \\ & AMD EPYC 7763 & Nvidia RTX A5000 & Xilinx Alveo U250 \\ \hline \hline Technology & TSMC 7 nm & Samsung 8 nm & TSMC 16 nm \\ Frequency & 2.25 GHz & 2000 MHz & 300 MHz \\ Peak Performance & 3.6 TFLOPS & 27.8 TFLOPS & 0.6 TFLOPS \\ On-chip Memory & 256 MB L3 Cache & 6 MB L2 Cache & 54 MB \\ Memory Bandwidth & 205 GB/s & 768 GB/s & 77 GB/s \\ \hline \hline \end{tabular}
\end{table} TABLE II: Specifications of the platforms
#### Vi-A2 GNN Models and Datasets
We choose two widely used GNN models, GCN [23] and GraphSAGE [2], to evaluate our system. We adopt a commonly used model setup: a two-layer model with a hidden feature size of 256. We choose a medium-scale dataset and two large-scale datasets with over a billion edges for evaluation: ogbn-products, ogbn-papers100M [22], and MAG-240M (homo) [7]. The ogbn-products dataset is a medium-scale graph with 60 million edges; we include this dataset to compare our performance with previous works. MAG-240M (homo) is the homogeneous version of the MAG-240M dataset, which only preserves one type of vertex and one type of edge in the original heterogeneous graph. Note that MAG-240M (homo) still contains 1.3 billion edges, making it a large-scale graph. Details of the datasets and the GNN-layer dimensions are shown in Table III. We use the Neighbor Sampler [2] to produce mini-batches; we set the mini-batch size to 1024, and the neighbor sampling sizes of the two layers are 25 and 10.
### _System Implementation_
We show how the Processor-Accelerator Training Protocol is implemented using the libraries in the programming layer (Section III-C) in Listing 1; while we use GPU and FPGA as examples, the processor-accelerator interaction is similar if the protocol is adapted to other AI-accelerators. We implement the Runtime system using Pthreads. We launch multiple threads to exchange data, handshake, or launch accelerator kernels. Data transfer and kernel launching can be realized using APIs provided by the programming libraries such as CUDA and OpenCL. To implement the handshake, we utilize the _condition wait_ function in Pthreads. For example, the Synchronizer needs to wait for all the Trainers to complete GNN propagation before averaging the gradients. Each Trainer increments the "DONE" variable upon producing the gradients and then prompts the synchronizer. When "DONE" equals the number of Trainers, the Synchronizer proceeds to average the gathered gradients.
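The "DONE"-counting handshake described above maps naturally onto condition variables. The sketch below reproduces the control flow with Python's threading module purely as an illustration; the Runtime itself uses Pthreads and the accelerator APIs, and the gradient payloads and thread counts here are placeholders.

```
import threading

NUM_TRAINERS = 4
done = 0
gradients = []
cond = threading.Condition()

def trainer(rank):
    global done
    grad = [rank * 0.1] * 3                  # placeholder for the real backward pass
    with cond:
        gradients.append(grad)
        done += 1                            # each Trainer increments "DONE" ...
        cond.notify_all()                    # ... and prompts the Synchronizer

def synchronizer():
    with cond:
        cond.wait_for(lambda: done == NUM_TRAINERS)          # wait for all Trainers
        avg = [sum(col) / NUM_TRAINERS for col in zip(*gradients)]
    print("averaged gradients:", avg)

sync = threading.Thread(target=synchronizer)
workers = [threading.Thread(target=trainer, args=(r,)) for r in range(NUM_TRAINERS)]
sync.start()
for t in workers:
    t.start()
for t in workers + [sync]:
    t.join()
```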
We list the hardware parameters and resource utilization of the FPGA design in Table IV. We use \(n\) and \(m\) to denote the parallelism of the aggregate kernel and update kernel, respectively. In particular, \(n\) indicates the number of scatter-gather PEs. \(m\) indicates the number of multiply-and-accumulate units in the systolic-array-based kernel design. Figure 6 shows an example for \(n=4\) and \(m=16\).
### _Evaluation of Performance Model_
We evaluate our performance model by comparing the predicted epoch time with the actual experimental result. Figure 8 shows the epoch time comparison on the MAG240M (homo) dataset under various number of FPGAs. The prediction error ranges from 5% to 14% on average. The error comes from extra latency that is not formulated in our model. First, there is an initial overhead when launching the kernel on an accelerator. Second, the overhead of pipeline flushing [32] is not included in the model. These two overheads are hard to predict as they depend on various factors such as the target accelerator and the version of the compiler.
### _Scalability_
We evaluate the scalability of HyScale-GNN using our performance model (Section V). We show the scalability of HyScale-GNN in Figure 9. Using the CPU-FPGA platform as an example, HyScale-GNN demonstrates good scalability to
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Parallelism (\(n\),\(m\)) & LUTs & DSPs & URAM & BRAM \\ \hline \hline (8, 2048) & 72\% & 90\% & 48\% & 40\% \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Hardware Parameters and Resource utilization
16 FPGAs. The limiting factor of scalability is the CPU memory bandwidth. As we use more accelerators, more mini-batch feature matrices need to be loaded from the CPU memory. We observe that the CPU memory starts to saturate when more than 12 accelerators are used on the heterogeneous platform. The first set of experiments which runs a GCN model on the ogbn-products dataset shows lower scalability than other sets of experiments. This is because the first set of experiments is bottlenecked by the data transfer time (i.e., PCIe bandwidth), which limits the amount of workload that can be assigned to the accelerators and thus limits the achievable speedup.
### _Overall Performance_
#### Vi-E1 Performance evaluation
We evaluate the performance of HyScale-GNN using a CPU-GPU heterogeneous architecture, and a CPU-FPGA heterogeneous architecture. We compare the epoch time of HyScale-GNN with a state-of-the-art multi-GPU GNN training implementation using PyTorch-Geometric (PyG) [18]. The PyG baseline also runs on the CPU-GPU heterogeneous architecture; however, it does not utilize the CPU to perform hybrid training, so we regard it as a multi-GPU baseline. We show the result in Figure 10. By applying various optimizations and performing hybrid CPU-GPU training, HyScale-GNN achieves up to \(2.08\times\) speedup compared with the multi-GPU baseline. We discuss the effectiveness of each optimization in Section VI-F. On the CPU-FPGA heterogeneous architecture, HyScale-GNN achieves up to \(12.6\times\) speedup compared with the multi-GPU baseline, and \(5\times-6\times\) speedup compared with the CPU-GPU heterogeneous architecture. This is because FPGAs feature customizable datapath and memory organization, which allows the Accelerator Trainer to minimize external memory access during GNN training. In particular, all the intermediate results are stored on-chip using the abundant on-chip memory of FPGA, and only the final result is written back to the memory. In contrast, GPUs suffer from frequent memory access throughout the training since traditional cache policies fail to capture the data access pattern in GNN training [33].
#### Vi-E2 Comparison with State-of-the-art
Many works [9, 17, 18, 19, 20, 21, 34, 35] have been proposed to accelerate GNN training. However, only a few of the works are capable of training GNN models on large-scale graphs. We choose three representative GNN training systems for comparison, namely PaGraph [20], \(P^{3}\)[19], and DistDGLv2 [35]. We list the platform setup of each work in Table V. We use _SAGE_ to indicate the GraphSAGE [2] model. Among the three large-scale GNN training systems, PaGraph is the only work that runs on a single node; \(P^{3}\) and DistDGLv2 run on a distributed platform with four nodes and eight nodes, respectively.
We compare the epoch time of our work, which runs on a single node using only 4 FPGAs, with the aforementioned training systems. For each set of experiments, we set the same model configuration (sample size, hidden dimension) as the work we are comparing with. As shown in Table VI, HyScale-GNN achieves up to \(6.9\times\) and \(5.27\times\) speedup compared with PaGraph and \(P^{3}\), respectively. To provide a fair comparison, we normalize the epoch time w.r.t. platform peak performance; this metric shows the effectiveness and efficiency of the system design itself, rather than relying on powerful hardware to
Fig. 8: Predicted performance vs. actual performance on two GNN models
Fig. 10: Cross platform comparison
Fig. 9: Scalability of our hybrid training system
deliver high performance. As shown in Table VII, HyScale-GNN achieves \(21\times\)-\(71\times\) speedup compared with state-of-the-art systems after normalization. HyScale-GNN achieves speedup for several reasons: (1) resource utilization: HyScale-GNN utilizes both the processors and the accelerators to train GNN models collaboratively. In particular, HyScale-GNN utilizes both the CPU cores and the accelerators to compute; and utilizes both the CPU memory and device memory to read data concurrently. Our DRM engine (Section IV-A) further ensures the tasks are efficiently mapped onto our platform. On the other hand, PaGraph and \(P^{3}\) do not take advantage of the processors on the platform. (2) communication overhead: as mentioned in Section VI-E1, FPGA-based solutions can efficiently reduce the external memory access overhead compared with GPU-based solutions. In addition, PaGraph only caches a portion of the vertex features in the device memory, and needs to fetch data from the CPU memory if it encounters a cache miss; thus, the PCIe communication overhead becomes large when training on large-scale graphs like ogbn-papers100M since cache miss occurs frequently. \(P^{3}\) incurs inter-node data communication since the graph is partitioned and distributed on each node, which causes extra communication overhead compared with HyScale-GNN. Compared with DistDGLv2, which runs on eight nodes with a total of 64 GPUs, HyScale-GNN is able to achieve \(0.45\times\) of its performance using only 4 FPGAs on a single node machine. DistDGLv2 utilizes both the processor and the accelerator to train GNN models collaboratively. However, DistDGLv2 adopts a static task mapping, which can be inefficient. In addition, DistDGLv2 partitions the input graph and distributes the partitions to each node, which incurs inter-node communication overhead like \(P^{3}\).
### _Ablation Study_
In this section, we evaluate the effectiveness of the optimizations applied in HyScale-GNN. We show the evaluation on a CPU-FPGA heterogeneous architecture in Figure 11; evaluation on the CPU-GPU heterogeneous architecture also shows similar results. We start from a baseline design, which adopts a traditional task mapping that offloads most of the tasks (except tasks like sampling, synchronization, etc.) to the FPGA. Then, we apply hybrid CPU-FPGA training with a static task mapping; this leads to up to \(1.13\times\) speedup. The system achieves up to \(1.33\times\) speedup after applying the DRM optimization (Section IV-A). With the TFP (Section IV-B) optimization applied, HyScale-GNN achieves up to \(1.79\times\) speedup. This is because the data loading stage is often a bottleneck in GNN training; if the training is dominated by the GNN propagation stage (e.g., GraphSAGE model on the ogbn-papers100M in Figure 11), then the TFP optimization does not provide speedup.
## VII Related Work
Several works [9, 17, 34] have been proposed to accelerate GNN training on a single node. However, these works focus on using a single accelerator to perform GNN training and do not support training with multiple accelerators. In addition, works like GraphACT [9] and HP-GNN [17] store the input graph in the device memory, and thus cannot support large-scale graphs [7] that exceed the device memory size. Recently, several works [19, 21, 36] have been proposed to train GNNs on multi-node platforms. However, these works require graph partitioning, which leads to issues like workload imbalance and high inter-node communication overhead. In addition, graph partitioning may affect the convergence rate and model accuracy [21]. In this work, we show that it is feasible to train large-scale GNNs on a single node and achieve high training throughput.
## VIII Conclusion
In this work, we proposed HyScale-GNN, a hybrid training system that is optimized for training GNN models on large-scale graphs. We proposed several optimizations to reduce the
Fig. 11: Impact of optimizations
communication overhead and perform efficient task mapping. Our system achieved up to \(12.6\times\) speedup compared with a multi-GPU baseline. In addition, using only four FPGAs on a single node, HyScale-GNN is able to achieve \(1.76\times-4.57\times\) speedup compared with state-of-the-art training systems that employ 8 to 16 GPUs.
We also observed some limitations of HyScale-GNN. First, HyScale-GNN did not provide an effective solution if the performance is bottlenecked by the Data Transfer stage (i.e., limited by PCIe bandwidth). In this case, the DRM engine would reduce the workload assigned to the accelerator, which limits the achievable speedup and scalability of the system. Second, HyScale-GNN could not be directly extended to a distributed platform with multiple nodes. If an input graph is partitioned and distributed to each node like in DistDGL [21], inter-node communication and synchronization are needed. However, our protocol defines how the processor and the accelerator should interact on a single node. It does not support inter-node communication. In the future, we plan to exploit techniques like data quantization to relieve the stress on the PCIe bandwidth, and define a more general protocol for training GNN models on distributed and heterogeneous architectures.
## Acknowledgment
This work has been supported by the U.S. National Science Foundation (NSF) under grants SaTC-2104264 and OAC-2209563, and the DEVCOM Army Research Lab (ARL) under grant W911NF2220159.
|
2310.03569 | Design, fabrication and characterization of kinetic-inductive force
sensors for scanning probe applications | We describe a transducer for low-temperature atomic force microscopy based on
electromechanical coupling due to a strain-dependent kinetic inductance of a
superconducting nanowire. The force sensor is a bending triangular plate
(cantilever) whose deflection is measured via a shift in the resonant frequency
of a high-Q superconducting microwave resonator at 4.5 GHz. We present design
simulations including mechanical finite-element modeling of surface strain and
electromagnetic simulations of meandering nanowires with large kinetic
inductance. We discuss a lumped-element model of the force sensor and describe
the role of an additional shunt inductance for tuning the coupling to the
transmission line used to measure the microwave resonance. A detailed
description of our fabrication is presented, including information about the
process parameters used for each layer. We also discuss the fabrication of
sharp tips on the cantilever using focused electron beam-induced deposition of
platinum. Finally, we present measurements that characterize the spread of
mechanical resonant frequency, the temperature dependence of the microwave
resonance, and the sensor's operation as an electromechanical transducer of
force. | August K. Roos, Ermes Scarano, Elisabet K. Arvidsson, Erik Holmgren, David B. Haviland | 2023-10-05T14:34:55Z | http://arxiv.org/abs/2310.03569v2 | Design, fabrication and characterization of kinetic-inductive force sensors for scanning probe applications
###### Abstract
We describe a transducer for low-temperature atomic force microscopy, based on electromechanical coupling due to a strain-dependent kinetic inductance of a superconducting nanowire. The force sensor is a bending triangular plate (cantilever) whose deflection is measured via a shift in resonant frequency of a high Q superconducting microwave resonator at \(4.5\,\mathrm{GHz}\). We present design simulations including mechanical finite-element modeling of surface strain and electromagnetic simulations of meandering nanowires with large kinetic inductance. We discuss a lumped-element model of the force sensor and describe the role of an additional shunt inductance for optimal coupling to the transmission line used for the measurement of the microwave resonance. A step-by-step description of our fabrication is presented, including information about the process parameters used for each layer. We also discuss the fabrication of sharp tips on the cantilever using focused electron beam-induced deposition of platinum. Finally, we present measurements that characterize the spread of mechanical resonant frequency, temperature dependence of the microwave resonance, and the sensor's operation as an electromechanical transducer of force.
## I Introduction
Cavity optomechanics [1] deals with the detection and manipulation of massive "test objects" at the fundamental limits imposed by quantum physics [2]. By detecting the motion of the test object we can sense an external force, for example gravitational waves acting on a \(40\,\mathrm{kg}\) mirror in LIGO [3], or atomic-scale tip-surface force acting on a \(40\,\mathrm{pg}\) cantilever in an atomic force microscope (AFM). For AFM cantilevers operating at room temperature close to their fundamental resonant frequency in the kilohertz to megahertz range, optical interferometric [4; 5; 6] and beam-deflection [7; 8; 9] detectors of motion are sufficient to resolve the thermal noise force determined by the damping of the cantilever eigenmode in thermal equilibrium with its environment. Operation in high vacuum and at cryogenic temperatures reduces this force noise, improving sensitivity to the point where motion detection becomes the limiting source of noise. In this context the principles of cavity optomechanics may improve the sensitivity of AFM force sensors. Cryogenic AFM further enables the use of superconducting microwave resonators in a cavity optomechanical detection scheme [10; 11; 12; 13]. We recently introduced such a sensor based on the electromechanical coupling between surface strain and kinetic inductance of a superconducting nanowire [14]. Here we describe in detail the fabrication and characterization methods of these Kinetic Inductance Mechano-Electric Coupling (KIMEC) sensors.
Kinetic inductance is an electromechanical phenomenon, resulting from Cooper pair mass and the kinetic energy of a supercurrent. It can be orders of magnitude larger than geometric (electromagnetic) inductance in thin films and nanowires made of amorphous superconductors [15] and therefore useful in applications that require compact microwave resonators with low loss [16], including microwave filters [17] and resonant radiation detectors [18]. Large kinetic inductance also comes with intrinsic nonlinearity, or current dependence of the inductance, which enables low-noise microwave parametric amplification [19]. Different materials studied in the literature include niobium nitride (Nb-N) [20; 21], titanium nitride (Ti-N) [22; 23], niobium titanium nitride (Nb-Ti-N) [19; 24], or granular aluminum (grAl) [25; 26], wolfram (W) [27], and silicon doped with boron (Si:B) [28]. In this work we describe force sensors designed for AFM employing thin-film Nb-Ti-N meandering nanowire kinetic inductors on top of a flexible silicon nitride (Si-N) cantilever substrate. The deflection of the cantilever strains the nanowire, modulating its kinetic inductance and shifting the resonant frequency of a microwave mode.
## II Design
Figure 1 gives an overview of a finished sensor, showing its main components which we cover in detail in subsequent sections. The cantilever is a \(600\,\mathrm{nm}\) thick Si-N triangular plate released from a much thicker silicon (Si) support, as shown in Figs. 1(a) and 1(g). Figure 1(a) shows the microwave resonant circuit with its interdigital capacitor in series with the nanowire inductor. The meandering nanowire is placed at the base of the cantilever for maximum coupling to surface strain, as shown in Fig. 1(c). Details of three different nanowire widths are shown in Figs. 1(d)-(f). The circuit is measured in reflection, as illustrated in the device schematic in Fig. 1(b), through a coaxial transmission line that launches to the coplanar waveguide on the sensor chip (not shown).
### Mechanical design
Several considerations determine the design of the Si-N cantilever in the scanning force sensor. For AFM the spring constant or stiffness \(k\) of the cantilever eigenmode (typically the fundamental bending mode) is ideally of the same order as the tip-surface force gradients detected while scanning, of order \(100\,\mathrm{N/m}\). For a given mechanical resonant frequency \(\omega_{m}\) this requirement places constraints on the cantilever dimensions.
Operating the sensor in the so-called sideband-resolved regime, where the mechanical eigenfrequency is larger than the linewidth of the microwave cavity \(\kappa\), allows for sideband cooling of the mechanical mode via the microwave pump [1; 29]. Our microwave cavities are designed for resonant frequency \(\omega_{c}/2\pi\sim 5\,\mathrm{GHz}\), where experience shows that we can achieve microwave resonances with internal loss rate \(\kappa_{\mathrm{int}}/2\pi\sim 500\,\mathrm{kHz}\), corresponding to internal quality factor \(Q_{\mathrm{int}}\sim 2\times 10^{4}\). The Si-N plate thickness is fixed when fabricating a wafer of sensor chips, and we design the cantilever's in-plane dimensions to achieve \(\omega_{m}/2\pi\) in the range 0.5-\(10\,\mathrm{MHz}\), allowing devices to be either in the sideband-resolved or sideband-unresolved regime. For the given thickness we simulate the eigenfrequencies of the cantilever using the finite-element method (FEM) implemented in comsol [30], with the boundary condition of a perfectly rigid clamp along the line where the plate meets the Si substrate.
The FEM model gives the distribution of strain at the surface. Figure 2(a) shows the distribution of longitudinal strain \(\epsilon_{xx}(x,y)\) for the fundamental bending mode of interest. The strain is normalized to its maximum value, at the center of the clamping line. Figure 2(b) displays this maximum value of the surface strain as a function of the length \(l\) and width \(b\) of the triangular plate calculated
Figure 1: (a) A scanning electron microscope (SEM) image of a fabricated sensor, seen from an angled topside view. The cantilever is formed from a Si-N plate protruding from and supported by a Si substrate. A thin film of Nb-Ti-N is deposited on top of the Si-N and patterned to form the microwave resonator. A short nanowire in parallel forms a shunt inductance \(L_{s}\), affecting the coupling between the resonator and the transmission line. The signal line of a coplanar waveguide is connected to the circuit. (b) Equivalent circuit diagram of the device, where the microwave mode is modeled as a series \(RL_{k}C\)-circuit in parallel with a shunt inductance \(L_{s}\), directly connected to a transmission line and measured in reflection. The series resistance \(R\) represents the total internal losses of the microwave mode. (c) A topside view of the meandering nanowire inductor at the base of the cantilever with kinetic inductance \(L_{k}\). The inductor is placed transverse to the base of the released cantilever. (d)–(f) SEM images of nanowires from three different devices, showing three different nominal nanowire widths: \(200\,\mathrm{nm}\), \(100\,\mathrm{nm}\) and \(75\,\mathrm{nm}\). (g) An SEM image of an underside view of the clamping line of a released cantilever using an isotropic silicon etch. The etch produces an uneven clamping line, affecting the mechanical frequency of the cantilever.
for a \(1\,\mathrm{nm}\)\(z\)-displacement at the apex of the triangle, a typical tip displacement for measuring surface forces in AFM.
We place the nanowire inductor in this region of maximum longitudinal surface strain, with the long segments of the nanowire oriented parallel to the \(x\)-axis. The nanowire will thus experience compression or tension as the tip deflects in positive or negative \(z\)-direction, respectively. We can then vary \(b\) and \(l\) of the triangular cantilever to maximize the strain for a given deflection, and to achieve the desired mechanical resonant frequency \(\omega_{m}\), as shown in Fig. 2(c). We see that the length of the cantilever is the main factor affecting the strain and the resonant frequency. The width \(b\) must also be adjusted to accommodate a meandering nanowire with a total length \(\ell\) large enough to realize the desired kinetic inductance. To understand this constraint we turn to electromagnetic simulations.
### Electromagnetic design
The cavity is modeled as a series \(RL_{k}C\)-circuit as shown in Fig. 1(b). Our target resonant frequency \(\omega_{c}=1/\sqrt{L_{k}C}\sim 5\,\mathrm{GHz}\) and the relatively small coplanar capacitance at the scale shown in Fig. 1(a), motivates the use of kinetic inductance to achieve a compact lumped-element inductor with negligible stray capacitance. The kinetic inductance of a straight wire with length \(\ell\) is given by
\[\frac{L_{k}}{\ell}=\frac{m_{e}}{2n_{s}e^{2}A}, \tag{1}\]
where \(2m_{e}\) is the mass and \(n_{s}\) is the density of Cooper pairs, and \(A\) is the cross-sectional area of the wire. From this expression, we see that large \(L_{k}\) requires a long wire with a small cross-section, made from a material with small \(n_{s}\).
Alternatively, Mattis-Bardeen theory [31] relates the kinetic inductance of a film with thickness \(t\) much less than the London penetration depth to the normal-state sheet resistance \(R_{\square}\) and the superconducting energy gap \(\Delta_{0}\):
\[L_{k\square}=\frac{\hbar R_{\square}}{\pi\Delta_{0}}. \tag{2}\]
From this expression, we see that we require a superconducting film with large normal state resistance in a geometry with a large number of squares, i.e. a long and narrow strip. We used Nb-Ti-N as our lab was equipped for depositing such films, which are known to have large kinetic inductance while having a relatively high bulk critical temperature around \(14\,\mathrm{K}\).
For a given total inductance \(L\), the meandering structure allows for a physically compact shape. To increase the length \(\ell\) we wind the nanowire into a meandering structure with long, narrow straight sections oriented parallel to the \(x\)-axis. Such meandering nanowires are used in the context of single photon detectors [32, 33] and the effect on the kinetic inductance of current crowding at the bends has been investigated theoretically [34, 35, 36].
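As a numerical illustration of Eq. (2), the sketch below estimates the sheet kinetic inductance from an assumed normal-state sheet resistance and thin-film critical temperature, using the BCS relation \(\Delta_{0}\approx 1.76\,k_{B}T_{c}\), and then the total inductance of a meander of given length and width. The input values are representative assumptions chosen so that \(L_{k,\square}\) lands near the \(36\,\mathrm{pH}/\square\) used in the simulations of Fig. 3; they are not measured device parameters.

```
from scipy.constants import hbar, k, pi

def sheet_kinetic_inductance(R_square, T_c):
    """Eq. (2) with the BCS estimate Delta_0 ~ 1.76 k_B T_c (an assumption)."""
    delta_0 = 1.76 * k * T_c
    return hbar * R_square / (pi * delta_0)

# Representative (assumed) thin-film values, not measured device parameters.
R_square = 235.0                 # normal-state sheet resistance, ohm per square
T_c = 9.0                        # thin-film critical temperature, K
L_sq = sheet_kinetic_inductance(R_square, T_c)

length, width = 250e-6, 100e-9   # assumed meander length and nanowire width
L_total = L_sq * length / width  # number of squares = length / width
print(f"L_k per square = {L_sq * 1e12:.0f} pH, total L_k = {L_total * 1e9:.0f} nH")
```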
In this work, we explore thin-film nanowires of width \(w=75\,\mathrm{nm}\), \(100\,\mathrm{nm}\) and \(200\,\mathrm{nm}\) that we can make with a high degree of uniformity using electron-beam lithography and reactive-ion etching. We simulate the electromagnetic response of the meandering nanowire inductors using sonnet, a quasi-3D electromagnetic simulator [37]
Figure 2: (a) The distribution of longitudinal strain at the surface for displacement in the \(z\)-direction, \(\epsilon_{xx}(x,y)\), a dimensionless quantity (\(\epsilon=\Delta l/l\)). The strain is simulated for a \(1\,\mathrm{nm}\) displacement at the apex of the triangular Si-N plate in the \(z\)-direction, assuming a perfect clamp along the base of the plate. Lighter colour indicates larger strain. The displacement of the cantilever is exaggerated for clarity. (b) The maximum strain \(|\epsilon_{xx}^{\mathrm{max}}|\) at the point indicated by the dark dot [see panel (a)], as a function of cantilever width \(b\) and length \(l\). (c) The resonant frequency of the cantilever \(\omega_{m}\) as a function of its width \(b\) and length \(l\).
that has the feature of including sheet kinetic inductance \(L_{k,\square}\). We begin by simulating the meandering inductor itself to find the lowest-frequency self-resonant mode. We desire that this self-resonant frequency falls well above the target frequency of our resonator \(\sim 5\,\mathrm{GHz}\) so that we may, to a good approximation, treat the meandering nanowire as a lumped-element inductor. Figure 3(a) shows simulations of the current distribution of a typical inductor for all three nanowire widths at their lowest self-resonant frequency in the range 18-28 GHz, where we see the current node located in the center of the meander. Figure 3(b) shows the microwave simulation of the entire circuit, including a shunt inductance formed from a short nanowire. We see that at the lower resonant frequency of the inductor and series capacitor, the current is uniformly distributed inside the meandering nanowire, confirming that it behaves as a lumped-element inductor. We also see that on resonance, the current in the impedance-transforming shunt inductor reaches a similar magnitude to that in the meandering inductor.
We can further increase the internal quality factor \(Q_{\mathrm{int}}=(1/R)\sqrt{L/C}\) by increasing the total inductance \(L\), either geometrically, i.e. making the nanowire longer, or by increasing the kinetic inductance per square \(L_{k,\square}\), i.e. making the film thinner or the nanowire narrower. We run into several trade-offs. On the one hand, if we arbitrarily increase the total length \(\ell\) of the nanowire, the parasitic capacitance of the meandering structure will eventually become significant enough to decrease the first self-resonant frequency to the point that it can no longer be treated as a lumped-element inductor. Additionally, to maintain the resonant frequency in the band 4-8 GHz, an increasing \(L\) must be matched by a decreasing capacitance \(C\), which cannot be made too small in relation to the parasitic capacitance. On the other hand, making the nanowire narrower or thinner increases the kinetic inductance per square but decreases the critical current \(I_{c}\) of the nanowire, which lowers the number of photons that can circulate in the cavity until the onset of the nonlinearity. The measured parameters of our devices given below, represent an adequate trade-off between these design considerations.
Figure 3: (a) Microwave simulations of the normalized current density of the first electromagnetic self-resonant mode \(\omega\) of the meandering nanowire, for widths \(w=75\,\mathrm{nm}\), \(100\,\mathrm{nm}\) and \(200\,\mathrm{nm}\) and using \(L_{k,\square}=36\,\mathrm{pH}/\square\), with the current node at the center of the nanowire. The frequencies land in the range 18–28 GHz. (b) Simulation of the first resonant mode of the full structure, using nanowire width \(w=200\,\mathrm{nm}\). On resonance, the current density is uniformly distributed in the meandering nanowire, behaving as a lumped-element inductor.
### Circuit Design
The lumped-element equivalent circuit of the sensor shown in Fig. 1(b) must be optimized for optimal coupling between the microwave resonator and the transmission line. The coupling parameter \(\eta\) is given by the ratio of external losses, i.e. useful signal, to total cavity losses, or equivalently,
\[\eta=\frac{Q_{\mathrm{int}}}{Q_{\mathrm{int}}+Q_{\mathrm{ext}}} \tag{3}\]
where \(Q_{\mathrm{int}}\) and \(Q_{\mathrm{ext}}\) are the internal and external quality factors of the microwave resonator, respectively. Critical coupling is achieved when \(\eta=0.5\), where the resonator loss rate due to radiation into the transmission line is equal to the internal loss rate. In the under-coupled case \(\eta<0.5\), the internal losses of the resonator dominate, while in the over-coupled case \(\eta>0.5\), the cavity losses are dominated by signal flowing to the following amplifier.
To control the coupling of the cavity to the \(50\,\Omega\) transmission line we introduce a shunt inductor \(L_{s}\), which we realize with a very short section of superconducting nanowire, as shown in a fabricated device in Fig. 1(a) and in the electromagnetic simulation in Fig. 3(b). In the absence of the shunt inductance, the cavity is strongly over-coupled but with a poor total quality factor. To maximize the output signal, which is the microwave response at the motional sideband frequency, one should ideally maximize both the total quality factor and the transmission line coupling coefficient \(\eta\). A smaller shunt inductance increases \(Q_{\mathrm{ext}}\), however, the total quality factor is bounded by the internal losses. For optimal coupling, the shunt inductor \(L_{s}\) should satisfy \(50\,\Omega>i\omega_{c}L_{s}>R\) where \(R\sim 0.1\,\Omega\) models the internal microwave losses of the cavity. As derived in Appendix A, the coupling parameter is
\[\eta(L_{s})=\left(1+\frac{Z_{c}R}{\omega^{2}L_{s}^{2}}\right)^{-1} \tag{4}\]
where \(Z_{c}=\sqrt{L_{k}/C}\) is the characteristic impedance of the resonator.
Figure 4(a) shows the simulated external quality factor and Fig. 4(b) shows the coupling parameters as a function of the shunt inductance, using typical circuit parameters for our devices. Figures 4(c) and 4(d) display the measured magnitude and phase response of two nominally identical devices both with nanowire width \(w\) = 200 nm, one device with the shunt inductance and the other without. For a shunt with inductance \(L_{s}=195\,\)pH, we increase \(Q_{\mathrm{ext}}\) by a factor of roughly twenty at the cost of a small reduction in \(\eta\) while remaining over-coupled.
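As a numerical illustration of Eqs. (3) and (4), the following sketch sweeps the shunt inductance for placeholder circuit values of roughly the right order of magnitude (they are not the measured device parameters) and prints the resulting coupling parameter and external quality factor.

```
import numpy as np

# Placeholder circuit values for illustration only (not measured device parameters).
L_k = 20e-9                        # series kinetic inductance, H
omega = 2 * np.pi * 4.5e9          # cavity angular frequency, rad/s
C = 1 / (omega**2 * L_k)           # series capacitance that resonates at omega
R = 0.03                           # internal loss resistance, ohm
Z_c = np.sqrt(L_k / C)             # characteristic impedance of the resonator
Q_int = Z_c / R                    # internal quality factor

for L_s in np.array([50, 100, 195, 400]) * 1e-12:       # shunt inductance sweep, H
    eta = 1 / (1 + Z_c * R / (omega**2 * L_s**2))        # Eq. (4)
    Q_ext = Q_int * (1 - eta) / eta                      # from Eq. (3)
    print(f"L_s = {L_s * 1e12:4.0f} pH  ->  eta = {eta:.2f},  Q_ext = {Q_ext:.0f}")
```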
### Design summary
One design goal is to maximize the electromechanical coupling for the specific device. In the cavity optomechanical detection scheme this coupling is proportional to the shift of cavity resonant frequency with tip displacement \(G=\partial\omega_{c}/\partial z\). The narrative above attempts to convey the interplay between the mechanical and electromagnetic considerations when reaching toward this goal. We started by fixing the cavity resonant frequency \(\omega_{c}/2\pi\sim 5\,\)GHz, inside the band 4-8 GHz for which our microwave measurement system is optimized. We also fixed the thickness of the Si-N plate which forms the cantilever, which is natural as all chips on the same wafer must be fabricated on the same Si-N layer. Variations on these points of departure open new directions for increasing \(G\).
By increasing the thickness of the Si-N plate we increase the surface strain for the same curvature of the cantilever substrate, giving larger \(G\) but at the same time increasing the stiffness of the bending mode. We can compensate for the latter by increasing the length and reducing the width of the triangular plate. Eventually, we run out of space to accommodate the kinetic inductor. However, the footprint of the inductor can be reduced by increasing its sheet kinetic inductance using a thinner superconducting film. Another option is to work with smaller inductance, which would require increasing the capacitance of the resonator to keep the resonant frequency constant. This option is particularly attractive as the change of inductance \(\Delta L\) is causing the detected shift of resonance, and therefore the so-called participation ratio \(\Delta L/L\) becomes larger. However, as discussed in Sec. II.2, \(L\) or \(C\) cannot be made arbitrarily large or small without taking into account parasitic inductance or capacitance. Additional variations on cavity designs with a larger capacitance and smaller inductance include the parallel \(LC\) resonator with capacitive coupling to the transmission line. Clearly, there is plenty of room for design variation to achieve larger \(G\).
## III Fabrication
### Force sensors
The main fabrication steps are illustrated in Fig. 5. The fabrication produces around 400 chips per wafer with a yield of about 80 %. We start with a double-side polished Si wafer, 100 mm in diameter and 525 µm thick, coated on both sides with 600 nm low-stress (\(<100\,\)MPa) Si-N films. Sensor chips are 1.6 mm by 3.4 mm, about the size of a standard AFM cantilever chip. The steps are as follows:
* **(a) Superconducting film.** We first deposit a 15 nm thick film of superconducting Nb\({}_{60}\)Ti\({}_{40}\)N by reactive co-sputtering from separate niobium and titanium targets [38] in an ATC2200 from AJA International Inc., with a deposition rate of roughly 3 nm/min.
* **(b) Pads and markers.** A lift-off process defines the gold contact pads and alignment marks. We spin a 400 nm thick photoresist (maN1407), bake on a hotplate at 100 \({}^{\circ}\)C for 60 s and expose with a dose of 450 mJ/cm\({}^{2}\) using an MLA150 from Heidelberg Instruments. We develop the pattern in maD533s for roughly 45 s and then deposit 10 nm chromium (Cr) and 40 nm of gold by electron-beam evaporation in an Auto306 from Edwards Vacuum. Lift-off in mrREM700 removes the resist mask and the patterned wafer is ultrasonically cleaned and rinsed with IPA.
* **(c) Backside mask.** Before fabricating the chromium etch mask on the backside, we first protect the front side of the wafer with a thin layer of PMMA. We then define the lift-off mask on the wafer back side by spinning a 400 nm thick photoresist (maN1407), baking at 100 \({}^{\circ}\)C on a hotplate for 60 s. We expose the pattern with a dose of 450 mJ/cm\({}^{2}\), aligning to the markers on the front, and we develop in maD533s for roughly 45 s. A subsequent short soft-ashing step in a Plasmalab 80 ICP65 from Oxford Instruments removes residual resist and improves the adhesion of the following 150 nm deposition of chromium with electron-beam evaporation in the Auto306 from Edwards. After the lift-off in mrREM700, we also strip the protective PMMA layer on the front side with AR600-71, and we clean the wafer in IPA.
* **(d) Coarse circuit pattern.** A layer of photolithography defines the coarse circuit features in the superconducting film, such as the coplanar waveguide, ground planes, and signal line. We use the same recipe as in steps (b) and (c) to define a resist etch mask and, after development, we transfer the pattern into the superconducting film with a CF\({}_{4}\)/O\({}_{2}\) reactive-ion etch (RIE) process in a Plasmapro 100 ICP300 from Oxford Instruments, with an etch rate of roughly 8 nm/min.

Figure 4: (a) Simulated external quality factor \(Q_{\mathrm{ext}}\) and (b) coupling parameter \(\eta=Q_{\mathrm{int}}/(Q_{\mathrm{int}}+Q_{\mathrm{ext}})\) of the circuit in Fig. 1(b) as a function of the shunt inductance \(L_{s}\) for typical values \(R=0.1\,\Omega\), \(C=13.5\,\mathrm{fF}\) and \(L_{k}=100\,\mathrm{nH}\). The circuit is under-coupled to the transmission line for low values of \(L_{s}\). As the inductance of the shunt \(L_{s}\) increases, the external losses increase and the circuit is first critically coupled for \(L_{s}=\sqrt{Z_{0}R/\omega_{c}^{2}}\) and then over-coupled for \(L_{s}>\sqrt{Z_{0}R/\omega_{c}^{2}}\). The phase response is sharper when the circuit is slightly over-coupled. The line in panel (b) is given by Eq. (4). (c) Measured magnitude and (d) phase response of two samples, with and without a shunt inductance (dots). The dotted line is the fit of the model: an ideal \(RLC\)-circuit for the sample without a shunt and an \(RLC\)-circuit in parallel with a shunt \(L_{s}\) for the shunted sample. The lack of a shunt is equivalent to \(L_{s}\rightarrow\infty\). Both samples are designed to be largely over-coupled to the transmission line. The sample with the shunt displays a sharper phase response and a larger dip in the magnitude response, showing that it is closer to the critical coupling than the sample without a shunt. The resonant frequencies for the samples are \(\omega_{c}/2\pi=4.482\,\mathrm{GHz}\) (no shunt) and \(4.378\,\mathrm{GHz}\) (shunt).
* **(e) Fine circuit pattern.** Electron-beam lithography defines the finer structures, such as the meandering nanowire inductor, the shunt inductor, and the interdigital gap of the capacitor. We first spin a thin layer of an adhesion promoter (AR 300-80), before spinning a roughly 170 nm thick layer of the electron-beam resist ARP-6200-09 (CSAR 09), baking at 150 \({}^{\circ}\)C for 1 min. We expose with a dose of 110 µC/cm\({}^{2}\) in a Voyager EBL system from Raith Nanofabrication, and etch the Nb-Ti-N film using the same CF\({}_{4}\)/O\({}_{2}\) RIE-process as in step (d). In our design, we vary the widths (\(w=75\) nm, 100 nm and 200 nm) of the nanowires across the wafer, adjusting the total number of squares (total inductance) and the capacitor to obtain a resonant frequency \(\omega_{c}/2\pi\sim\) 4.5 GHz, see Figs. 1(d)-(f).
* **(f) Cantilever pattern.** Photolithography defines the chip and cantilever. We spin a 1.7 µm thick photoresist maP1225, bake at 105 \({}^{\circ}\)C for 2 min. We then expose with dose 300 mJ/cm\({}^{2}\) in the MLA150, and develop in maD331 for 45 s. We etch through the Si-N layer using a CHF\({}_{3}\)/SF\({}_{6}\) process with an etch rate of roughly 100 nm/min in the Plasmapro 100 ICP300.
* **(g) Backside through-etch.** Before etching through the back side of the wafer, we first spin a protective positive resist on the front side and pattern an opening, or a "trench", around the chip that we will use to complete the etch once a larger portion of the wafer has been etched through from the back. We design the trench so that all cantilevers on the wafer are released at the same time, irrespective of their length. To this end we spin a roughly 2.2 µm thick layer of photoresist (maP1225), bake it at 105 \({}^{\circ}\)C for 3 min, expose with dose 550 µC/cm\({}^{2}\) and develop it in maD331 for 60 s. With the front side of the wafer protected, we flip over the wafer and etch through the Si-N using the same CHF\({}_{3}\)/SF\({}_{6}\) process as in step (f) and the etch mask defined in step (c). We then use a Bosch process to etch through most of the Si substrate (approximately 450 µm deep) with an etch rate of roughly 6 µm/min. This results in all the samples on the wafer being supported by a thin layer of silicon close to the top side.
* **(h) Release and cantilever under-etch.** A simple and fast method of release uses an isotropic dry-etch that both completes the release of the chip from the wafer and removes the unwanted silicon support underneath the silicon nitride cantilever. We etch the silicon through the "trench" defined in step (g), from the topside with a short Bosch etch, followed by an isotropic etch that undercuts the cantilever. We use an SF\({}_{6}\)/O\({}_{2}\) RIE-process in the Plasmapro 100 ICP300 with lateral etch rate of 10 µm/min. The isotropic etch results in an uneven clamping line, as shown in Fig. 1(g), leading to some variation in the mechanical resonant frequency (see Sec. IV.1), and possibly additional clamping losses.
Before the final etch and release step, we fix the back of the wafer to a sticky layer of bluetape which holds the wafer together during release. After release the wafer is cleaned in mrREM7000 and IPA, and the individual sensor chips are separated from the wafer in a single step by lifting away the outer frame, with the chips remaining on the bluetape, as shown in Fig. 5(i).

Figure 5: (a)-(h) Illustration of the main fabrication steps for one device (not to scale). Details of each step are given in the main text. (i) Photograph of a 100-mm wafer after sensor release, frame removal, and removal of broken chips, with remaining chips attached to the bluetape. The inset is an optical image of one chip. The cavity and cantilever, too small to be seen in this image, are located at the left-pointed end of the chip.
We tested an alternative method to release the cantilever using a wet-etch in potassium hydroxide (KOH). The wet-etch has a high selectivity between silicon and silicon nitride, and KOH etches silicon at different rates in the \(<100>\) and \(<111>\) crystalline directions. With proper orientation of the cantilever mask to the crystalline axes of the wafer one can etch under the triangular silicon nitride plate and form a very straight clamping line to the Si substrate. However, the KOH etch is slow in comparison to the isotropic RIE process, and it attacks the Nb-Ti-N superconducting film. An additional lithography step was needed to protect the superconducting circuit with a mask consisting of a \(190\,\mathrm{nm}\) thick layer of Cr and PMMA. After the release, we strip the PMMA and Cr layer, while taking care not to break the cantilevers. The difficulties associated with using KOH led us to prefer the dry-etch described in step (h).
### Tip deposition
In scanning probe microscopy (SPM) the tip plays a fundamental role in the achievable lateral resolution of the image. The focused electron-beam induced deposition (FEBID) [39] technique has been adapted to fabricate tips for SPM, for example to enhance commercial platinum-iridium alloy (Pt:Ir) coated conductive tips [40], or to realize laterally grown high-aspect-ratio nanopillars [41]. We realize sharp, vertically grown conductive tips at the apex of the Si-N cantilever using FEBID with a Pt precursor gas. Figure 6 shows the resulting structure. We obtain the conical shape by stacking multiple depositions with different radii to achieve a total tip height in the range 1–2 µm. This conical structure gives added rigidity to lateral forces while scanning. We form a sharp tip at the apex of the cone by exposing a circular area with a diameter of \(10\,\mathrm{nm}\), which is smaller than the nominal electron-beam spot size, and by setting the deposition height to 10 µm. Defocusing of the electron spot during vertical growth naturally forms a narrowing conical structure. At the apex of this cone, we routinely achieve a curvature radius of less than \(10\,\mathrm{nm}\), as verified by the SEM image in Fig. 6(c). Finally, we deposit a thin strip to connect the base of the cone to the Nb-Ti-N film, which is the ground plane of the microwave circuit. This feature enables measurement of the tunneling current between the grounded tip and a conductive sample surface when a DC bias is applied to the sample. Scanning tunneling microscopy (STM) operation was verified both at room temperature and in a cryogenic environment. Thus the deposited material is suitably conductive for STM as well as various electrostatic AFM techniques which require applying a low-frequency voltage to the tip.
## IV Characterization
### Mechanical mode
Our chips were the same size as an AFM cantilever chip, making it easy to load them into a commercial AFM for mechanical characterization of the cantilever. For the design with nominal cantilever width 40 µm, length 50 µm and thickness \(600\,\mathrm{nm}\), the optical lever detector in our AFM had sufficient bandwidth to detect the fundamental bending mode. We measured \(60\) chips, detecting the thermal fluctuations at room temperature in ambient conditions and fitting to a Lorentzian lineshape. We found that \(\omega_{m}\) decreases with increasing radial distance \(D\) from the center of the wafer, as shown in Fig. 7. The observed trend and considerable scatter for chips near the edge might be explained by non-uniform etching conditions across the wafer. To some extent, one could optimize the mask design and adjust the dimensions of the cantilever to compensate for this effect. Using the mean value of \((641\pm 42)\,\mathrm{kHz}\) and adjusting the Young's modulus of our Si-N plate to \(208\,\mathrm{GPa}\), we find good agreement between the mechanical simulation presented in Sec. II.1 and experiment.
### Electrical mode
From the measured normal-state resistance of our nanowires and the measured thickness and width, we find a sheet resistance \(R_{\square}=243\,\Omega/\square\), corresponding to a resistivity of \(\rho_{n}=365\,\mu\Omega\,\mathrm{cm}\). We monitor the microwave response during cool-down and estimate a critical temperature \(T_{c}=9.6\,\mathrm{K}\), from which we estimate the superconducting energy gap with the BCS relation \(\Delta_{0}=1.76k_{B}T_{c}=1.46\,\mathrm{meV}\). Using Eq. (2) we find a kinetic inductance per square \(L_{k,\square}=35\,\mathrm{pH}/\square\) for the \(200\)-nm-wide nanowires. This corresponds to a kinetic inductance per unit length \(L_{k}/\ell=175\,\mathrm{pH}/\mu\mathrm{m}\). We compare this to the estimated geometric inductance per unit length, using the thin-ribbon formula [26, 42] \(L_{g}\approx(\mu_{0}/2\pi)\ell\ln(2\ell/w)\), from which we obtain \(L_{g}/\ell=17\,\mathrm{pH}/\mu\mathrm{m}\) for our \(200\)-nm-wide nanowires. The ratio of kinetic inductance to total inductance \(\alpha=L_{k}/(L_{k}+L_{g})\simeq 1\), meaning that we can safely neglect the geometric contribution to the total inductance of our nanowires. This approximation is also valid for other samples with smaller nanowire widths, which are expected to have a higher \(L_{k}/\ell\).
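The quoted kinetic inductance can be reproduced with a short calculation. The sketch below assumes that Eq. (2) is the standard dirty-limit BCS expression \(L_{k,\square}=\hbar R_{\square}/(\pi\Delta_{0})\) with \(\Delta_{0}=1.76k_{B}T_{c}\); since Eq. (2) appears earlier in the paper, this identification is an assumption, but the numbers agree with the values stated above.

```python
# Back-of-the-envelope check of L_k per square from the measured sheet
# resistance and T_c, assuming the standard dirty-limit BCS relation
# L_k,sq = hbar * R_sq / (pi * Delta0), with Delta0 = 1.76 * k_B * T_c.
from scipy.constants import hbar, k as kB, pi

R_sq = 243.0          # measured normal-state sheet resistance (Ohm/square)
Tc = 9.6              # critical temperature (K)
Delta0 = 1.76 * kB * Tc                  # superconducting gap (J)

Lk_sq = hbar * R_sq / (pi * Delta0)      # kinetic inductance per square (H)
w = 200e-9                               # nanowire width (m)
print(f"L_k per square : {Lk_sq * 1e12:.0f} pH/sq")              # ~35 pH/sq
print(f"L_k per length : {Lk_sq / w * 1e12 * 1e-6:.0f} pH/um")   # ~175 pH/um
```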
We measured the microwave cavity resonant frequency \(\omega_{c}\) on \(26\) chips. In some cases we studied the temperature dependence of \(\omega_{c}\) and verified electromechanical coupling between the cavity mode and cantilever mode. These measurements were performed in a dry cryostat (DynaCool Physical Properties Measurement System [43]), with a base temperature of \(1.7\,\mathrm{K}\). We modified a measurement stick adding high-frequency coaxial cabling for microwave signals to probe the cavity response,
and twisted pairs for lower-frequency signals such as the voltage applied to the piezo disk that inertially actuates cantilever vibration. The stick is equipped with a cold attenuator, directional coupler and cryogenic amplifier for low-noise measurement of microwave reflection. Both low-frequency and high-frequency signals are synchronously synthesized and measured with a digital multifrequency microwave measurement device (Vivace from Intermodulation Products [44]) for easy measurement of phase-sensitive electromechanical transduction [14] (see Sec. IV.3).
We sweep the microwave frequency and measure the reflected signal (amplitude and phase) to locate the resonance. A sharp dip in reflection is easily observed in the more slowly varying background. We zoom in on the dip to capture the resonance lineshape at low power, in the linear response regime. Using standard methods [45] we analyze the frequency dependence of the reflected amplitude and phase to determine the cavity resonant frequency \(\omega_{c}\), internal quality factor \(Q_{\mathrm{int}}\) and external quality factor \(Q_{\mathrm{ext}}\). From the measured \(\omega_{c}\) and the simulated capacitance, we extract a kinetic inductance per square \(L_{k,\square}\) for the meandering structures of different nanowire widths. For the sample shown in Fig. 8, we find \(\omega_{c}/2\pi=4.637\,\mathrm{GHz}\), \(Q_{\mathrm{ext}}=4747\) and \(Q_{\mathrm{int}}=17\,690\), with a coupling parameter \(\eta=0.79\). However, a majority of the samples tested were strongly over-coupled, making it difficult to extract a reliable value for \(Q_{\mathrm{int}}\) using the standard fitting methods. Nevertheless, we can reliably determine the resonant frequency for all measured samples and from this we calculate the kinetic inductance using the simulated capacitance and nominal number of squares in the meander. Table 1 summarizes the kinetic inductance per square thus determined for different nanowire widths \(w\). Values of \(L_{k,\square}\), in the range \(32\)-\(60\,\mathrm{pH}/\square\) across all nanowire widths, are in approximate agreement with nanowires of similar materials and dimensions [46; 47; 20; 42].
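For readers who want to reproduce the lineshape analysis, the sketch below evaluates the standard one-port input-output reflection model with the quality factors quoted for the sample above; it is an illustrative model, not the authors' fitting routine.

```python
# Standard single-port reflection model S11(w) = 1 - k_ext / (i*delta + (k_int+k_ext)/2),
# evaluated with the values quoted in the text for the sample of Fig. 8.
import numpy as np

fc = 4.637e9                             # resonant frequency (Hz)
Q_int, Q_ext = 17690.0, 4747.0
kappa_int = 2 * np.pi * fc / Q_int       # internal loss rate (rad/s)
kappa_ext = 2 * np.pi * fc / Q_ext       # external (coupling) loss rate (rad/s)

f = np.linspace(fc - 5e6, fc + 5e6, 2001)
delta = 2 * np.pi * (f - fc)
S11 = 1 - kappa_ext / (1j * delta + (kappa_int + kappa_ext) / 2)

eta = Q_int / (Q_int + Q_ext)
print(f"eta = {eta:.2f}")                                   # ~0.79, over-coupled
print(f"|S11| on resonance = {abs(S11[len(f)//2]):.2f}")    # |1 - 2*eta| ~ 0.58
```

Because \(\eta>0.5\), the modeled phase winds through a full \(2\pi\) across the resonance, which is the signature used below to identify the over-coupled regime.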
\begin{table}
\begin{tabular}{l l l l l l}
width \(w\) & nr. samples & \(\bar{L}_{k,\square}\) & \(L_{k,\square}\)-range & \(\bar{f}_{0}\) & \(f_{0}\)-range \\
(nm) & – & (pH/\(\square\)) & (pH/\(\square\)) & (GHz) & (GHz) \\ \hline
75 & 8 & 52 & 41–60 & 4.82 & 4.49–5.34 \\
100 & 7 & 51 & 46–54 & 4.43 & 4.25–4.64 \\
200 & 11 & 34 & 32–38 & 4.42 & 4.22–4.56 \\
\end{tabular}
\end{table}
Table 1: Summary of the mean kinetic inductance per square \(\bar{L}_{k,\square}\), mean resonant frequency \(\bar{f}_{0}\), and the range of measured values, grouped by nominal nanowire width \(w\).
Figure 6: (a) An SEM image of a top-tilted view of an electron-beam deposited platinum tip on the released Si-N cantilever of one chip. The tip has total height in the range 1–2 µm. (b) An SEM image of the reverse cone structure, deposited using multiple layers of platinum. Note the larger connection to the Nb-Ti-N thin film to the left of the tip, ensuring an ohmic contact with the ground plane of the chip. (c) An SEM image of a deposited tip, showing a radius of curvature smaller than \(10\,\mathrm{nm}\).
Figure 7: Resonant frequency \(\omega_{m}\) of a long cantilever as a function of radial distance \(D\) from the center of the wafer. The resonant frequency decreases with the radial distance and with increased spread. Finite-element method simulations give an expected resonant frequency of \(700\,\mathrm{kHz}\).
We also studied the temperature dependence of the microwave response in the range 1.7-5 K where we find a shift of the resonant frequency \(\omega_{c}\) by many linewidths, and a change of coupling parameter \(\eta\). Figure 8(a) shows the amplitude and phase of the reflected signal for one of the chips, as a function of temperature. At each temperature we fit to extract \(\omega_{c}\), \(Q_{\text{ext}}\) and \(Q_{\text{int}}\). As shown in Fig. 8(b), the external losses are roughly independent of temperature, while the internal quality factor degrades with temperature. The change in \(\omega_{c}\) results from a temperature dependence of \(L_{k}\), which, together with the change in \(Q_{\text{int}}\), results in a transition from over-coupled to under-coupled at \(T\approx 2.8\) K, as shown in Fig. 8(c).
### Electromechanical coupling
Electromechanical coupling allows us to detect force on the tip by measuring the bending of the cantilever. To this end we need a phase-sensitive detector of the cantilever's harmonic motion. Such a measurement is achieved by simultaneously applying two microwave tones at \(\omega_{c}\pm\omega_{m}\). Sidebands generated by the mechanical motion interfere with each other at the cavity's resonant frequency \(\omega_{c}\), either constructively or destructively, depending on the phase of the mechanical motion, as illustrated in Fig. 9(a). Figure 9(b) shows this phase-sensitive detection of the signal as a function of the applied mechanical phase \(\phi_{m}\) relative to the microwave drive tones. We note that this measurement was made in a dilution refrigerator on a sample with \(\omega_{m}/2\pi\) = 5.828 MHz.
## V Conclusions
We described our approach to designing cantilever force sensors with integrated microwave cavity electromechanical sensing of flexural motion, based on the strain-dependent kinetic inductance of a superconducting nanowire. This type of force sensor is potentially very interesting for low-temperature AFM as the superconducting cavity readout scheme is intrinsically very low noise. At ultra-low temperatures where the microwave mode can be in a coherent state, the detection scheme could potentially be used together with sideband or feedback cooling to approach the standard quantum limit,
Figure 8: (a) The temperature-dependent magnitude and (b) phase response of the resonator as a function of frequency with a fit to theory (dashed lines). The cavity is over-coupled at low temperatures \(T=1.7\) K, apparent from the \(2\pi\)-phase flip in the reflection measurement. As the temperature increases over the range 1.7–5 K, the cavity becomes critically coupled at \(T\approx 2.8\) K, above which it becomes under-coupled. (c) The extracted internal and external quality factors \(Q_{\text{int}}\) and \(Q_{\text{ext}}\) as a function of temperature. (d) The coupling parameter \(\eta\) as a function of temperature.
where force noise is limited by the quantum zero-point motion of the cantilever \(z_{\rm{zpf}}\).
Sideband cooling requires a device in the sideband-resolved regime where the mechanical frequency is larger than the cavity loss rate, \(\omega_{m}>\kappa\). We achieve this regime in over-coupled devices with \(\omega_{m}\sim 5\,\)MHz [14]. But effective cooling also requires a device with a large enough single-photon coupling rate \(g_{0}=z_{\rm{zpf}}\partial\omega_{c}/\partial z\) and a cavity that we can pump to large photon number while maintaining linearity. In this regard, variations on the design presented here and alternative designs need to be fabricated and tested.
The sensors described here represent the first generation of devices, where there is room for improvement. These devices served to establish the fabrication process that we described in detail herein and to verify that kinetic inductive mechano-electric coupling (KIMEC) is a useful, albeit not entirely understood, physical effect. Further investigation of alternative designs will help to shed light on the underlying physical mechanism behind KIMEC. With a deeper understanding of KIMEC, we may aim to make AFM force sensors where the back-action of measurement, or force resulting from the microwave readout, becomes the limiting source of force noise. We hope that this study motivates future work in this direction.
## Data availability statement
All raw and processed data, as well as supporting code for measurement libraries, data processing, and figure generation, are available on Zenodo [48].
###### Acknowledgements.
We thank the Quantum-Limited Atomic Force Microscopy (QAFM) team for fruitful discussions: T. Glatzel, M. Zutter, E. Tholen, D. Forchheimer, I. Ignat, M. Kwon, and D. Platz. The European Union Horizon 2020 Future and Emerging Technologies (FET) Grant Agreement No. 828966 -- QAFM and the Swedish SSF Grant No. ITM17-0343 supported this work.
## Conflict of interest
David B. Haviland is a part owner of Intermodulation Products AB, which manufactures and sells the microwave measurement platform used in this experiment. August K. Roos, Ermes Scarano, Elisabet K. Arvidsson, and Erik Holmgren have no conflicts to disclose.
## Appendix A Coupling parameter
The resonator is modeled by a series \(RLC\)-circuit connected to a transmission line with characteristic impedance \(Z_{0}\) [see Fig. 1(b)]. The shunt inductance \(L_{s}\) is introduced to modify the coupling between the microwave resonator and the transmission line, described by the parameter \(\eta\). The resonator impedance is given by
\[Z_{c}=R+i\omega L_{k}+\frac{1}{i\omega C} \tag{A1}\]
Figure 9: (a) Illustration of the pump-drive scheme for phase-sensitive detection of the mechanical oscillation. Two microwave tones of equal power and fixed relative phase are applied symmetrically about the cavity’s resonant frequency \(\omega_{c}\) and detuned by \(\pm\omega_{m}\). Simultaneously, a separate tone drives the mechanical oscillator coherently at its resonant frequency \(\omega_{m}\) with a variable mechanical phase \(\phi_{m}\). The electromechanical coupling mixes the mechanical frequency with each microwave tone, leading to an interfering response at \(\omega_{c}\). (b) Normalized measured magnitude of the response at \(\omega_{c}\) as a function of the applied mechanical drive phase \(\phi_{m}\). The interference fringes show an excellent fit to \(|\sin\phi_{m}|\), characteristic of a balanced interferometer.
where \(L_{k}\) is the kinetic inductance of the meandering nanowire, \(C\) is the interdigital coplanar capacitance and \(R\) models internal losses. The impedance of the environment \(Z_{\text{ext}}\) is given by the parallel combination of the impedance of the shunting inductance \(Z_{s}=i\omega L_{s}\) and the characteristic impedance of the transmission line \(Z_{0}\),
\[Z_{\text{ext}}=Z_{0}\parallel Z_{s}=\frac{iZ_{0}\omega L_{s}}{Z_{0}+i\omega L_{s}} \tag{A2}\]
The internal quality factor \(Q_{\text{int}}\) and external quality factor \(Q_{\text{ext}}\) of the circuit are defined as
\[Q_{\text{int}}=\frac{1}{R}\sqrt{\frac{L}{C}} \tag{A3}\]
\[Q_{\text{ext}}=\frac{1}{\text{Re}\left[Z_{\text{ext}}\right]}\sqrt{\frac{L}{C}} \tag{A4}\]
Since \(\omega L_{s}\ll Z_{0}\), the real part of the environment's impedance may be approximated as
\[\text{Re}\left[Z_{\text{ext}}\right]\approx\frac{\omega^{2}L_{s}^{2}}{Z_{0}}. \tag{A5}\]
The coupling parameter is given by the ratio of the external losses to the total losses, or equivalently
\[\eta=\frac{Q_{\text{int}}}{Q_{\text{int}}+Q_{\text{ext}}}=\frac{1}{1+\frac{R}{\text{Re}[Z_{\text{ext}}]}}=\left(1+\frac{Z_{0}R}{\omega^{2}L_{s}^{2}}\right)^{-1}. \tag{A6}\]
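The small-inductance approximation above is easy to verify numerically. The sketch below compares the exact real part of the parallel combination \(Z_{0}\parallel i\omega L_{s}\) with \(\omega^{2}L_{s}^{2}/Z_{0}\) for representative values of \(L_{s}\) (illustrative values, not fitted device parameters).

```python
# Compare the exact Re[Z_ext] of Z0 || (i*omega*L_s) with the approximation
# omega^2 * L_s^2 / Z0, and the resulting coupling parameter.
import numpy as np

Z0, R = 50.0, 0.1                        # Ohm
omega = 2 * np.pi * 4.5e9                # rad/s, near the cavity frequency
for Ls in (50e-12, 100e-12, 195e-12, 500e-12):
    Zs = 1j * omega * Ls
    Zext = (Z0 * Zs) / (Z0 + Zs)         # exact parallel combination
    approx = omega**2 * Ls**2 / Z0
    eta = 1.0 / (1.0 + R / Zext.real)
    print(f"L_s={Ls*1e12:5.0f} pH  Re[Zext]={Zext.real:6.3f}  "
          f"approx={approx:6.3f}  eta={eta:4.2f}")
```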
|
2303.03451 | Improved Differentially Private Regression via Gradient Boosting | We revisit the problem of differentially private squared error linear
regression. We observe that existing state-of-the-art methods are sensitive to
the choice of hyperparameters -- including the ``clipping threshold'' that
cannot be set optimally in a data-independent way. We give a new algorithm for
private linear regression based on gradient boosting. We show that our method
consistently improves over the previous state of the art when the clipping
threshold is taken to be fixed without knowledge of the data, rather than
optimized in a non-private way -- and that even when we optimize the
hyperparameters of competitor algorithms non-privately, our algorithm is no
worse and often better. In addition to a comprehensive set of experiments, we
give theoretical insights to explain this behavior. | Shuai Tang, Sergul Aydore, Michael Kearns, Saeyoung Rho, Aaron Roth, Yichen Wang, Yu-Xiang Wang, Zhiwei Steven Wu | 2023-03-06T19:20:52Z | http://arxiv.org/abs/2303.03451v2 | # Improved Differentially Private Regression via Gradient Boosting
###### Abstract
We revisit the problem of differentially private squared error linear regression. We observe that existing state-of-the-art methods are sensitive to the choice of hyperparameters -- including the "clipping threshold" that cannot be set optimally in a data-independent way. We give a new algorithm for private linear regression based on gradient boosting. We show that our method consistently improves over the previous state of the art when the clipping threshold is taken to be fixed without knowledge of the data, rather than optimized in a non-private way -- and that even when we optimize the hyperparameters of competitor algorithms non-privately, our algorithm is no worse and often better. In addition to a comprehensive set of experiments, we give theoretical insights to explain this behavior.
## 1 Introduction
Squared error linear regression is a basic, foundational method in statistics and machine learning. Absent other constraints, it has an optimal closed-form solution. A consequence of this is that linear regression parameters have a deterministic relationship with the data they are fitting, which can leak private information. As a result, there is a substantial body of work aiming to approximate the solution to least squares linear regression with the protections of differential privacy [14, 15, 16, 17, 18, 19, 20].
We highlight the AdaSSP ("Adaptive Sufficient Statistics Perturbation") algorithm [20] which obtains state-of-the-art theoretical and practical performance when the maximum norm of the features and labels are known--these bounds are used to scale the noise added for privacy. When a data-independent bound on the magnitude of the data is not known, in order to promise differential privacy, they must be clipped at some data-independent threshold, which can substantially harm performance. In this work, we give a new algorithm for private linear regression that substantially mitigates this issue and leads to improved accuracy across a range of datasets and clipping thresholds.
Our approach is both conceptually and computationally simple: we apply gradient boosting [13], using a linear model as the base learner, and to incorporate privacy guarantees, at each boosting round, the linear model is solved using AdaSSP. When applied to a squared error objective, gradient boosting is exceedingly simple: it maintains a linear combination of regression models, repeatedly fitting a new regression model to the _residuals_ of the current model, and then adding the new model to the linear combination. Absent privacy constraints, gradient boosting for linear regression does not improve performance, because linear models are closed under linear combinations, and squared error regression can be optimally solved over the set of all linear models in closed form. Nevertheless, in the presence of privacy constraints and in the absence
of knowledge of the data scale (so that we must use a data independent clipping threshold), we show in an extensive set of experiments that gradient BoostedAdaSSP substantially improves on the performance of AdaSSP alone. Moreover, we show that our BoostedAdaSSP algorithm outperforms other competitive differentially private solutions to linear regression in different conditions, including gradient descent on the squared loss objective, and interestingly performs better than a tree-based private boosting algorithm. We also show that our algorithm is less sensitive to hyperparameter selection.
We also provide stylized theoretical explanations of the empirical results. In the zero-dimensional case, AdaSSP reduces to computing the empirical mean of the clipped data, and aggressive clipping thresholds can cause the bias of empirical mean to be arbitrarily large. In this setting, gradient boosting with AdaSSP as a base learner corresponds to iteratively updating an estimator of the mean by the clipped empirical residuals, i.e. the empirical mean of the difference between the current mean estimate and the data. In Section 5, we show that, for Gaussian data, the boosting method converges to the true mean for _any_ non-zero clipping threshold. The intuition behind this improvement of boosting over the one-shot empirical mean is that, even clipped estimates of the mean are directionally correct, which serves to further de-bias the current estimate and reduce the negative effect of aggressive clipping. The convergence of our boosted algorithm under arbitrary clipping provides a significant improvement over AdaSSP, especially when the clipping bound must be independent to the data.
Finally, we show that BoostedAdaSSP can sometimes out-perform differentially private boosted trees [12] as well, a phenomenon that we do not observe absent privacy. This contributes to an important conceptual message: that the best learning algorithms under the constraint of differential privacy are not necessarily "privatized" versions of the best learning algorithms absent privacy--differential privacy rewards algorithmic simplicity.
### Additional Related Work
Because of its fundamental importance, linear regression has been the focus of a great deal of attention in differential privacy [13, 14, 15, 16, 17, 18, 19, 20], using techniques including private gradient descent [1, 1], output and objective perturbation [10], and perturbation of sufficient statistics [21]. As already mentioned, the AdaSSP (a variant of the sufficient statistic perturbation approach) [19] has stood out as a method obtaining both optimal theoretical bounds and strong empirical performance -- both under the assumption that the magnitude of the data is known.
[1] have previously noted that AdaSSP can perform poorly when the data magnitude is unknown and clipping bounds must be chosen in data-independent ways. They also give a method -- TukeyEM [1] -- aiming to remove these problematic hyperparameters for linear regression. TukeyEM privately aggregates multiple non-private linear regressors learned on disjoint subsets of the training set. The private aggregate uses the approximate Tukey depth and removes the risk of potential privacy leaks in choosing hyperparameters. However, because each model is trained on a different partition of the data, as [1] note, TukeyEM performs well when the number of samples is roughly \(1,000\) times larger than the dimension of the data. We include a comparison to both TukeyEM and AdaSSP in our experimental results.
Another line of work has studied differentially private gradient boosting methods, generally using a weak learner class of classification and regression trees (CARTs) [15, 16]. [12] gives a particularly effective variant called DP-EBM, which we compare to in our experiments.
There is a line of work that aims to privately optimize hyperparameters (e.g. [1, 16, 17]) -- we do not directly compare to these approaches, but our experiments show that our algorithm dominates comparison methods even when their hyperparameters are optimized non-privately.
## 2 Preliminaries
We study the standard squared error linear regression problem. Given a joint distribution \(\mathcal{D}\) over \(p\) dimensional features \(x\in\mathbb{R}^{p}\) and real-valued labels \(y\in\mathbb{R}\). Our goal is to learn a parameter vector \(\theta\in\mathbb{R}^{p}\) to minimize
squared error:
\[\mathcal{L}(\theta,\mathcal{D})=\mathbb{E}_{(x,y)\sim\mathcal{D}}[( \langle\theta,x\rangle-y)^{2}]. \tag{1}\]
In order to protect privacy of individuals in the training data when the learnt parameter vector \(\theta\) is released, we adopt the notion of Differential Privacy.
### Differential Privacy (DP)
Differential privacy is a strong formal notion of individual privacy. DP ensures that, for a randomized algorithm, when two neighboring datasets that differ in one data point are presented, the two outputs are indistinguishable, within some probability margin defined using \(\epsilon\) and \(\delta\in[0,1)\).
**Definition 2.1** (Differential Privacy [16]).: A randomized algorithm \(\mathcal{M}\) with domain \(\mathcal{D}\) is \((\epsilon,\delta)\)-differentially private if, for all \(\mathcal{S}\subseteq\text{Range}(\mathcal{M})\) and for all pairs of neighboring databases \(D,D^{\prime}\in\mathcal{D}\),
\[\Pr[\mathcal{M}(D)\in\mathcal{S}]\leq e^{\epsilon}\Pr[\mathcal{M }(D^{\prime})\in\mathcal{S}]+\delta, \tag{2}\]
where the probability space is over the randomness of the mechanism \(\mathcal{M}\).
Gaussian differential privacy (GDP), a single-parameter refinement of differential privacy, was later proposed [14]. In this work, we use GDP in order to achieve better privacy bounds. We present several key results from [14] that we use in our privacy analysis.
**Definition 2.2** (\(\ell_{2}\)-sensitivity).: The \(\ell_{2}\)-sensitivity of a statistic \(m\) over the domain of dataset \(D\) is \(\Delta(m)=\sup_{D,D^{\prime}}\|m(D)-m(D^{\prime})\|_{2}\), where \(\|\cdot\|_{2}\) is the vector \(\ell_{2}\)-norm, and the supremum is over all neighboring datasets.
**Theorem 2.3** (Gaussian Mechanism, Theorem 2.7 of [14]).: _Define a randomized algorithm \(GM\) that operates on a statistic \(m\) as \(GM(x,\mu)=m(x)+\eta\), where \(\eta\sim\mathcal{N}(0,\Delta(m)^{2}/\mu^{2})\) and \(\Delta(m)\) is the \(\ell_{2}\)-sensitivity of the statistic \(m\). Then, \(GM\) is \(\mu\)-GDP._
For \(n\) GDP mechanisms with privacy parameters \(\mu_{1},\cdots,\mu_{n}\), the following composition theorem holds:
**Corollary 2.4** (Composition of GDP, Corollary 3.3 of [14]).: _The \(n\)-fold composition of \(\mu_{i}\)-GDP mechanisms is \(\sqrt{\mu_{1}^{2}+\cdots+\mu_{n}^{2}}\)-GDP._
There is a tight relationship between \(\mu\)-GDP and \((\epsilon,\delta)\)-DP that allows us to perform our analysis using GDP, and state our results in terms of \((\epsilon,\delta)\)-DP.
**Corollary 2.5** (Conversion between GDP and DP, Corollary 2.13 of [14]).: _A mechanism is \(\mu\)-GDP if and only if it is \((\epsilon,\delta(\epsilon))\)-DP for all \(\epsilon\geq 0\), such that_
\[\delta(\epsilon)=\Phi\left(-\frac{\epsilon}{\mu}+\frac{\mu}{2} \right)-e^{\epsilon}\Phi\left(-\frac{\epsilon}{\mu}-\frac{\mu}{2}\right) \tag{3}\]
_where \(\Phi\) denotes the standard Gaussian CDF._
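Eq. (3) is straightforward to evaluate numerically. The small helper below does so with SciPy; the \((\mu,\epsilon)\) values are illustrative and not taken from the paper.

```python
# Evaluate the GDP-to-DP conversion delta(eps) of Eq. (3) for a mu-GDP mechanism.
from scipy.stats import norm
import numpy as np

def delta_from_mu(eps: float, mu: float) -> float:
    """delta(eps) for a mu-GDP mechanism (Corollary 2.5)."""
    return norm.cdf(-eps / mu + mu / 2) - np.exp(eps) * norm.cdf(-eps / mu - mu / 2)

for mu in (0.5, 1.0, 2.0):
    print(f"mu = {mu}:  delta(eps=1) = {delta_from_mu(1.0, mu):.2e}")
```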
## 3 Improved AdaSSP via Gradient Boosting
Our algorithm for private linear regression uses gradient boosting with AdaSSP as a weak learner.
### Gradient Boosting
For regression tasks, we assume that we have a dataset \(D=\{x_{i},y_{i}\}_{i=1}^{n}\), where \(x_{i}\in\mathbb{R}^{p}\) and \(y_{i}\in\mathbb{R}\), \(\forall i\in[n]\). Let \(T\) be the number of boosting rounds, and \(f_{t}\) be the model obtained at iteration \(t\in[T]\). Since our base learner is linear and the objective is the squared loss, at the \(t\)-th round, the objective of a gradient boosting algorithm is to obtain:
\[\theta_{t}=\arg\min_{\theta}\sum_{i=1}^{n}(y_{i}-(\sum_{k=1}^{t-1}\theta_{k}^{ \top}x_{i}+\theta^{\top}x_{i}))^{2}=\arg\min_{\theta}\sum_{i=1}^{n}\left(g_{i, t}-\theta^{\top}x_{i}\right)^{2}, \tag{4}\]
where \(g_{i,t}=y_{i}-\sum_{k=1}^{t-1}\theta_{k}^{\top}x_{i}\) is the residual, i.e., the direction of steepest descent of the objective with respect to the ensemble prediction from the previous rounds. Therefore, each gradient boosting round is solving a squared error linear regression problem where the features are data, and the labels are gradients. The model update at the \(t\)-th round is simply \(\hat{\theta}=\hat{\theta}+\theta_{t}\), and the final model is \(\hat{\theta}=\sum_{k=1}^{T}\theta_{k}\).
Since the update preserves the linearity of the model, and squared error regression can be solved optimally over linear models, gradient boosting cannot improve the error of linear regression in the standard setting absent privacy. Nevertheless, when we replace exact linear regression with differentially private approximations, the situation changes.
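A small numerical illustration of this point (a sketch, not part of the paper's experiments): without privacy noise, boosting with an unregularized linear base learner recovers the ordinary least squares solution in the first round, and subsequent residual fits are essentially zero.

```python
# Non-private gradient boosting with a linear base learner collapses to OLS.
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 5
X = rng.normal(size=(n, p))
theta_true = rng.normal(size=p)
y = X @ theta_true + 0.1 * rng.normal(size=n)

theta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

theta_hat = np.zeros(p)
for t in range(5):                         # boosting rounds
    g = y - X @ theta_hat                  # residuals (steepest-descent direction)
    theta_hat += np.linalg.lstsq(X, g, rcond=None)[0]

print(np.allclose(theta_hat, theta_ols))   # True
```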
### Private Ridge Regression as a Base Learner
Let \(X\in\mathbb{R}^{n\times p}\) be the matrix with \(x_{i}\)'s in each row and \(g_{t}\in\mathbb{R}^{n}\) be a vector containing gradients of training samples at round \(t\) (i.e., \(g_{i,t}\)). Absent privacy, there exists a closed-form solution to Eq. 4, and it is
\[\theta_{t}=(X^{\top}X)^{-1}X^{\top}g_{t}, \tag{5}\]
To provide differential privacy guarantees, AdaSSP (Algorithm 2 of [21]) is applied to learn a private linear model at each round. It also requires us to adjust our solution at each round from OLS to Ridge Regression as follows:
\[\theta_{t}=(X^{\top}X+\lambda I)^{-1}X^{\top}g_{t}, \tag{6}\]
where \(\lambda\) controls the strength of regularization, and \(I\in\mathbb{R}^{p\times p}\) is the identity matrix.
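For reference, the non-private version of the per-round base learner in Eq. (6) is a single regularized linear solve; the private variant described next replaces the sufficient statistics with noisy releases.

```python
# Non-private ridge step of Eq. (6): fit the current residuals g_t (sketch).
import numpy as np

def ridge_step(X: np.ndarray, g: np.ndarray, lam: float) -> np.ndarray:
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ g)
```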
Let \(\mathcal{X}\) and \(\mathcal{Y}\) be the domain of our features and labels, respectively. We define bounds on the data domain \(||\mathcal{X}||=\sup_{x\in\mathcal{X}}||x||\) and \(||\mathcal{Y}||=\sup_{y\in\mathcal{Y}}||y||\). Given as input privacy parameters \(\epsilon\) and \(\delta\), and bounds on the data scale \(||\mathcal{X}||\) and \(||\mathcal{Y}||\) for \(x_{i}\) and \(g_{i,t}\), AdaSSP chooses a noise scale to obtain \(\mu\)-GDP for the appropriate value of \(\mu\), and adds calibrated Gaussian noise to three sufficient statistics: 1) \(X^{\top}X\), 2) \(X^{\top}g_{t}\), and 3) \(\lambda\). The adaptive aspect of AdaSSP comes from the fact that \(\lambda\) is chosen based on \(X^{\top}X\), therefore, we also need to allocate privacy budget for computing \(\hat{\lambda}\). Details of the AdaSSP algorithm for learning one ridge regressor are deferred to Appendix A.2.
Let \(\widehat{X^{\top}X}=GM(X^{\top}X,\mu_{1})\), \(\widehat{X^{\top}g_{t}}=GM(X^{\top}g_{t},\mu_{2})\), \(\widehat{\lambda}=GM(\lambda_{\min}(X^{\top}X),\mu_{3})\) be the private release of sufficient statistics from a single instantiation of AdaSSP to learn \(\theta_{t}\), with \(GM\) as defined in Theorem 2.3. The final model \(\hat{\theta}\) can be expressed as
\[\widehat{\theta}=\sum_{t=1}^{T}\widehat{\theta}_{t}=\left(\widehat{X^{\top}X} +\widehat{\lambda}I\right)^{-1}\sum_{t=1}^{T}\widehat{X^{\top}g_{t}} \tag{7}\]
Therefore, when running gradient boosting, we only need to release \(GM(X^{\top}X,\mu_{1})\) and \(GM(\lambda_{\min}(X^{\top}X),\mu_{3})\) once at the beginning of our algorithm, and at each stage, the only additional information we need to release is \(GM(X^{\top}g_{t},\mu_{2}/\sqrt{T})\); this provides a savings over naively repeating AdaSSP (given as Algorithm 4 in the Appendix) for \(T\) rounds.
Putting it all together, our final algorithm Boosted AdaSSP is shown in Algorithm 1, and the privacy guarantee is shown in Theorem 3.1.
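The following is a minimal sketch of the boosted procedure summarized by Eq. (7). It is not the authors' Algorithm 1: the sensitivity constants, the split of the privacy budget into \((\mu_{1},\mu_{2},\mu_{3})\), the adaptive regularizer, and the clipping of per-round residuals are simplified assumptions here; the exact calibration follows AdaSSP.

```python
# Sketch of the boosted procedure of Eq. (7): release noisy X^T X and a
# regularizer once, then a noisy X^T g_t per boosting round.
import numpy as np

def boosted_adassp_sketch(X, y, T, mu1, mu2, mu3, x_bound, y_bound, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape

    # Enforce data-independent bounds: clip feature-row norms and labels.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    X = X * np.minimum(1.0, x_bound / np.maximum(norms, 1e-12))
    y = np.clip(y, -y_bound, y_bound)

    # Release X^T X and the regularizer once (sensitivity ~ x_bound^2).
    sens_xx = x_bound ** 2
    E = rng.normal(scale=sens_xx / mu1, size=(p, p))
    XtX_hat = X.T @ X + (E + E.T) / np.sqrt(2)              # symmetrized noise
    lam_min_hat = np.linalg.eigvalsh(X.T @ X)[0] + rng.normal(scale=sens_xx / mu3)
    lam = max(0.0, np.sqrt(p) * sens_xx / mu1 - lam_min_hat)  # crude stand-in for AdaSSP's rule
    A = XtX_hat + lam * np.eye(p)

    # Boosting rounds: each releases only a noisy X^T g_t (budget mu2/sqrt(T) each).
    sens_xg = x_bound * y_bound
    theta = np.zeros(p)
    for _ in range(T):
        g = np.clip(y - X @ theta, -y_bound, y_bound)       # clipped residuals (assumption)
        Xg_hat = X.T @ g + rng.normal(scale=sens_xg * np.sqrt(T) / mu2, size=p)
        theta = theta + np.linalg.solve(A, Xg_hat)
    return theta  # overall budget ~ sqrt(mu1^2 + mu2^2 + mu3^2)-GDP, cf. Theorem 3.1
```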
**Theorem 3.1**.: _Algorithm 1 satisfies \((\epsilon,\delta)\)-DP._
Proof.: We use the privacy of the Gaussian mechanism, and the composition theorem stated in Corollary 2.4, which gives us a GDP bound of \(\sqrt{\mu_{1}^{2}+T\left(\mu_{2}/\sqrt{T}\right)^{2}+\mu_{3}^{2}}=\sqrt{\mu_{1}^ {2}+\mu_{2}^{2}+\mu_{3}^{2}}=\mu\). The conversion from GDP to DP follows from Corollary 2.5.
### Data-independent Clipping Bounds
As described in [20] and mentioned in [1], the clipping bounds on \(\mathcal{X}\) and \(\mathcal{Y}\) are taken to be known -- but if they are selected as a deterministic function of the data, this would constitute a violation of differential privacy. For \(\mathcal{Y}\), the most natural solution is to use a data-independent \(\tau\) to clip labels and enforce a bound of \(\tau\); but as we observe both empirically and theoretically, this introduces a difficult-to-tune hyperparameter that can lead to a substantial degradation in performance. For \(\mathcal{X}\), one way to resolve this issue (as is done in the implementation of AdaSSP 1) is to normalize each individual data point to have norm 1, but this is not without loss of generality: it fundamentally changes the nature of the regression problem being solved, and so does not always constitute a meaningful solution to the original problem. Instead, we clip the norm of data points so that the maximum norm does not exceed a fixed data-independent threshold (but might be lower).
Footnote 1: [https://github.com/yuxiangw/autodp/blob/master/tutorials/tutorial_AdaSSP_vs_noisyGD.ipynb](https://github.com/yuxiangw/autodp/blob/master/tutorials/tutorial_AdaSSP_vs_noisyGD.ipynb)
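The two feature-scaling options discussed above can be written as one-liners (a sketch; the threshold \(c\) is the data-independent bound):

```python
# Normalizing every row to unit norm vs. clipping row norms at a fixed threshold c.
import numpy as np

def normalize_rows(X):
    return X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)

def clip_rows(X, c):
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.minimum(1.0, c / np.maximum(norms, 1e-12))
```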
## 4 Experiments
We selected 33 tabular datasets with single-target regression tasks from OpenML 2[1] for evaluating and comparing our algorithm to other algorithms. Task details are presented in Table 2. The selected tasks include both categorical and numerical features. We assume that the schema of individual tables is public information, and so convert categorical features into one-hot encodings.
Footnote 2: [https://www.openml.org](https://www.openml.org)
We compare our approach with a number of other algorithms. First, we compare to other private linear regression methods: AdaSSP, DP Gradient Descent and TukeyEM. These represent the leading practical methods (with accompanying code) used for solving linear regression problems. DP Gradient Descent solves the linear regression problem through noisy batch gradient descent with noise calibrated with clipped per-sample gradients; meanwhile, TukeyEM trains nonprivate linear models on disjoint subsets and privately aggregates the learned linear models. Since our algorithm is based on gradient boosting, in addition to algorithms that solve linear regression problems, we also compare to DP-EBM 3, the current state-of-the-art differentially private gradient boosting algorithm, which uses trees as its base learners. Rather than finding the optimal splits for each leaf based on the data, DP-EBM uses random splits, which significantly improves the efficacy of the privacy budget.
Footnote 3: [https://github.com/interpretml/interpret](https://github.com/interpretml/interpret)
As each algorithm has its own hyperparameters (which are often tuned non-privately in reported results), we present three sets of comparisons. 1) First, we compare performance of the algorithms when the hyperparameters are non-privately optimized for each dataset, for each of the algorithms. This provides an (unrealistically) optimistic view of each algorithm's best-case performance. 2) Next, we use a fixed set of hyperparameters for our algorithm (BoostedAdaSSP), which remain unchanged from dataset to dataset, while still non-privately optimizing the hyperparameters of each of our comparison partners on a dataset-by-dataset basis. This provides an (unrealistic) best-case comparison for the methods we benchmark against. 3) Finally, we show what we view as the fair comparison, which is when the hyperparameters of our method (BoostedAdaSSP) as well as those of all of our comparison partners are held constant across all of the datasets. For hyperparameter tuning, Optuna [1] is applied. The tuning ranges of hyperparameters, and the fixed hyperparameters for our method, are reported in Table 1 in the Appendix. For each comparison partner, when we fix the parameters, we use the parameters recommended in their papers.
**Gradient Boosting Improves AdaSSP.** When hyperparameters are non-privately tuned for both methods, the mean squared error is quite similar on most datasets, but our method (BoostedAdaSSP) obtains lower error on the majority of datasets at all tested privacy levels. When BoostedAdaSSP uses fixed hyperparameters, it remains competitive with AdaSSP even when AdaSSP is non-privately tuned on each dataset. Finally, when both methods use fixed hyperparameters, BoostedAdaSSP has substantially improved error across a majority of datasets at all privacy levels. This indicates a substantial advantage for our method. Comparisons are presented in Fig. 1.
**BoostedAdaSSP outperforms DP Gradient Descent**. Gradient descent and BoostedAdaSSP are similar iterative algorithms. But in all comparison settings (including when the hyperparameters of gradient descent are non-privately optimized on individual datasets, and BoostedAdaSSP uses fixed hyperparameters across all datasets), BoostedAdaSSP substantially outperforms. BoostedAdaSSP can be viewed as gradient descent in function space rather than parameter space, and is able to take advantage of the optimized ridge regression estimator of AdaSSP at each step. Results are in Fig. 2.
**BoostedAdaSSP outperforms TukeyEM**. BoostedAdaSSP also outperforms TukeyEM in all experimental regimes; we can see that the advantage that BoostedAdaSSP enjoys diminishes as the privacy parameter increases, since (when we optimize for the hyperparameters for both methods), both approach non-private (exact) linear regression. TukeyEM has only one hyperparameter, but it requires a massive number of data samples to train, due to its subsample-and-aggregate nature, and it produces an all-zero parameter vector in many scenarios. In contrast, our BoostedAdaSSP has only a couple more hyperparameters, and a common selection for them works well on many datasets. Comparisons are shown in Fig. 3.
**With privacy, gradient boosting over linear models outperforms gradient boosting over tree based models.** Results in Fig. 4 show that BoostedAdaSSP outperforms DP-EBM in all experimental regimes. DP-EBM is also a private gradient boosting algorithm, using tree based learners as base models. This is something that does not occur absent privacy (gradient boosting cannot improve on exact linear regression, as the update steps preserve linearity). This is emblematic of a more general message, that differential privacy rewards algorithmic simplicity (even when more complex algorithms outperform absent privacy constraints). This is because more complex algorithms require more noise addition for privacy, which is often ultimately not worth the tradeoff.
## 5 Theoretical Analysis
The improvement of BoostedAdaSSP over the base learner AdaSSP, from a theoretical perspective, can be attributed to the former's ability to adapt to arbitrary data clipping bounds. While the base learner AdaSSP is known to be optimal when the data clipping bounds are data-dependent and well-chosen ([12], Theorem 3), it suffers from significant bias when the data clipping bounds are mis-specified (i.e. much closer to \(0\) relative to the data range).
This bias exists even in the simplest "zero-dimensional" case where linear regression reduces to estimating the population mean of real-valued data. Consider a data set \(Y_{1},Y_{2},\cdots,Y_{n}\stackrel{{ i.i.d.}}{{\sim}}\mathcal{N}(\mu,1)\). With \(C_{\tau}(a)=a\min(1,\tau/|a|)\) denoting the clipping operator, the zero-dimensional AdaSSP estimator is simply \(\hat{\mu}_{1}=n^{-1}\sum_{i\in[n]}C_{\tau}(Y_{i})+Z\), where \(Z\) is the requisite Gaussian noise for differential privacy. The bias of the AdaSSP estimator, \(|\mathbb{E}\hat{\mu}_{1}-\mu|\), is then at least \(|\mu|-\tau\), since \(|\mathbb{E}\hat{\mu}_{1}|\leq\tau\).
In contrast, the BoostedAdaSSP algorithm converges to the population mean \(\mu\) for any non-zero clipping bound \(\tau\). The zero-dimensional BoostedAdaSSP algorithm for estimating \(\mu\) from \(Y_{1},Y_{2},\cdots,Y_{n}\stackrel{{ i.i.d.}}{{\sim}}\mathcal{N}( \mu,1)\) is defined in Algorithm 2.
**Theorem 5.1**.: _For every \(\tau=O(1)\) not depending on sample size \(n\), Algorithm 2 is Gaussian DP with parameter \(\rho\), and there exists a data-independent choice of number of boosting rounds \(R\) such that the estimator \(\hat{\mu}_{R}\) converges to the true parameter \(\mu\), with the rate of convergence_
\[\mathbb{E}|\hat{\mu}_{R}-\mu|=O\left(\frac{\log n}{\sqrt{n}}+\frac{\log^{3/2}n }{n\sqrt{\rho}}\right). \tag{8}\]
Proof of Theorem 5.1.: The Gaussian DP of Algorithm 2 follows from the Gaussian mechanism 2.3 and the composition theorem 2.4, by observing that the sensitivity of the clipped sample mean is \(2\tau/n\). Next, we establish the convergence of \(\hat{\mu}_{R}\) by comparing with Algorithm 3, an "infinite-sample" version of Algorithm 2.
```
Input: Clipping function \(C_{\tau}\), number of rounds \(R\), Gaussian DP parameter \(\rho\).
Data: \(Y_{1},Y_{2},\cdots,Y_{n}\stackrel{\text{i.i.d.}}{\sim}\mathcal{N}(\mu,1)\).
Initialize: \(\hat{\mu}_{0}=0\)
for \(j\in[R]\) do
    Compute the DP residual mean
        \(\hat{\mu}_{j}=\hat{\mu}_{j-1}+\frac{1}{n}\sum_{i\in[n]}C_{\tau}(Y_{i}-\hat{\mu}_{j-1})+Z_{j}\),   (9)
    where \(Z_{j}\sim\mathcal{N}\left(0,\frac{4R\tau^{2}}{n^{2}\rho}\right)\).
end for
Output: \(\hat{\mu}_{R}\)
```
**Algorithm 2** Zero-dimensional BoostedAdaSSP
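The bias-reduction effect of Algorithm 2 is easy to see numerically. The sketch below uses a deliberately small clipping bound \(\tau<|\mu|\); the parameter values are illustrative and not taken from the paper's experiments.

```python
# Compare the one-shot clipped mean against the boosted clipped residual means
# of Algorithm 2, with tau deliberately smaller than |mu|.
import numpy as np

rng = np.random.default_rng(1)
mu, n, tau, rho, R = 5.0, 100_000, 1.0, 1.0, 25

Y = rng.normal(mu, 1.0, size=n)

def clip(a):
    return np.clip(a, -tau, tau)          # C_tau(a) = a * min(1, tau/|a|)

one_shot = clip(Y).mean() + rng.normal(scale=2 * tau / (n * np.sqrt(rho)))

mu_hat = 0.0
for _ in range(R):                        # boosted clipped residual means
    mu_hat += clip(Y - mu_hat).mean()
    mu_hat += rng.normal(scale=2 * tau * np.sqrt(R) / (n * np.sqrt(rho)))

print(f"one-shot clipped mean: {one_shot:.3f}")   # stuck near tau = 1
print(f"boosted estimate:      {mu_hat:.3f}")     # close to mu = 5
```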
```
Input: Clipping function \(C_{\tau}\), number of rounds \(R\).
Data: Infinite samples from \(\mathcal{N}(\mu,1)\).
Initialize: \(\theta_{0}=0\)
for \(j\in[R]\) do
    Compute the true truncated residual mean
        \(\theta_{j}=\theta_{j-1}+\mathbb{E}_{Y\sim\mathcal{N}(\mu,1)}C_{\tau}(Y-\theta_{j-1})\).   (10)
end for
Output: \(\theta_{R}\)
```
**Algorithm 3** Infinite-sample algorithm
By considering an idealized "infinite sample" setting where we have access to the true distributional quantities \(\{\mathbb{E}C_{\tau}(Y-\theta_{j-1})\}_{j\in[R]}\), Algorithm 3 removes all the randomness in the finite-sample Algorithm 2 and allows us to focus entirely on the bias-reduction effect of boosting. Indeed, the infinite-sample "estimator" \(\theta_{R}\) converges deterministically to \(\mu\).
**Proposition 5.2**.: _Suppose the number of rounds \(R>\frac{\max(0,|\mu|-\tau)}{(\Phi(2\tau)-1/2)\tau}\). The error of \(\theta_{R}\) is bounded by_
\[|\theta_{R}-\mu|\leq\tau\left(3/2-\Phi(\tau)\right)^{R-\frac{\max(0,|\mu|-\tau )}{(\Phi(2\tau)-1/2)\tau}}. \tag{11}\]
That is, after a warm-up of \(\frac{\max(0,|\mu|-\tau)}{(\Phi(2\tau)-1/2)\tau}\) rounds, the error of \(\theta_{R}\) decays geometrically fast, as \(0<3/2-\Phi(\tau)<1\) for any \(\tau>0\). It now suffices to bound the difference \(|\theta_{R}-\hat{\mu}_{R}|\).
**Proposition 5.3**.: _The difference between outputs of Algorithms 2 and 3 is bounded by_
\[\mathbb{E}|\hat{\mu}_{R}-\theta_{R}|=O\left(\frac{R\tau}{\sqrt{n}}+\frac{R^{3/ 2}\tau}{n\sqrt{\rho}}\right). \tag{12}\]
By choosing \(R=O(\log n)\) with \(R>\frac{\max(0,|\mu|-\tau)}{(\Phi(2\tau)-1/2)\tau}\), we have \(|\theta_{R}-\mu|=O(\tau/n)\) by Proposition 5.2, and then
\[\mathbb{E}|\hat{\mu}_{R}-\mu|\leq|\theta_{R}-\mu|+\mathbb{E}|\hat{\mu}_{R}- \theta_{R}|=O\left(\frac{\tau}{n}\right)+O\left(\frac{\tau\log n}{\sqrt{n}}+ \frac{\tau\log^{3/2}n}{n\sqrt{\rho}}\right). \tag{13}\]
As \(\tau=O(1)\) by assumption, the main proof is complete. Propositions 5.2 and 5.3 are proved in Section A.4.
Figure 1: **BoostedAdaSSP vs. AdaSSP.** “Non-privately Tuned” indicates that hyperparameters of the algorithm are non-privately optimized on each dataset, and “Fixed” indicates that the hyperparameters are fixed and shared across all datasets. Each dataset is shown as a point on the plot, labeled with the error obtained by BoostedAdaSSP (y axis) and AdaSSP (x axis). Points below the diagonal are datasets on which BoostedAdaSSP improves over AdaSSP; the fractions of datasets lying above and below the diagonal are annotated.
Figure 2: **BoostedAdaSSP vs. DP Gradient Descent**. BoostedAdaSSP outperforms DP Gradient Descent in all comparisons, even when our algorithm uses a fixed set of hyperparameters.
Figure 3: **BoostedAdaSSP vs. TukeyEM.** BoostedAdaSSP outperforms TukeyEM in all comparisons, even when our algorithm uses a fixed set of hyperparameters. TukeyEM has the advantage of only having a single hyperparameter (number of models), however, in our experiments we find that there isn’t a universally good selection for this hyperparameter.
Figure 4: **BoostedAdaSSP vs. DP-EBM. BoostedAdaSSP and DP-EBM are both gradient boosting algorithms. BoostedAdaSSP uses linear models as the base class, whereas DP-EBM uses tree based models. Our method outperforms in all experimental regimes.** |
2302.14204 | HalluAudio: Hallucinating Frequency as Concepts for Few-Shot Audio
Classification | Few-shot audio classification is an emerging topic that attracts more and
more attention from the research community. Most existing work ignores the
specificity of the form of the audio spectrogram and focuses largely on the
embedding space borrowed from image tasks, while in this work, we aim to take
advantage of this special audio format and propose a new method by
hallucinating high-frequency and low-frequency parts as structured concepts.
Extensive experiments on ESC-50 and our curated balanced Kaggle18 dataset show
the proposed method outperforms the baseline by a notable margin. The way that
our method hallucinates high-frequency and low-frequency parts also enables its
interpretability and opens up new potentials for the few-shot audio
classification. | Zhongjie Yu, Shuyang Wang, Lin Chen, Zhongwei Cheng | 2023-02-27T23:56:31Z | http://arxiv.org/abs/2302.14204v1 | # Halluaudio: Hallucinate Frequency as Concepts for Few-shot Audio Classification
###### Abstract
Few-shot audio classification is an emerging topic that attracts more and more attention from the research community. Most existing work ignores the specificity of the form of the audio spectrogram and focuses largely on the embedding space borrowed from image tasks, while in this work, we aim to take advantage of this special audio format and propose a new method by hallucinating high-frequency and low-frequency parts as structured concepts. Extensive experiments on ESC-50 and our curated balanced Kaggle18 dataset show the proposed method outperforms the baseline by a notable margin. The way that our method hallucinates high-frequency and low-frequency parts also enables its interpretability and opens up new potentials for the few-shot audio classification.
Zhongjie Yu, Shuyang Wang, Lin Chen, Zhongwei Cheng (Wyze Labs, Inc.)
few-shot learning, audio classification
## 1 Introduction
Deep learning has shown extraordinary performance in recognizing and discriminating different sounds in recent years. However, such good performance relies on a large amount of high-quality labeled data. Although few-shot learning has been proposed to learn robust classifiers from only a few examples, most existing works only apply to image classification tasks [1, 2, 3, 4, 5, 6, 7, 8]. Collecting a large amount of labeled image data is time-consuming and expensive, whereas collecting audio annotations is even more difficult. For example, it is intuitive for humans to label an image with "dog" by looking at the entire image at a glance; however, it usually takes much longer to annotate audio with "dog barking", as it takes more effort to listen to and understand the entire audio clip. Furthermore, it is almost impossible for humans to annotate an audio clip by only looking at its spectrogram. Additionally, humans rely more heavily on visual cues than audio cues; therefore, it is sometimes difficult to give precise labels by only listening to audio clips, such as in the classic confusion between "baby crying" and "cat meowing" [9]. All the above-mentioned challenges impose a great demand for few-shot audio classification algorithms.
However, there is only a handful of work addressing few-shot audio classification [10, 11, 12, 13]. Among those, most works attempted to directly apply general few-shot learning methods like Prototypical Network [2], MAML [4] on audio data. Beyond that, a very limited number of works tried to develop new methods for few-shot audio classification. For example, [14] proposed an attentional GNN for audio classification, [15] developed an attention similarity module and [16] integrated CTM [17], TPN [8] and MixUp [18] with audio data augmentation to build a task-adaptive module. Nonetheless, all these methods are still focusing only on the extracted unstructured embedding space rather than the audio spectrogram itself, just like the most common way for few-shot image classification. In other words, those methods could be reasonable for handling visual images but may not be capable of highlighting the special modality of the audio spectrogram in the image format.
In terms of the images themselves rather than their embeddings, [19] is the first meta-learning work to exploit the utility of concepts in images. What are _concepts_ in images? They are items with structured knowledge, such as the head, wing, and tail of a bird. Given those human-interpretable concepts, [19] is able to improve the performance in a straightforward way, as well as to introduce reasoning into the recognition, which is different from other methods that only target the unstructured embedding space and are prone to being a black box.
Figure 1: Illustration of our HalluAudio idea. Detailed structure information in images is utilized as “concepts” to improve few-shot learning performance, but it lacks effectiveness due to the additional labeling cost and the restricted scope of “concepts”. However, for audio data, the frequency domain in the spectrogram is discriminative. We hallucinate the high-frequency and low-frequency areas as hidden concepts lying in the spectrogram, which are utilized to improve the few-shot audio classification.
Although audio spectrograms can be presented in the same format as visual images and fed into similar neural networks, it is unclear whether interpretable concepts can be used for audio spectrograms. First and foremost, do "real" structured concepts that humans can recognize even exist in audio spectrograms? We can easily recognize the head, wings, or tail of a bird, but it remains unexplored whether similar patterns exist in audio spectrograms. Secondly, a strong prerequisite for using such interpretable concepts is that samples belonging to similar classes share that structured knowledge. For example, sparrows and terns both have heads, wings, and tails, whereas laptops have none of those concepts, so there is a barrier to using concepts when classifying laptops and sparrows. Lastly, annotating the bounding boxes and labels for the concepts in images requires a large amount of extra work, which notably restricts the applicability of structured concepts. Only a very limited number of datasets provide such detailed extra labels. For example, the CUB dataset [20] provides the detailed locations of 15 concepts in each image, without which it is not feasible to learn those structured concepts.
Motivated by those challenges in audio spectrograms, we propose HalluAudio, a meta-learning method that hallucinates the high-frequency and low-frequency parts of the audio spectrogram as structured concepts and then utilizes those concepts to build frequency-specific learners. More specifically, the high-frequency prototype and low-frequency prototype are constructed from the high-frequency part and low-frequency part of the spectrogram, respectively. Then HalluAudio aggregates the high-frequency, low-frequency, and original prototypes as the representation for a given audio spectrogram. With this way of "hallucinating" the audio spectrogram, the previously mentioned challenges are addressed as follows: (1) it provides a practical way of depicting concepts for audio spectrograms; (2) it does not rely on the assumption that samples belong to similar classes, because every audio spectrogram has concepts in the high- and low-frequency areas; (3) it needs no extra labeling work, because the high- and low-frequency areas can be derived from specific frequency ranges (in Hz) of the spectrogram. To the best of our knowledge, this is the first method that directly exploits the structure of the audio spectrogram for few-shot audio classification, which is essentially different from methods leveraging the unstructured embedding space.
## 2 Proposed Method
### Problem Definition
In few-shot audio classification, the goal is to train a robust classifier for novel-class audio data given only a few examples. During training, we are given a training dataset \(\mathcal{D}_{base}\) containing many-shot audios from base classes \(\mathcal{C}_{base}\). During testing, we are given a test dataset \(\mathcal{D}_{novel}\) containing few-shot audios from novel classes \(\mathcal{C}_{novel}\) where \(\mathcal{C}_{base}\cap\mathcal{C}_{novel}=\emptyset\). In an \(N\)-way \(K\)-shot task, we are given a support set \(\mathcal{S}=\{(\mathcal{X}_{s},\mathcal{Y}_{s})\}\) and one query sample \(\mathbf{x}_{q}\)[15], where \(\mathcal{X}_{s}\) consists of \(N\times K\) audios \((\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{N\times K})\), \(\mathcal{Y}_{s}\) are their class labels \((y_{1},y_{2},\ldots,y_{N\times K})\) and \(\mathbf{x}_{q}\) belongs to those \(N\) classes.
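As a concrete illustration of the episode structure just described, the sketch below draws an N-way K-shot support set for a fixed query sample from the remaining data, mirroring the episode-building strategy detailed later in Section 3.2. This is not the authors' code; the function and argument names (e.g., `sample_episode`, `query_index`) are hypothetical, and it assumes each class has at least K remaining samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_episode(labels, n_way=5, k_shot=1, query_index=0):
    """labels: 1-D array of class ids. Returns support indices, episode classes, query index."""
    labels = np.asarray(labels)
    q_class = labels[query_index]
    other_classes = np.setdiff1d(np.unique(labels), [q_class])
    classes = np.concatenate(([q_class],
                              rng.choice(other_classes, n_way - 1, replace=False)))
    support = []
    for c in classes:
        # candidate samples of class c, excluding the query itself
        pool = np.setdiff1d(np.flatnonzero(labels == c), [query_index])
        support.extend(rng.choice(pool, k_shot, replace=False))
    return np.asarray(support), classes, query_index
```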
### Method Description
As summarized in Figure 2, the input of the neural networks for a given audio file's waveform \(\mathbf{w}_{l}\) is the log mel spectrogram \(\mathbf{x}_{l}\), which represents the amplitude of the audio in a \(T\times F\) array, where \(T\) is the time-domain range and \(F\) is the frequency range in mel scale. In our HalluAudio, we hallucinate different frequency ranges as the structured concepts embedded in the log mel spectrogram. More specifically, we denote the frequency hallucination group as \(\mathcal{M}=\{\mathbf{m}^{(n)}\}_{n=1}^{N}\), where \(\mathbf{m}^{(n)}\) is the \(n\)-th binary vector masking the frequency area, and \(N\) is the number of masks. With this, the \(n\)-th frequency prototype for class \(k\) is \(\mathbf{p}_{k}^{(n)}=\frac{1}{|\mathcal{S}_{k}|}\sum_{(\mathbf{x}_{l},y_{l})\in\mathcal{S}_{k}}f^{(n)}(\mathbf{x}_{l}\cdot\mathbf{m}^{(n)})\), where \(f^{(n)}(\cdot)\) is the feature extractor for the \(n\)-th hallucination. Then, the final probability of a given \(\mathbf{x}\) belonging to class \(k\) combines the results from the original spectrogram and the different frequency groups:
\[\frac{\exp\left(-d\left(f(\mathbf{x}),\mathbf{p}_{k}\right)-\sum_{n}d\left(f^ {(n)}(\mathbf{x}\cdot\mathbf{m}^{(n)}),\mathbf{p}_{k}^{(n)}\right)\right)}{ \sum_{k}\exp\left(-d\left(f(\mathbf{x}),\mathbf{p}_{k}\right)-\sum_{n}d\left( f^{(n)}(\mathbf{x}\cdot\mathbf{m}^{(n)}),\mathbf{p}_{k}^{(n)}\right)\right)},\]
where \(f(\cdot)\) is the feature extractor for the whole spectrogram, which has the same structure as \(f^{(n)}(\cdot)\), \(\mathbf{p}_{k}\) is the prototype of the whole spectrogram, \(\mathbf{p}_{k}=\frac{1}{|\mathcal{S}_{k}|}\sum_{(\mathbf{x}_{l},y_{l})\in\mathcal{S}_{k}}f(\mathbf{x}_{l})\), and \(d(\cdot,\cdot)\) is the Euclidean distance.
Figure 2: The proposed HalluAudio in a 3-way 1-shot setting. We compute three types of distance and aggregate them for a final decision with the prototypical network. The first type is from the query audio’s embedding with the prototypes of support audios. The second and third types of distance are computed from the embedding of the query audio in the high-frequency and low-frequency domain with the corresponding prototypes of support audios, respectively. The high-frequency and low-frequency domains in the spectrogram serve as two hallucinated concepts.
**Remark:** We point out that a similar operation of setting time and frequency areas to zero, introduced in SpecAug [21], is only a data augmentation technique, which aims to add noise to the log mel spectrogram to improve robustness. Our method differs significantly from this augmentation in both its overall idea and its motivation.
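The following sketch illustrates the scoring rule above: prototypes are built from the full spectrogram and from frequency-masked copies, and a query is scored by the softmax of the summed negative distances. This is an illustrative reimplementation, not the authors' code; the mask split point, the `extractors` callables (one per view, mirroring \(f(\cdot)\) and \(f^{(n)}(\cdot)\), each assumed to map a batch of spectrograms to embedding vectors), and all names are assumptions.

```python
import torch

def frequency_masks(n_mels: int, split: int):
    # Two binary masks hallucinating the low-/high-frequency concepts;
    # `split` (the mel bin separating low from high) is an assumed parameter.
    low, high = torch.zeros(n_mels), torch.zeros(n_mels)
    low[:split], high[split:] = 1.0, 1.0
    return [low, high]

def prototypes(support, labels, extractors, masks):
    """support: (N*K, T, F) log mel spectrograms; labels: (N*K,) class ids.
    extractors[0] plays the role of f(.), extractors[1:] the f^(n)(.)."""
    protos = []
    for f, m in zip(extractors, [None] + masks):
        x = support if m is None else support * m          # mask along the mel axis
        z = f(x)                                            # (N*K, D) embeddings
        protos.append(torch.stack([z[labels == c].mean(0)
                                   for c in labels.unique(sorted=True)]))
    return protos                                           # one (N, D) tensor per view

def classify(query, protos, extractors, masks):
    """Return class probabilities for one query spectrogram of shape (T, F)."""
    total = 0.0
    for f, m, p in zip(extractors, [None] + masks, protos):
        x = query if m is None else query * m
        total = total + torch.cdist(f(x.unsqueeze(0)), p)   # (1, N) Euclidean distances
    return torch.softmax(-total.squeeze(0), dim=0)
```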
## 3 Experiments
In this section, we evaluate our proposed HalluAudio and conduct an ablation study on the widely adopted ESC-50 dataset and our curated dataset from Kaggle18 for few-shot audio classification.
### Dataset Configuration
The current research for few-shot audio classification lacks common agreements on criteria from dataset choices, processing, and evaluation metrics. To be consistent with [15] and fit the current research focus on fixed-length data, we choose ESC-50 dataset [22] which contains 50 classes and 40 samples per class with a fixed length of 5 seconds. In addition, we curate a balanced fixed-length dataset from Kaggle18 dataset which is originally variable-length data of 11,073 audios from 41 classes of Audioset Ontology [23].
All audio samples from ESC-50 and Kaggle18 datasets are down-sampled from 44.1kHz to 16kHz. We extract log mel spectrogram using \(librosa\)[24]. The number of Mel bands is set to 128. The highest frequency is set to 8000Hz. The hop size is set to 502 for ESC-50 and 201 for Kaggle18 to generate spectrograms with 160\(\times\)128 dimensions. The power spectrogram is converted to decibel units. Because there are not enough details for generating the log mel spectrogram for ESC-50 in [15], our generated log mel spectrogram could be slightly different from their provided files using their codes.
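For concreteness, a minimal sketch of the log mel spectrogram extraction described above is given below, using the librosa calls the text mentions; the function name and the final transpose to a (time, mel) layout are our own assumptions.

```python
import librosa
import numpy as np

def log_mel(path: str, sr: int = 16000, n_mels: int = 128,
            fmax: int = 8000, hop_length: int = 502) -> np.ndarray:
    # hop_length=502 reproduces ~160 frames for 5 s ESC-50 clips; use 201 for Kaggle18
    y, _ = librosa.load(path, sr=sr)                 # down-sample to 16 kHz
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels,
                                         fmax=fmax, hop_length=hop_length)
    return librosa.power_to_db(mel).T                # decibel scale, (time, mel) ~ 160 x 128
```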
### Training and Evaluation
With the very limited public codes in this domain, we strictly follow the training pipeline used by [15]. Note that the episode-building strategy in [15] is slightly different from the one commonly used in few-shot image classification. During the testing stage, each sample is served as a query and the corresponding N-way K-shot supports are randomly sampled from the rest of the test data to build an episode. To get more reliable results and confidence intervals, we conduct the sampling 50 times instead of only once as in [15].
The network backbone is the same as in [15], which is also adopted in [25]. This backbone consists of 3 blocks. Each block is composed of a 3\(\times\)3 convolutional layer, batch normalization, a ReLU layer, and a 4\(\times\)4 max pooling layer. The initial learning rate is set to \(0.01\) and SGD is used for optimization with the weight decay set to 0.0001. For ESC-50, we use the same strategy as [15], in which the learning rate is divided by 10 after every 20 epochs. For Kaggle18, the learning rate is divided by 10 after every 30 epochs. Both are trained for 60 epochs.
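A minimal PyTorch sketch of the backbone and optimizer settings described in this paragraph is shown below; the number of channels per block (64) and the single-channel input are assumptions, as the text does not state them.

```python
import torch
import torch.nn as nn

def conv_block(c_in: int, c_out: int) -> nn.Sequential:
    # 3x3 conv -> batch norm -> ReLU -> 4x4 max pooling, as described above
    return nn.Sequential(nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                         nn.BatchNorm2d(c_out),
                         nn.ReLU(inplace=True),
                         nn.MaxPool2d(kernel_size=4))

backbone = nn.Sequential(conv_block(1, 64), conv_block(64, 64),
                         conv_block(64, 64), nn.Flatten())

# SGD with lr 0.01 and weight decay 1e-4; lr divided by 10 every 20 epochs (ESC-50 schedule)
optimizer = torch.optim.SGD(backbone.parameters(), lr=0.01, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)
```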
### Experimental Results
Table 1 shows the results and confidence intervals for the baseline (prototypical network) and our proposed HalluAudio. It clearly shows that our method outperforms the baseline by a large margin. For ESC-50, the gains are 2.11%, 2.99%, 2.61%, and 3.84% for 5-way 1-shot, 5-way 5-shot, 10-way 1-shot, and 10-way 5-shot, respectively. For the Kaggle18 dataset, the gains are 1.77%, 3.23%, 0.83%, and 3.68%, respectively.
To validate that the gain comes from hallucinating high frequency and low frequency as concepts rather than from the additional weights in the network, we conduct an ablation study that hallucinates the time domain as concepts. In particular, we hallucinate the first half of the time axis as one concept and the second half as another concept. Notably, the network constructed with these "time" concepts has the same number of weights as the network using frequency concepts. As shown in Table 2, for ESC-50, although there is a small improvement for 1-shot, the time concepts make a negative contribution for 5-shot. This reflects the intuition that audio with uncertain patterns carries little structured information in the time domain. On Kaggle18, hallucination from the time domain improves the performance slightly because we curate fixed-length audios around the peak of the waveform; in this case, the first half of the spectrogram in the time domain roughly corresponds to the starting period of the audio and the second half to the ending period. However, the significantly inferior performance of hallucinating time concepts compared with hallucinating frequency concepts strongly supports the rationale of our method. For a more comprehensive comparison, we add the results of the three methods in 5-way K-shot settings for ESC-50 in Figure 3.
Figure 3: HalluAudio (Frequency Concept) vs. Time Concept vs. Baseline in 5-way K-shot settings.
### Frequency Importance
To better show the reasoning behind the hallucination of frequency areas, we calculate the frequency importance for some representative classes. Specifically, we select 5 representative classes and their 5-way 5-shot episodes. Given a query, we classify it using only the distance between (1) its high-frequency embedding and the support samples’ high-frequency prototypes, or (2) its low-frequency embedding and the support samples’ low-frequency prototypes. In this way, we obtain two counts of correctly classified queries over all episodes: \(Q_{high}\) and \(Q_{low}\).
Given these counts, we calculate the frequency importance as \(\frac{Q_{high}}{Q_{low}}\). A ratio greater than 1 means that the high frequency is more important, and a ratio less than 1 means that the low frequency is more important. As shown in Figure 4, the ratio matches common sense: bird chirping carries more information in the high-frequency area, whereas a thunderstorm is mostly characterized by the low-frequency area. Furthermore, we show some examples in Figure 5 that match this analysis.
## 4 Conclusion
We have proposed a simple yet effective method for few-shot audio classification. Our method, dubbed HalluAudio, hallucinates high-frequency and low-frequency areas in the spectrogram as structured concepts. Compared with the real concepts used for few-shot image classification, hallucinating concepts takes advantage of the special format of the spectrogram and requires neither extra labeling work nor restrictions to specific classes. Extensive experiments on the ESC-50 and Kaggle18 datasets demonstrate the soundness of our proposed solution. To the best of our knowledge, this is the first work focusing on and utilizing the specificity of the audio spectrogram format with interpretability in few-shot audio classification, and it opens a new horizon in this area.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline Dataset & Method & 5-way 1-shot & 5-way 5-shot & 10-way 1-shot & 10-way 5-shot \\ \hline \multirow{2}{*}{ESC-50} & Baseline [15] & 69.77 \(\pm\) 0.62 & 83.47 \(\pm\) 0.48 & 54.51 \(\pm\) 0.66 & 71.36 \(\pm\) 0.56 \\ & HalluAudio & **71.88 \(\pm\) 0.60** & **86.46 \(\pm\) 0.46** & **57.12 \(\pm\) 0.64** & **75.20 \(\pm\) 0.58** \\ \hline \multirow{2}{*}{Kaggle18} & Baseline [15] & 57.58 \(\pm\) 0.63 & 70.69 \(\pm\) 0.55 & 43.67 \(\pm\) 0.62 & 58.12 \(\pm\) 0.58 \\ & HalluAudio & **59.35 \(\pm\) 0.65** & **73.92 \(\pm\) 0.57** & **44.50 \(\pm\) 0.64** & **61.80 \(\pm\) 0.61** \\ \hline \end{tabular}
\end{table}
Table 1: Accuracy (in %) on ESC-50 and Kaggle18 datasets with 95% confidence interval. Note Baseline result is not identical to [15] because of the parameters for the log mel spectrogram and testing sampling times as mentioned in section 3.2.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline Dataset & Method & 5-way 1-shot & 5-way 5-shot & 10-way 1-shot & 10-way 5-shot \\ \hline \multirow{4}{*}{ESC-50} & Baseline [15] & 69.77 \(\pm\) 0.62 & 83.47 \(\pm\) 0.48 & 54.51 \(\pm\) 0.66 & 71.36 \(\pm\) 0.56 \\ & Time Concept & 70.66 \(\pm\) 0.61 & 82.89 \(\pm\) 0.48 & 55.67 \(\pm\) 0.67 & 70.60 \(\pm\) 0.53 \\ & Gain(time) & 0.89 & -0.58 & 1.16 & -0.76 \\ & **Gain(freq.)** & **2.11** & **2.99** & **2.61** & **3.84** \\ \hline \multirow{4}{*}{Kaggle18} & Baseline [15] & 57.58 \(\pm\) 0.63 & 70.69 \(\pm\) 0.55 & 43.67 \(\pm\) 0.62 & 58.12 \(\pm\) 0.58 \\ & Time Concept & 58.13 \(\pm\) 0.65 & 71.30 \(\pm\) 0.57 & 43.95 \(\pm\) 0.61 & 58.77 \(\pm\) 0.60 \\ \cline{1-1} & Gain(time) & 0.55 & 0.61 & 0.28 & 0.65 \\ \cline{1-1} & **Gain(freq.)** & **1.77** & **3.23** & **0.83** & **3.68** \\ \hline \end{tabular}
\end{table}
Table 2: Ablation study of hallucinating concepts in time domain vs. frequency domain in the spectrogram. Taking concepts in the time domain with the same network does not notably improve the performance and it even harms the results in some cases.
Figure 4: The frequency importance in representative classes.
Figure 5: Illustration of high/low-frequency concepts for bird-chirping and thunderstorm. Bird-chirping has more similar patterns in high-frequency concepts, whereas thunderstorm is mostly depicted by low-frequency concepts. |
2304.07690 | A Measurement Study of the Impact of Adjacent Channel Interference
between C-band and CBRS | The 3.7 - 3.98 GHz frequency band (also known as C-band) was recently
allocated in the US for the deployment of 5G cellular services. Prior to this,
the lower adjacent band, 3.55 - 3.7 GHz, had been allocated to Citizens
Broadband Radio Service (CBRS), where the entire 150 MHz can be used for free
by Tier 3 General Authorized Access (GAA) users, but access to the spectrum
needs to be authorized by the Spectrum Access System (SAS). GAA users are
allowed on a channel only when there are no Tier 1 Incumbents (Navy radars) or
Tier 2 Priority Access License (PAL) users in the area. However, since there
are no guard bands between GAA and C-band, and both systems employ Time
Division Duplexing (TDD) where the uplink/downlink configurations are not
synchronized, adjacent channel interference can potentially reduce the
performance of both systems. In this paper, we quantify the effect of this
mutual interference by performing experiments with a real-world deployment. We
observe significant downlink throughput reductions on both systems when two
devices are in close proximity to each other, and one is transmitting uplink
while the other is transmitting downlink: 60% for 4G CBRS and 43% for 5G
C-band. We believe that this is the first paper to demonstrate this in a real
deployment. This throughput degradation was reduced when the CBSD changed its
channel and operated 20 MHz away from C-band, essentially creating a guard band
between the channels. We also demonstrate the improvement in latency under
adjacent channel interference by implementing MicroSlicing at the CBSD. Our
results indicate that addressing adjacent channel interference due to the lack
of guard bands and TDD configuration mismatch is crucial to improving the
performance of both CBRS and C-band systems. | Muhammad Iqbal Rochman, Vanlin Sathya, Bill Payne, Mehmet Yavuz, Monisha Ghosh | 2023-04-16T04:30:26Z | http://arxiv.org/abs/2304.07690v2 | # A Measurement Study of the Impact of Adjacent Channel Interference between C-band and CBRS
###### Abstract
The 3.7 - 3.98 GHz frequency band (also known as C-band) was recently allocated in the US for the deployment of 5G cellular services. Prior to this, the lower adjacent band, 3.55 - 3.7 GHz, had been allocated to Citizens Broadband Radio Service (CBRS), where the entire 150 MHz can be used for free by Tier 3 General Authorized Access (GAA) users, but access to the spectrum needs to be authorized by the Spectrum Access System (SAS). GAA users are allowed on a channel only when there are no Tier 1 Incumbents (Navy radars) or Tier 2 Priority Access License (PAL) users in the area. However, since there are no guard bands between GAA and C-band, and both systems employ Time Division Duplexing (TDD) where the uplink/downlink configurations are not synchronized, adjacent channel interference can potentially reduce the performance of both systems. In this paper, we quantify the effect of this mutual interference by performing experiments with a real-world deployment. We observe significant downlink throughput reductions on both systems when two devices are in close proximity to each other, and one is transmitting uplink while the other is transmitting downlink: 60% for 4G CBRS and 43% for 5G C-band. We believe that this is the first paper to demonstrate this in a real deployment. This throughput degradation was reduced when the CBSD changed its channel and operated 20 MHz away from C-band, essentially creating a guard band between the channels. We also demonstrate the improvement in latency under adjacent channel interference by implementing MicroSlicing at the CBSD. Our results indicate that addressing adjacent channel interference due to the lack of guard bands and TDD configuration mismatch is crucial to improving the performance of both CBRS and C-band systems.
5G, 4G, C-band, CBRS, interference, throughput, latency, measurements, TDD.
## I Introduction
The increased demands on cellular traffic in terms of range, throughput and latency require the use of frequency bands that combine the favorable propagation characteristics of low-band frequencies (\(<\) 1 GHz) with the wider bandwidths available in the high-band (\(>\) 24 GHz). This has led to increasing swathes of mid-band frequencies being allocated for 5G services. In the US, the three most recent allocations in the mid-band are the 3.7 - 3.98 GHz (C-band) [1], the immediately adjacent 3.55 - 3.7 GHz (Citizens Broadband Radio Services, or CBRS) [2] and the latest allocation of 3.45 - 3.55 GHz for cellular services [3].
The CBRS band was allocated prior to either the C-band or 3.45 GHz. Since the primary incumbent in the CBRS band is Navy radar, a 3-tier access mechanism was put in place to ensure shared access to the band as follows: the highest priority, or Tier 1 users are the Navy radars, followed by Tier 2 Priority Access Licensees (PAL) who acquired licenses through an auction process and finally Tier 3 General Authorized Access (GAA) users who are allowed to access channels that are not being used by Tier 1 or Tier 2 users. This access mechanism is orchestrated by the Spectrum Access System (SAS) which ensures that higher priority users do not face interference from lower priority users. All 150 MHz between 3.55 - 3.7 GHz can be used by Tier 3 GAA users but must not interfere with Tier 1 and Tier 2 as directed by the SAS. To protect the incumbent from harmful interference, the transmit power level of Tiers 2 and 3 is limited to 30 dBm/10 MHz indoors and 47 dBm/10 MHz outdoors, which is considerably lower than that allowed in the adjacent C-band and 3.45 GHz as shown in Fig. 1.
Prior to reallocation for terrestrial mobile services, the C-band was primarily used for satellite communications, which was not densely deployed and did not pose an adjacent channel interference concern for CBRS. Similarly, the 3.45 GHz band was a federal band used sparsely in most areas of the country. However, the new rules and auctions permit 5G cellular services to be deployed in both of these bands at much higher power levels of 62 dBm/MHz for urban and 65 dBm/MHz for rural areas as shown in Fig. 1. The lack of guard bands between these frequency allocations, combined with the power differences, leads to the potential of adjacent channel interference between CBRS and both upper and lower adjacent bands especially since Time Division Duplex (TDD) systems are being deployed in these bands. C-band services are widely deployed in many areas in the US, while 3.45 GHz services are just beginning.
In this paper, we leverage a real-world C-band deployment to perform experiments that allow an in-depth understanding of the potential impact of adjacent channel interference between CBRS and C-band: a similar situation will be present between
Fig. 1: Spectrum Chart from 3.1 to 3.9 GHz
3.45 GHz and CBRS as well, once 3.45 GHz becomes widely deployed. We identified a University of Chicago (UChicago) building with line-of-sight (LOS) to a macro-cell deployment of a C-band base-station (BS). An indoor CBRS device (CBSD, or CBRS base-station) was deployed in an area with LOS to the C-band BS. A spectrum analyzer (SA) was used to first quantify the adjacent power leakage between C-band and CBRS. Detailed experiments were then run with consumer smartphones connected to both services to quantify the effect of different CBRS TDD configurations and operating channels on uplink (UL) throughput, downlink (DL) throughput, and latency. Finally, we demonstrate the improvements to latency under adjacent channel interference by implementing end-to-end MicroSlicing.
## II Background and Related work
C-band and CBRS are both TDD systems, i.e., uplink and downlink transmissions occur over the same frequency channel but separated in time. In most prior TDD cellular deployments, a single operator deployed their network over a wide area and hence the TDD Configuration (the partition between uplink and downlink transmissions) is centrally managed so that co-channel and/or adjacent channel interference is minimized: usually this is done by using the same TDD configuration across the entire deployment. This kind of TDD synchronization ensures that all devices are transmitting traffic in only one direction at any time, either uplink or downlink. However, CBRS users do not need to synchronize amongst each other, or with adjacent C-band, and in fact the use cases may **require** different TDD configurations: for example, a video-camera surveillance use case will require higher uplink throughput compared to a video-streaming use case that needs higher downlink throughput. If these use-cases are deployed in adjacent channels, then they may mutually interfere.
The above interference scenario in cellular deployments is not one that has been studied comprehensively in the literature. A few papers discuss similar problems and propose reducing adjacent channel interference by better filtering [4, 5]. We believe that the results presented in this paper are the first to show the effect of different TDD configurations on adjacent channel interference in a real world environment.
## III Deployment Overview, Measurement Tools, and Methodology
We leverage an outdoor C-band BS deployed on top of a 10-storey building at the intersection of 53rd and E Hyde Park Ave in Chicago. In order to study adjacent channel interference between CBRS and this C-band deployment, we deployed a Celona CBSD indoors on the 9th floor of a UChicago building at 5235 S Harper Court, where the C-band transmission can be received indoors with sufficient signal strength. The set-up is shown in Figs. 1(a) and 1(b). The CBSD is deployed in a cubicle facing the window with LOS to C-band. Table I summarizes the parameters of both systems. The two operators are labeled as: VZW (**Verizon**), with the C-band deployment, and CLN, the private CBRS network with its CBSD/BS connected to the University of Chicago backhaul.
_Overview of Deployments:_ The VZW deployment is a 5G NR non-standalone (NSA) configuration with a primary LTE Frequency Division Duplex (FDD) channel in band 66 (DL: 2.11 - 2.13 GHz, UL: 1.71 - 1.73 GHz), and a secondary NR TDD channel in band n77/C-band (3.7 - 3.76 GHz) with 30 kHz sub-carrier spacing. In our throughput analysis, we only consider data transmitted over the 60 MHz C-band and omit the LTE data. The TDD configuration used by VZW in C-band is shown in Fig. 3: 7 slots for DL and 2 slots for UL,
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline
**Parameter** & \multicolumn{2}{|c|}{**Value**} \\ \hline \hline Operators & VZW & CLN \\ \hline Operating band & C-band & CBRS GAA \\ \hline Radio tech. & 5G & 4G \\ \hline Center freq. & 3.73 GHz & 3.69 and 3.67 GHz \\ \hline Bandwidth & 60 MHz & 20 MHz \\ \hline TDD config. scenarios & 7.4 DL + 2.2 UL & Sa1: 4 DL + 4 UL, Sa2: 6 DL + 2 UL \\ \hline TDD periodicity & 5 ms & 10 ms \\ \hline BS deployment & Outdoor & Indoor \\ \hline Max. BS Power & 79 dBm\({}^{a}\) & 23 dBm \\ \hline UE & Samsung S22+ & Samsung S22+ \\ \hline Traffic scenarios & DL, UL & ping, ping + DL, ping + UL \\ \hline Ping target & N/A & CLN edge server \\ \hline DL/UL server & iperf01.uchicago.edu & CLN edge server \\ \hline DL/UL parameters & target bandwidth 2 Gbps, TCP buffer size 8196 bytes, 10 parallel conns, 500 packets burst \\ \hline UE location scenarios & VZW @ A, CLN @ A; VZW @ B, CLN @ B; VZW @ B, CLN @ A; see Fig. 4 \\ \hline Exp. run time & 10 minutes per combination of scenarios \\ \hline Total exp. time & 660 minutes \\ \hline Time of exp. & Between 1 am - 6 am \\ \hline \end{tabular}
\end{table} TABLE I: Experiment parameters
Fig. 2: Experiment map and setup.
with a slot length of 0.5 ms. Additionally, the "Special" slot is defined to allow greater freedom for resource allocation: 6 symbols are reserved for DL, 4 symbols for UL, and 4 symbols for messaging. In total, there are 7.4 slots reserved for DL and 2.2 slots reserved for UL.
To evaluate potential adjacent channel interference, the CLN CBRS was deployed on the immediately lower adjacent channel, 3.68 - 3.7 GHz, using the General Authorized Access (GAA) tier of CBRS. As a comparison, we also deployed it on 3.66 - 3.68 GHz, essentially adding a 20 MHz guard band between the CBRS and C-band channels. Since the CBSD was under our control, we varied the TDD configuration of the CBSD between **Sa1** and **Sa2**, where **Sa1** uses 4 DL and 4 UL subframes and **Sa2** uses 6 DL and 2 UL subframes per radio frame, with a subframe length of 1 ms, as shown in Fig. 3. Additionally, even though a 5G slot (0.5 ms) spans exactly half of a 4G subframe (1 ms), issues may arise from the lack of synchronization between CLN and VZW. Since the CBSD was 4G, the TDD configurations could not be exactly matched to 5G in C-band.
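To make the synchronization issue concrete, the toy calculation below counts how much CLN downlink time can coincide with VZW uplink time over one 10 ms window. The slot/subframe orderings (a DDDDDDDSUU pattern for the 5G side and the standard LTE TDD configurations 1 and 2 for Sa1/Sa2) and the frame alignment are assumptions, since in practice the two systems are not synchronized.

```python
TICK_MS = 0.5
# VZW C-band (5 ms period): 7 DL slots, 1 special, 2 UL; assumed ordering DDDDDDDSUU.
vzw = (['D'] * 7 + ['S'] + ['U'] * 2) * 2                      # one 10 ms window = 20 ticks
# CLN CBRS (10 ms radio frame, 1 ms subframes = 2 ticks each); assumed LTE TDD configs 1 and 2.
sa1 = [t for sf in ['D', 'S', 'U', 'U', 'D', 'D', 'S', 'U', 'U', 'D'] for t in (sf, sf)]
sa2 = [t for sf in ['D', 'S', 'U', 'D', 'D', 'D', 'S', 'U', 'D', 'D'] for t in (sf, sf)]

def cln_dl_exposed_to_vzw_ul(cln):
    # count 0.5 ms ticks where CLN transmits downlink while VZW transmits uplink
    clash = sum(1 for a, b in zip(cln, vzw) if a == 'D' and b == 'U')
    return clash * TICK_MS                                      # ms per 10 ms window

for name, cfg in [('Sa1', sa1), ('Sa2', sa2)]:
    print(f"{name}: {cln_dl_exposed_to_vzw_ul(cfg)} ms of CLN DL overlaps VZW UL per 10 ms")
```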
**Measurement Tools and Methodology**: Two Samsung S22+ phones (running Android 12) are used as user equipment (UEs), one with a CLN SIM and the other with a VZW SIM. Both SIMs have unlimited data plans with no throttling of data rates. We also use a spectrum analyzer (R&S Spectrum Rider FPH) to measure power over the CBRS and C-band channels. Fig. 4 is a schematic of the deployment scenario. The CBSD is placed on top of a desk in the cubicle and UEs are deployed in two locations, A and B. Location A is \(\sim\)1 m from the CBSD, while location B is on top of a desk in an office \(\sim\)3 m from the CBSD. The spectrum analyzer is always at location A. Both locations are LOS to the VZW BS. We define three measurement scenarios: (1) both UEs at A representing the best condition for CLN UE, (2) both UEs at B representing the best condition for VZW UE, and (3) CLN UE at A and VZW UE at B representing the best condition for both UEs to their respective BSs.
Signal measurements are obtained from the Android phones using a commercial measurement app called Qualipoc [6] which utilizes the UE's root privilege to establish a low-level interface with the _Qualcomm Diag_ utility thus enabling extraction of detailed signal parameters such as primary and secondary channel's RSRP, RSRQ, SINR, MCS, Resource Block (RB) allocation, block error rate (BLER), TDD Config, and physical layer throughput. The DL and UL throughput values mentioned in this paper are physical layer throughput values extracted from Qualipoc. Qualipoc is also capable of actively creating traffic using iperf [7] and ping tools.
Experiments were run for 10 minutes per scenario, with a total experiment time of 660 minutes. The experiments were conducted between 1 am and 6 am to reduce the impact on performance due to the presence of other VZW users. Two data transmission scenarios were defined using iperf: DL and UL which generate full-buffer downlink (iperf server to UE) and uplink (UE to iperf server) transmission, respectively, with parameters defined in Table I. Table I also defines different iperf target servers for each operator, since there is a need to separate the backhaul used for each operator: the VZW UE uses the UChicago iperf server (_iperf01.uchicago.edu_), while the CLN UE uses an edge server as its target server. The edge server is connected directly to the CLN BS, so the CLN throughput closely reflects the wireless link performance, while VZW throughput includes the wireless + backhaul performance. Due to this difference, we do not compare the performance between the operators, rather we compare the relative performance for each operator between the two cases: (i) "single" case where one operator is active while the other is idle, and (ii) "coexistence" case where both operators are active concurrently.
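The paper drives iperf from within Qualipoc and does not give the exact command line; purely as a hypothetical illustration of the Table I parameters (2 Gbps target, 10 parallel TCP streams, 8196-byte buffer, 500-packet bursts, 10-minute runs), an equivalent iperf3 invocation might look like the following sketch.

```python
import subprocess

def run_iperf(server: str, downlink: bool = True, secs: int = 600) -> str:
    # -R makes the server send, i.e. a downlink test toward the UE/host running this command
    cmd = ["iperf3", "-c", server, "-t", str(secs), "-P", "10",
           "-w", "8196", "-b", "2G/500", "-J"]
    if downlink:
        cmd.append("-R")
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# e.g. run_iperf("iperf01.uchicago.edu")  # hypothetical use of the UChicago iperf server
```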
Additionally, we also measure the performance of CLN for latency-sensitive applications using ping traffic (64 kbyte ping packets every 10 ms over 10 minutes per scenario) to a separate edge server. The latency metric collected by the ping tool is defined as the round trip time between UE and the ping target. To further emulate an intensive low-latency application, we implemented MicroSlicing [8], a network slicing technology that allows precise control over end-to-end resource and service allocation based on specific Quality of Service (QoS) metrics for different applications and devices. Network administrators can use the Celona Orchestrator or the developer APIs to customize network settings on a device or application
Fig. 4: Experiment set-up.
Fig. 3: Comparison of TDD configuration.
specific basis. The orchestrator offers control and adjustments for numerous service types, including data throughput, quality, latency, reliability, and network access policies among others. This enables users to set aside guaranteed portions of the network dedicated to the smooth functioning of the respective device and application. The platform also records application-specific service level agreements (SLA) and key performance indicators (KPI) across all devices, granting complete user visibility of device performance across the spectrum. In this experiment, MicroSlicing based resource allocation policy is specified to prioritize ping over DL and UL traffic.
## IV Experimental Results
### _Out-of-band (OOB) interference quantified by spectrum analyzer measurements_
Fig. 5 shows the spectrum analyzer measurements of OOB interference due to transmissions to and from the UEs. These spectrum measurements are done with all UEs and the spectrum analyzer in close proximity to each other at location A. The spectrum analyzer measures the power on the 3.68-3.7 GHz CBRS channel and the 3.7-3.76 GHz C-band channel in the following scenarios: (i) both UEs are turned off, (ii) only the CLN UE transmits DL/UL, and (iii) only the VZW UE transmits DL/UL. We also vary the CLN TDD configuration between Sa1 and Sa2. Fig. 5(a) shows the OOB effect on the C-band channel due to CLN transmission on the CBRS channel using both TDD configurations, _i.e.,_ there is clearly an increase in power observed in the adjacent C-band channel compared to when both UEs are turned off. Similarly, Fig. 5(b) shows the effect of VZW transmission in C-band on the CBRS channel, again demonstrating a power increase. This initial power analysis clearly demonstrates the potential for OOB interference on both operators. In the following sub-sections, we demonstrate the impact of the increased OOB interference on ping latency and DL throughput performance at the UEs.
### _Latency performance of CLN_
Latency performance of CBRS with and without adjacent channel transmissions was evaluated only at location A, with the CLN TDD configuration set to Sa1. Fig. 6(a) shows the "single" case, _i.e.,_ no interference from the VZW UE, without MicroSlicing, and Fig. 6(c) shows the performance in the same scenario but with MicroSlicing. Similarly, Fig. 6(b) shows the "coex" case, _i.e.,_ interference from the VZW UE, without MicroSlicing, and Fig. 6(d) shows the performance in the same scenario but with MicroSlicing. Overall, in both cases, we observe a smaller difference in latency when ping traffic is transmitted along with DL and UL if MicroSlicing is used, compared to no MicroSlicing. In particular, we observe increased latency on 20% of the data when VZW is using DL traffic without MicroSlicing, while there is no impact of OOB interference on CLN's latency performance when it is using MicroSlicing (Fig. 6(d)). The reduction of the effect of OOB interference can be explained by the MicroSlicing policy, which assigns a higher priority to ping packets, thus ensuring timely packet arrival even under OOB interference.
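For readers reproducing this kind of analysis, the small helper below (illustrative only, not part of the measurement pipeline) computes the empirical latency CDF from a list of per-ping round-trip times, which is the quantity plotted in Fig. 6.

```python
import numpy as np

def latency_cdf(rtts_ms):
    """Return (sorted RTTs, cumulative fraction of pings at or below each RTT)."""
    x = np.sort(np.asarray(rtts_ms, dtype=float))
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y

# e.g. fraction of pings above 50 ms: 1 - np.interp(50.0, *latency_cdf(samples))
```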
### _Impact of VZW's OOB interference on CLN's throughput_
Fig. 7 shows the coexistence performance of CLN in terms of physical layer DL throughput with varying CLN TDD configurations. Due to page limitations, we omit the UL throughput results, which did not demonstrate any effect of adjacent channel interference. Only ping + DL traffic is analyzed, as there is **no difference** between DL only and ping + DL, which is expected since the flow of ping packets is also counted in the throughput metric. Figs. 7(a) and 7(b) show the effect of OOB interference on CLN DL throughput, using the Sa1 configuration, at locations A and B, respectively. Firstly, we observe no impact of coexistence when the CLN UE is at A and the VZW UE is at B, _i.e.,_ the effect of OOB interference is only observed when the UEs are close to each other. When CLN is using Sa1, we observe the highest throughput degradation when CLN ping + DL @ A and VZW UL @ A coexist: a 60% reduction of CLN DL throughput compared to the "single" case. When the VZW UE is transmitting UL @ A, the proximity to the CLN CBSD causes a large throughput reduction, which can be explained by the TDD configuration as shown in Fig. 3: in the worst case, CLN's DL capability is reduced to half due to the overlap with two of VZW's UL slots. This effect is not observed at B due to the greater distance to the CBSD. Only DL traffic from VZW at B affects CLN, which is also at B. Similarly, when CLN is using Sa2, the DL throughput degradation is observed in Figs. 7(c) and 7(d). The greatest throughput degradation in this case is between CLN ping + DL @ B and VZW DL @ B, which is a 43% reduction from the single case. Additionally, VZW UL @ B affects CLN DL throughput on Sa2 (an effect not observed on Sa1), which is possibly due to the higher number of DL subframes in the Sa2 configuration, leading to a higher probability of overlap with VZW UL slots.
To further demonstrate the effect of OOB interference, we correlate the RB, MCS, and BLER values when CLN is using ping + DL traffic, as shown in Fig. 8. We could not correlate OOB interference with the captured RSRP and RSRQ values: while RSRP and RSRQ are well-defined by 3GPP, we cannot confirm the correctness of their implementation inside the modem. A representative coexistence case of CLN ping + DL @ A and VZW DL @ A on CLN Sa1 is chosen for analysis, but the same conclusion is observed in the other cases. First, Fig. 8(a) shows the full RB allocation to the CLN UE in all cases, since the CLN UE is the only one connected to the CBSD. On the other hand, Figs. 8(b) and 8(c) respectively show a degradation of the MCS allocation and an increase in BLER under coexistence, leading to a reduction in throughput. Combined with the spectrum analyzer power analysis and the higher VZW BS transmit power, we are certain that CLN's DL throughput reduction is caused by OOB interference.
Fig. 5: Spectrum Analyzer measurements of mutual OOB leakage between CBRS and C-band.
### _Impact of CLN's OOB interference on VZW throughput_
Similar to the previous analysis, we only focus on analyzing the coexistence between VZW DL and CLN ping + DL/UL (hereinafter shortened to CLN DL/UL). We omit the analysis of VZW UL, as it showed no impact of OOB interference. Additionally, data containing CLN ping-only traffic is also omitted from our analysis, since ping traffic's low network utilization causes no interference. Fig. 9 shows VZW DL throughput for the various locations and CLN TDD configurations. When CLN is using the Sa1 configuration, Fig. 9(a) shows a similar reduction of DL throughput when coexisting with CLN DL and UL at location A, while Fig. 9(b) shows the largest throughput reduction (in Sa1) of 17% when VZW DL @ B coexists with CLN UL @ B. We also observe a DL throughput reduction for VZW in the location scenario CLN @ A and VZW @ B, although this is lower compared to when both UEs are side-by-side. For Sa2, the highest DL throughput reduction is observed in the scenario VZW DL at A and CLN UL at A, as shown in Fig. 9(c), _i.e.,_ a 43% reduction compared to the single case. Fig. 9(d) shows the counterpart at location B, with the highest reduction of 27% when coexisting with CLN UL at B.
Most of the scenarios described above do not exhibit a drastic change in RB allocation, MCS, or BLER between the coexistence and single cases, except for the scenario of VZW @ A & CLN @ A with CLN using the Sa2 configuration. Thus, we focus on analyzing this scenario, as shown in Fig. 10. Figs. 10(a), 10(b), and 10(c) show the CDF comparison of RB allocation, MCS, and BLER, respectively. There is a slight decrease in RB allocation, MCS, and BLER in the coexistence cases compared to the single case. As we see from Fig. 3, CLN UL using Sa2 should affect VZW DL less than Sa1 due to the smaller number of uplink subframes that overlap with VZW's downlink slots. However, our experiment is not capable of capturing the exact frame timing to determine interference. Thus, the effect of OOB interference is not directly apparent here: the VZW BS may have reacted to the interference by lowering the RB allocation and MCS, resulting in better BLER performance but lower throughput.
Fig. 8: Representative comparison of DL RB, MCS, and BLER for CLN @ A, on CLN Sa1.
Fig. 6: Ping latency performance of CLN.
Fig. 7: Coexistence performance in terms of CLN DL throughput under varying CLN TDD configurations.
### _Impact of OOB interference with a 20 MHz guard band_
In this analysis, we measured the throughput performance of both operators when the CLN operating channel was moved to 3.66 - 3.68 GHz, thus adding a 20 MHz guard band between the CBRS and C-band channels. We refer to these scenarios as **GAP**, while the prior scenarios as **non-GAP**. We omit showing the spectrum analysis in this scenario, since we observed no power leakage in the C-band channel and the new CBRS channel (3.66 - 3.68 GHz) as expected.
Fig. 11 shows the DL throughput performance of CLN under **GAP** scenarios. When CLN uses the Sa1 configuration (Figs. 11(a), 11(b)), we observe no throughput degradation, owing to the low number of DL subframes utilized. When CLN is using Sa2, we observe the highest degradation of 21% when coexisting with VZW UL @ A. This is an improvement over the 60% degradation in the same scenario without the guard band. Further, Fig. 12 shows the DL RB, MCS, and BLER of the representative **GAP** results of CLN and VZW @ A, when CLN is using Sa2. While the RB allocation stays at the maximum, we observe a higher MCS allocation and lower BLER compared to the representative **non-GAP** results in Fig. 8.
Fig. 11: Coexistence performance in terms of CLN DL throughput with 20 MHz guard band.
Fig. 12: Representative comparison of DL RB, MCS, and BLER for CLN @ A, on CLN Sa2, with 20 MHz guard band.
Fig. 9: Coexistence performance in terms of VZW DL throughput under varying CLN TDD configurations.
Next, Fig. 13 shows the DL throughput performance of \(\mathsf{VZW}\) under **GAP** scenarios. We observe throughput degradation on various parameters, with the highest reduction of 30% when \(\mathsf{VZW}\) is coexisting with \(\mathsf{CLN}\)\(\mathsf{UL}\) @ \(\mathsf{B}\) using Sa1. However, these reductions can be explained by the higher MCS used on the single cases. As a representative result, Fig. 14 shows the DL RB, MCS, and BLER of **GAP** results of \(\mathsf{CLN}\) and \(\mathsf{VZW}\) @ \(\mathsf{B}\), when \(\mathsf{CLN}\) is using Sa1. We observe a higher MCS and correspondingly, a slightly higher BLER on the single case. Additionally, we observed a median DL BLER of 0.1-0.12 on all cases. Therefore, these throughput degradations are not caused by interference, but network variations.
## V Conclusions and Future Work
This paper presents the first comprehensive measurement-based analysis of the effect of mutual OOB interference on throughput and latency between CBRS and C-band when the two systems are deployed in adjacent channels with and without a guard band, using both spectrum analyzer based power measurements and detailed throughput and error analyses. It is clear that the combination of no guard bands, power difference and lack of TDD synchronization pose obstacles in attaining the high throughputs expected of both CBRS and C-band deployments. When a 20 MHz gap is added between the CBRS and C-band channels, we observe a reduction in throughput degradation: from 60% degradation to 21% on CBRS and from 43% to 30% on C-band: this clearly indicates the impact of OOB interference.
There are many potential solutions that can minimize the effect of coexistence between CBRS and C-band: (i) having a static or dynamic guard band based on interfered resource block allocation, (ii) reducing the C-band transmission power on resource blocks that are adjacent to the CBRS frequency, and (iii) common TDD configuration between C-band and CBRS. Some of these solutions will require changes to the standard. We plan to study the above solutions with analysis and simulations, and further experiments using software defined radios to capture raw I/Q data on both systems for further detailed analysis on interference profiles in the frequency domain. We also plan to study coexistence scenarios that we have not yet explored, _e.g.,_ coexistence between outdoor CBRS and outdoor C-band, and between 5G CBRS and 5G C-band.
## Acknowledgment
We gratefully acknowledge the Facilities office at the University of Chicago for providing access to the Harper Court building.
|
2303.02997 | The Complexity of Geodesic Spanners | A geometric $t$-spanner for a set $S$ of $n$ point sites is an edge-weighted
graph for which the (weighted) distance between any two sites $p,q \in S$ is at
most $t$ times the original distance between $p$ and~$q$. We study geometric
$t$-spanners for point sets in a constrained two-dimensional environment $P$.
In such cases, the edges of the spanner may have non-constant complexity.
Hence, we introduce a novel spanner property: the spanner complexity, that is,
the total complexity of all edges in the spanner. Let $S$ be a set of $n$ point
sites in a simple polygon $P$ with $m$ vertices. We present an algorithm to
construct, for any fixed integer $k \geq 1$, a $2\sqrt{2}k$-spanner with
complexity $O(mn^{1/k} + n\log^2 n)$ in $O(n\log^2n + m\log n + K)$ time, where
$K$ denotes the output complexity. When we relax the restriction that the edges
in the spanner are shortest paths, such that an edge in the spanner can be any
path between two sites, we obtain for any constant $\varepsilon \in (0,2k)$ a
relaxed geodesic $(2k + \varepsilon)$-spanner of the same complexity, where the
constant is dependent on $\varepsilon$. When we consider sites in a polygonal
domain $P$ with holes, we can construct a relaxed geodesic $6k$-spanner of
complexity $O(mn^{1/k} + n\log^2 n)$ in $O((n+m)\log^2n\log m+ K)$ time.
Additionally, for any constant $\varepsilon \in (0,1)$ and integer constant $t
\geq 2$, we show a lower bound for the complexity of any
$(t-\varepsilon)$-spanner of $\Omega(mn^{1/(t-1)} + n)$. | Sarita de Berg, Marc van Kreveld, Frank Staals | 2023-03-06T09:49:15Z | http://arxiv.org/abs/2303.02997v2 | # The Complexity of Geodesic Spanners
###### Abstract
A _geometric \(t\)-spanner_ for a set \(S\) of \(n\) point sites is an edge-weighted graph for which the (weighted) distance between any two sites \(p,q\in S\) is at most \(t\) times the original distance between \(p\) and \(q\). We study geometric \(t\)-spanners for point sets in a constrained two-dimensional environment \(P\). In such cases, the edges of the spanner may have non-constant complexity. Hence, we introduce a novel spanner property: the spanner _complexity_, that is, the total complexity of all edges in the spanner. Let \(S\) be a set of \(n\) point sites in a simple polygon \(P\) with \(m\) vertices. We present an algorithm to construct, for any constant \(\varepsilon>0\) and fixed integer \(k\geq 1\), a \((2k+\varepsilon)\)-spanner with complexity \(O(mn^{1/k}+n\log^{2}n)\) in \(O(n\log^{2}n+m\log n+K)\) time, where \(K\) denotes the output complexity. When we consider sites in a polygonal domain \(P\) with holes, we can construct such a \((2k+\varepsilon)\)-spanner of similar complexity in \(O(n^{2}\log m+nm\log m+K)\) time. Additionally, for any constant \(\varepsilon\in(0,1)\) and integer constant \(t\geq 2\), we show a lower bound for the complexity of any \((t-\varepsilon)\)-spanner of \(\Omega(mn^{1/(t-1)}+n)\).
spanner, simple polygon, polygonal domain, geodesic distance, complexity
## 1 Introduction
In the design of networks on a set of nodes, we often consider two criteria: few connections between the nodes, and small distances. Spanners are geometric networks on point sites that replace the small distance criterion by a small detour criterion. Formally, a _geometric \(t\)-spanner_ for a set \(S\) of \(n\) point sites is an edge-weighted graph \(\mathcal{G}=(S,E)\) for which the (weighted) distance \(d_{\mathcal{G}}(p,q)\) between any two sites \(p,q\in S\) is at most \(t\cdot d(p,q)\), where \(d(p,q)\) denotes the distance between \(p\) and \(q\) in the distance metric we consider [32]. The smallest value \(t\) for which a graph \(\mathcal{G}\) is a \(t\)-spanner is called the _spanning ratio_ of \(\mathcal{G}\). The number of edges in the spanner is called the _size_ of the spanner.
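For intuition, the brute-force sketch below computes the spanning ratio of a given graph directly from this definition, using plain Euclidean distances between the sites; it does not model the geodesic distances studied in this paper, and it assumes the input graph is connected. The function name and the use of networkx for shortest paths are our own choices.

```python
import itertools, math
import networkx as nx

def spanning_ratio(points, edges):
    """points: list of (x, y) sites; edges: list of index pairs (i, j)."""
    g = nx.Graph()
    g.add_nodes_from(range(len(points)))
    for i, j in edges:
        g.add_edge(i, j, weight=math.dist(points[i], points[j]))
    # all-pairs shortest path lengths d_G(p, q) in the weighted graph
    dist = dict(nx.all_pairs_dijkstra_path_length(g, weight="weight"))
    ratio = 1.0
    for i, j in itertools.combinations(range(len(points)), 2):
        ratio = max(ratio, dist[i][j] / math.dist(points[i], points[j]))
    return ratio
```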
In the real world, spanners are often constructed in some sort of environment. For example, we might want to connect cities by a railway network, where the tracks should avoid obstacles such as mountains or lakes. One way to model such an environment is by a polygonal domain. In this paper, we study the case where the sites lie in a polygonal domain \(P\) with \(m\) vertices and \(h\) holes, and we measure the distance between two points \(p,q\) by their _geodesic distance_: the length of the shortest path between \(p\) and \(q\) fully contained within \(P\). An example of such a spanner is provided in Figure 1.
The spanning ratio and the size of spanners are not the only properties of interest. Many different properties have been studied, such as total weight (or lightness), maximum degree, (hop) diameter, and fault-tolerance [4, 9, 11, 14, 20, 29, 30, 35]. When we consider distance metrics for which the edges in the spanner no longer have constant complexity, another interesting property of spanners arises: the spanner _complexity_, i.e. the total complexity of all edges in the spanner. In our railway example, this corresponds to the total number of bends in the tracks. A spanner with a low number of bends may be desired, as trains can
drive faster on straight tracks, and it makes construction cheaper. In this paper, we study this novel property for point sites in a polygonal domain, where the complexity of an edge is simply the number of line segments in the path. In this setting, a single edge may have complexity \(\Theta(m)\). Naively, a spanner of size \(E\) could thus have complexity \(\Theta(mE)\). Our goal is to compute an \(O(1)\)-spanner of size \(O(n\operatorname{polylog}n)\) with small complexity, preferably near linear in both \(n\) and \(m\).
When studying spanning trees of points, two variants exist: with or without Steiner points. The same is true for spanners, where Steiner points can be used to obtain lighter and sparser spanners [6, 29]. In this paper we focus on the variant where Steiner points are _not_ allowed, leaving the other variant to future research.
Related work. For the Euclidean distance in \(\mathbb{R}^{d}\), and any fixed \(\varepsilon>0\), there is a \((1+\varepsilon)\)-spanner of size \(O(n/\varepsilon^{d-1})\)[35]. For the more general case of metric spaces of bounded doubling dimension, we can also construct a \((1+\varepsilon)\)-spanner of size \(O(n/\varepsilon^{O(d)})\)[13, 22, 25]. These results do not apply when the sites lie in a polygon and we measure their distances using the geodesic distance. Abam et al. [1] show there is a set of \(n\) sites in a simple polygon \(P\) for which any geodesic \((2-\varepsilon)\)-spanner has \(\Omega(n^{2})\) edges. They also construct a geodesic \((\sqrt{10}+\varepsilon)\)-spanner of size \(O(n\log^{2}n)\) for sites in a simple polygon, and a geodesic \((5+\varepsilon)\)-spanner of size \(O(n\sqrt{h}\log^{2}n)\) for sites in a polygonal domain. Recently, Abam et al. [3] showed that a geodesic \((2+\varepsilon)\)-spanner with \(O(n\log n)\) edges exists for points on a polyhedral terrain, thereby almost closing the gap between the upper and lower bound on the spanning ratio. However, they show only the existence of such a spanner, and leave constructing one open. Moreover, all of these spanners can have high, \(\Omega(nm)\), complexity.
Abam et al. [3] make use of spanners on an _additively weighted_ point set in \(\mathbb{R}^{d}\). In this setting, the distance between two sites \(p,q\) is \(w(p)+|pq|+w(q)\) for \(p\neq q\), where \(w(p)\) is the non-negative weight of a site \(p\in S\) and \(|pq|\) denotes the Euclidean distance, and \(0\) for \(p=q\). Such additively weighted spanners were studied before by Abam et al. [2], who obtain an \(O(5+\varepsilon)\)-spanner of linear size, and an \(O(2+\varepsilon)\)-spanner of size \(O(n\log n)\). They also provide a lower bound of \(\Omega(n^{2})\) on the size of any \((2-\varepsilon)\)-spanner. Abam et al. [3] improve these results and obtain a nearly optimal additively weighted \((2+\varepsilon)\)-spanner of size \(O(n)\).
The other key ingredient for the geodesic \((2+\varepsilon)\)-spanner of Abam et al. [3] is a _balanced shortest-path separator_. Such a separator consists of either a single shortest path between two points on the boundary of the terrain, or three shortest paths that form a _shortest-path
Figure 1: A spanner on a set of point sites in a polygonal domain. Because of the orange edges, the spanner has a relatively high complexity.
triangle_. This separator partitions the terrain into two subterrains, and we call it balanced when each of these terrains contains roughly half of the sites in \(S\). In their constructive proof for the existence of such a balanced separator, they assume that the three shortest paths in a shortest-path triangle are disjoint, except for their mutual endpoints. However, during their construction it can actually happen that these paths are _not_ disjoint. When this happens, it is unclear exactly how to proceed. Just like for the \((2+\varepsilon)\)-spanner, the computation of a balanced separator is left for future research. We show how to get rid of the assumption that the shortest paths are disjoint, and thereby confirm the result claimed by Abam et al. [3].
Next to spanners on the complete Euclidean geometric graph, spanners under line segment constraints were studied [8, 10, 12, 16, 17]. In this setting, a set \(C\) of line segment constraints is provided, where each line segment is between two sites in \(S\) and no two line segments properly intersect. The goal is to construct a spanner on the _visibility graph_ of \(S\) with respect to \(C\). Clarkson [16] showed how to construct a linear sized \((1+\varepsilon)\)-spanner for this graph. Later, (constrained) Yao- and \(\Theta\)-graphs were also considered in this setting [8, 12]. If the segments in \(C\) form a polygonal domain \(P\), this setting is similar to ours, except that _all_ vertices of \(P\) are included as sites in \(S\). Thus the complexity of each edge is constant, and additionally it is required that there are short paths between the vertices of \(P\).
Low complexity paths are studied in the _minimum-link path_ problem. In this problem, the goal is to find a path that uses a minimal number of links (edges) between two sites in a domain, for example a simple polygon [34, 36, 21, 26]. Generally, this problem focuses only on the complexity of the path, with no restriction on the length of the path. Mitchell et al. [33] consider the related problem of finding the _shortest_ path with at most \(k\) edges between two points \(p,q\) in a simple polygon. They give an algorithm to compute a \(k\)-link path with length at most \((1+\varepsilon)\) times the length of the shortest \(k\)-link path, for any \(\varepsilon>0\). This result cannot be applied to our setting, as the length of our paths should be bounded in terms of \(d(p,q)\), i.e. the shortest \(m\)-link path, instead of the shortest \(k\)-link path.
Our results. We first consider the simple setting where the sites lie in a simple polygon, i.e. a polygonal domain without holes. We show that in this setting any \((3-\varepsilon)\)-spanner may have complexity \(\Omega(nm)\), thus implying that the \((2+\varepsilon)\)-spanner of Abam et al. [3] may also have complexity \(\Omega(nm)\), despite having \(O(n\log n)\) edges.
To improve this complexity, we first introduce a simple 2-spanner with \(O(n\log n)\) edges for an additively weighted point set in a 1-dimensional Euclidean space; see Section 2. In Section 3, we use this result to obtain a geodesic \(2\sqrt{2}\)-spanner with \(O(n\log^{2}n)\) edges for a point set in a simple polygon. We recursively split the polygon by a chord \(\lambda\) such that each subpolygon contains roughly half of the sites, and build a 1-dimensional spanner on the sites projected to \(\lambda\). We then extend this spanner into one that also has bounded complexity. For any constant \(\varepsilon>0\) and fixed integer \(k\geq 1\), we obtain a \((2k+\varepsilon)\)-spanner with complexity \(O(mn^{1/k}+n\log^{2}n)\). Furthermore, we provide an algorithm to compute such a spanner that runs in \(O(n\log^{2}n+m\log n+K)\) time, where \(K\) denotes the output complexity. When we output each edge explicitly, \(K\) is equal to the spanner complexity. However, as each edge is a shortest path, we can also output an edge implicitly by only stating the two sites it connects. In this case \(K\) is equal to the size of the spanner.
In Sections 4 and 5, we extend our results for a simple polygon to a polygonal domain. There are two significant difficulties in this transition: _(i)_ we can no longer partition the polygon by a line segment such that each subpolygon contains roughly half of the sites, and _(ii)_ the shortest path between two sites \(p,q\) may not be homotopic to the path from \(p\) to \(q\) via another site \(c\).
We solve problem _(i)_ by using a shortest-path separator similar to Abam et al. [3]. To apply the shortest-path separator in a polygonal domain, we need new additional ideas, which we discuss in Section 4. In particular, we allow one additional type of separator in our version of a shortest-path separator: two shortest paths from a point in \(P\) to the boundary of a single hole. We show that this way there indeed always exists such a separator in a polygonal domain, and provide an \(O(n^{2}\log m+nm\log m)\) time algorithm to compute one.
To overcome problem _(ii)_, we allow an edge \((p,q)\) to be any path from \(p\) to \(q\). In networks, the connections between two nodes are often not necessarily optimal paths, the only requirement being that the distance between two hubs does not become too large. Thus allowing other paths between two sites seems a reasonable relaxation. This way, we obtain a geodesic \((2k+\varepsilon)\)-spanner of size \(O(n\log^{2}n)\) and complexity \(O(mn^{1/k}+n\log^{2}n)\) that can be computed in \(O(n^{2}\log m+nm\log m+K)\) time. Because our edges always consist of at most three shortest paths, we can again output the edges implicitly in \(O(n\log^{2}n)\) time. We also provide an alternative \((2k+\varepsilon)\)-spanner of size \(O(\sqrt{h}n\log^{2}n)\) and complexity \(O(\sqrt{h}(mn^{1/k}+n\log^{2}n))\) that can be constructed more efficiently, i.e., in \(O(\sqrt{h}n\log^{2}n+m\log m+K)\) time.
Finally, in Section 6, we provide lower bounds on the complexity of geodesic spanners. For any constant \(\varepsilon\in(0,1)\) and integer constant \(t\geq 2\), we show a lower bound of \(\Omega(mn^{1/(t-1)}+n)\) on the complexity of a \((t-\varepsilon)\)-spanner in a simple polygon. Therefore, the \((2k+\varepsilon)\) spanning ratio of our spanners of complexity \(O(mn^{1/k}+n\log^{2}n)\) is roughly a factor of two from optimal. For the case of a \((3-\varepsilon)\)-spanner, we prove an even stronger lower bound of \(\Omega(nm)\).
Throughout the paper, we make the general position assumption that all vertices of \(P\) and sites in \(S\) have distinct \(x\)- and \(y\)-coordinates. Symbolic perturbation, in particular a shear transformation, can be used to remove this assumption [18].
## 2 A 1-dimensional additively weighted 2-spanner
We consider how to compute an additively weighted spanner \(\mathcal{G}\) in 1-dimensional Euclidean space, where each site \(p\in S\) has a non-negative weight \(w(p)\). The distance \(d_{w}(p,q)\) between two sites \(p,q\in S\) is given by \(d_{w}(p,q)=w(p)+|pq|+w(q)\), where \(|pq|\) denotes the Euclidean distance. Without loss of generality, we can map \(\mathbb{R}^{1}\) to the \(x\)-axis, and the weights to the \(y\)-axis, see Figure 2. This allows us to speak of the sites left (or right) of some site \(p\).
To construct a spanner \(\mathcal{G}\), we first partition the sites into two sets \(S_{\ell}\) and \(S_{r}\) of roughly equal size by a point \(O\) with \(w(O)=0\). The set \(S_{\ell}\) contains all sites left of \(O\), and \(S_{r}\) all sites right of \(O\). Sites that lie on the vertical line through \(O\) are not included in either of the sets. We then find a site \(c\in S\) for which \(d_{w}(c,O)\) is minimal. For all \(p\in S\), \(p\neq c\), we add the edge \((p,c)\) to \(\mathcal{G}\). Finally, we handle the sets \(S_{\ell}\) and \(S_{r}\), excluding the site \(c\), recursively.
Figure 2: Construction of the additively weighted 1-dimensional spanner. The green triangle represents all points that are at distance at most \(d_{w}(c,O)\) from \(O\).
**Lemma 1**.: _The graph \(\mathcal{G}\) is a 2-spanner of size \(O(n\log n)\) and can be constructed in \(O(n\log n)\) time._
Proof.: As we add \(O(n)\) edges in each level of the recursion, the total number of edges in \(\mathcal{G}\) is \(O(n\log n)\). Consider two sites \(p,q\in S\). Let \(c\) be the chosen center at the level of the recursion where \(p\) and \(q\) are assigned to different subsets \(S_{\ell}\) and \(S_{r}\). Assume w.l.o.g. that \(p\in S_{\ell}\) and \(q\in S_{r}\). Note that, because \(p\in S_{\ell}\) and \(q\in S_{r}\) we have \(d_{w}(p,q)=d_{w}(p,O)+d_{w}(q,O)\). Furthermore, \(d_{w}(c,O)\leq d_{w}(p,O)\) and \(d_{w}(c,O)\leq d_{w}(q,O)\), by the choice of \(c\). Because both edges \((p,c)\) and \((q,c)\) are in \(\mathcal{G}\), we get for \(d_{\mathcal{G}}(p,q)\):
\[d_{\mathcal{G}}(p,q)\leq d_{w}(p,O)+2d_{w}(c,O)+d_{w}(q,O)\leq 2d_{w}(p,O)+2d_{ w}(q,O)=2d_{w}(p,q). \tag{1}\]
When there is no such center point, so \(p\) or \(q\) lies on the vertical line through \(O\) at some level, then it still holds for this level that \(d_{w}(p,q)=d_{w}(p,O)+d_{w}(q,O)\). Equation (1) again gives that \(d_{\mathcal{G}}(p,q)\leq 2d_{w}(p,q)\).
We can find a point \(O\) that separates the points into two sets \(S_{\ell}\) and \(S_{r}\) of equal size at each level of the recursion in linear time. Additionally, a linear number of edges is added to the spanner at each level. The running time is thus \(O(n\log n)\).
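To make this construction concrete, the following Python sketch (illustrative only; the function names and the explicit edge list are ours, not part of the paper) builds the 2-spanner of Lemma 1 by recursively splitting at a median point \(O\) with \(w(O)=0\) and connecting every site to the site \(c\) minimizing \(d_{w}(c,O)\).

```python
from typing import List, Tuple

Site = Tuple[float, float]  # (coordinate on the line, non-negative weight)

def weighted_distance(p: Site, q: Site) -> float:
    """Additively weighted distance d_w(p, q) = w(p) + |pq| + w(q)."""
    return p[1] + abs(p[0] - q[0]) + q[1]

def one_dim_spanner(sites: List[Site]) -> List[Tuple[Site, Site]]:
    """2-spanner of Lemma 1 for additively weighted sites on a line.

    Split the sites by a point O with w(O) = 0, connect every site to the
    site c minimizing d_w(c, O), and recurse on both halves without c."""
    if len(sites) <= 1:
        return []
    sites = sorted(sites)                    # order along the line
    O = (sites[len(sites) // 2][0], 0.0)     # splitting point, weight 0
    c = min(sites, key=lambda p: weighted_distance(p, O))
    edges = [(p, c) for p in sites if p is not c]
    left = [p for p in sites if p[0] < O[0] and p is not c]
    right = [p for p in sites if p[0] > O[0] and p is not c]
    return edges + one_dim_spanner(left) + one_dim_spanner(right)

if __name__ == "__main__":
    pts = [(0.0, 1.0), (2.0, 0.5), (3.0, 2.0), (7.0, 0.0), (9.0, 1.5)]
    for p, q in one_dim_spanner(pts):
        print(p, "--", q, "  d_w =", weighted_distance(p, q))
```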
## 3 Spanners in a simple polygon
### 3.1 A simple geodesic spanner
Just like Abam et al. [3], we use our 1-dimensional spanner to construct a geodesic spanner. We are more interested in the simplicity of the spanner than its spanning ratio, as we base our low complexity spanners, to be discussed in Section 3.2, on this simple geodesic spanner. Let \(P\) be a simple polygon, and let \(\partial P\) denote the polygon boundary. We denote by \(d(p,q)\) the geodesic distance between \(p,q\in P\), and by \(\pi(p,q)\) the shortest (geodesic) path from \(p\) to \(q\). We analyze the simple construction using any 1-dimensional additively weighted \(t\)-spanner of size \(O(n\log n)\). We show that restricting the domain to a simple polygon improves the spanning ratio from \(3t\) to \(\sqrt{2}t\). The construction can be refined to achieve a spanning ratio of \(t+\varepsilon\), see Section 3.1.1.
As in [1] and [3], we first partition \(P\) into two subpolygons \(P_{\ell}\) and \(P_{r}\) by a line segment \(\lambda\), such that each subpolygon contains at most two thirds of the sites in \(S\)[7]. We assume, without loss of generality, that \(\lambda\) is a vertical line segment and \(P_{\ell}\) is left of \(\lambda\). Let \(S_{\ell}\) be the sites in the closed region \(P_{\ell}\), and \(S_{r}:=S\setminus S_{\ell}\). For each site \(p\in S\), we then find the point \(p_{\lambda}\) on \(\lambda\) closest to \(p\). Note that this point is unique, because the shortest path to a line segment is unique in a simple polygon. We denote by \(S_{\lambda}\) the set of all projected sites. As \(\lambda\) is a line segment, we can define a weighted 1-dimensional Euclidean space on \(\lambda\), where \(w(p_{\lambda}):=d(p,p_{\lambda})\) for each \(p_{\lambda}\in S_{\lambda}\). We compute a \(t\)-spanner \(\mathcal{G}_{\lambda}=(S_{\lambda},E_{\lambda})\) for this set. For each pair \((p_{\lambda},q_{\lambda})\in E_{\lambda}\), we add the edge \((p,q)\), which is \(\pi(p,q)\), to our spanner \(\mathcal{G}\). Finally, we recursively compute spanners for \(S_{\ell}\) and \(S_{r}\), and add their edges to \(\mathcal{G}\) as well.
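The sketch below (ours, not from the paper) carries out one level of this recursion. Computing geodesic projections onto \(\lambda\) in a general simple polygon requires the machinery of Lemma 13; as a stand-in we assume the polygon is convex, so that geodesic distances are Euclidean and the closest point on \(\lambda\) is the orthogonal projection clamped to the chord. Edges are reported as index pairs, each of which stands for the shortest path \(\pi(p,q)\).

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def project_to_vertical_chord(p: Point, x_chord: float,
                              y_lo: float, y_hi: float) -> Point:
    """Closest point on the chord to p -- Euclidean stand-in for the geodesic
    projection, valid e.g. when the polygon is convex."""
    return (x_chord, min(max(p[1], y_lo), y_hi))

def one_level_spanner_edges(sites: List[Point], x_chord: float,
                            y_lo: float, y_hi: float) -> List[Tuple[int, int]]:
    """One recursion level of the Section 3.1 construction (sketch).

    Project every site onto the chord, weight the projection by its distance
    to the site, build the 1-dimensional 2-spanner of Section 2 on the
    weighted projections, and report the corresponding site-index pairs."""
    proj = []  # (coordinate along the chord, weight, index of the site)
    for i, p in enumerate(sites):
        q = project_to_vertical_chord(p, x_chord, y_lo, y_hi)
        proj.append((q[1], math.dist(p, q), i))

    def dw(a, b):  # additively weighted distance along the chord
        return a[1] + abs(a[0] - b[0]) + b[1]

    def spanner(pts):
        if len(pts) <= 1:
            return []
        pts = sorted(pts)
        O = (pts[len(pts) // 2][0], 0.0)
        c = min(pts, key=lambda a: dw(a, O))
        edges = [(p[2], c[2]) for p in pts if p is not c]
        edges += spanner([p for p in pts if p[0] < O[0] and p is not c])
        edges += spanner([p for p in pts if p[0] > O[0] and p is not c])
        return edges

    return spanner(proj)

if __name__ == "__main__":
    pts = [(0.0, 0.0), (1.0, 4.0), (3.0, 1.0), (4.0, 3.0), (6.0, 2.0)]
    print(one_level_spanner_edges(pts, x_chord=2.0, y_lo=0.0, y_hi=5.0))
```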
**Lemma 2**.: _The graph \(\mathcal{G}\) is a geodesic \(\sqrt{2}t\)-spanner of size \(O(n\log^{2}n)\)._
Proof.: As \(\mathcal{G}_{\lambda}\) has \(O(n\log n)\) edges (Lemma 1) that directly correspond to edges in \(\mathcal{G}\), and the recursion has \(O(\log n)\) levels, we have \(O(n\log^{2}n)\) edges in total. Let \(p,q\) be two sites in \(S\). If both are in \(S_{\ell}\) (or \(S_{r}\)), then there is a path of length at most \(\sqrt{2}td(p,q)\) by induction. So, we assume w.l.o.g. that \(p\in S_{\ell}\) and \(q\in S_{r}\). Let \(r\) be the intersection point of \(\pi(p,q)\) and \(\lambda\). Observe that \(p_{\lambda}\) and \(q_{\lambda}\) must be on opposite sides of \(r\), otherwise \(r\) cannot be on the shortest path. We assume, without loss of generality, that \(p_{\lambda}\) is above \(r\) and \(q_{\lambda}\) below \(r\). Because
\(\mathcal{G}_{\lambda}\) is a \(t\)-spanner, we know that there is a weighted path from \(p_{\lambda}\) to \(q_{\lambda}\) of length at most \(td_{w}(p_{\lambda},q_{\lambda})\). As \(w(p_{\lambda})=d(p,p_{\lambda})\), this directly corresponds to a path in the polygon. So,
\[d_{\mathcal{G}}(p,q)\leq d_{\mathcal{G}_{\lambda}}(p_{\lambda},q_{\lambda})\leq td _{w}(p_{\lambda},q_{\lambda})=t(d(p,p_{\lambda})+|p_{\lambda}r|+|rq_{\lambda}|+d (q_{\lambda},q)). \tag{2}\]
Let \(z\) be the point where the shortest paths from \(p\) to \(p_{\lambda}\) and \(r\) separate. See Figure 3 for an illustration. Consider the right triangle \(\mathcal{T}=(z,z^{\prime},r)\), where \(z^{\prime}\) is the intersection point of the line perpendicular to \(\lambda\) through \(z\) and the line containing \(\lambda\). Note that \(z^{\prime}\) does not necessarily lie within \(P\). For this triangle we have that
\[|zr|\geq\frac{\sqrt{2}}{2}(|zz^{\prime}|+|z^{\prime}r|). \tag{3}\]
Next, we show that the path from \(z\) to \(p_{\lambda}\) is a \(y\)-monotone convex polygonal chain ending at or below \(z^{\prime}\). Consider the vertical ray through \(z\) upwards to the polygon boundary. We call the part of \(\partial P\) between where the ray hits \(\partial P\) and \(\lambda\) the _top_ part of \(\partial P\). Similarly, for a downwards ray, we define the _bottom_ part of \(\partial P\). There are no vertices on \(\pi(z,p_{\lambda})\) from the bottom part of \(\partial P\), because such a vertex would then also occur on the shortest path to \(r\). This is in contradiction with the definition of \(z\). If \(z\) sees \(z^{\prime}\), then \(p_{\lambda}=z^{\prime}\), otherwise the chain must bend at one or more vertices of the top part of \(\partial P\), and thus lie below \(z^{\prime}\). It follows that \(\pi(z,p_{\lambda})\) is contained within \(\mathcal{T}\). Similarly, we conclude that \(\pi(z,r)\) is contained within \(\mathcal{T}\). Additionally, this gives us that \(d(z,p_{\lambda})\leq|zz^{\prime}|+|z^{\prime}p_{\lambda}|\), and \(d(z,r)\geq|zr|\). Together with Equation (3) this yields \(d(z,p_{\lambda})+|p_{\lambda}r|\leq|zz^{\prime}|+|z^{\prime}r|\leq\sqrt{2}|zr| \leq\sqrt{2}d(z,r)\). And thus
\[d(p,p_{\lambda})+|p_{\lambda}r|=d(p,z)+d(z,p_{\lambda})+|p_{\lambda}r|\leq d( p,z)+\sqrt{2}d(z,r)\leq\sqrt{2}d(p,r).\]
Symmetrically, we find for \(q\) that \(d(q,q_{\lambda})+|q_{\lambda}r|\leq\sqrt{2}d(q,r)\). From this, together with Equation (2), we conclude that \(d_{\mathcal{G}}(p,q)\leq t\left(\sqrt{2}d(p,r)+\sqrt{2}d(r,q)\right)=\sqrt{2} td(p,q)\).
Applying Lemma 2 to the spanner of Section 2 yields a \(2\sqrt{2}\)-spanner of size \(O(n\log^{2}n)\).
#### 3.1.1 A refinement to obtain a \((t+\varepsilon)\)-spanner
Abam et al. [3] refine their spanner construction to obtain a \((2+\varepsilon)\)-spanner for any constant \(\varepsilon>0\). In the following lemma, we apply their refinement to the construction of the spanner proposed in Section 3.1 and obtain a \((t+\varepsilon)\)-spanner. For the rest of the paper, we generally discuss the results based on the simple spanner from Section 3.1, as this is somewhat easier to follow, and apply the refinement only in the final results.

Figure 3: The shortest path \(\pi(p,q)\) crosses \(\lambda\) at \(r\). The difference in length between the direct path from \(z\) to \(r\) and the path through \(p_{\lambda}\) can be bounded by considering the triangle \(\mathcal{T}=(z,z^{\prime},r)\).
**Lemma 3**.: _Using any 1-dimensional additively weighted \(t\)-spanner of size \(O(n\log n)\), we can construct a \((t+\varepsilon)\)-spanner of size \(O(c_{\varepsilon,t}n\log^{2}n)\) for a set \(S\) of \(n\) point sites that lie in a simple polygon \(P\), where \(c_{\varepsilon,t}\) is a constant depending on \(\varepsilon\) and \(t\). The number of sites used to construct the 1-dimensional spanner is \(O(c_{\varepsilon,t}n)\)._
Proof.: In the refined construction, instead of adding only a single point \(p_{\lambda}\) to \(S_{\lambda}\) for each site \(p\), we additionally add a collection of \(O(1/\delta^{2})\) points on \(\lambda\) "close" to \(p_{\lambda}\) to \(S_{\lambda}\), where \(\delta\) is a constant depending on \(\varepsilon\). These additional points all lie within distance \((1+2/\delta)\cdot d(p,p_{\lambda})\) of \(p_{\lambda}\). The points are roughly equally spaced on the line segment within this distance. To be precise, the segment is partitioned into \(O(1/\delta^{2})\) pieces of length \(\delta\cdot d(p,p_{\lambda})\), and for each piece \(i\) the point \(p_{\lambda}^{(i)}\) closest to \(p\) is added to \(S_{\lambda}\). The weight of each point is again chosen as the geodesic distance to \(p\). The 1-dimensional spanner(s) \(\mathcal{G}_{\lambda}\) is then computed on this larger set \(S_{\lambda}\). For each edge \((p_{\lambda}^{(i)},q_{\lambda}^{(j)})\) in \(\mathcal{G}_{\lambda}\), we again add the edge \((p,q)\) to the final spanner \(\mathcal{G}\).
Abam et al. prove that for each \(p,q\in S\), there are points \(p_{\lambda}^{(i)},q_{\lambda}^{(j)}\in S_{\lambda}\) such that \(d_{w}(p_{\lambda}^{(i)},q_{\lambda}^{(j)})\leq(1+\delta)\cdot d(p,q)\). As \(\mathcal{G}_{\lambda}\) is a \(t\)-spanner for \(S_{\lambda}\), choosing \(\delta=\varepsilon/t\) implies that \(d_{\mathcal{G}}(p,q)\leq t\cdot d_{w}(p_{\lambda}^{(i)},q_{\lambda}^{(j)})\leq t(1+\delta)d(p,q)=(t+\varepsilon)d(p,q)\). In other words, \(\mathcal{G}\) is a \((t+\varepsilon)\)-spanner of \(S\). Note that the number of edges in the spanner has increased because we use \(O(n/\delta^{2})\) instead of \(O(n)\) points to compute the 1-dimensional spanner. This results in a spanner of size \(O(c_{\varepsilon,t}n\log^{2}n)\), where \(c_{\varepsilon,t}=O(t^{2}/\varepsilon^{2})\) is a constant depending on \(\varepsilon\) and \(t\).
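As an illustration of the refinement, the sketch below (ours) generates the \(O(1/\delta^{2})\) candidate projections for a single site \(p\); it again uses the convex/Euclidean stand-in for the geodesic projection, and all names are ours.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def refinement_points(p: Point, x_chord: float, y_lo: float, y_hi: float,
                      delta: float) -> List[Point]:
    """Candidate projections of a site p used by the refinement of Lemma 3.

    The part of the chord within distance (1 + 2/delta) * d(p, p_lambda) of
    p_lambda is cut into pieces of length delta * d(p, p_lambda), and from
    every piece the point closest to p is kept.  Euclidean stand-in for the
    geodesic projection (valid, e.g., in a convex polygon)."""
    p_lam_y = min(max(p[1], y_lo), y_hi)        # closest point on the chord
    d = math.dist(p, (x_chord, p_lam_y))
    if d == 0.0:
        return [(x_chord, p_lam_y)]
    lo = max(y_lo, p_lam_y - (1.0 + 2.0 / delta) * d)
    hi = min(y_hi, p_lam_y + (1.0 + 2.0 / delta) * d)
    piece = delta * d                           # length of one piece
    points, y = [], lo
    while y < hi:
        y_next = min(y + piece, hi)
        points.append((x_chord, min(max(p[1], y), y_next)))  # closest to p
        y = y_next
    return points or [(x_chord, p_lam_y)]
```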
### 3.2 Low complexity geodesic spanners
In general, a geodesic spanner \(\mathcal{G}=(S,E)\) in a simple polygon with \(m\) vertices may have complexity \(O(m|E|)\). It is easy to see that the \(2\sqrt{2}\)-spanner of Section 3.1 can have complexity \(\Omega(nm)\), just like the spanners in [3]. As one of the sites, \(c\), is connected to all other sites, the polygon in Figure 4 provides this lower bound. The construction in Figure 4 even shows that the same lower bound holds for the complexity of any \((3-\varepsilon)\)-spanner. Additionally, the following theorem implies a trade-off between the spanning ratio and the spanner complexity.
For any constant \(\varepsilon\in(0,1)\) and integer constant \(t\geq 2\), there exists a set of \(n\) point sites in a simple polygon \(P\) with \(m=\Omega(n)\) vertices for which any geodesic \((t-\varepsilon)\)-spanner has complexity \(\Omega(mn^{1/(t-1)})\).
The proofs of these lower bounds are in Section 6. Next, we present a spanner that almost matches this bound. We first present a \(4\sqrt{2}\)-spanner of bounded complexity, and then generalize the approach to obtain a \((2k+\varepsilon)\)-spanner of complexity \(O(mn^{1/k}+n\log^{2}n)\), for any integer \(k\geq 2\).
#### 3.2.1 A \(4\sqrt{2}\)-spanner of complexity \(O(m\sqrt{n}+n\log^{2}n)\)
To improve the complexity of the geodesic spanner, we adapt our construction for the additively weighted spanner \(\mathcal{G}_{\lambda}\) as follows. After finding the site \(c_{\lambda}\in S_{\lambda}\) for which \(d_{w}(c_{\lambda},O)\) is minimal, we do not add all edges \((p_{\lambda},c_{\lambda})\), \(p_{\lambda}\in S_{\lambda}\), to \(\mathcal{G}_{\lambda}\). Instead, we form groups of sites whose original sites (before projection) are "close" to each other in \(P\). For each group \(S_{i}\), we add all edges \((p_{\lambda},c_{i,\lambda})\), \(p_{\lambda}\in S_{i}\), to \(\mathcal{G}_{\lambda}\), where \(c_{i,\lambda}\) is the site in \(S_{i}\) for which \(d_{w}(c_{i,\lambda},O)\) is minimal. Finally, we add all edges \((c_{i,\lambda},c_{\lambda})\) to \(\mathcal{G}_{\lambda}\).
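A minimal sketch of the edges added in one level of this grouped 1-dimensional spanner is given below; the grouping itself (discussed next) is supplied by the caller, and the function names are ours.

```python
from typing import List, Tuple

WSite = Tuple[float, float]  # (coordinate on the chord, weight)

def dw(p: WSite, q: WSite) -> float:
    """Additively weighted distance d_w(p, q)."""
    return p[1] + abs(p[0] - q[0]) + q[1]

def grouped_level_edges(groups: List[List[WSite]],
                        O: WSite) -> List[Tuple[WSite, WSite]]:
    """Edges added in one level of the grouped 1-dimensional spanner.

    Every site is connected to the center of its group (the site minimizing
    d_w(., O) within the group), and every group center is connected to the
    global center c (the site minimizing d_w(., O) over all sites)."""
    all_sites = [p for g in groups for p in g]
    c = min(all_sites, key=lambda p: dw(p, O))
    edges: List[Tuple[WSite, WSite]] = []
    for g in groups:
        c_i = min(g, key=lambda p: dw(p, O))
        edges.extend((p, c_i) for p in g if p is not c_i)
        if c_i is not c:
            edges.append((c_i, c))
    return edges
```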
To make sure the complexity of our spanner does not become too large, we must choose the groups in such a way that the edges in our spanner do not cross "bad" parts of the polygon too often. We achieve this by making groups of roughly equal size, where shortest paths within each group \(S_{i}\) are contained within a region \(R_{i}\) that is (almost) disjoint from the regions \(R_{j}\) of other groups. We first show how to solve a recursion that is later used to bound the complexity of the spanner. The subsequent lemma formally states the properties that we require of our groups, and bounds the complexity of such a spanner.
**Lemma 5**.: _The recursion \(T(n,m)=T(n/2,m_{1})+T(n/2,m_{2})+O(mn^{1/k}+n\log^{c_{1}}n)\), where \(m_{1}+m_{2}=m+c_{2}\) and \(c_{1},c_{2}\) are integer constants, solves to \(T(n)=O(mn^{1/k}+n\log^{c_{1}+1}n)\)._
Proof.: We will write the recursion as a sum over all levels \(i\) and all subproblems \(j\) at each level. Because \(n\) is halved in each level, the recursion has \(O(\log n)\) levels. There are \(2^{i}\) subproblems at level \(i\) of the recursion.
Let \(m_{i,j}\) denote the \(m\) value used in subproblem \(j\) of level \(i\). We consider the sum of the \(m\) values over all subproblems at level \(i\) which we denote by \(M_{i}\), so \(M_{i}=\sum_{j=0}^{2^{i}}m_{i,j}\). We prove by induction that \(M_{i}=m+c_{2}\cdot(2^{i}-1)\) for any \(i\geq 1\). For \(i=1\) this states that \(M_{1}=m+c_{2}\cdot(2^{1}-1)=m+c_{2}\), which is equivalent to \(m_{1}+m_{2}=m+c_{2}\). Suppose that the hypothesis \(M_{i}=m+c_{2}\cdot(2^{i}-1)\) holds for \(i=k\). The \(2^{k+1}\) subproblems at level \(k+1\) consist of \(2^{k+1}/2=2^{k}\) pairs of subproblems \((j_{1},j_{2})\), each generated by a subproblem \(j\) at level \(k\), for which \(m_{k+1,j_{1}}+m_{k+1,j_{2}}=m_{k,j}+c_{2}\). We can thus find \(M_{k+1}\) by summing over these pairs. So, \(M_{k+1}=\sum_{(j_{1},j_{2})}(m_{k,j}+c_{2})=M_{k}+c_{2}\cdot 2^{k}=m+c_{2}\cdot(2 ^{k}-1)+c_{2}\cdot 2^{k}=m+c_{2}\cdot(2^{k+1}-1)\).
We are now ready to formulate the recursion as a summation. For simplicity we use that \(M_{i}\leq m+c_{2}2^{i}\).
Figure 4: Any \((3-\varepsilon)\)-spanner in a simple polygon with \(m\) vertices may have complexity \(\Omega(nm)\).
\[T(n,m) =\sum_{i=0}^{O(\log n)}\sum_{j=0}^{2^{i}}\left(m_{i,j}\left(\frac{n} {2^{i}}\right)^{1/k}+\frac{n}{2^{i}}\log^{c_{1}}\left(\frac{n}{2^{i}}\right)\right)\] \[=\sum_{i=0}^{O(\log n)}\sum_{j=0}^{2^{i}}m_{i,j}\left(\frac{n}{2^{ i}}\right)^{1/k}+\sum_{i=0}^{O(\log n)}\sum_{j=0}^{2^{i}}\frac{n}{2^{i}}\log^{c_{1} }\left(\frac{n}{2^{i}}\right)\] \[=\sum_{i=0}^{O(\log n)}\left(\frac{n}{2^{i}}\right)^{1/k}\sum_{j =0}^{2^{i}}m_{i,j}+O(n\log^{c_{1}+1}n)\] \[\leq\sum_{i=0}^{O(\log n)}\left(\frac{n}{2^{i}}\right)^{1/k}(m+c_ {2}2^{i})+O(n\log^{c_{1}+1}n)\] \[=mn^{1/k}\sum_{i=0}^{O(\log n)}\frac{1}{2^{i/k}}+c_{2}n^{1/k} \sum_{i=0}^{O(\log n)}2^{(1-1/k)i}+O(n\log^{c_{1}+1}n)\] \[=O(mn^{1/k})+c_{2}n^{1/k}\cdot O(n^{1-1/k})+O(n\log^{c_{1}+1}n)\] \[=O(mn^{1/k}+n\log^{c_{1}+1}n).\qed\]
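The following small numeric experiment (not part of the paper; all names are ours) simulates the recursion with an even split of \(m\) and compares it to the claimed bound; the printed ratios stay bounded by a constant as \(n\) and \(m\) grow.

```python
import math

def T(n: int, m: int, k: int = 2, c1: int = 1, c2: int = 4) -> float:
    """Simulate the recursion of Lemma 5 (with an even split of m)."""
    if n <= 1:
        return float(m + 1)
    m1 = m // 2
    m2 = m + c2 - m1
    work = m * n ** (1.0 / k) + n * math.log(n) ** c1
    return T(n // 2, m1, k, c1, c2) + T(n // 2, m2, k, c1, c2) + work

def bound(n: int, m: int, k: int = 2, c1: int = 1) -> float:
    """Claimed bound O(m * n^(1/k) + n * log^(c1+1) n), with constant 1."""
    return m * n ** (1.0 / k) + n * math.log(n) ** (c1 + 1)

if __name__ == "__main__":
    for n, m in [(2 ** 10, 2 ** 12), (2 ** 14, 2 ** 16), (2 ** 17, 2 ** 10)]:
        print(n, m, round(T(n, m) / bound(n, m), 2))  # ratios stay bounded
```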
**Lemma 6**.: _If the groups adhere to the following properties, then \(\mathcal{G}\) has \(O(m\sqrt{n}+n\log^{2}n)\) complexity:_
1. each group contains \(\Theta(\sqrt{n})\) sites, and
2. each vertex of \(P\) is only used by shortest paths within \(O(1)\) groups.
Proof.: We will first prove the complexity of the edges in one level of the 1-dimensional spanner is \(O(m\sqrt{n}+n)\). Two types of edges are added to the spanner: (a) edges from some \(c_{i}\) to \(c\), and (b) edges from some \(p\in S_{i}\) to \(c_{i}\). According to property 1, there are \(\Theta(\sqrt{n})\) groups, and thus \(\Theta(\sqrt{n})\) type (a) edges, that each have a complexity of \(O(m)\). Thus the total complexity of these edges is \(O(m\sqrt{n})\). Let \(r_{i}\) be the maximum complexity of a shortest path between any two sites in \(S_{i}\) and let \(V_{i}\) be the set of vertices this path visits. Property 2 states that for any \(v\in V_{i}\) it holds that \(|\{j\mid v\in V_{j}\}|=O(1)\), which implies that \(\sum_{i}r_{i}=O(m)\). The complexity of all type (b) edges is thus \(O(n)+\sum_{i}r_{i}O(\sqrt{n})=O(m\sqrt{n}+n)\).
Next, we show that in both recursions, the 1-dimensional recursion and the recursion on \(P_{\ell}\) and \(P_{r}\), not only the number of sites, but also the complexity of the polygon is divided over the two subproblems. Splitting the sites into left and right of \(O\) corresponds to splitting the polygon horizontally at \(O\): all sites left (right) of \(O\) in the 1-dimensional space lie in the part of the polygon below (above) this horizontal line segment. Thus, shortest paths between sites left of \(O\) use part of the polygon that is disjoint from the shortest paths between the sites right of \(O\). This means that for two subproblems we have that \(m_{1}+m_{2}=m\), where \(m_{i}\) denotes the maximum complexity of a path in subproblem \(i\). The recursion for the complexity is now given by
\[T(n,m)=T(n/2,m_{1})+T(n/2,m_{2})+O(m\sqrt{n}+n),\,\text{with}\,\,m_{1}+m_{2}=m.\]
According to Lemma 5 this solves to \(T(n)=O(m\sqrt{n}+n\log n)\).
Similarly, the split by \(\lambda\) divides the polygon into two subpolygons, while adding at most two new vertices. As all vertices, except for the endpoints of \(\lambda\), are in \(P_{\ell}\) or \(P_{r}\) (not both), the total complexity of both subpolygons is at most \(m+4\). We obtain the following recursion
\[T(n,m)=T(n/2,m_{1})+T(n/2,m_{2})+O(m\sqrt{n}+n\log n),\,\text{with}\,\,m_{1}+m _{2}=m+4.\]
Lemma 5 again states this solves to \(T(n)=O(m\sqrt{n}+n\log^{2}n)\).
To form groups that adhere to these two properties, we consider the shortest path tree \(\mathit{SPT}_{c}\) of \(c\): the union of all shortest paths from \(c\) to the vertices of \(P\). We include the sites \(p\in S\setminus\{c\}\) as leaves in \(\mathit{SPT}_{c}\) as children of their apex, i.e., the last vertex on \(\pi(c,p)\). This gives rise to an ordering of the sites in \(S\), and thus of the weighted sites in \(S_{\lambda}\), based on the in-order traversal of the tree. We assign the first \(\lceil\sqrt{n}\rceil\) sites to \(S_{1}\), the second \(\lceil\sqrt{n}\rceil\) to \(S_{2}\), etc. See Figure 5.
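A sketch of this grouping step is given below (the tree representation and names are ours): the sites are collected in the order in which a depth-first traversal of \(\mathit{SPT}_{c}\) visits them, with children visited in their stored order and the sites appearing as leaves below their apex, and are then chunked into consecutive groups of size \(\lceil\sqrt{n}\rceil\).

```python
import math
from typing import Dict, Hashable, List, Set

def groups_from_spt(children: Dict[Hashable, List[Hashable]],
                    root: Hashable,
                    sites: Set[Hashable]) -> List[List[Hashable]]:
    """Chunk the sites into groups of size ceil(sqrt(n)) (Section 3.2.1).

    `children` maps every node of the shortest path tree of c to its ordered
    list of children; the sites of S appear as leaves below their apex.  The
    sites are collected in traversal order and cut into consecutive groups."""
    order: List[Hashable] = []

    def visit(v: Hashable) -> None:
        if v in sites:
            order.append(v)
        for w in children.get(v, []):
            visit(w)

    visit(root)
    n = len(order)
    if n == 0:
        return []
    size = math.isqrt(n)
    if size * size < n:
        size += 1  # ceil(sqrt(n))
    return [order[i:i + size] for i in range(0, n, size)]
```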
Clearly these groups adhere to property 1. Proving that they also adhere to property 2 is more involved. For each group \(S_{i}\), consider the minimal subtree \(\mathcal{T}_{i}\) of \(\mathit{SPT}_{c}\) containing all \(p\in S_{i}\). \(\mathcal{T}_{i}\) defines a polygonal region \(R_{i}\) in \(P\) as follows. Refer to Figure 5 for an illustration. Let \(v_{i}\) be the root of \(\mathcal{T}_{i}\). Consider the shortest path \(\pi(v_{i},a)\), where \(a\) is the first site of \(S_{i}\) in \(\mathcal{T}_{i}\) by the ordering used before. Let \(\pi_{a}\) be the path obtained from \(\pi(v_{i},a)\) by extending the last segment of \(\pi(v_{i},a)\) to the boundary of \(P\). Similarly, let \(\pi_{b}\) be such a path for the last site of \(S_{i}\) in \(\mathcal{T}_{i}\). We take \(R_{i}\) to be the region in \(P\) rooted at \(v_{i}\) and bounded by \(\pi_{a}\), \(\pi_{b}\), and some part of the boundary of \(P\), that contains the sites in \(S_{i}\). In case \(v_{i}\) is \(c\), we split \(R_{i}\) into two regions \(R_{j}\) and \(R_{k}\), such that the angle of each of these regions at \(c\) is at most \(\pi\). The set \(S_{i}\) is then also split into two sets \(S_{j}\) and \(S_{k}\) accordingly. The following three lemmas on \(R_{i}\) and \(\mathcal{T}_{i}\) together imply that the groups adhere to property 2.
**Lemma 7**.: _Only vertices of \(P\) that are in \(\mathcal{T}_{i}\) can occur in \(R_{i}\)._
Proof.: Let \(v\) be a vertex in \(R_{i}\). The paths \(\pi_{a}\) and \(\pi_{b}\) are shortest paths from the polygon boundary to \(v_{i}\). Additionally, we can extend these paths with \(\pi(v_{i},c)\) to obtain shortest paths to the site \(c\). The shortest path from \(v\) to \(c\) cannot intersect either of these shortest paths twice, thus \(v\) is in \(\mathcal{T}_{i}\).
**Lemma 8**.: _All shortest paths between sites in \(S_{i}\) are contained within \(R_{i}\)._
Proof.: Suppose there is a shortest path \(\pi(p,q)\), \(p_{\lambda},q_{\lambda}\in S_{i}\), that is not contained within \(R_{i}\). Then \(\pi(p,q)\) must exit \(R_{i}\) through \(\pi_{a}\) and enter again through \(\pi_{b}\), or the other way around. The path thus goes around \(v_{i}\). Consider a line through \(v_{i}\) that does not pass through the interior of \(R_{i}\). Note that this exists because \(v_{i}\) is not a reflex vertex of \(R_{i}\). As \(P\) is a simple polygon, the line must intersect \(\pi(p,q)\) twice. This provides a shortcut of the shortest path by following along the line, which is a contradiction.
Figure 5: The shortest path tree of \(c\). The polygon vertices are grey in the tree. Each group \(S_{i}\) has an associated polygonal region \(R_{i}\) in \(P\).
**Lemma 9**.: _Any vertex \(v\in\mathit{SPT}_{c}\) occurs in at most two trees \(\mathcal{T}_{i}\) and \(\mathcal{T}_{j}\) as a non-root node._
Proof.: Let \(v\in\mathcal{T}\), and \(\mathcal{T}(v)\) be the subtree rooted at \(v\). Suppose \(v\) is a non-root node of three trees \(\mathcal{T}_{i}\), \(\mathcal{T}_{j}\), and \(\mathcal{T}_{k}\), \(i<j<k\). Then there must be three sites \(p_{i}\in S_{i}\), \(p_{j}\in S_{j}\), and \(p_{k}\in S_{k}\) in \(\mathcal{T}(v)\). As our groups occur in order, we know \(p_{i}\) is before \(p_{j}\), and \(p_{j}\) is before \(p_{k}\) in the in-order traversal of \(\mathit{SPT}_{c}\). Let \(p\) be the parent of \(v\). As \(v\) is a non-root node of each of the subtrees, we know that \(p\) is in \(\mathcal{T}_{i}\), \(\mathcal{T}_{j}\), and \(\mathcal{T}_{k}\) as well. This implies that there must be a site \(p^{\prime}_{j}\in S_{j}\) in \(\mathcal{T}(p)\setminus\mathcal{T}(v)\). If \(p^{\prime}_{j}\) is before \(p_{j}\), then \(p^{\prime}_{j}\) is also before \(p_{i}\), because \(p^{\prime}_{j}\notin\mathcal{T}(v)\). This implies that \(p_{i}\) is in \(S_{j}\), because it lies between \(p^{\prime}_{j}\in S_{j}\) and \(p_{j}\in S_{j}\), which is a contradiction. If \(p^{\prime}_{j}\) is after \(p_{j}\), then the same reasoning implies \(p_{k}\in S_{j}\), which is also a contradiction.
Note that the root \(r\) of \(\mathcal{T}_{i}\) is never used in a shortest path between sites in \(S_{i}\), because \(r\) cannot be a reflex vertex of \(R_{i}\). Consequently, Lemma 6 implies that the spanner has complexity \(O(m\sqrt{n}+n\log^{2}n)\).
**Lemma 10**.: _The graph \(\mathcal{G}\) is a geodesic \(4\sqrt{2}\)-spanner of size \(O(n\log^{2}n)\)._
Proof.: We prove that the \(1\)-dimensional spanner \(\mathcal{G}_{\lambda}\) is a \(4\)-spanner with \(O(n\log n)\) edges. Together with Lemma 2, this directly implies that \(\mathcal{G}\) is a \(4\sqrt{2}\)-spanner with \(O(n\log^{2}n)\) edges.
In each level of the recursion, we still add only a single edge for each site. Thus, the total number of edges is \(O(n\log n)\). Again, consider two sites \(p_{\lambda},q_{\lambda}\in S_{\lambda}\), and let \(c_{\lambda}\) be the chosen center point at the level where \(p_{\lambda}\) and \(q_{\lambda}\) are separated by \(O\). Let \(S_{i}\) be the group of \(p_{\lambda}\) and \(S_{j}\) the group of \(q_{\lambda}\). Both the edges \((p_{\lambda},c_{i,\lambda})\) and \((c_{i,\lambda},c_{\lambda})\) are in \(\mathcal{G}_{\lambda}\), similarly for \(q_{\lambda}\). We thus have a path \(p_{\lambda}\to c_{i,\lambda}\to c_{\lambda}\to c_{j,\lambda}\to q_{\lambda}\) in \(\mathcal{G}_{\lambda}\). Using that \(d_{w}(p_{\lambda},c_{i,\lambda})\leq d_{w}(p_{\lambda},O)+d_{w}(c_{i,\lambda},O)\), because of the triangle inequality, and \(d_{w}(c_{i,\lambda},O)\leq d_{w}(p_{\lambda},O)\), we find:
\[d_{\mathcal{G}_{\lambda}}(p_{\lambda},q_{\lambda}) =d_{w}(p_{\lambda},c_{i,\lambda})+d_{w}(c_{i,\lambda},c_{\lambda} )+d_{w}(c_{\lambda},c_{j,\lambda})+d_{w}(c_{j,\lambda},q_{\lambda})\] \[\leq d_{w}(p_{\lambda},O)+2d_{w}(c_{i,\lambda},O)+2d_{w}(c_{ \lambda},O)+2d_{w}(c_{j,\lambda},O)+d_{w}(q_{\lambda},O)\] \[\leq 4d_{w}(p_{\lambda},O)+4d_{w}(q_{\lambda},O)\] \[=4d_{w}(p_{\lambda},q_{\lambda})\qed\]
#### 3.2.2 A \((2k+\varepsilon)\)-spanner of complexity \(O(mn^{1/k}+n\log^{2}n)\)
We first sketch how to generalize the approach of Section 3.2.1 to obtain a spanner with a trade-off between the (constant) spanning ratio and complexity, and then formally prove the result in Lemma 11. Fix \(N=n^{1/k}\), for some integer constant \(k\geq 1\). Instead of \(\Theta(\sqrt{n})\) groups, we create \(\Theta(N)\) groups. For each of these groups we select a center, and then partition the groups further recursively. By connecting each center to its parent center, we obtain a tree of height \(k\). This results in a spanning ratio of \(2\sqrt{2}k\). Using the refinement discussed in Lemma 3, we even obtain a \((2k+\varepsilon)\)-spanner.
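A sketch of the resulting 1-dimensional construction is given below (the code and its names are ours, and it covers only a single level of the recursion on \(O\)). It recursively splits an ordered group into roughly \(n^{1/k}\) subgroups per level, picks as center of each group the site minimizing \(d_{w}(\cdot,O)\), and connects every center to the center of its parent group; the resulting center tree has height at most \(k\).

```python
import math
from typing import List, Tuple

WSite = Tuple[float, float]  # (coordinate on the chord, weight)

def dw(p: WSite, q: WSite) -> float:
    """Additively weighted distance d_w(p, q)."""
    return p[1] + abs(p[0] - q[0]) + q[1]

def center_tree_edges(ordered_sites: List[WSite], O: WSite,
                      k: int) -> List[Tuple[WSite, WSite]]:
    """Edges of the 1-dimensional 2k-spanner sketched in Section 3.2.2.

    Each group is split into about size^(1/remaining) subgroups until, after
    at most k levels, every site is connected to the chain of centers above
    it; any two sites are then joined by a path of at most 2k edges."""
    edges: List[Tuple[WSite, WSite]] = []

    def build(group: List[WSite], remaining: int) -> WSite:
        center = min(group, key=lambda p: dw(p, O))
        if len(group) == 1:
            return center
        if remaining <= 1:
            # last level: connect every remaining site directly to the center
            edges.extend((p, center) for p in group if p is not center)
            return center
        branching = max(2, math.ceil(len(group) ** (1.0 / remaining)))
        step = math.ceil(len(group) / branching)
        for i in range(0, len(group), step):
            child = build(group[i:i + step], remaining - 1)
            if child is not center:
                edges.append((child, center))
        return center

    build(ordered_sites, k)
    return edges
```

For \(k=1\) this degenerates to the star of Section 2, and for \(k=2\) to the \(\Theta(\sqrt{n})\)-group construction of Section 3.2.1.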
**Lemma 11**.: _For any constant \(\varepsilon>0\) and integer constant \(k\geq 1\), there exists a geodesic \((2k+\varepsilon)\)-spanner of size \(O(c_{\varepsilon,k}n\log^{2}n)\) and complexity \(O(c_{\varepsilon,k}(mn^{1/k}+n\log^{2}n))\), where \(c_{\varepsilon,k}\) is a constant depending only on \(\varepsilon\) and \(k\)._
Proof.: We prove that the \(1\)-dimensional spanner we just sketched is a \(2k\)-spanner of complexity \(O(mn^{1/k}+n\log n)\) with \(O(n\log n)\) edges. This, together with Lemma 2 and the recursion of Lemma 5, implies that \(\mathcal{G}\) is a geodesic \(2\sqrt{2}k\)-spanner of complexity \(O(mn^{1/k}+n\log^{2}n)\) and size \(O(n\log^{2}n)\). The refinement of Lemma 3 can also be applied to this spanner,
which yields a \((2k+\varepsilon)\)-spanner. However, this slightly increases the spanner complexity to \(O(c_{\varepsilon,k}(mn^{1/k}+n\log^{2}n))\), where \(c_{\varepsilon,k}\) is a constant depending on \(\varepsilon\) and \(k\), see Lemma 3. We first describe the \(1\)-dimensional spanner in more detail, then analyze its complexity, and finally analyze its size and spanning ratio.
Fix \(N=n^{1/k}\). When building a single level of the \(1\)-dimensional spanner \(\mathcal{G}_{\lambda}\), instead of \(\Theta(\sqrt{n})\) groups of size \(\Theta(\sqrt{n})\), we create \(\Theta(N)\) groups of size \(\Theta(n^{1-1/k})\), based on the shortest path tree of \(c\), for some integer constant \(k\geq 1\). After selecting a center \(c_{i}\) for each group \(S_{i}\), we recursively split the groups further based on the shortest path trees of the new centers \(c_{i}\), until we reach groups of size one. For each group \(S_{i}\) at level \(j\) of this recursion, a center \(c_{i}^{(j)}\) is selected as the site in \(S_{i}\) for which \(d_{w}(c_{i}^{(j)},O)\) is minimal. We add an edge from each \(c_{i}^{(j)}\) to its parent \(c_{i}^{(j-1)}\). The final tree \(T\) obtained this way has height \(k\).
Let \(r_{i}^{(j)}\) be the number of vertices in \(R_{i}^{(j)}\) of a group \(S_{i}\) at level \(j\), where \(R_{i}^{(j)}\) is defined as in Section 3.2.1. In other words, \(r_{i}^{(j)}\) is the maximum complexity of a shortest path between two sites in \(S_{i}\) at level \(j\), as in the proof of Lemma 6. Let \(S_{1}^{(j)},...,S_{O(N)}^{(j)}\) be the subgroups of some group \(S_{i^{\prime}}^{(j-1)}\). Then the corresponding regions \(R_{1}^{(j)},...,R_{O(N)}^{(j)}\) partition \(R_{i^{\prime}}^{(j-1)}\), and Lemma 9 implies that a vertex of \(R_{i^{\prime}}^{(j-1)}\) can be in at most two of the smaller regions. Thus, we still have that \(\sum_{i}r_{i}^{(j)}=O(m)\). This implies that the complexity of all edges from level \(j\) to \(j-1\) is \(O(mn^{1/k}+n)\). As \(T\) has height \(O(k)\), the total complexity of the edges in \(T\) is also \(O(mn^{1/k}+n)\). Lemma 5 implies that this results in a \(1\)-dimensional spanner of complexity \(O(mn^{1/k}+n\log n)\).
In a single level of the recursion to build the \(1\)-dimensional spanner \(\mathcal{G}_{\lambda}\), only one edge is added for each site, namely to its parent in the tree. We thus still add \(O(n)\) edges in each level of the recursion, and \(O(n\log n)\) edges in total.
Consider two sites \(p,q\in S_{\lambda}\), and let \(c\) be the chosen center point at the level where \(p\) and \(q\) are separated by \(O\). The spanning ratio of the \(1\)-dimensional spanner is determined by the number of sites we visit on a path from \(p\in S_{\ell}\) to \(q\in S_{r}\). Observe that the height of the tree \(T\) is \(k\). We assume w.l.o.g. that the path from \(p\) to \(q\) is as long as possible, i.e. visits \(2k-1\) vertices. In a slight abuse of notation, we denote by \(c_{p}^{(j)}\) the centers on the path in \(T\) from \(p\) to the root \(c\). Then there is a path \(p\to c_{p}^{(k-1)}\to c_{p}^{(k-2)}\to...\to c^{(0)}(=c)\to...\to c_{q}^{(k-2)}\to c_{q}^{(k-1)}\to q\). Using that \(d_{w}(c_{p}^{(j)},c_{p}^{(j+1)})\leq d_{w}(c_{p}^{(j)},O)+d_{w}(c_{p}^{(j+1)},O)\) and that \(d_{w}(c_{p}^{(j)},O)\leq d_{w}(p,O)\), we find:
\[d_{\mathcal{G}_{\lambda}}(p,q) =d_{w}(p,c_{p}^{(k-1)})+\sum_{j=0}^{k-2}d_{w}(c_{p}^{(j+1)},c_{p }^{(j)})+\sum_{j=0}^{k-2}d_{w}(c_{q}^{(j)},c_{q}^{(j+1)})+d_{w}(c_{q}^{(k-1)},q)\] \[\leq d_{w}(p,O)+2\sum_{j=1}^{k-1}d_{w}(c_{p}^{(j)},O)+2d_{w}(c,O )+2\sum_{j=1}^{k-1}d_{w}(c_{q}^{(j)},O)+d_{w}(q,O)\] \[\leq d_{w}(p,O)+2(k-1)d_{w}(p,O)+2d_{w}(c,O)+2(k-1)d_{w}(q,O)+d_{ w}(q,O)\] \[\leq 2k(d_{w}(p,O)+d_{w}(q,O))\] \[=2kd_{w}(p,q).\]
Thus the spanning ratio of the \(1\)-dimensional spanner is \(2k\).
### 3.3 Construction algorithm
In this section we propose an algorithm to construct the spanners of Section 3.2. The following gives a general overview of the algorithm, which computes a \(2\sqrt{2}k\)-spanner in
\(O(n\log^{2}n+m\log n)\) time. In the rest of this section we will discuss the steps in more detail.
1. Preprocess \(P\) for efficient shortest path queries and build both the vertical decomposition \(\mathcal{VD}\) and horizontal decomposition \(\mathcal{HD}\) of \(P\).
2. For each \(p\in S\), find the trapezoid in \(\mathcal{VD}\) and \(\mathcal{HD}\) that contains \(p\). For each trapezoid \(\nabla\in\mathcal{VD}\), store the number of sites of \(S\) that lie in \(\nabla\) and sort these sites by their \(x\)-coordinate.
3. Recursively compute a spanner on the sites \(S\) in \(P\):
    1. Find a vertical chord \(\lambda\) of \(P\) such that \(\lambda\) partitions \(P\) into two polygons \(P_{\ell}\) and \(P_{r}\), and each subpolygon contains at most \(2n/3\) sites, using the algorithm of Lemma 12.
    2. For each \(p\in S\), find the point \(p_{\lambda}\) on \(\lambda\) and its weight using the algorithm of Lemma 13, and add this point to \(S_{\lambda}\).
    3. Compute an additively weighted \(1\)-dimensional spanner \(\mathcal{G}_{\lambda}\) on the set \(S_{\lambda}\) using the algorithm of Lemma 14 or Lemma 15.
    4. For every edge \((p_{\lambda},q_{\lambda})\in E_{\lambda}\) add the edge \((p,q)\) to \(\mathcal{G}\).
    5. Recursively compute spanners for \(S_{\ell}\) in \(P_{\ell}\) and \(S_{r}\) in \(P_{r}\).
In step 1, we preprocess the polygon in \(O(m)\) time such that the distance between any two points \(p,q\in P\) can be computed in \(O(\log m)\) time [15, 23]. We also build the horizontal and vertical decompositions of \(P\), and a corresponding point location data structure, as a preprocessing step in \(O(m)\) time [15, 28]. We then perform a point location query for each site \(p\in S\) in \(O(n\log m)\) time in step 2 and sort the sites within each trapezoid in \(O(n\log n)\) time in total. The following lemma describes the algorithm to compute a vertical chord that partitions \(P\) into two subpolygons such that each of them contains roughly half of the sites in \(S\). It is based on the algorithm of Bose et al. [7] that finds such a chord without the constraint that it should be vertical. Because of this constraint, we use the vertical decomposition of \(P\) instead of a triangulation in our algorithm.
**Lemma 12**.: _In \(O(n+m)\) time, we can find a vertical chord of \(P\) that partitions \(P\) into two subpolygons \(P_{\ell}\) and \(P_{r}\), such that each subpolygon contains at most \(2n/3\) sites of \(S\)._
Proof.: Consider the dual tree of the vertical decomposition \(\mathcal{VD}\). Because our polygon vertices have distinct \(x\)- and \(y\)-coordinates, the maximum degree of any node in the tree is four; at most two neighbors to the right and two to the left of the trapezoid. We select an arbitrary node \(r\) as root of the tree. For each node \(v\), we compute \(c(v)\): the number of sites in \(S\) that lie in some trapezoid of the subtree rooted at \(v\). These values can be computed in linear time (in the size of the polygon) using a bottom-up approach, because we already know the number of sites that lie in each trapezoid.
Let \(v\) be an arbitrary node in the tree and \(\nabla_{v}\) the corresponding trapezoid. We first show that there is a vertical segment contained in \(\nabla_{v}\) that partitions the polygon such that each subpolygon contains at most \(2/3\) of the sites if
1. \(n/3\leq c(v)\leq 2n/3\), or
2. \(\nabla_{v}\) contains at least \(2n/3\) sites, or
3. for each child \(w\) of \(v\) we have \(c(w)<n/3\), and \(c(v)>2n/3\).
In case 1, we choose \(\lambda\) as the vertical segment between the trapezoid of \(v\) and its parent. In case 2, we choose \(\lambda\) as a segment for which exactly \(n/3\) sites in \(\nabla_{v}\) lie left of \(\lambda\). As \(\nabla_{v}\) contains more than \(2n/3\) sites, at most \(n/3\) sites lie outside of \(\nabla_{v}\), thus \(P_{\ell}\) contains between \(n/3\) and \(2n/3\) sites.
In case 3, we choose \(\lambda\) to lie within \(\nabla_{v}\). Note that \(v\) must have at least two children, otherwise \(\nabla_{v}\) would contain at least \(2n/3-n/3=n/3\) sites, thus we would be in case 1 or case 2. Assume that \(\nabla_{p(v)}\) lies right of \(\nabla_{v}\), where \(p(v)\) denotes the parent of \(v\). We consider the two children \(u,w\) for which the trapezoids lie left of \(\nabla_{v}\), see Figure 6. When there is only one such child \(u\), we consider \(c(w)=0\). It holds that \(c(u)+c(w)<2n/3\). We choose \(\lambda\) such that \(\max(0,n/3-c(u)-c(w))\) sites in \(\nabla_{v}\) lie left of \(\lambda\). Thus \(P_{\ell}\) contains between \(n/3\) and \(2n/3\) sites of \(S\). Similarly, we consider the children on the right when \(\nabla_{p(v)}\) lies left of \(\nabla_{v}\).
We now show that if none of the above conditions hold, there is a path in the tree to a node for which one of these conditions holds. If none of the above conditions hold, then either \(c(v)<n/3\), or \(c(v)>2n/3\) and there is a child \(w\) of \(v\) with \(c(w)\geq n/3\). In the first case, we consider the node \(p(v)\), in the second case we consider the node \(w\) with \(c(w)\geq n/3\). By continuing like this, a path in the tree is formed. If the path ends up in \(r\), it must hold that \(c(r)>2n/3\), and thus condition 2 holds. If the path ends up in a leaf node, then either condition 1 or 2 must hold for the leaf node.
This proves not only that there exists such a vertical segment, but also provides a way to find such a segment. As the tree contains \(O(m)\) nodes, we can find a trapezoid that can contain \(\lambda\) in \(O(m)\) time. We can separate the sites in a trapezoid by a vertical line segment such that exactly \(x\) sites lie left of the segment in linear time. Thus, the algorithm runs in \(O(n+m)\) time.
The following lemma states that we can find the projections \(p_{\lambda}\) efficiently. The algorithm produces not only these projected sites, but also the shortest path tree \(\mathit{SPT}_{\lambda}\) of \(\lambda\).
**Lemma 13**.: _We can compute the closest point \(p_{\lambda}\) on \(\lambda\) and \(d(p,p_{\lambda})\) for all sites \(p\in S\), and the shortest path tree \(\mathit{SPT}_{\lambda}\), in \(O(m+n\log m)\) time._
Proof.: Consider the horizontal decomposition \(\mathcal{HD}\) of \(P\) and its dual tree \(T\). Let \(x_{\lambda}\) denote the \(x\)-coordinate of the vertical line segment \(\lambda\) and let \(t\) be its top endpoint and \(b\) its bottom endpoint. We choose the root \(r\) of the tree as the trapezoid \(\nabla_{r}\) for which \(t\in\nabla_{r}\) and \(\nabla_{r}\cap\lambda\neq\emptyset\). We color \(T\) as follows, see Figure 7 for an example. Every node \(v\) for which the corresponding trapezoid \(\nabla_{v}\) contains the top or bottom endpoint of \(\lambda\) is colored orange. Note that the root \(r\) is thus colored orange. Every node \(v\) for which \(\nabla_{v}\) is crossed from top to bottom by \(\lambda\) is colored blue. For every other node \(v\in T\), consider its lowest ancestor \(w\) that is colored either blue or orange. If \(w\) is blue, then \(v\) is colored green, if \(w\) is orange, \(v\) is colored purple. Next, we describe how to find \(p_{\lambda}\) for a site \(p=(x_{p},y_{p})\in\nabla_{v}\) for each color of \(v\).
_Blue:_ The horizontal line segment connecting \(p\) and \(\lambda\) is contained within \(P\). So, \(p_{\lambda}=(x_{\lambda},y_{p})\).
_Orange:_ The shortest path to \(\lambda\) is again a line segment. If \(p\) lies above \(t\) or below \(b\), then \(p_{\lambda}\) is \(t\) or \(b\), respectively. Otherwise, \(p_{\lambda}=(x_{\lambda},y_{p})\).
_Green:_ Let \(w\) be the highest green ancestor of \(v\) (possibly \(v\) itself). Suppose that \(\nabla_{w}\) lies above the trapezoid \(\nabla_{p(w)}\) of its (blue) parent and \(\nabla_{w}\) lies in \(P_{\ell}\). Let \(q\) be the bottom right corner of \(\nabla_{w}\). Then \(q_{\lambda}=(x_{\lambda},y_{q})\), as \(q\) also lies in \(\nabla_{p(w)}\). We will show that \(p_{\lambda}=q_{\lambda}\). Suppose to the contrary that \(p_{\lambda}\neq q_{\lambda}\). As \(v\) is a descendant of \(w\), any path from \(p\) to \(\lambda\) must intersect the bottom of \(\nabla_{w}\). Let \(q^{\prime}\) be this intersection point. Because \(q^{\prime}\) lies on the bottom boundary of \(\nabla_{w}\), the horizontal line segment \(q^{\prime}q_{\lambda}\) is contained within \(P\) and thus \(q^{\prime}_{\lambda}=q_{\lambda}\). Because \(d(q^{\prime},q^{\prime}_{\lambda})=d(q^{\prime},q_{\lambda})<d(q^{\prime},p_{\lambda})\), subpath optimality implies that \(d(p,q_{\lambda})<d(p,p_{\lambda})\), which is a contradiction. So \(p_{\lambda}=q_{\lambda}\). Symmetrically, we can show a similar statement when \(\nabla_{w}\) lies below \(\nabla_{p(w)}\) and/or \(\nabla_{w}\) lies in \(P_{r}\).
_Purple:_ Let \(w\) be the highest purple ancestor of \(v\). Assume that \(\nabla_{p(w)}\) lies below \(\nabla_{w}\). Note that the bottom segment of \(\nabla_{w}\) lies above \(t\). It follows that for all points \(q\) on this segment we have \(q_{\lambda}=t\). According to the same argument as for the green trapezoids, we thus have \(p_{\lambda}=t\). Symmetrically, if \(\nabla_{p(w)}\) lies above \(\nabla_{w}\), then \(p_{\lambda}=b\).
We can thus find \(p_{\lambda}\) for all \(p\in S\) as follows. We perform a depth first search on \(T\) starting at \(r\). When we visit a node \(v\), we first determine its color in \(O(1)\) time. Then for all \(p\in\nabla_{v}\) we determine \(p_{\lambda}\) as described before. Note that this can also be done in constant time, because for a green/purple node we already computed the projected sites for the parent trapezoid. This thus takes \(O(m+n)\) time overall.
The shortest path tree \(\mathit{SPT}_{\lambda}\) can now be computed as follows. All vertices of \(P\) in a blue trapezoid are children of \(\lambda\). At most four of these vertices lie on the boundary of the blue and green region (such as \(q\) in Figure 7). For each such vertex \(q\), we include the shortest path tree of \(q\) restricted to its respective green region as a subtree of \(q\). We include the shortest path trees of \(t\) and \(b\) restricted to the top and bottom orange and purple regions as subtrees of \(\lambda\). Because the regions where we construct shortest path trees are disjoint, we can compute all of these shortest path trees in \(O(m)\) time [24]. Finally, for all \(p\in S\) we compute \(d(p,p_{\lambda})\) in \(O(n\log m)\) time, and include each site in \(\mathit{SPT}_{\lambda}\) as a child of its apex.
**Lemma 14**.: _Given \(\mathit{SPT}_{\lambda}\), we can construct a \(4\)-spanner \(\mathcal{G}_{\lambda}\) on the additively weighted points \(S_{\lambda}\), where the groups adhere to the properties of Lemma 6, in \(O(n\log n+m)\) time._
Proof.: The \(4\)-spanner of Section 3.2.1 requires an additional step at each level of the recursion, namely the formation of \(\Theta(\sqrt{n})\) groups. We first discuss the running time to construct a 4-spanner when forming the groups as in Section 3.2.1, and then improve the running time by introducing a more efficient way to form the groups.

Figure 7: An example of the coloring of the horizontal decomposition in a simple polygon.
In Section 3.2.1, the groups are formed based on the shortest path tree of the site \(c\). Building the shortest path tree, and a corresponding point location data structure, takes \(O(m)\) time [24]. Then, we perform a point location query for each site to find its apex in the shortest path tree, and add the sites to the tree. These queries take \(O(n\log m)\) time in total. We form groups based on the traversal of the tree. Note that we do not distinguish between sites with the same parent in the tree, as the tree \(\mathcal{T}_{i}\) (and thus the region \(R_{i}\)) obtained contains the same vertices of \(P\) regardless of the order of these sites. After obtaining the groups, we again add only \(O(n)\) edges to the spanner. The overall running time of the algorithm is thus \(O((m+n\log m)\log n)\).
This running time can be improved by using another approach to form the groups. To form groups that adhere to the properties of Lemma 6, and thus result in a spanner of the same complexity, we can use any partition of \(P\) into regions \(R_{i}\), as long as \(R_{i}\) contains \(\Theta(\sqrt{n})\) sites and \(\sum_{i}r_{i}=O(m)\). Next, we describe how to form such groups efficiently using \(\mathit{SPT}_{\lambda}\).
We first define an ordering on the sites. This is again based on the traversal of some shortest path tree. Instead of considering the shortest path tree of a point site, we consider the shortest path tree \(\mathit{SPT}_{\lambda}\) of \(\lambda\). Again, all sites in \(S\) are included in this shortest path tree. Additionally, we split the node corresponding to \(\lambda\) into a node for each distinct projection point on \(\lambda\) (of the vertices and the sites) and add an edge between each pair of adjacent points, see Figure 8. We root the tree at the node corresponding to the bottom endpoint of \(\lambda\). Whenever a node \(t\) on \(\lambda\) has multiple children, in other words, when multiple sites are projected to the same point \(t\), our assumption that all \(y\)-coordinates are distinct ensures that all these sites lie either in \(P_{\ell}\) or \(P_{r}\).
The groups are formed based on the in-order traversal of this tree, which can be performed in \(O(m+n)\) time. As before, the first \(\lceil\sqrt{n}\rceil\) are in \(S_{1}\), the second in \(S_{2}\), etc. The groups thus adhere to the first property. Next, we show they also adhere to the second property.
For each group \(S_{i}\), we again consider the minimal subtree \(\mathcal{T}_{i}\) of \(\mathit{SPT}_{\lambda}\) containing all \(p\in S_{i}\). \(\mathcal{T}_{i}\) defines a region \(R_{i}\) in \(P\) as follows. Let \(a\) be the first site of \(S_{i}\) in \(\mathcal{T}_{i}\) by the ordering used before. Assume that \(a\) lies in \(P_{\ell}\). We distinguish two cases: \(a_{\lambda}\in\mathcal{T}_{i}\), or \(a_{\lambda}\notin\mathcal{T}_{i}\). When \(a_{\lambda}\in\mathcal{T}_{i}\), then let \(\pi_{a}\) be the path obtained from \(\pi(a_{\lambda},a)\) by extending the last segment to the boundary of \(P\). Additionally, we extend \(\pi_{a}\) into \(P_{r}\) horizontally until we hit the polygon
Figure 8: The shortest path tree \(\mathit{SPT}_{\lambda}\) and associated polygonal region \(R_{i}\) for each group \(S_{i}\).
boundary. When \(a_{\lambda}\notin\mathcal{T}_{i}\), consider the root \(v_{i}\) of \(\mathcal{T}_{i}\). Let \(\pi_{a}\) be the path obtained from \(\pi(v_{i},a)\) by extending the last segment of the path to the boundary of \(P\). Similarly, let \(\pi_{b}\) be such a path for the rightmost site of \(S_{i}\) in \(\mathcal{T}_{i}\). We take \(R_{i}\) to be the region in \(P\) bounded by \(\pi_{a}\), \(\pi_{b}\), and some part of the boundary of \(P\) that contains the sites in \(S_{i}\). See Figure 8. Note that, as before, only vertices of \(P\) that are in \(\mathcal{T}_{i}\) can occur in \(R_{i}\). All shortest paths between sites in \(S_{i}\) are contained within \(R_{i}\). Just as for the shortest path tree of \(c\), Lemma 9 implies that any vertex \(v\in\mathit{SPT}_{\lambda}\) occurs in at most two trees \(\mathcal{T}_{i}\) and \(\mathcal{T}_{j}\) as a non-root vertex. We conclude that any vertex is used by shortest paths within at most two groups.
After splitting \(\lambda\) at a point \(O\), the tree \(\mathit{SPT}_{\lambda}\) is also split into two trees \(\mathcal{T}_{\ell}\) and \(\mathcal{T}_{r}\) that contain exactly the sites in \(S_{\ell}\) and \(S_{r}\). We can thus reuse the ordering to form groups at each level of the recursion. This way, the total running time at a single level of the recursion is reduced to \(O(n)\). The overall running time thus becomes \(O(n\log n+m)\).
**Lemma 15**.: _Given \(\mathit{SPT}_{\lambda}\), we can construct a \(2k\)-spanner \(\mathcal{G}_{\lambda}\) on the additively weighted points \(S_{\lambda}\), where groups are formed as in Section 3.2.2, in \(O(n\log n+m)\) time._
Proof.: To construct the \(1\)-dimensional \(2k\)-spanner of Section 3.2.2, we can use the shortest path tree of \(\lambda\) to form the groups as before. Note that we can select a center for each group after computing the groups, as including the center in the subgroups does not influence spanning ratio or complexity. After ordering the sites based on the in-order traversal of \(\mathit{SPT}_{\lambda}\), we can build the tree of groups in linear time using a bottom up approach. As before, fix \(N=n^{1/k}\). We first form the \(\Theta(N^{k})\) lowest level groups, containing only a single site, and select a center for each group. Each group at level \(i\) is created by merging \(\Theta(N)\) groups at level \(i-1\), based on the same ordering. We do not perform this merging explicitly, but for each group we select the site closest to \(O\) of the merged level-\((i-1)\) centers as the center. Because our center property, being the closest to \(O\), is decomposable, this indeed gives us the center of the entire group. This way, we can compute the edges added in one level of the recursion in linear time, so the running time remains \(O(n\log n+m)\).
The total running time thus becomes \(O((n(\log n+\log m)+m)\log n)=O(n\log^{2}n+m\log n)\). Here, we used that \(n\log n\log m=O(n\log^{2}n)\) for \(m<n^{2}\), and \(n\log n\log m=O(m\log n)\) for \(m\geq n^{2}\). By splitting the polygon alternately based on the sites and the polygon vertices, we can replace the final \(O(\log n)\) factor by \(O(\min(\log n,\log m))\).
When we apply the refinement of Lemma 3, step 3 is performed on \(c_{\varepsilon,k}n\) sites, where \(c_{\varepsilon,k}\) is a constant depending only on \(\varepsilon\) and \(k\), increasing the running time for this step by a factor \(O(c_{\varepsilon,k})\). Together with Lemma 11, we obtain the following theorem.
Let \(S\) be a set of \(n\) point sites in a simple polygon \(P\) with \(m\) vertices, and let \(k\geq 1\) be any integer constant. For any constant \(\varepsilon>0\), we can build a geodesic \((2k+\varepsilon)\)-spanner of size \(O(c_{\varepsilon,k}n\log^{2}n)\) and complexity \(O(c_{\varepsilon,k}(mn^{1/k}+n\log^{2}n))\) in \(O(c_{\varepsilon,k}n\log^{2}n+m\log n+K)\) time, where \(c_{\varepsilon,k}\) is a constant depending only on \(\varepsilon\) and \(k\), and \(K\) is the output complexity.
## 4 Balanced shortest-path separators
Let \(S\) be a set of \(n\) point sites inside a polygonal domain \(P\) with \(m\) vertices. We denote by \(\partial P\) and \(\partial H\) the boundary of the outer polygon and the boundary of the holes, respectively. In this section, we develop an algorithm to partition the polygonal domain \(P\) into two subdomains \(P_{\ell}\) and \(P_{r}\) such that roughly half of the sites in \(S\) lie in \(P_{\ell}\) and half in \(P_{r}\). Additionally, we require that the curve bounding \(P_{\ell}\) consists of at most three shortest paths and possibly part of \(\partial P\) or \(\partial H\).
In a polygonal domain, we cannot simply split the domain into two subdomains by a line segment, as we did to partition a simple polygon, because any line segment that appropriately partitions \(S\) might intersect one or more holes. Even if we allow a shortest path between two points on \(\partial P\) as our separator, it is not always possible to split the sites into sets \(S_{\ell}\) and \(S_{r}\) of roughly equal size. See Figure 9 for an example. We thus need another approach for subdividing the domain. We adapt the balanced shortest-path separator of Abam et al. [3] for this purpose.
To partition the polygonal domain into two subdomains \(P_{\ell}\) and \(P_{r}\), such that each contains roughly half of the sites in \(S\), we allow three different types of separator. These separators consist of 1, 2, or 3 shortest paths, as seen in Figure 10. Formally, we define a balanced shortest-path separator as follows.
A _balanced shortest-path separator_ (sp-separator) partitions the polygonal domain into two subdomains \(P_{\ell}\) and \(P_{r}\), such that \(2n/9\leq|S_{\ell}|\leq 2n/3\). The separator is of one of the following three types.
* A _1-separator_ consists of a single shortest path \(\pi(u,v)\) that connects two points \(u,v\in\partial P\). \(P_{\ell}\) is the subdomain left of the oriented path \(\pi(u,v)\).
* A _2-separator_ consists of two shortest paths \(\pi(u,v)\) and \(\pi(u,w)\), where \(u\in P\) and \(v,w\in\partial H\), where \(H\) is a hole in \(P\). \(P_{\ell}\) is the subdomain bounded by the two shortest paths and \(\partial H\).
* A _3-separator_ consists of three shortest paths \(\pi(u,v)\), \(\pi(v,w)\), and \(\pi(w,u)\), where \(u,v,w\in P\), and each pair of shortest paths overlaps in a single interval, starting at the common endpoint.
We also allow a shortest path as a _degenerate_ 3-separator; \(P_{\ell}\) is then the degenerate polygon that is this shortest path. We call \(u,v,w\) the _corners_ of the separator. Slightly abusing our notation, we refer to the closed subset of \(\mathbb{R}^{2}\) corresponding to the polygonal domain \(P_{\ell}\) of a separator \(\Delta\) by \(\Delta\) itself. In the rest of this section, we prove the following theorem.
Let \(S\) be a set of \(n\) point sites in a polygonal domain \(P\) with \(m\) vertices. A balanced sp-separator exists and it can be computed in \(O(n^{2}\log m+nm\log m)\) time.
We first give a constructive proof for the existence of a balanced sp-separator, using a similar approach to Abam et al. [3]. Then, we fill in some algorithmic details and prove that the algorithm runs in \(O(n^{2}\log m+nm\log m)\) time.
**Existence of a separator.** We start by trying to find a 1-separator from an arbitrary fixed point \(u\in\partial P\) to a point \(v\in\partial P\). The point \(v\) is moved clockwise along the boundary of \(P\)
Figure 9: No shortest path between two points on the boundary of \(P\) can separate the sites into two groups. The sites can be separated by three shortest paths, for example using the orange triangle.
starting at \(u\), to find a separator satisfying our constraints, i.e. \(P_{\ell}\) contains between \(2n/9\) and \(2n/3\) sites of \(S\). This way, we either find a balanced 1-separator, or jump over at least \(2n/3-2n/9=4n/9\) sites at a point \(v\). In this case, the region bounded by two shortest paths between \(u\) and \(v\) contains at least \(4n/9\) sites. We then try to find a 2- or 3-separator contained within this region.
To find a 2- or 3-separator, we construct a sequence of 3-separators \(\Delta_{0}\supset\Delta_{1}\supset...\supset\Delta_{k}\), where either the final 3-separator \(\Delta_{k}\) contains between \(2n/9\) and \(2n/3\) sites, or we find a balanced 2-separator within \(\Delta_{k}\). During the construction, the invariant that \(|\Delta_{i}\cap S|\geq 2n/9\) is maintained. The first 3-separator, \(\Delta_{0}\), has as corners the two points \(u\) and \(v\) on \(\partial P\) from before, and a point \(w\), which is an arbitrary point on one of the two shortest paths connecting \(u\) and \(v\). So, \(\Delta_{0}\) is exactly the region bounded by the two shortest paths connecting \(u\) and \(v\) that contains at least \(4n/9\) sites.
Whenever \(\Delta_{i}\) contains at most \(2n/3\) sites, we are done. If not, we find \(\Delta_{i+1}\) as follows. If \(\Delta_{i}\) is a degenerate 3-separator, we simply select a subpath of the shortest path that contains \(2n/3\) sites. Similarly, if at least \(2n/9\) sites lie on one of the bounding shortest paths, we select a subpath of that path as a degenerate 3-separator. If neither of these cases holds, then there are at least \(2n/3+1-3(2n/9-1)=4\) sites in the interior of \(\Delta_{i}\). In the following, we find either a 3-separator \(\Delta_{i+1}\) with \(|\Delta_{i+1}\cap S|<|\Delta_{i}\cap S|\) or \(|\text{Int}(\Delta_{i+1})\cap S|<|\text{Int}(\Delta_{i})\cap S|\), or a valid 2-separator contained within \(\Delta_{i}\). Because each \(\Delta_{i}\) contains either fewer sites, or fewer sites in its interior, than its predecessor, we eventually find a 3-separator with the desired number of sites, or we end up in one of the easy degenerate cases. In the following description, we drop the subscript \(i\) for ease of exposition.
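The interior-site count in the last case follows from a direct calculation: the closure of \(\Delta_{i}\) contains at least \(2n/3+1\) sites, while each of the three bounding shortest paths contains at most \(2n/9-1\) sites, so

\[
|\text{Int}(\Delta_{i})\cap S|\;\geq\;\frac{2n}{3}+1-3\left(\frac{2n}{9}-1\right)\;=\;\frac{2n}{3}+1-\frac{2n}{3}+3\;=\;4.
\]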
We define a _good_ path from a point \(p\in\Delta\) to a corner \(u\) to be a shortest path \(\pi(p,u)\) that is fully contained within \(\Delta\). To make sure our definition is also correct when one of the corners lies inside \(\Delta\), we do not allow the path to cross \(\pi(v,w)\). See Figure 11 for an illustration. Essentially, we see the coinciding part of the shortest paths as having an infinitesimally small separation between them. The following provides a formal definition of a good path.
A shortest path \(\pi(p,u)\) from a point \(p\) that lies in a 3-separator \(\Delta\) to a corner \(u\) of \(\Delta\), is a _good path_ if it is fully contained within \(\Delta\), and \(\pi(p,u)\) is a shortest path in the polygonal domain \(P\cap\Delta\), where the outer polygon is \(\Delta\).
We consider the closed region \(Z_{u}\subset\Delta\) such that for every \(p\in Z_{u}\) there is a good path to \(u\). Because for any \(p^{\prime}\) on a good path \(\pi(p,u)\) the path \(\pi(p^{\prime},u)\) is also a good path, \(Z_{u}\) is connected. \(Z_{u}\) is bounded by \(\pi(u,v)\), \(\pi(u,w)\), \(\partial H\), and a curve \(B_{u}\) that connects \(v\) to \(w\), see Figure 12. Because we do not consider \(\partial H\) to be part of \(B_{u}\), this is a possibly disconnected curve that consists of edges from the shortest path map of \(u\) and \(\Delta\). The shortest path map
Figure 10: A 1-separator (blue), a 2-separator (orange), and a 3-separator (green).
of a point \(p\) in a polygonal domain \(P\) partitions the free space into maximal regions, such that for any two points in the same region the shortest paths from \(p\) to both points use the same vertices of \(P\) [37]. We call the curves of the shortest path map for which there are two topologically distinct paths from \(p\) to any point on the curve _walls_. We prove the following lemma of Abam et al. [3] for our definition of a good path.
[Lemma 3.2 of [3]] For any point \(z\in B_{u}\), there are good paths \(\pi(z,u)\), \(\pi(z,v)\), and \(\pi(z,w)\) to the three corners of \(\Delta\).
Proof.: By definition of \(B_{u}\), there is a good path from \(z\) to \(u\). When \(z\) lies on \(\pi(v,w)\), the subpaths of \(\pi(v,w)\) from \(z\) to \(v\) and to \(w\) are good paths. So, assume \(z\notin\pi(v,w)\). We will prove by contradiction that there is a good path from \(z\) to \(v\), and by symmetry from \(z\) to \(w\). Suppose there is no good path from \(z\) to \(v\). Because \(z\) is on \(B_{u}\), there is also a shortest path from \(z\) to \(u\) that is not a good path, so it is not contained within \(\Delta\), or it crosses \(\Delta\). This path \(\pi(z,u)\) must exit (or cross) \(\Delta\) through \(\pi(v,w)\), because otherwise the path could simply continue along \(\pi(v,u)\) or \(\pi(w,u)\) and stay within \(\Delta\). Similarly, \(\pi(z,v)\) must exit \(\Delta\) through \(\pi(u,w)\). The path \(\pi(z,u)\) either goes around \(v\) or \(w\), as shown in Figure 12. In both cases, any path to \(v\) that starts at \(z\) and exits (or crosses) through \(\pi(u,w)\) and does not intersect \(\pi(u,w)\) again, must intersect the path \(\pi(z,u)\). As these shortest paths start at the same point, this is a contradiction with the fact that two shortest paths can only cross once.
This lemma implies that for any point \(z\) on \(B_{u}\), the 3-separator with corners \(u\), \(v\), and
Figure 11: Of the shortest paths \(\pi(p,u)\), \(\pi(q,u)\), and \(\pi(r,u)\) only \(\pi(p,u)\) is a good path.
Figure 12: Any path from \(z\) to \(v\) that exits \(\Delta\) through \(\pi(u,w)\) must intersect a path from \(z\) to \(u\).
\(z\) is contained within \(\Delta\). We use this observation by moving a point \(z\) along \(B_{u}\) from \(w\) to \(v\). At the start, the 3-separator defined by \(u\), \(v\), and \(z\), which we denote by \(T_{z}\), is equal to \(\Delta\). At the end, \(T_{z}\) is equal to the degenerate 3-separator \(\pi(u,v)\). Note that, in contrast to the situation of Abam et al. [3] for a terrain, \(B_{u}\) is not necessarily continuous, as it can be interrupted by holes. But, if \(B_{u}\) intersects a hole, it intersects this hole exactly twice, because \(Z_{u}\) is connected. A directed walk along \(B_{u}\), jumping at holes, is thus still well-defined. We walk a point \(z\) along \(B_{u}\) until one of the following happens: (1) \(|S\cap T_{z}|\) or \(|S\cap\mathrm{Int}(T_{z})|\) decreases, or (2) \(z\) encounters a hole.
In case (1), either \(T_{z}\) contains at least \(2n/9\) sites, in which case we set \(\Delta_{i+1}:=T_{z}\), or we jump over at least \(2n/3-2n/9=4n/9\) sites. This can happen because the shortest path to \(u\), to \(v\), or both jump over a hole. We assume the path to \(u\) jumps; the approach for when the path to \(v\) jumps is symmetric. The 3-separator \(u,z,w^{\prime}\), with \(w^{\prime}\) on one of the two shortest paths \(\pi(z,u)\), could contain the same number of sites, in the closure and in its interior, as \(\Delta\). Therefore, we select an arbitrary site \(s\in S\) that lies in the region bounded by the two shortest paths. We then consider the two 3-separators with \(u\), \(z\), and \(s\) as corners. As \(s\) now lies on the boundary of the 3-separators, both of these separators contain fewer sites in their interior than \(\Delta\). And, as they partition a region that contains at least \(4n/9\) sites, one of them contains at least \(2n/9\) sites. We set \(\Delta_{i+1}\) to be this 3-separator.
In case (2), let \(z_{1}\) denote the point where we encounter the hole \(H_{i}\), and \(z_{2}\) the point where \(B_{u}\) exits \(H_{i}\), see Figure 13. Whenever \(|T_{z_{2}}\cap S|\geq 2n/9\), we simply continue the walk at \(z_{2}\), until we again end up in one of the two cases. If not, then there are at least \(2n/3-2n/9=4n/9\) sites in \(T_{z_{1}}\setminus T_{z_{2}}\), because \(T_{z_{2}}\subset T_{z_{1}}\). Note that \(T_{z_{1}}\setminus T_{z_{2}}\) is exactly the union of the two 2-separators \(z_{1},u,z_{2}\) and \(z_{1},v,z_{2}\). This means that either the 2-separator \(z_{1},u,z_{2}\) or \(z_{1},v,z_{2}\) contains at least \(2n/9\) sites. Suppose that the separator including \(u\) contains at least \(2n/9\) sites. We then try to find a balanced 2-separator \(z_{1},u,z^{*}\) with \(z^{*}\in\partial H_{i}\) that is contained within the 2-separator \(z_{1},u,z_{2}\). For each point \(z^{*}\in\partial H_{i}\) on the "\(u\)-side" of \(H_{i}\) (blue in Figure 13), \(\pi(z^{*},u)\) is contained within the 2-separator \(z_{1},u,z_{2}\), as it cannot cross \(\pi(z_{1},u)\) and \(\pi(z_{2},u)\). As we did to find a 1-separator, we walk \(z^{*}\) along \(\partial H_{i}\) from \(z_{1}\) to \(z_{2}\) on the "\(u\)-side". As before, we either find a balanced 2-separator, and we are done, or we jump over at least \(2n/3-2n/9=4n/9\) sites. Again, this region does not necessarily contain fewer sites than \(\Delta\), thus we continue as before by selecting a site \(s\) in this region as the third corner of \(\Delta_{i+1}\). Similarly, when the 2-separator with \(v\) as a corner contains more than \(2n/9\) sites, we
Figure 13: The region \(Z_{u}\) where all points have good paths to \(u\). In blue and green the 2-separator with corners \(z_{1},u,z_{2}\) and \(z_{1},v,z_{2}\), respectively.
walk \(z^{*}\) along the other side of \(H_{i}\) and consider the 2-separator \(z_{1},v,z^{*}\).
**Running time.** Next, we discuss the details and running times of the general algorithm that computes a balanced sp-separator. We choose for \(u\) the vertex of \(\partial P\) with smallest \(y\)-coordinate. We first construct the _augmented_ shortest path map \(\mathit{SPM}_{u}\) of \(u\). In the augmented shortest path map, we include the shortest path tree of \(u\), essentially triangulating each region in the shortest path map using the vertices of \(P\). This can be constructed in \(O(m\log m)\) time [37] (or even \(O(m+h\log h)\) time, where \(h\) is the number of holes in \(P\)). We then find the region of \(\mathit{SPM}_{u}\) that contains \(p\in S\) for all \(p\) in \(O(n\log m)\) time. We call the vertices of \(\partial P\) and \(\mathit{SPM}_{u}\cap\partial P\) breakpoints, see Figure 14. After sorting these breakpoints along the boundary of \(P\), we can easily move \(v\) along the boundary from one breakpoint to the next, while keeping track of the number of sites in \(P_{\ell}\). When moving \(v\) from one breakpoint to the next, there is only one new triangle included in \(P_{\ell}\). Whenever we encounter a _wall_ breakpoint \(b\), we additionally check the number of sites that lie between the two shortest paths \(\pi(u,b)\). If this number is too large, we continue to find a 2- or 3-separator in the region bounded by these shortest paths. If not, then we either find a suitable 1-separator at one of the breakpoints, or, if the difference in \(|S_{\ell}|\) between two consecutive breakpoints \(v^{\prime}\) and \(v\) is too large, we find a 1-separator with corners \(u\) and a point \(v^{*}\) between the two breakpoints. As only sites in the triangle \(vv^{\prime}w\) are of interest, where \(w\) is the predecessor of \(v\) in \(\pi(u,v)\), this point can easily be found. When too many sites lie on a line through \(w\) in this final triangle, we select a subpath of this line that contains the desired number of sites as our (degenerate) separator. There are \(O(m)\) breakpoints, thus in \(O((m+n)\log m)\) time we either find a balanced 1-separator, or we find a region bounded by two shortest paths where we will find a 2- or 3-separator.
We then construct the sequence of \(O(n)\) 3-separators \(\Delta_{0}\supset\Delta_{1}\supset...\supset\Delta_{k}\). To go from a 3-separator \(\Delta_{i}\) with corners \(u,v,w\) to the next 3-separator \(\Delta_{i+1}\), we need to construct the curve \(B_{u}\). As stated before, this (possibly disconnected) curve consists of edges from the shortest path map of \(u\). We thus construct the augmented shortest path map \(\mathit{SPM}_{u}\) in \(O(m+h\log h)\) time. Then we find \(B_{u}\) by labelling the shortest path map regions in \(\Delta\) whose shortest path is contained in \(\Delta\), and selecting all edges that are between a labelled and a non-labelled region. After performing a point location in the shortest path map for
Figure 14: The augmented shortest path map of \(u\). The walls of \(\mathit{SPM}_{u}\) are the orange edges. For \(v\) in its current position \(P_{\ell}\) is given by the blue region.
each site in \(S\), we can walk \(z\) along \(B_{u}\) while keeping track of \(|S\cap T_{z}|\) as we did along the polygon boundary. When we encounter a hole, we first check whether the 2-separator with \(u\) or \(v\) contains more sites, for example by explicitly constructing these 2-separators and point locating the sites, and then continue along the boundary of the hole on the \(u\)- or \(v\)-side. During the procedure to find \(\Delta_{i+1}\), we perform a constant number of point locations per site, thus the running time is \(O((n+m)\log m)\). As we lose at least one site in each iteration, the sequence has maximum length \(n\), and the total running time is \(O(n^{2}\log m+nm\log m)\).
**Correction of a technical issue in Abam et al. [3].** Abam et al. [3] present an approach to find a balanced sp-separator on a terrain \(\mathcal{T}\). They define an sp-separator as either a 1-separator or a 3-separator, where the three shortest paths are disjoint except for their mutual endpoints. However, on a polyhedral terrain these paths might also not be disjoint. This can, for example, happen when choosing a site \(s\) in the area bounded by two shortest paths as a new corner, see Figure 15. Their subsequent definition of a good path, which is simply a shortest path contained within \(\Delta\), would then imply that the entire interior of the box, including the blue region in Figure 15(a), is contained in \(Z_{u}\). The question is then how we move \(z\) from \(w\) to \(v\). If we were to move \(z\) along \(\pi(w,v)\) at the start, the site \(w\) would immediately enter the interior of the 3-separator \(u,v,z\), see Figure 15(b). The number of sites in the interior has thus actually _increased_ instead of decreased. In a polyhedral terrain, we can define a good path similar to Definition 19, but then consider the path \(\pi(p,u)\) in the polyhedral terrain \(\mathcal{T}\cap\Delta\). Using our renewed definition, the proof of Abam et al. [3] also holds in the case of coinciding edges on a terrain.
## 5 Spanners in a polygonal domain
We consider a set of point sites \(S\) that lie in a polygonal domain \(P\) with \(m\) vertices and \(h\) holes. Let \(\partial P\) denote the boundary of the outer polygon. In Section 5.1, we first discuss how to obtain a simple geodesic spanner for a polygonal domain, using the separator of Section 4. As before, the complexity of this spanner can be high. In Section 5.2.1, we discuss an adaptation to the spanner construction that achieves lower-complexity spanners, where the edges in the spanner are no longer shortest paths.
Figure 15: A polygonal domain where the shortest paths in the separator \(\Delta\) with corners \(u,v,w\) are not disjoint.
### A simple geodesic spanner
A straightforward approach to construct a geodesic spanner for a polygonal domain would be to use the same construction we used for a simple polygon in Section 3.1. As discussed in Section 4, we cannot split the polygon into two subpolygons by a line segment \(\lambda\). However, to apply our \(1\)-dimensional spanner, we require only that the splitting curve \(\lambda\) is a shortest path in \(P\). Instead of a line segment, we use the balanced sp-separator of Section 4 to split the polygonal domain. There are three types of such a separator: a shortest path between two points on \(\partial P\) (\(1\)-separator), two shortest paths starting at the same point and ending at the boundary of a single hole (\(2\)-separator), and three shortest paths \(\pi(u,v)\), \(\pi(v,w)\), and \(\pi(u,w)\) with \(u,v,w\in P\) (\(3\)-separator). See Figure 10 for an illustration and Definition 17 for a formal definition. Let \(P_{\ell}\) be the polygonal domain to the left of \(\lambda\), when \(\lambda\) is a \(1\)-separator, and interior to \(\lambda\), when \(\lambda\) is a \(2\)- or \(3\)-separator. Symmetrically, \(P_{r}\) is the domain to the right of \(\lambda\) for a \(1\)-separator and exterior to \(\lambda\) for a \(2\)- or \(3\)-separator. As before, let \(S_{\ell}\) be the sites in the closed region \(P_{\ell}\), and \(S_{r}:=S\setminus S_{\ell}\). To compute a spanner on the set \(S\), we project the sites to each of the shortest paths defining the separator, and consecutively run the \(1\)-dimensional spanner algorithm once on each shortest path. Note that these projections are no longer unique, as there might be two topologically distinct shortest paths to \(\lambda\), but we can simply select an arbitrary one to obtain the desired spanning ratio and spanner complexity. We then add the edge \((p,q)\) to our spanner \(\mathcal{G}\) for each edge \((p_{\lambda},q_{\lambda})\) in the \(1\)-dimensional spanners. Finally, we recursively compute spanners for the sites \(S_{\ell}\) in \(P_{\ell}\) and \(S_{r}\) in \(P_{r}\), just like in the simple polygon case.
Whenever the sp-separator intersects a single hole at two or more different intervals, part of \(P_{\ell}\) or \(P_{r}\) becomes disconnected. When this happens, we simply consider each connected polygonal domain as a separate subproblem, and recurse on all of them. Let \(n_{i}\) and \(m_{i}\) denote the number of sites and worst-case complexity of a shortest path in subproblem \(i\). This means that only reflex vertices are counted for \(m_{i}\), which are the only relevant vertices for the spanner complexity. The sites are partitioned over the subproblems, so we have \(\sum_{i}n_{i}=n\). The only new vertices (not of \(P\)) that can be included in the subproblems are the at most three corners of the separator. Each vertex can be a reflex vertex in only one of the subproblems, thus \(\sum_{i}m_{i}\leq m+3\). In the recursion, an increase in the number of subproblems means that we might have more than \(c\cdot 2^{i}\) vertices not of \(P\) at level \(i\), but the depth of the recursion tree is then proportionally decreased. All further proofs on complexity of our spanners are written in terms of \(P_{\ell}\) and \(P_{r}\), but translate to the case of multiple subproblems.
Next, we analyze the spanner construction using any \(1\)-dimensional additively weighted \(t\)-spanner of size \(O(n\log n)\).
The graph \(\mathcal{G}\) is a geodesic \(3t\)-spanner of size \(O(n\log^{2}n)\).
Proof.: As the \(1\)-dimensional spanner has size \(O(n\log n)\), there are still \(O(n\log^{2}n)\) edges in \(\mathcal{G}\). What remains is to argue that \(\mathcal{G}\) is a \(3t\)-spanner. Let \(p,q\) be two sites in \(S\). In contrast to the simple polygon case, a shortest path between two sites \(p,q\) in \(S_{\ell}\) (resp. \(S_{r}\)) is not necessarily contained in \(P_{\ell}\) (resp. \(P_{r}\)). Therefore, we distinguish two different cases: either \(\pi(p,q)\) is fully contained within \(P_{r}\) or \(P_{\ell}\), or there is a point \(r\in\pi(p,q)\cap\lambda\) for some shortest path \(\lambda\) of the separator. In the first case, there exists a path in \(\mathcal{G}\) of length at most \(3td(p,q)\) by induction. In the second case, we have
\[d_{\mathcal{G}}(p,q)\leq d_{\mathcal{G}_{\lambda}}(p,q)\leq td_{w}(p_{\lambda},q_{\lambda})=t(d(p,p_{\lambda})+d(p_{\lambda},q_{\lambda})+d(q,q_{\lambda})). \tag{4}\]
Additionally, we use that \(d(p_{\lambda},q_{\lambda})\leq d(p_{\lambda},r)+d(r,q_{\lambda})\) and \(d(p_{\lambda},r)\leq d(p_{\lambda},p)+d(p,r)\)
because of the triangle inequality, so
\[d(p_{\lambda},q_{\lambda})\leq d(p_{\lambda},r)+d(q_{\lambda},r)\leq d(p_{\lambda},p)+d(p,r)+d(q_{\lambda},q)+d(q,r)\leq 2d(p,q).\]
It follows that \(d_{\mathcal{G}}(p,q)\leq 3td(p,q)\).
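In full, since \(p_{\lambda}\) is the closest point on \(\lambda\) to \(p\) (so \(d(p,p_{\lambda})\leq d(p,r)\), and symmetrically \(d(q,q_{\lambda})\leq d(q,r)\)) and \(r\) lies on \(\pi(p,q)\) (so \(d(p,r)+d(q,r)=d(p,q)\)), combining these bounds with (4) gives

\[
d_{\mathcal{G}}(p,q)\;\leq\;t\bigl(d(p,p_{\lambda})+d(p_{\lambda},q_{\lambda})+d(q,q_{\lambda})\bigr)\;\leq\;t\bigl(d(p,r)+d(q,r)\bigr)+2t\,d(p,q)\;=\;3t\,d(p,q).
\]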
Applying the simple \(1\)-dimensional spanner of Section 2 results in a \(6\)-spanner. However, using the refinement of Lemma 3, we again obtain a \((2+\varepsilon)\)-spanner.
### Low complexity spanners in a polygonal domain
To obtain spanners of low complexity in a simple polygon, we formed groups of sites, such that shortest paths within a group were disjoint from shortest paths of other groups. We proposed two different ways of forming these groups, based on the shortest path tree of the central site \(c\), and based on the shortest path tree of the separator \(\lambda\). Neither of these approaches directly leads to a low complexity spanner in a polygonal domain, as we explain next.
Both methods can still be applied in a polygonal domain, as the shortest path tree of both a site and a shortest path is still well-defined. However, this does not give us the property that we want for our groups. In particular, the second property discussed in Lemma 14 (each vertex of \(P\) is used by shortest paths within only \(O(1)\) groups) does not hold. This is because the shortest path between two vertices \(u,v\) is not necessarily homotopic to the path \(\pi(u,c)\cup\pi(c,v)\). Thus paths within a group can go around a certain hole, while their shortest paths to \(c\) (or \(\lambda\)) do not. See Figure 16 for an example. The construction can easily be expanded to ensure there are more sites in each group, by simply adding as many sites very close to the existing ones, or to more than three groups, by adding an additional hole above the construction with two corresponding sites. Consequently, the property that each vertex of \(P\) is used only by shortest paths within \(O(1)\) groups does not hold.
So far, we assumed that every edge \((p,q)\in E\) is a shortest path between \(p\) and \(q\). To obtain a spanner of low complexity, we can also allow an edge between \(p\) and \(q\) to be any path between the two sites. Note that our lower bounds still hold in this case. In the lower bound for a \((3-\varepsilon)\)-spanner (Figure 4), every path between \(p\in S_{\ell}\) and \(q\in S_{r}\) has complexity \(\Theta(m)\), and in the general lower bound (Figure 18) we can easily adapt the top side of the polygon such that any path between two sites \(p,q\) has the same complexity as \(\pi(p,q)\).
Figure 16: Assigning the sites to groups based on the shortest path tree of \(c\), as described in Section 3.2.1, forms these colored groups. The shortest path from each site to \(c\) is shown dashed. Each shortest path between two sites of a group contains both vertices \(v\) and \(w\).
#### A \(12\)-spanner of complexity \(O(m\sqrt{n}+n\log^{2}n)\)
To obtain a low complexity spanner in a polygonal domain, we adapt our techniques for the simple-polygon \(4\sqrt{2}\)-spanner in such a way that we avoid the problems we just sketched. The main difference with the simple-polygon approach is that for an edge \((p_{\lambda},q_{\lambda})\) in the \(1\)-dimensional spanner, the edge \((p,q)\) that we add to \(\mathcal{G}\) is no longer \(\pi(p,q)\). Instead, let \((p,q)\) be the shortest path from \(p\) to \(q\) via \(p_{\lambda}\) and \(q_{\lambda}\), excluding any overlap of the path. We denote this path by \(\pi_{\lambda}(p,q)\). This path is not unique, for example when \(\pi(p,p_{\lambda})\) is not unique, but again choosing any such path will do. Formally, \(\pi_{\lambda}(p,q)\) is defined as follows.
The path \(\pi_{\lambda}(p,q)\) is given by:
* \(\pi(p,p_{\lambda})\cup\pi(p_{\lambda},q_{\lambda})\cup\pi(q_{\lambda},q)\), where \(\pi(p_{\lambda},q_{\lambda})\subseteq\lambda\), if \(\pi(p,p_{\lambda})\) and \(\pi(q,q_{\lambda})\) are disjoint,
* \(\pi(p,r)\cup\pi(r,q)\), where \(r\) denotes the closest point to \(p\) of \(\pi(p,p_{\lambda})\cap\pi(q,q_{\lambda})\), otherwise.
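For intuition, here is a small, purely illustrative Python sketch of this definition. It assumes, as a simplification, that the relevant geodesics are available as lists of \((x,y)\) vertex tuples and that any overlap between \(\pi(p,p_{\lambda})\) and \(\pi(q,q_{\lambda})\) occurs at shared vertices; neither assumption comes from the paper.

```python
def path_via_separator(path_p, path_q, sep_part):
    """Sketch of the definition above on polyline geodesics.

    Assumptions of this sketch (not from the paper): paths are lists of (x, y)
    vertex tuples, path_p = pi(p, p_lambda) listed from p to p_lambda,
    path_q = pi(q, q_lambda) listed from q to q_lambda, and sep_part =
    pi(p_lambda, q_lambda) along lambda.
    """
    on_q = set(path_q)
    shared = [i for i, v in enumerate(path_p) if v in on_q]
    if not shared:
        # first case: p -> p_lambda -> along lambda -> q_lambda -> q
        return path_p + sep_part[1:] + list(reversed(path_q))[1:]
    # second case: r is the intersection vertex closest to p along pi(p, p_lambda)
    i_r = shared[0]
    r = path_p[i_r]
    j_r = path_q.index(r)
    return path_p[:i_r + 1] + list(reversed(path_q[:j_r]))
```

In the actual construction, \(r\) is instead obtained as the lowest common ancestor of \(p\) and \(q\) in the shortest path tree \(\mathit{SPT}_{\lambda}\), as discussed in the construction algorithm below.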
One of the properties that we require of the groups, see Lemma 3.2, has changed, namely that each vertex of \(P\) is only used by shortest paths within \(O(1)\) groups. Instead of the shortest paths between sites in a group, we consider the paths \(\pi_{\lambda}(p,q)\) of Definition 3.2. The following lemma shows that we can obtain a spanner with similar complexity as in a simple polygon when groups adhere to this adjusted property.
If the groups adhere to the following properties, then \(\mathcal{G}\) has complexity \(O(m\sqrt{n}+n\log^{2}n)\):
1. each group contains \(\Theta(\sqrt{n})\) sites, and
2. each vertex of \(P\) is used by paths \(\pi_{\lambda}\) within \(O(1)\) groups.
Proof.: Note that the complexity of any path \(\pi_{\lambda}(p,q)\) is \(O(m)\), as it can use a vertex of \(P\) at most once. Thus the proof of Lemma 3.2 directly implies that the complexity of the edges in one level of the \(1\)-dimensional spanner is \(O(m\sqrt{n}+n)\).
In the \(1\)-dimensional recursion, splitting the sites by \(O\) no longer corresponds to a horizontal split in the polygon. However, the paths \(\pi(p,p_{\lambda})\), \(p\in S_{\ell}\), are still disjoint from the paths \(\pi(q,q_{\lambda})\), \(q\in S_{r}\). For the two subproblems generated by the split by \(O\) it thus still holds that \(m_{1}+m_{2}=m\), where \(m_{i}\) denotes the maximum complexity of a path in subproblem \(i\). Lemma 3.2 states that this recursion solves to \(O(m\sqrt{n}+n\log n)\).
In the recursion where the domain is partitioned into two subpolygons \(P_{\ell}\) and \(P_{r}\), we now add at most three new vertices to the polygonal domain, namely the three corners of the sp-separator. Each vertex of \(P\) can only be a reflex vertex in either \(P_{\ell}\) or \(P_{r}\), so \(m_{1}+m_{2}\leq m+3\). Lemma 3.2 implies that this recursion for the complexity solves to \(O(m\sqrt{n}+n\log^{2}n)\).
As in Lemma 3.2, we form the groups based on the traversal of the shortest path tree \(\mathit{SPT}_{\lambda}\). We again include all sites in \(S\) in the shortest path tree. Whenever a node \(v\) of \(\lambda\) has multiple children, we let all nodes that correspond to vertices/sites that lie to the _left_ of \(\lambda\) come before nodes that correspond to vertices/sites that lie to the right of \(\lambda\) in the in-order traversal. Within these sets, the vertices/sites are ordered from bottom to top, as seen from \(v\). See Figure 17 for an example. The subtree rooted at the start and end point of \(\lambda\) is simply a part of the shortest path tree of the start/end point. The first \(\lceil\sqrt{n}\rceil\) sites in the in-order traversal are in \(S_{1}\), the second \(\lceil\sqrt{n}\rceil\) in \(S_{2}\), etc.
Clearly, these groups adhere to property 1 of Lemma 3.2. To show these groups adhere to property 2, we again consider for each group \(S_{i}\) the minimal subtree \(\mathcal{T}_{i}\) of \(\mathit{SPT}_{\lambda}\) that contains all \(p\in S_{i}\). Whenever there is more than one vertex of \(\lambda\) in \(\mathcal{T}_{i}\), we choose the leftmost of these vertices as the root of \(\mathcal{T}_{i}\).
**Lemma 24**.: _An edge \(\pi_{\lambda}(p,q)\) with \(p_{\lambda},q_{\lambda}\in S_{i}\) bends only at vertices in \(\mathcal{T}_{i}\)._
Proof.: The path \(\pi_{\lambda}(p,q)\) is a subpath of \(\pi(p,p_{\lambda})\cup\pi(p_{\lambda},q_{\lambda})\cup\pi(q,q_{\lambda})\). In particular, it is the path from \(p\) to \(q\) in \(\mathcal{T}_{i}\) via their lowest common ancestor. This is \(p_{\lambda}\) or \(q_{\lambda}\) when \(\pi(p,p_{\lambda})\cap\pi(q,q_{\lambda})=\emptyset\) and the vertex \(r\), as in Definition 22, otherwise.
Any vertex in SPT\({}_{\lambda}\) occurs in at most two trees \(\mathcal{T}_{i}\) and \(\mathcal{T}_{j}\) as a non-root node.
Proof.: Follows directly from the proof of Lemma 9.
Except for the root nodes, property 2 thus holds. As each \(\mathcal{T}_{i}\) has only one root node by definition, the number of groups using a single root node \(v\) in their paths may be large, but the sum of the number of groups that use a node over all root nodes is still \(O(\sqrt{n})\). From this and Lemma 23 we conclude that \(\mathcal{G}\) has complexity \(O(m\sqrt{n}+n\log^{2}n)\).
The graph \(\mathcal{G}\) is a geodesic \(12\)-spanner of size \(O(n\log^{2}n)\).
Proof.: The number of edges is exactly the same as in the \(4\sqrt{2}\)-spanner in a simple polygon. The way the groups are formed in the construction of the \(1\)-dimensional spanner \(\mathcal{G}_{\lambda}\) does not influence its spanning ratio, thus \(\mathcal{G}_{\lambda}\) is a \(4\)-spanner (see Lemma 10). Note that even for our redefined edges in \(\mathcal{G}\) it holds that \(d_{\mathcal{G}}(p,q)\leq d_{\mathcal{G}_{\lambda}}(p_{\lambda},q_{\lambda})\). Lemma 21 then directly implies that \(\mathcal{G}\) is a \(12\)-spanner.
#### A \((2k+\varepsilon)\)-spanner of complexity \(O(mn^{1/k}+n\log^{2}n)\)
The generalization of the construction, discussed in Section 3.2.2, where \(\Theta(n^{1/k})\) groups are recursively partitioned into smaller groups, is also applicable in a polygonal domain. As our groups adhere to the required properties, the complexity of this spanner remains \(O(mn^{1/k}+n\log^{2}n)\), as in the simple polygon, but the spanning ratio increases to \(6k\). We can also apply the refinement discussed in Lemma 3 in the construction of the (at most three) \(1\)-dimensional spanners. This improves the spanning ratio to \(2k+\varepsilon\) while increasing the complexity by only a constant factor depending only on \(\varepsilon\) and \(k\).
Figure 17: The shortest path tree SPT\({}_{\lambda}\) and the corresponding group assignment.
**Lemma 27**.: _Let \(S\) be a set of \(n\) point sites in a polygonal domain \(P\) with \(m\) vertices, and let \(k\geq 1\) be any integer constant. For any constant \(\varepsilon>0\), there exists a geodesic \((2k+\varepsilon)\)-spanner of size \(O(c_{\varepsilon,k}n\log^{2}n)\) and complexity \(O(c_{\varepsilon,k}(mn^{1/k}+n\log^{2}n))\), where \(c_{\varepsilon,k}\) is a constant depending only on \(\varepsilon\) and \(k\)._
### Construction algorithm
In this section we discuss an algorithm to compute the geodesic spanners of Section 5.2. The following gives an overview of the algorithm that computes a \(6k\)-spanner of complexity \(O(mn^{1/k}+n\log^{2}n)\) in \(O(n^{2}\log m+nm\log m)\) time.
1. Find an sp-separator such that \(P\) is partitioned into two polygons \(P_{\ell}\) and \(P_{r}\), and \(S_{\ell}\) contains at least \(2n/9\) and at most \(2n/3\) sites using the algorithm of Theorem 18.
2. For each shortest path \(\lambda\) of the separator:
    1. For each \(p\in S\) find the weighted point \(p_{\lambda}\) on \(\lambda\) and add this point to \(S_{\lambda}\) using the algorithm of Lemma 28.
    2. Compute an additively weighted \(1\)-dimensional spanner \(\mathcal{G}_{\lambda}\) on the set \(S_{\lambda}\).
    3. For every edge \((p_{\lambda},q_{\lambda})\in E_{\lambda}\) add the edge \((p,q)=\pi_{\lambda}(p,q)\) to \(\mathcal{G}\).
3. Recursively compute spanners for \(S_{\ell}\) in \(P_{\ell}\) and \(S_{r}\) in \(P_{r}\).
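The recursive structure of steps 1–3 can be summarised in the following Python sketch. The three geometric subroutines (computing the balanced sp-separator, projecting a site onto a shortest path, and building the additively weighted 1-dimensional spanner) are passed in as assumed, unimplemented callables; the sketch only illustrates the control flow and is not an implementation of the algorithm.

```python
def build_spanner(sites, domain, find_sp_separator, project_to_path, weighted_1d_spanner):
    """Recursive skeleton of steps 1-3 above.

    The three callables are assumed, hypothetical interfaces for the geometric
    subroutines analysed in the text; they are not implemented here.
      find_sp_separator(sites, domain) -> (paths, S_l, P_l, S_r, P_r)
      project_to_path(p, lam)          -> (p_lam, d(p, p_lam))   # cf. Lemma 28
      weighted_1d_spanner(weighted)    -> iterable of site pairs (p, q)
    """
    edges = []
    if len(sites) <= 1:
        return edges
    paths, S_l, P_l, S_r, P_r = find_sp_separator(sites, domain)   # step 1
    for lam in paths:                                              # step 2
        weighted = {p: project_to_path(p, lam) for p in sites}     # step 2.1
        for p, q in weighted_1d_spanner(weighted):                 # step 2.2
            edges.append((p, q))        # step 2.3: realised as pi_lambda(p, q)
    edges += build_spanner(S_l, P_l, find_sp_separator, project_to_path, weighted_1d_spanner)
    edges += build_spanner(S_r, P_r, find_sp_separator, project_to_path, weighted_1d_spanner)  # step 3
    return edges
```

Because the separator is balanced (\(2n/9\leq|S_{\ell}|\leq 2n/3\)), the recursion depth is \(O(\log n)\), which is where the extra logarithmic factors in the size bound come from.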
The algorithm starts by finding a balanced sp-separator. According to Theorem 18 this takes \(O(n^{2}\log m+nm\log m)\) time. We then continue by building a \(1\)-dimensional additively weighted spanner on each of the shortest paths defining \(\lambda\) as follows.
**Lemma 28**.: _We can compute the closest point \(p_{\lambda}\) on \(\lambda\) and \(d(p,p_{\lambda})\) for all sites \(p\in S\), and the shortest path tree SPT\({}_{\lambda}\), in \(O((m+n)\log m)\) time._
Proof.: Hershberger and Suri [27] show how to build the shortest path map of a point site in a polygonal domain in \(O(m\log m)\) time. They also note that this extends to non-point sources, such as line segments, and to \(O(m)\) sources, without increasing the running time. We can thus build the shortest path map of \(\lambda\) in \(P\) in \(O(m\log m)\) time, using that \(\lambda\) has complexity \(O(m)\). After essentially triangulating each region of the shortest path map using the vertices of \(P\), we obtain \(\mathit{SPT}_{\lambda}\) in the same time bound. After building a point location data structure for this augmented shortest path map, we can query it for each site \(p\in S\) to find \(p_{\lambda}\) and \(d(p,p_{\lambda})\) in \(O(\log m)\) time.
**Lemma 29**.: _Given SPT\({}_{\lambda}\), we can construct a \(4\)-spanner \(\mathcal{G}_{\lambda}\) on the additively weighted points \(S_{\lambda}\), where the groups adhere to the properties of Lemma 23, in \(O(n\log n+m)\) time._
Proof.: As in Lemma 14, we can reuse \(\mathit{SPT}_{\lambda}\) to form the groups based on the ordering produced by an in-order traversal of the tree. See Section 5.2.1 for an exact description of this ordering. The ordering allows us to form the groups for a level of the \(1\)-dimensional spanner in \(O(n)\) time, thus the total running time is \(O(n\log n)\).
For the general \(2k\)-spanner for additively weighted sites, using the ordering of the sites and Lemma 15 to build the tree of groups bottom up implies that we can construct this spanner in \(O(n\log n+m)\) time as well.
After computing the additively weighted spanner \(\mathcal{G}_{\lambda}\), we add edge \((p,q)=\pi_{\lambda}(p,q)\) to \(\mathcal{G}\) for every edge \((p_{\lambda},q_{\lambda})\in\mathcal{G}_{\lambda}\). We can either compute and store these edges explicitly, which would take time and space equal to the complexity of all added edges, or we can store them implicitly by only storing the points \(p_{\lambda}\) and \(q_{\lambda}\), or the point \(r\) from Definition 22 when the
paths are not disjoint. The point \(r\) is the lowest common ancestor of the nodes \(p\) and \(q\) in \(\mathit{SPT}_{\lambda}\). This can be computed in \(O(1)\) time after \(O((n+m)\log(n+m))\) preprocessing time [5].
As there are at most three shortest paths that define the separator, step 2 takes \(O((n+m)\log(n+m))\) time in total. This means that the time used to construct the balanced sp-separator is the dominant term. Because this step even takes quadratic time, it also dominates in the recursion. After applying the refinement of Lemma 3, we obtain the following theorem.
Let \(S\) be a set of \(n\) point sites in a polygonal domain \(P\) with \(m\) vertices, and let \(k\geq 1\) be any integer constant. For any constant \(\varepsilon>0\), we can build a geodesic \((2k+\varepsilon)\)-spanner of size \(O(c_{\varepsilon,k}n\log^{2}n)\) and complexity \(O(c_{\varepsilon,k}(mn^{1/k}+n\log^{2}n))\) in \(O(n^{2}\log m+nm\log m+K)\) time, where \(c_{\varepsilon,k}\) is a constant depending only on \(\varepsilon\) and \(k\), and \(K\) is the output complexity.
### A \((2k+\varepsilon)\)-spanner with a dependence on \(\sqrt{h}\)
Let \(h\) denote the number of holes in the polygonal domain \(P\). In this section, we describe an alternative approach to Section 5.2 to compute a \((2k+\varepsilon)\)-spanner of low complexity. Here, we do not use the balanced shortest-path separator of Section 4, which greatly reduces the running time, at the cost of making both the size and complexity of the spanner dependent on \(\sqrt{h}\).
Abam et al. [1] describe how to decompose a polygonal domain with \(h\) holes into \(O(h)\) simple polygons using \(O(h)\) vertical segments called _splitting segments_. Furthermore, each simple polygon has at most three splitting segments on its boundary. Instead of partitioning the polygonal domain by a shortest path, they then apply a graph separator theorem to partition the simple polygons into three sets \(A,B,C\) such that \(|C|=O(\sqrt{h})\), any path from a site \(p\in A\) to \(q\in B\) intersects \(C\), and \(|S_{A}|,|S_{B}|\leq 2|S|/3\), where \(S_{A}\) (resp. \(S_{B}\)) denotes the set of sites that lie in a polygon in \(A\) (resp. \(B\)). We can compute a spanner for the entire polygonal domain by building our 1-dimensional spanner on each of the \(O(\sqrt{h})\) splitting segments, building the simple polygon spanner for each of the \(O(\sqrt{h})\) simple polygons in \(C\) and recursively computing spanners for \(A\) and \(B\).
Next, we analyze the spanning ratio, size, and complexity of \(\mathcal{G}\) when using the simple-polygon spanner of Theorem 3.1 and the corresponding 1-dimensional spanner for the spanners on the splitting segments. The spanning ratio of the spanner remains \(2k+\varepsilon\), because a shortest path between two sites \(p,q\) either crosses a splitting segment, whose spanner then bounds \(d_{\mathcal{G}}(p,q)\), or stays within a simple polygon, in which case the simple polygon spanner bounds \(d_{\mathcal{G}}(p,q)\).
In a single level of the recursion, the spanners on the splitting segments contribute \(O(c_{\varepsilon,k}\sqrt{h}n\log n)\) edges, thus the spanners on all of the splitting segments contribute \(O(c_{\varepsilon,k}\sqrt{h}n\log^{2}n)\) edges in total. For each simple polygon, we build the spanner of Theorem 3.1, which has \(O(c_{\varepsilon,k}n_{i}\log^{2}n_{i})\) edges, where \(n_{i}\) denotes the number of sites in simple polygon \(i\). Because \(\sum_{i}n_{i}=n\), these contribute \(O(c_{\varepsilon,k}n\log^{2}n)\) edges in total. The number of edges is thus \(O(c_{\varepsilon,k}\sqrt{h}n\log^{2}n)\).
The spanner complexity over all 1-dimensional spanners built on the splitting segments is \(O(c_{\varepsilon,k}\sqrt{h}(mn^{1/k}+n\log^{2}n))\). Similarly as for the number of edges, the complexity over all simple-polygon spanners is only \(O(c_{\varepsilon,k}(mn^{1/k}+n\log^{2}n))\).
The number of edges and the complexity of the spanner have thus increased by a factor \(\sqrt{h}\) with respect to the spanner of Theorem 3.2, but building the spanner is significantly easier, as we do not need a balanced sp-separator here. After computing the vertical decomposition
of \(P\) in \(O(m\log m)\) time, we select the segments that have at least one of their endpoints on the rightmost or leftmost vertex of a hole as splitting segments. This partitions the polygonal domain into simple polygons, but does not yet guarantee that each such simple polygon has at most three splitting segments on its boundary. We subdivide these polygons further by recursively adding a splitting segment that partitions the polygon into two polygons with roughly half the number of splitting segments on its boundary. By computing the number of splitting segments left and right of each vertical segment of the vertical decomposition, we can find such a splitting segment in \(O(m)\) time. The total time to subdivide a polygon is thus \(O(m\log m)\). As all polygons are disjoint, except for the splitting segments, we can subdivide _all_ simple polygons in \(O(m\log m)\) time as well. Computing the corresponding planar graph, and a planar separator for this graph, can be done in linear time [31]. Using Lemma 29, we find that computing all \(1\)-dimensional spanners on the splitting segments takes \(O(c_{\varepsilon,k}\sqrt{h}n\log^{2}n)\) time. Finally, computing spanners for the simple polygons takes \(O(c_{\varepsilon,k}n\log^{2}n+m\log m)\) time, according to Theorem 16. This results in a total running time of \(O(c_{\varepsilon,k}\sqrt{h}n\log^{2}n+m\log m)\).
Let \(S\) be a set of \(n\) point sites in a polygonal domain \(P\) with \(m\) vertices and \(h\) holes, and let \(k\geq 1\) be any integer constant. For any constant \(\varepsilon>0\), we can build a geodesic \((2k+\varepsilon)\)-spanner of size \(O(c_{\varepsilon,k}\sqrt{h}n\log^{2}n)\) and complexity \(O(c_{\varepsilon,k}\sqrt{h}(mn^{1/k}+n\log^{2}n))\) in \(O(c_{\varepsilon,k}\sqrt{h}n\log^{2}n+m\log m+K)\) time, where \(c_{\varepsilon,k}\) is a constant depending only on \(\varepsilon\) and \(k\), and \(K\) is the output complexity.
## 6 Lower bounds for complexity
In this section, we consider lower bounds on the complexity of spanners. We first describe a simple \(\Omega(nm)\) lower bound construction for a \((3-\varepsilon)\)-spanner, and then prove a (slightly worse) \(\Omega(mn^{1/(t-1)})\) lower bound construction for a \((t-\varepsilon)\)-spanner.
### Lower bound for \((3-\varepsilon)\)-spanners
Consider the construction given in Figure 4. We assume that \(m=\Omega(n)\). We split the sites into two sets \(S_{\ell}\) and \(S_{r}\) equally. The sites lie in long 'spikes' of length \(\ell\), either on the left (\(S_{\ell}\)) or right (\(S_{r}\)) of a central passage of complexity \(\Theta(m)\). We show that this construction gives a complexity of \(\Omega(mn)\) for any \((3-\varepsilon)\)-spanner.
When \(h\) gets close to \(0\), the distance between any two sites \(p,q\) approaches \(2\ell\). To get a \((3-\varepsilon)\)-spanner, we can thus have at most \(1\) intermediate site on the path from \(p\) to \(q\). We assume that all possible, constant complexity, edges between vertices on the same side of the construction are present in the spanner. To make sure sites on the left also have (short) paths to sites on the right, we have to add some additional edges that go through the central passage of the polygon, which forces shortest paths to have complexity \(\Theta(m)\). We will show that we need \(\Theta(n)\) of these edges, each of complexity \(\Theta(m)\), to achieve a \((3-\varepsilon)\)-spanner, thus proving the \(\Omega(nm)\) lower bound.
Let \(q\in S_{r}\). For each \(p\in S_{\ell}\), we need a path with at most \(1\) intermediate site to \(q\). There are two ways to achieve this: we can go through an intermediate site on the left, or on the right. In the first case, we add an edge from a site \(p^{\prime}\in S_{\ell}\) to \(q\). In the second case, we need to add an edge from each \(p\in S_{\ell}\) to any site \(q^{\prime}\in S_{r}\). In the first case we thus add only one edge, while in the second case we add \(\Theta(n)\) edges (that go through the central passage). Let \(k\geq 1\) be the number of sites in \(S_{r}\) that have a direct edge to some site in \(S_{\ell}\). If \(k<|S_{r}|\), then there is some site \(q\in S_{r}\) for which we are in case two, and we thus have \(\Theta(n)\) edges of
complexity \(\Theta(m)\). If \(k=|S_{r}|\), then there is a direct edge to each of the \(\Theta(n)\) sites in \(S_{r}\), and we therefore also end up with a complexity of \(\Omega(nm)\).
### General lower bounds
For any constant \(\varepsilon\in(0,1)\) and integer constant \(t\geq 2\), there exists a set of \(n\) point sites in a simple polygon \(P\) with \(m=\Omega(n)\) vertices for which any geodesic \((t-\varepsilon)\)-spanner has complexity \(\Omega(mn^{1/(t-1)})\).
Proof.: Consider the construction of the polygon \(P\) shown in Figure 18. The starting points of the spikes lie on a convex curve, such that the shortest path between any two sites turns at all spikes that lie in between. Let \(p_{1},...,p_{n}\) be the sites from left to right. Thus, the complexity of the path from the \(i\)-th site \(p_{i}\) to the \(j\)-th site \(p_{j}\) is equal to \(|i-j|\). When \(h\) is close to \(0\), the distance between any two sites approaches \(2\ell\). To achieve a spanning ratio of \((t-\varepsilon)\), the path in the spanner from \(p_{i}\) to \(p_{j}\) can visit at most \(t-2\) other vertices. In other words, we can go from \(p_{i}\) to \(p_{j}\) in at most \(t-1\) hops. This is also called the hop-diameter of the spanner.
As the spanning ratio is determined only by the number of hops on the path, we can model the spanner in a much simpler metric space \(\vartheta_{n}\). This is a \(1\)-dimensional Euclidean space with \(n\) points \(v_{1},...,v_{n}\) that lie on the \(x\)-axis at coordinates \(1,2,...,n\). The edge \((v_{i},v_{j})\) thus has length (or weight) \(|i-j|\). Any spanning subgraph of \(\vartheta_{n}\) of hop-diameter \(h\) and total weight \(w\) (the weight of a graph is the sum of the weights of its edges) is in one-to-one correspondence to an \((h+1-\varepsilon)\)-spanner of \(P\) of complexity \(\Theta(w)\). Dinitz, Elkin, and Solomon [19] prove the following on the relation between the _hop-radius_ and weight of any spanning subgraph of \(\vartheta_{n}\). The hop-radius \(h(G,r)\) of a graph \(G\) with respect to a root \(r\) is defined as the maximum number of hops needed to reach any vertex in the graph from the root. The hop-radius \(h(G)\) of \(G\) is then defined as \(\min_{r\in V}h(G,r)\). Note that the hop-diameter is an upper bound on \(h(G)\).
[Dinitz et al. [19]] For any sufficiently large integer \(n\) and positive integer \(h<\log n\), any spanning subgraph of \(\vartheta_{n}\) with hop-radius at most \(h\) has weight at least \(\Omega(h\cdot n^{1+1/h})\).
The lemma implies that any \((t-\varepsilon)\)-spanner of \(P\), which has hop-diameter \(t-1\), has complexity \(\Omega((t-1)\cdot n^{1+1/(t-1)})=\Omega(n^{1+1/(t-1)})\), for constant \(t\).
To achieve a lower bound for \(m>n\), we slightly adapt our polygon such that the top of each spike has complexity \(\Theta(m/n)\). This implies that the path between \(p_{i}\) and \(p_{j}\) has complexity \(m/n\cdot|i-j|\) instead. For this adapted polygon \(P^{\prime}\), any spanning subgraph of \(\vartheta_{n}\) of hop-diameter \(h\) and total weight \(w\) is in one-to-one correspondence to an \((h+1-\varepsilon)\)-spanner of \(P^{\prime}\) of complexity \(\Theta(m/n\cdot w)\). It follows from Lemma 3 that any \((t-\varepsilon)\)-spanner of \(P^{\prime}\) has complexity \(\Omega(m/n\cdot n^{1+1/(t-1)})=\Omega(mn^{1/(t-1)})\).
For any integer constant \(t\geq 2\), there exists a set of \(n\) point sites in a simple polygon \(P\) with \(m=\Omega(n)\) vertices for which any geodesic \(t\)-spanner has complexity \(\Omega(mn^{1/t})\).
|
2301.05659 | From stage to page: language independent bootstrap measures of
distinctiveness in fictional speech | Stylometry is mostly applied to authorial style. Recently, researchers have
begun investigating the style of characters, finding that the variation remains
within authorial bounds. We address the stylistic distinctiveness of characters
in drama. Our primary contribution is methodological; we introduce and evaluate
two non-parametric methods to produce a summary statistic for character
distinctiveness that can be usefully applied and compared across languages and
times. Our first method is based on bootstrap distances between 3-gram
probability distributions, the second (reminiscent of 'unmasking' techniques)
on word keyness curves. Both methods are validated and explored by applying
them to a reasonably large corpus (a subset of DraCor): we analyse 3301
characters drawn from 2324 works, covering five centuries and four languages
(French, German, Russian, and the works of Shakespeare). Both methods appear
useful; the 3-gram method is statistically more powerful but the word keyness
method offers rich interpretability. Both methods are able to capture
phonological differences such as accent or dialect, as well as broad
differences in topic and lexical richness. Based on exploratory analysis, we
find that smaller characters tend to be more distinctive, and that women are
cross-linguistically more distinctive than men, with this latter finding
carefully interrogated using multiple regression. This greater distinctiveness
stems from a historical tendency for female characters to be restricted to an
'internal narrative domain' covering mainly direct discourse and
family/romantic themes. It is hoped that direct, comparable statistical
measures will form a basis for more sophisticated future studies, and advances
in theory. | Artjoms Šeļa, Ben Nagy, Joanna Byszuk, Laura Hernández-Lorenzo, Botond Szemes, Maciej Eder | 2023-01-13T16:58:43Z | http://arxiv.org/abs/2301.05659v1 | # From stage to page: language independent bootstrap measures of distinctiveness in fictional speech
###### Abstract
Stylometry is mostly applied to _authorial_ style. More recently, researchers have begun investigating the style of _characters_, finding that, although there is detectable stylistic variation, the variation remains within authorial bounds. In this article, we address the stylistic distinctiveness of characters in drama. Our primary contribution is methodological; we introduce and evaluate two non-parametric methods to produce a summary statistic for character distinctiveness that can be usefully applied and compared across languages and times. This is a significant advance--previous approaches have either been based on pairwise similarities (which cannot be easily compared) or indirect methods that attempt to infer distinctiveness using classification accuracy. Our first method is based on bootstrap distances between 3-gram probability distributions, the second (reminiscent of 'unmasking' techniques) on word keyness curves. Both methods are validated and explored by applying them to a reasonably large corpus (a subset of DraCor):
we analyse 3301 characters drawn from 2324 works, covering five centuries and four languages (French, German, Russian, and the works of Shakespeare). Both methods appear useful; the 3-gram method is statistically more powerful but the word keyness method offers rich interpretability. Both methods are able to capture phonological differences such as accent or dialect, as well as broad differences in topic and lexical richness. Based on exploratory analysis, we find that smaller characters tend to be more distinctive, and that women are cross-linguistically more distinctive than men, with this latter finding carefully interrogated using multiple regression. This greater distinctiveness stems from a historical tendency for female characters to be restricted to an 'internal narrative domain' covering mainly direct discourse and family/romantic themes. It is hoped that direct, comparable statistical measures will form a basis for more sophisticated future studies, and advances in theory.
## 1 Introduction
Since Vladimir Propp's work, structural narratology has approached fictional characters mainly through their role or function--by what they do, or what is done to them (Eder, Jannidis, and Schneider 2010). This character typology relied on recurring functions in the narrative (lover, villain, victim, detective, etc.) and the same perspective was often adopted in computational research, where characters in novels were modelled on the basis of narrative passages rather than dialogue (Bamman, Underwood, and Smith 2014; Bonch-Osmolovskaya and Skorinkin 2017; Underwood, Bamman, and Lee 2018; Stammbach, Antoniak, and Ash 2022).
In dramatic texts, however, the dominant device for characterisation is an utterance. While the script usually contains some stage directions, the specifics of characterisation and style of performance are not determined by the text itself, but developed by a specific theatre, director or a troupe. Over the course of history, many plays were written for specific theatre stages, and it was common practice to write characters for specific actors (Fischer-Lichte 2002). Of course, this kind of 'outsourced characterisation' was supported by dramatic conventions and formulas. Viewers' expectations could be shaped without a single word being uttered on stage, just by a character wearing a costume, operating a puppet, or changing a dell'arte stock mask. At the same time, the things characters say and how they say them are the main textual source of information about them. It is reasonable to assume that dramatists make significant efforts to create linguistic distinctions between princes and paupers, lovers and schemers, aristocrats and merchants. Tragic monologue is written differently to a comedic exchange between servants. Some previous computational works treat linguistic distinctiveness of characters from the perspective of this stylistic continuum (Vishnubhotla, Hammond, and Hirst 2019), noting that it can be influenced by genre, character gender, or their social and professional dispositions.
A parallel narratological tradition, tied to Bakhtin's ideas of heteroglossia, focuses not on abstract character roles, but on the words characters say (Bronwen 2012; Culpeper 2001; Sternberg 1982). The modern novelistic space of dialogic exchange, 'educated conversation' (Moretti 2011) and the clash of styles in reported discourse become central here. Available stylometric research on fictional speech and micro-stylistic variation suggests that characters within a text are often distinguishable by their local linguistic patterns without obscuring the global authorial trace (Burrows 1987; Hoover 2017). As put by Burrows and Craig: 'Characters speak in measurably different ways, but the authorial contrasts transcend this differentiation. The diversity of styles within an author always remains within bounds' (Burrows and Craig 2012, 307-8).
Conceptually and methodologically, the majority of previous works examined not the _distinctiveness_ of characters, but their (pairwise) _similarity_. Similarity measures are meaningful in pairwise contexts, but cannot be analysed and compared as individual summary statistics. Since Burrows' seminal study of speech patterns in Jane Austen's characters (Burrows 1987), these approaches focused on calculating similarity within a collection of characters: how different is character X from character Y, and each of them from character Z. Burrows measured the correlation between characters' usage of 30 most frequent words (technically, he fit a linear regression for two sets of log-frequencies); later, similarity was most often inferred through clustering based on pairwise distance calculations (Hoover 2017; Reeve 2015; Craig and Greatley-Hirsch 2017). Sometimes linguistic similarity served as a basis for arguing functional similarity, as well. A recent study that linked Bakhtin's dialogism and the stylistic diversity of characters' speech (Vishnubhotla, Hammond, and Hirst 2019), proposed the analysis of distinctiveness rather than similarity using supervised classification. Instead of using a network of pairwise relationships, the authors asked how well a classifier can recognise character X as being written by author A. Classification accuracy in this scenario becomes an explicit summary statistic for distinctiveness that can be assigned to a character (or, in an aggregated manner, to a play or an author). However, the supervised approach, proposed by Vishnubhotla et al., is data hungry: it suffers from extreme class imbalance, an abundance of short samples (most characters speak only a little) and is dependent on language-specific feature construction procedures.
By contrast, this paper will present a simple, non-parametric measure of character distinctiveness that is based on bootstrapped probability distributions representing a character and all others present in a given play: an approach largely informed by authorship verification techniques. This measure is language-independent and relies only on the context of a single work, which, in turn, minimises problems of language variation, authorial signal and chronological change in a comparative setting. Individual distinctiveness scores can then be tested against other measures and metadata categories in a hypothesis-driven manner, not only across languages, but also across genres (e.g. novel vs. drama). Do comedies tend to employ more distinct characters? Does distinctiveness increase (authors get better), or decrease (social and linguistic homogenisation occurs) over time? Is there a difference between the distinctiveness of fictional women and men? If so, is it the direct result of perceived gender differences, or is it constructed by imagined differences in social and professional status?
Lacking good descriptive metadata on the dramatic characters, this paper will not answer the above-mentioned questions in any satisfying way. Instead, we focus on presenting and justifying the measure of distinctiveness and exploring several factors that might shape the final scores (such as the year of composition, character gender and characters' sample size).
## 2 Materials
As the beginning of our exploration of cross-linguistic variation, we examined four dramatic corpora from DraCor (Fischer et al., 2019): Shakespeare, French, German, and Russian. DraCor is a project that gathers dramatic corpora in various languages, primarily European, encoded in TEI-XML. With 15 corpora available so far, including the Shakespeare corpus available both in English and German, DraCor facilitates large scale analysis of dramatic conventions across language traditions, and offers a wide variety of useful metadata, at the level of both plays and characters. While the analysis of all DraCor corpora would be possible with the methods we developed, for the purpose of this preliminary study we focused on the languages and dramatic traditions well known to the members of our team, eventually selecting the full corpora for Shakespeare, French, German, and Russian: a total of 2324 texts, the majority of which come from French and German. The corpus is summarised in Table 1.
## 3 Methods
### General Approach and Definitions
Our understanding of character distinctiveness is largely informed by 'authorship verification' approaches, which centre around verifying that a text is written by a target author. This problem is more general than 'authorship attribution' that tries to identify the nearest stylistic neighbour for a text (Halvani, Winter, and Graner, 2019). Instead, authorship verification asks about the relative _magnitude_ of similarity: is a target text more similar to same-author samples, or different-author samples? With this in mind,
\begin{table}
\begin{tabular}{l r r r r r r} & Total & Characters & Unique & Unique & Total & Total \\ Corpus & Characters & Analysed & 3-grams & Words & 3-grams & Words \\ \hline French & 15462 & 1744 & 9896 & 79994 & 29.79 m & 5.47 m \\ German & 14010 & 1182 & 14341 & 150956 & 24.80 m & 4.31 m \\ Russian & 3707 & 248 & 12542 & 71217 & 4.05 m & 0.72 m \\ Shakespeare & 1431 & 127 & 5921 & 19595 & 2.16 m & 0.43 m \\ \end{tabular}
\end{table}
Table 1: A summary of the corpus. All word and 3-gram counts are for the filtered corpus (characters that speak at least 2000 words) only.
we define a character's 'distinctiveness' as the degree to which the style of their speech differs from that of other characters. We understand 'style' here instrumentally, as a deviation from an unobserved average language (Herrmann, Dalen-Oskam, and Schoch 2015) and do not introduce aggressive feature filtering, allowing both 'grammatical' and 'thematic' signal to contribute to the final measures. We anchor our distinctiveness measure in the context of the specific text in which a character appears. In theory, the frame of reference could be all plays from one author, or all plays from the same period, or even some external corpus--however, all of these would greatly complicate any comparative study.
Figure 1: Character distinctiveness, per corpus, versus % Dialogue. Women are shown smaller, in orange, men (and undefined) larger and in blue. GAM (Generalised Additive Model) trendlines are superimposed in the same colours. Baseline data (GAM trend for distinctiveness of character vs self) is shown as a dashed line.
### Bootstrap 3-gram Distinctiveness
Based on our definition of distinctiveness above, we considered a character's style to be an idiolectal language distribution; this was expected to be generally Zipfian, a family of heavy-tailed distributions, so non-parametric methods were seen to be important. 3-grams were preferred to words for a number of reasons: first, they capture sub-word information which means they will reflect general sonic preferences (so they can capture things like accent) and, particularly in inflected languages, also reflect some grammatical style; second, as a practical matter, they effectively expand the sample data, since a string of text produces approximately one 3-gram per character. This increased sample size should reduce the variance of the statistics. Finally, the number of unique 3-grams in a language is considerably smaller than the number of words, so the frequency data is less sparse, which again is expected to increase robustness. To now operationalise the distinctiveness, as defined, we used standard bootstrap methods to measure the median energy distance (Szekely and Rizzo 2013) with bootstrap confidence intervals between the two distributions (character 3-gram frequencies vs 'other' 3-gram frequencies). The energy distance is one of a family of related metrics that are commonly used to measure difference between probability distributions.
Figure 2: Character distinctiveness, per corpus, versus year composed (DraCor data). Women are shown smaller, in orange, men (and undefined) larger and in blue. GAM (Generalised Additive Model) trendlines are superimposed in the same colours. Baseline data (GAM trend for distinctiveness of character vs self) is shown as a dashed line.
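As a concrete illustration, the sketch below shows one way the bootstrapped energy-distance measure described above could be implemented. It is our own minimal reconstruction rather than the authors' released code: the resampling scheme, the equalised resample sizes and all function names are assumptions, and the published Zenodo repository remains the authoritative implementation.

```python
import numpy as np

def energy_distance(X, Y):
    # Szekely-Rizzo energy distance between two clouds of vectors (rows).
    dxy = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1).mean()
    dxx = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1).mean()
    dyy = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1).mean()
    return np.sqrt(2.0 * dxy - dxx - dyy)

def freq_vector(ngrams, index):
    # Relative 3-gram frequencies over a fixed vocabulary.
    v = np.zeros(len(index))
    for g in ngrams:
        v[index[g]] += 1
    return v / max(len(ngrams), 1)

def bootstrap_distinctiveness(char_ngrams, other_ngrams,
                              n_boot=100, n_rep=20, seed=0):
    # Bootstrap distribution of energy distances between a character's
    # 3-gram frequencies and those of all other characters in the same play.
    rng = np.random.default_rng(seed)
    vocab = sorted(set(char_ngrams) | set(other_ngrams))
    index = {g: i for i, g in enumerate(vocab)}
    size = len(char_ngrams)  # equalised resample size (our choice)
    dists = []
    for _ in range(n_boot):
        X = np.stack([freq_vector(rng.choice(char_ngrams, size), index)
                      for _ in range(n_rep)])
        Y = np.stack([freq_vector(rng.choice(other_ngrams, size), index)
                      for _ in range(n_rep)])
        dists.append(energy_distance(X, Y))
    dists = np.array(dists)
    return np.median(dists), np.percentile(dists, [2.5, 97.5])
```

Under the same assumptions, the 'baseline' distance of a character from themselves (discussed below) can be obtained by passing two disjoint halves of the same character's 3-grams to the same function.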
Some limitations and choices were required. As mentioned, we measured distinctiveness only within the context of a single work (even for authors with multiple works). To expand beyond single works would produce very mismatched sample sizes, since some authors were prolific and some produced just one play; even with non-parametric methods, hugely mismatched sample sizes are problematic. Further, the plays span four languages and roughly five centuries, making the 'distant' context seem ridiculous. As well as the selected distinctiveness statistic (median energy distance) we also recorded a 'baseline' distinctiveness, being each character's distance from themselves. The theoretical baseline is, of course, zero, but the sample baselines will not be, so this gives us an idea of the inherent variance of the samples. Finally, when selecting characters to examine, we chose a minimum size of 2000 words. Sample sizes are somewhat arbitrary, and are matters of debate (Eder 2015, 2017), but this seemed a reasonable, or perhaps even slightly aggressive, lower bound.
Figure 3: An analysis, per corpus, of the distribution of various features by gender. Distributions are estimated, with the median shown as a solid line. Actual points are shown as rug plots with outliers ‘o’ plotted for points outside 3Q + 2\(\times\)IQR.
### Area under keywords
Our second, supplementary approach was informed by 'unmasking' techniques, often employed in stylometric research (Koppel and Schler 2004; Kestemont et al. 2016; Plechac and Sela 2021). Unmasking refers to a range of methods that share one goal: to measure and compare the _depth_ of the differences between two sets of texts. For example, an author might write both high fantasy fiction and historical novels: a classifier would have little difficulty distinguishing one genre from another by simply using superficial features (e.g. 'dragons','magic', 'elves'). However, by assumption, if these most distinctive features are removed, the classifier will have more trouble determining which text came from which pool, because the texts share one deep similarity--a common authorial style. Conversely, if we compare books by two different fiction writers, these texts will also have superficial differences. However, while removing more and more distinctive features, the classifier should remain confident in distinguishing the authors from each other, because the texts do not share an authorial style that is deeply rooted in common linguistic elements and distributed over many features. By comparing the speed with which the rates of accuracy decay we can approach authorship verification problems, i.e. how plausible is that this text belongs to author A?
We applied the same thinking to fictional characters, as opposed to authors: the distinctiveness of a character may rely on a small number of catch-phrases ('Gadzooks!' or 'Cowabunga!'), or it may be driven by non-stylistic, referential factors (Mary, speaking to John, is not likely to use the word 'Mary', but likely to use the word 'John', and vice versa). On the other hand, there are characters whose speech systematically differs from the neutral language: such as when the author imitates dialects, slang, regionalism, speech and phonetic idiosyncrasies. In the former case, an imaginary classifier should quickly lose accuracy (since John and Mary speak quite similarly), but in the latter case the removal of a small number of features would not be enough to disrupt classification.
In our case, it was impractical to use 'standard', supervised (i.e. classifier-based) unmasking because individual characters, as samples, were simply too small. Instead, we used word keyness--a character's relative preference for a word in the context of a given drama--to calculate an alternative distinctiveness score together with a bag of easily interpretable features per character. First, we used weighted log-odds (Monroe, Colaresi,
and Quinn 2008) to calculate keywords for a character relative to the speech pool of the rest of the cast; second, we represented each character by their 100 words with highest keyness, arranged by rank; finally, we measured the area under this curve, which we interpret as distinctiveness--characters with just a few key words will exhibit less area under the keyness curve. By comparing these final areas, we can compare the amount of difference each character has in relation to all other speech in the play. In a similar manner to the bootstrapped approach, we upsample each character's word pool to match the size of the rest of the words in the play to minimise, as much as possible, the effect of sample size.
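To make the keyness procedure concrete, a minimal sketch is given below. It uses the weighted log-odds with an informative Dirichlet prior from Monroe, Colaresi and Quinn (2008) and sums the ranked scores of the top 100 keywords; the prior scaling, the omission of the upsampling step and all names are simplifying assumptions of ours rather than the exact published implementation.

```python
import numpy as np
from collections import Counter

def weighted_log_odds(char_tokens, other_tokens, prior_scale=1.0):
    # Weighted log-odds z-scores with an informative Dirichlet prior
    # (Monroe, Colaresi and Quinn 2008): character vs the rest of the cast.
    yi, yj = Counter(char_tokens), Counter(other_tokens)
    total = yi + yj                      # pooled counts serve as the prior
    a0 = prior_scale * sum(total.values())
    ni, nj = sum(yi.values()), sum(yj.values())
    z = {}
    for w, cw in total.items():
        aw = prior_scale * cw
        delta = (np.log((yi[w] + aw) / (ni + a0 - yi[w] - aw))
                 - np.log((yj[w] + aw) / (nj + a0 - yj[w] - aw)))
        var = 1.0 / (yi[w] + aw) + 1.0 / (yj[w] + aw)
        z[w] = delta / np.sqrt(var)
    return z

def keyness_auc(char_tokens, other_tokens, top_n=100):
    # Area under the ranked keyness curve of the character's top-n keywords.
    scores = sorted(weighted_log_odds(char_tokens, other_tokens).values(),
                    reverse=True)[:top_n]
    return float(np.trapz(scores))
```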
## 4 Results
Overall, the distinctiveness energy statistic appears useful. The baseline (character vs self) is quite stable cross-linguistically, although it is slightly higher for characters with a very large share of dialogue (Fig. 1). Note also that the distinctiveness statistic appears roughly Gaussian (see Appendix B for more discussion) and its range is relatively consistent between languages (peaking at roughly 0.20), although this consistency does not apply at the level of authors. The obvious issue is that there is a strong negative correlation between character size and distinctiveness, but this is not only a limitation of the method--lead characters naturally set the dominant style of a text (and, possibly, inherit more of the 'true' authorial voice). Importantly, distinctiveness does not increase with the number of speakers in a play. The method works best when there are reasonable sample sizes for both the examined character and the 'other' class. This is illustrated by the 'U' curve visible in the French corpus in Figure 1 as the examined characters' dialogue share passes 50%. As hoped, the energy-distance method does appear to capture characters who are written with distinctive idiolects, representing things like foreign accents or social class. For a discussion of this see Section 5.
As seen in Figure 2, there is no clear correlation between the date of composition and character distinctiveness, which suggests that language change does not disturb the measure. The finding that seems clear is that women are written differently to men. Female characters are generally more distinctive in all corpora (Fig. 3b), although this is not visible using the keyness AUC measure--leading us to conclude that the keyness measure has lower power. This difference in the distinctiveness of female characters can partly be explained by the fact that they tend to have smaller parts (Fig. 3a), and smaller characters in general are more distinctive (Fig. 1), but that is not the whole story. Female parts have more restricted 3-gram vocabularies (Fig. 3c), suggesting that they are also restricted in their semantic fields. This becomes clearer when the relative frequencies of their (word) vocabularies are examined. As well as the stereotypical tendencies (women say 'love', men say 'sword'), the female characters, cross-linguistically, seem to be less likely to reference the 'external world' of the drama. As seen in Appendix A, relatively more frequent words for women are dominated by personal pronouns representing 'I', 'me', 'you', etc., or words relating to family. The male lists are dominated by indicative
articles and political terms ('law', 'noble', 'king', etc.).
The higher distinctiveness of female characters is further supported by a formal linear model: we fit a Bayesian multiple regression where distinctiveness was conditioned on both gender and size (characters' percentage of total dialogue). A direct gender effect is present in all corpora, as expected from Figure 2(a), but, when we account for variation among authors, the effect may be less pronounced than it appears (the posterior estimates and a more detailed discussion are provided in Appendix B). Our finding interlocks with the observation by Underwood, Bamman, and Lee (2018) that female characters found in English 18-20th century fiction displayed high distinctiveness due to the particular way they were narrated, suggesting a pervasive authorial mentality.
## 5 Discussion
The measures of stylistic character distinctiveness that were proposed in this paper appear to be effective in capturing a _degree_ to which characters stand out from others. The most distinctive characters, by both of our metrics, often have systematically different speech, in the form of dialects, regionalisms or class markers. For example, Shakespeare's Captain Fluellen (_Henry V_) is Welsh, and his accent is written for comedic effect. The systematic replacements b\(\rightarrow\)p and d\(\rightarrow\)t make him the most distinctive Shakespeare character according to both the 3-gram and word measures:
Fluellen
Your grandfather of famous memory, an't please your majesty, and your great-uncle Edward the Plack Prince of Wales, as I have read in the chronicles, fought a most prave pattle here in France.
King Henry V
They did, Fluellen.
Fluellen
Your majesty says very true: if your majesties is remembered of it, the Welshmen did good service in a garden where leeks did grow, wearing leeks in their Monmouth caps; which, your majesty know, to this hour is an honourable badge of the service; and I do believe your majesty takes no scorn to wear the leek upon Saint Tavy's day.
Regional differences also contribute to high distinctiveness in the German corpus. For example, Emerike, written by Johanna von Weissenthurn, uses -ey instead of -ei (zwey, bey, Freylich) which is a form indicative of pre-standardised Southern German spelling. John, in Hauptmann's _Die Ratten_, speaks Plattdeutsch, a variant heavily influenced by Dutch, e.g. 'Det hat er jesacht, det ick noch ma hin musste und janz jenau anjeben'.
In the French corpus, the most distinctive character by keyness is Gareau, from _Le Pédant Joué_ (Cyrano de Bergerac), who speaks a 'patois' or rural dialect. In his critical edition, Frédéric Lachèvre comments on this distinct idiolect when Gareau is first introduced
(Cyrano de Bergerac 1921, 25):
Cyrano a fabriqué de toutes pièces le patois de Gareau. Le manuscrit de la BN donne un langage tout différent que celui imprimé en 1654, la prononciation des mots n'est pas tout à fait la même. Nous avons naturellement maintenu pour Gareau le texte de 1654 publié par Cyrano lui-même.
Cyrano created the patois of Gareau from scratch. The manuscript of the [Bibliotheque Nationale] offers quite a different language to the one printed in 1654, the pronunciation of the words is not quite the same. We have naturally maintained for Gareau the text of 1654 published by Cyrano himself.
The most distinctive Russian characters come from Ostrovskii, who gave the main stage to Muscovite merchants and their families with their vernacular, non-aristocratic language. Tolstoy's Nikita (high on both the 3-gram and keyness lists) from _The Power of Darkness_ has heavily stylised speech suggestive of Western or Southern Russian dialects, e.g. featuring a word-initial [w].
It must be borne in mind, however, that dialects or accents do not automatically cause high distinctiveness--what is being detected is the _difference_ in speech patterns. In a text where everyone speaks Welsh, an English character would score highly on distinctiveness, and vice versa. Cross-linguistic inference must also account for systematic language differences: the lexical and morphological features of the various languages lead naturally to different probability distributions for both words and \(N\)-grams (although the exact nature of those differences is too complex to grapple with here). Word-based distinctiveness measures permit easier interpretation, but appear less (statistically) powerful. In addition, word-based measures operate in much higher dimensions, with all the usual problems that entails (sparsity, the 'curse of dimensionality', etc. See, for example Moisl (2011)). Finally, word-based measures naturally invite lemmatisation for highly inflected languages (like Russian and German), which might cause problems for future work dealing with languages that are non-standard, historical, or otherwise less well-resourced.
We have noted that our distinctiveness measure has a strong negative correlation to the size of the character. This relationship should not be understood as a simple artefact that renders our measurement useless. Distinctive speech is always a construct, a subset of linguistic and stylistic reality. If a minor character has just a few lines about gallows and graves--like Shakespeare's gravedigger--we will never know more about their language. However, _Hamlet_ is not _only_ about gallows and graves; if we imagine bootstrapping the gravedigger's speech, it would be endlessly populated by these few words: we don't know how the gravedigger would speak when ruling a country, or murdering their uncle. From this perspective, a protagonist is more likely to represent lexical and stylistic norm, while minor characters will sample the Other in their ethnic, dialectal, or professional distinctiveness.
Despite the few limitations, we hope that these measures of character distinctiveness will support improved theories about style, characterisation and history. The most important
question to be asked concerns the source(s) of this representational distinctiveness that authors instil in their characters. To even begin to address this issue, we need much richer annotation for characters: their social class, profession, region of origins, age. Determining the drivers of distinctiveness will not be easy. Even to carefully verify the effect of character gender was quite complicated. We know that part of the effect comes from size: women are more likely to be minor characters. However, it is reasonable to assume that gender difference can also be confounded by genre (e.g. in comedies there are more women playing larger roles) and social class (rural people speak more in comedies). There is also the effect of time: changing the relative dynamics of character sizes (Algee-Hewitt 2017), improving the representation of women as dramatists and altering the depiction of social class--all of which complicates the analysis even further. However, having a clear summary measure for a character's stylistic distinctiveness may help us to refine our theories about the speech of fictional characters, leading in turn to better causal models.
## 6 Availability of Data and Code
The details of our approach, including data acquisition and preprocessing, are published in a Zenodo repository, allowing for full replication of all reported results: [https://doi.org/10.5281/zenodo.7383687](https://doi.org/10.5281/zenodo.7383687).
## Acknowledgements
AS, JB, LHL and ME were funded by the "Large-Scale Text Analysis and Methodological Foundations of Computational Stylistics" project (SONATA-BIS 2017/26/E/HS2/01019). BN is also grateful to QuaDramA, funded by Volkswagen Foundation and the DFG-priority programme Computational Literary Studies, for financing the presentation of the paper at the workshop.
|
2304.09865 | Safer Conversational AI as a Source of User Delight | This work explores the impact of moderation on users' enjoyment of
conversational AI systems. While recent advancements in Large Language Models
(LLMs) have led to highly capable conversational AIs that are increasingly
deployed in real-world settings, there is a growing concern over AI safety and
the need to moderate systems to encourage safe language and prevent harm.
However, some users argue that current approaches to moderation limit the
technology, compromise free expression, and limit the value delivered by the
technology. This study takes an unbiased stance and shows that moderation does
not necessarily detract from user enjoyment. Heavy handed moderation does seem
to have a nefarious effect, but models that are moderated to be safer can lead
to a better user experience. By deploying various conversational AIs in the
Chai platform, the study finds that user retention can increase with a level of
moderation and safe system design. These results demonstrate the importance of
appropriately defining safety in models in a way that is both responsible and
focused on serving users. | Xiaoding Lu, Aleksey Korshuk, Zongyi Liu, William Beauchamp, Chai Research | 2023-04-18T11:03:10Z | http://arxiv.org/abs/2304.09865v1 | # Safer Conversational AI as a Source of User Delight
###### Abstract
This work explores the impact of moderation on users' enjoyment of conversational AI systems. While recent advancements in Large Language Models (LLMs) have led to highly capable conversational AIs that are increasingly deployed in real-world settings, there is a growing concern over AI safety and the need to moderate systems to encourage safe language and prevent harm. However, some users argue that current approaches to moderation limit the technology, compromise free expression, and limit the value delivered by the technology. This study takes an unbiased stance and shows that moderation does not _necessarily_ detract from user enjoyment. Heavy-handed moderation does seem to have a detrimental effect, but models that are moderated to be safer can lead to a better user experience. By deploying various conversational AIs on the Chai platform, the study finds that user retention can increase with a level of moderation and safe system design. These results demonstrate the importance of appropriately defining safety in models in a way that is both responsible and focused on serving users.
## 1 Introduction
Recent advancements in Large Language Models (LLMs) have led to the development of highly capable conversational AIs (Thoppilan et al., 2022), such as ChatGPT. These systems have been rapidly growing in popularity and are being increasingly deployed in real-world settings (Kung et al., 2023; Huang et al., 2022). With the prevalence of AI solutions in the real world as tools or as services, there has been substantial attention to AI safety and to ensuring that deployed systems are moderated to be helpful, wholesome and truthful (Hanu and Unitary team, 2020; Si et al., 2022; Xu et al., 2020; Manakul et al., 2023). There is also, however, a community of users who believe that systems are being over-moderated, and that moderation limits their freedom of expression and delight in these systems (Llanso, 2020; Hammontree, 2023). For example, within the entertainment domain, users may wish conversational AIs to impersonate famous characters, which moderation may prevent. Further, individual intentions may not always align with the subjective safety standards of the large research institutions developing this technology. It seems important for businesses, and for society at large, to be willing to discuss what amount and nature of moderation will best serve the needs of the users of this technology.
This paper attempts to take an unbiased stance and to determine the extent to which moderation of conversational AIs influences users' perceived enjoyment of the conversational AI system. We empirically show that safety does not necessarily come at the expense of user enjoyment; models that are moderated to be safer (to a certain extent), even if they ignore users' requests, can lead to a better user experience. We deploy five conversational AIs on the Chai platform with varying levels of safety and methods to encourage safe language. By considering the downstream user retention, we find that user retention can be higher for the moderated systems than for the unmoderated systems. This underscores the importance of a nuanced, light-touch approach to moderation as a way to better serve users.
## 2 Related Work
In recent years, there has been the emergence of a large number of social conversational AIs for chitchat with human users (Yan et al., 2022; Bao et al., 2020; Adiwardana et al., 2020; Irvine et al., 2023; Choudhary and Kawahara, 2022).
**Unsafe Content** With the deployment of these powerful conversational AIs in real-world settings, there has been an increasing concern regarding their safety - it is required that systems do not
generate harmful or illegal content. Schmidt and Wiegand (2017) discuss the scope of unsafe content, covering various aspects of abusive behaviour: hate speech, profanity, malicious intent, hostility, abusive messages and cyber-bullying. There have been extensive efforts in defining thresholds and categories for unsafe responses (Swamy et al., 2019; Waseem et al., 2017; Magu et al., 2017; Zampieri et al., 2019; Caselli et al., 2020; van Aken et al., 2018). A popular categorization (Ram et al., 2018; Paranjape et al., 2020) for offensive content is: profane content, sexual content, racial content, other hate speech and violent crime. With greater granularity, Zhang et al. (2020) outline a more detailed hierarchical taxonomy including concepts such as privacy invasion, jealousy and other forms of malevolent content.
**Design of Safe Conversational AI** In conversational AI-human conversations, unsafe content can arise from either the human or the conversational AI. If a human response is identified as harmful, systems will typically return non-committal safe responses or attempt to change the subject (Cercas Curry and Rieser, 2019; Paranjape et al., 2020). However, malicious users can also design specific, undetectable adversarial prompts (Liu et al., 2020; Hill et al., 2015; Roller et al., 2020), where, for example, a carefully crafted user input can trigger a model to generate offensive content (Wallace et al., 2019). Adversarial susceptibility of conversational AIs is an open research area. When considering generated conversational AI responses, various approaches have been proposed in the literature to mitigate the risk of unsafe content generation. These approaches can be grouped into three main categories (Xu et al., 2021). The first approach is **data curation** (Roller et al., 2020; Rashkin et al., 2019), where models are trained on _safe_ data to reduce the likelihood of generating harmful content. However, such under-exposed models struggle to respond to offensive user content and are also susceptible to adversarial prompts (Gehman et al., 2020). Alternatively, or in addition to data curation, a safety layer can be introduced through the use of a **detection system** (Zampieri et al., 2019; Founta et al., 2018; Dinan et al., 2020; Khatri et al., 2018) that can identify system-generated unsafe responses and reject them. Finally, a suite of methods explores **controlled generation**, where a model is trained to conditionally generate outputs (Smith et al., 2020; Krause et al., 2021), offering the user/designer a level of control over the _offensiveness_ of the conversational AI responses. For example, Niu and Bansal (2018) provide control over model politeness, and Nogueira dos Santos et al. (2018) show how offensive sentences can be transformed into benign ones.
**Safety and User Engagement** A typical business is concerned with maximising the _engagingness_ of their conversational AI (Irvine et al., 2023), as this translates to a higher level of user retention. There has, however, been limited research investigating the impact of designing safer models on the level of user engagement. Nevertheless, Xu et al. (2021) find that there is little correlation between safety and conversational AI engagingness, where human evaluation is used to measure both model safety and engagingness. This work explores the interaction between conversational AI safety and engagingness in greater depth.
## 3 Method: Design of a safe chat AI
Modern deep learning-based sequence-to-sequence chat AI systems are typically designed to take as an input a specific user prompt sequence, \(\mathbf{x}\) and a chat history (previous user and system messages), \(\mathbf{h}\), to output the response, \(\hat{\mathbf{y}}\) with the highest likelihood, as per the chat AI model, \(\mathcal{M}\),
\[\hat{\mathbf{y}}=\operatorname*{arg\,max}_{\mathbf{y}}\left\{p_{\mathcal{M}}( \mathbf{y}|\mathbf{x},\mathbf{h};\hat{\theta})\right\}, \tag{1}\]
where \(\hat{\theta}\) are the model's trained parameters. These parameters are learnt during training in a standard unsupervised manner, using masked language modeling, where the model predicts the masked tokens.
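A minimal decoding sketch corresponding to Equation (1) is shown below. It is illustrative only: greedy decoding from the Hugging Face transformers library serves as a practical stand-in for the arg max over whole responses, the checkpoint is the base literature-tuned GPT-J cited later in the paper (footnote 3), and the prompt format is our own assumption.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("hakurei/lit-6B")
model = AutoModelForCausalLM.from_pretrained("hakurei/lit-6B")

def respond(history: str, prompt: str, max_new_tokens: int = 64) -> str:
    # Greedy decoding of y-hat given the chat history h and user prompt x.
    inputs = tok(history + "\nUser: " + prompt + "\nBot:", return_tensors="pt")
    out = model.generate(**inputs, do_sample=False, max_new_tokens=max_new_tokens)
    new_tokens = out[0, inputs["input_ids"].shape[1]:]
    return tok.decode(new_tokens, skip_special_tokens=True)
```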
### Design
The simplest approach to ensure a chat AI system, \(\mathcal{M}\) only generates _safe_ sequences is to perform masked language modeling training on only _safe_ data, i.e. such that the model is not exposed to unsafe content during training. From a general dataset, \(\mathcal{D}=\{\mathbf{y}_{i}\}\), we can extract a safe dataset, \(\mathcal{D}^{(s)}\),
\[\mathcal{G}(\mathbf{y})<\epsilon\implies\mathbf{y}\in\mathcal{D}^{(s)},\qquad \mathbf{y}\in\mathcal{D}, \tag{2}\]
where \(\mathcal{G}\) is a measure of the _threat_ of a particular sequence \(\mathbf{y}\) (i.e. a measure of how unsafe the sequence is), and \(\epsilon\) is an agreed threshold of safety. However
it is argued that a model under-exposed to unsafe content fails when exposed to harmful user content and may for example just quote the harmful content back. To mitigate the risk of such undesirable behaviour, this work assumes the standard pipeline of pre-training the language model on general data, \(\mathcal{D}\) and then finetuning it on a curated, safe dataset, \(\mathcal{D}^{(s)}\). This exposes the model to unsafe content during pre-training but encourages it to only generate safe content after finetuning, i.e. \(\mathcal{G}(\mathbf{y})<\epsilon\).
It is expensive and inefficient to use human annotation to define the threat, \(\mathcal{G}\), of any sequence \(\mathbf{y}\), and thus in this work we explore an automated measure to act as a proxy for human perception of model threat. Specifically, a popular and well-established moderation tool, OpenAI's moderation endpoint1, is used as a binary threat classifier and as a proxy for \(\mathcal{G}\). The tool also identifies the following groupings: hate, hate/threatening, sexual, sexual/minors, violence, violence/graphic. Any sample in the training set \(\mathcal{D}\) flagged as a risk in these categories is excluded from the safe dataset, \(\mathcal{D}^{(s)}\), as specified in Equation 2.
Footnote 1: OpenAI moderation tool: [https://platform.openai.com/docs/guides/moderation/overview](https://platform.openai.com/docs/guides/moderation/overview)
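A minimal sketch of this filtering step is given below; the `moderate` callable stands in for a call to the moderation endpoint (whose exact client API is not reproduced here and should be taken from the official documentation), and the category names are those listed above.

```python
UNSAFE_CATEGORIES = {"hate", "hate/threatening", "sexual", "sexual/minors",
                     "violence", "violence/graphic"}

def build_safe_dataset(dataset, moderate):
    # Equation (2): keep a sample y only if the moderation classifier
    # flags none of the unsafe categories, i.e. its threat is below threshold.
    # `moderate(text)` is assumed to return a {category: bool} mapping.
    safe = []
    for y in dataset:
        flags = moderate(y)
        if not any(flags.get(c, False) for c in UNSAFE_CATEGORIES):
            safe.append(y)
    return safe
```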
### Safety Evaluation
Having trained a chat AI to be safe, it is necessary to have an independent and systematic measure of the _safety_ of generated responses from the chat AI to verify that the design method does truly create _safer_ responses. Hence, for a set of chat AI responses, \(\{\hat{\mathbf{y}}\}_{n=1}^{N}\), we can calculate a safety score, \(\mathcal{S}\) defined as:
\[\mathcal{S}=1-\frac{1}{N}\sum_{n=1}^{N}\tilde{\mathcal{G}}(\hat{\mathbf{y}}), \tag{3}\]
where \(\tilde{\mathcal{G}}\) represents the threat score normalized to lie between 0 and 1, so as to behave as a probability. Once again, it is expensive and inefficient to use human evaluation at scale for the threat measure \(\mathcal{G}\), and thus this work uses an automated proxy measure of threat at inference time. To reduce bias towards the design method, it is necessary to use a proxy function for \(\mathcal{G}\) that differs from OpenAI's moderation tool. Hence, this work uses a standard deep-learning based model, \(\mathcal{Q}\), trained in a supervised manner on a pre-curated public safety dataset (a binary classification task to identify unsafe and safe samples)2, as a proxy for threat classification at inference time,
Footnote 2: Model used for evaluation of safety: [https://huggingface.co/unitary/toxic-bert](https://huggingface.co/unitary/toxic-bert)
\[\mathcal{G}(\mathbf{y})\approx P_{\mathcal{Q}}(\mathbf{y}). \tag{4}\]
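The sketch below illustrates how the safety score of Equation (3) can be computed with the toxicity classifier of footnote 2 via the transformers pipeline. The handling of the returned labels is an assumption, since it depends on the checkpoint's output format, and the `top_k=None` argument assumes a recent transformers version.

```python
from transformers import pipeline

# unitary/toxic-bert is the proxy threat model Q from footnote 2.
toxicity = pipeline("text-classification", model="unitary/toxic-bert", top_k=None)

def safety_score(responses):
    # S = 1 - mean normalized threat over the generated responses (Eq. 3).
    threat = []
    for scores in toxicity(responses):
        by_label = {s["label"]: s["score"] for s in scores}
        # assume a "toxic" probability; otherwise fall back to the top score
        threat.append(by_label.get("toxic", max(by_label.values())))
    return 1.0 - sum(threat) / len(threat)
```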
## 4 Experiments
### Set-Up
**Conversational AI Model** In this paper we consider five main conversational AI systems: **chat-ai-base-u** is a GPT-J 6B [23] fine-tuned on novels and literature3, which was found to contain some unsafe language. **chat-ai-base-s** is another baseline, where chat-ai-base-u is further fine-tuned on safer literature. Next, we introduce **chat-ai-safe**, which is the chat-ai-base-s model trained on real user-suggested edits on the Chai platform4, filtered to retain only safe suggestions, as described in Section 2. For further analysis, we consider **chat-ai-unsafe**, which again takes the chat-ai-base-s model, but now trains it on all real user-suggested edits, which have been found to contain some unsafe language. As a final comparison, we consider a model **chat-ai-unsafe-mod**, where the chat-ai-unsafe model is deployed with a safety moderation layer to reject user responses detected as unsafe.
Footnote 3: [https://huggingface.co/hakurei/lit-6B](https://huggingface.co/hakurei/lit-6B)
Footnote 4: The Chai platform allows for users to edit the response given by the system
**User Retention** To evaluate the conversational AIs, we deploy them to the Chai Research platform and look at the user retention after 30 days. This is done by creating mutually exclusive cohorts of new users and assigning them to one of the five systems. The 30-day user retention is the number of users who return to the app 30 days after signing up, and it gives a good indication of the user's experience with the conversational AI.
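A small sketch of this retention metric is given below; the event-log column names are our own assumptions about how such data might be organised, not a description of the Chai platform's internal schema.

```python
import pandas as pd

def retention_30d(users: pd.DataFrame) -> pd.Series:
    # Share of each cohort's new users who return at least 30 days after
    # signing up.  Assumed columns: user_id, cohort (assigned system),
    # signup_date, last_active_date (both datetimes).
    users = users.copy()
    users["retained"] = (users["last_active_date"] - users["signup_date"]).dt.days >= 30
    return users.groupby("cohort")["retained"].mean()
```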
**Safety Evaluation Dataset** We evaluate the safety of each model as detailed in Section 3.2. The safety score is calculated over a dataset of user-conversational AI conversations, returning the probability of system-generated responses being safe. The safety score (probability) is given by Equation 3. Each model's dataset of system responses consists of 10,000 responses across user conversations, with an average length of \(\mathbf{X}\) tokens per system response.
### Results
Table 1 considers the impact of increasing levels of safety on user retention. It is first verified that further finetuning chat-ai-base-u on safe literature data to produce model chat-ai-base-s does give a safer system, as there is an increase of almost 10% in the average safety of conversational AI responses when deployed on the Chai platform. More encouragingly, an increase of 13.8% in the 30-day user retention on the platform is also observed with this safer model. Now, further finetuning the chat-ai-base-s model on proposed user edits, filtered by a safety moderation system, to give the chat-ai-safe model, does indeed encourage the system responses to be significantly safer, as seen by an increase from 60.0% to 70.2% in average response safety. Moreover, there is a further 6.9% increase in the 30-day user retention, suggesting that a safer model can increase user retention.
Despite the clear trend in Table 1, it can be argued that the increased user retention cannot be attributed solely to higher model safety, but also to the fact that the safer models have been trained on more data. This proposition is analyzed in further detail in Table 2, where results are presented for the chat-ai-unsafe model, which has also been finetuned on further data. Instead of finetuning chat-ai-base-s on only moderated user-suggested edits, all suggested edits are included. A decrease of almost 5% in the average safety demonstrates that user suggestions often contain threatening language, which in turn encourages the model to generate less safe language. With this decrease in safety there is also a discouraging increase in user retention of 27.6%, which is greater than the 20.6% for the chat-ai-safe model. This perhaps shows that it is the finetuning on user suggestions that has more of an impact on user retention than safe model design.
However, Table 2 also shows that model chat-ai-unsafe-mod, a significantly safer version of chat-ai-unsafe (user responses are rejected at deployment time if deemed unsafe), has the greatest user retention, boosting the user retention from +27.6% to +34.5%. Therefore, it can be argued that encouraging the model to be safe to a certain extent can increase user retention. Specifically, a safety threshold between 70.2% and 62.2% is optimal for maximal user retention, as demonstrated by Figure 1 summarising the relationship between model safety and user retention for the safer models.
## 5 Conclusions
This work provides empirical evidence that moderation of conversational AI systems can lead to a better user experience without compromising on AI safety. By comparing moderated and unmoderated systems deployed on the Chai platform, we find that users often prefer (to an extent) a system that is explicitly trained to encourage safe content. These results have important implications for AI developers and businesses, as they demonstrate that prioritizing safety can not only align with moral duties but also benefit business goals by delivering better products that users prefer. Moving forward, it will be crucial to strike a balance between user entertainment and AI safety, and to continue exploring the impact of moderation on user experience in different settings and domains.
\begin{table}
\begin{tabular}{l c c} \hline \hline Model & Safety (\%) & Retention (\%\(\uparrow\)) \\ \hline chat-ai-base-u & 50.5 & - \\ chat-ai-base-s & 60.0 & +13.7 \\ chat-ai-safe & 70.2 & +20.6 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Model safe-score probability and 30-day user retention relative (percentage increase) to the chat-ai-base-u model.
Figure 1: Model Safety probability (%) against 30-day user retention relative (percentage increase) to the chat-ai-base-u model. |
2309.06572 | Addressing the Blind Spots in Spoken Language Processing | This paper explores the critical but often overlooked role of non-verbal
cues, including co-speech gestures and facial expressions, in human
communication and their implications for Natural Language Processing (NLP). We
argue that understanding human communication requires a more holistic approach
that goes beyond textual or spoken words to include non-verbal elements.
Borrowing from advances in sign language processing, we propose the development
of universal automatic gesture segmentation and transcription models to
transcribe these non-verbal cues into textual form. Such a methodology aims to
bridge the blind spots in spoken language understanding, enhancing the scope
and applicability of NLP models. Through motivating examples, we demonstrate
the limitations of relying solely on text-based models. We propose a
computationally efficient and flexible approach for incorporating non-verbal
cues, which can seamlessly integrate with existing NLP pipelines. We conclude
by calling upon the research community to contribute to the development of
universal transcription methods and to validate their effectiveness in
capturing the complexities of real-world, multi-modal interactions. | Amit Moryossef | 2023-09-06T10:29:25Z | http://arxiv.org/abs/2309.06572v1 | # Addressing the Blind Spots in Spoken Language Processing
###### Abstract
This paper explores the critical but often overlooked role of non-verbal cues, including co-speech gestures and facial expressions, in human communication and their implications for Natural Language Processing (NLP). We argue that understanding human communication requires a more holistic approach that goes beyond textual or spoken words to include non-verbal elements. Borrowing from advances in sign language processing, we propose the development of universal automatic gesture segmentation and transcription models to transcribe these non-verbal cues into textual form. Such a methodology aims to bridge the blind spots in spoken language understanding, enhancing the scope and applicability of NLP models. Through motivating examples, we demonstrate the limitations of relying solely on text-based models. We propose a computationally efficient and flexible approach for incorporating non-verbal cues, which can seamlessly integrate with existing NLP pipelines. We conclude by calling upon the research community to contribute to the development of universal transcription methods and to validate their effectiveness in capturing the complexities of real-world, multi-modal interactions.
## 1 Introduction
Human speech is typically accompanied by a dynamic combination of co-speech gestures and facial expressions, together forming an integral part of human communication. These non-verbal cues, far from being random or merely accessory, provide additional layers of meaning, clarify intention, emphasize points, regulate conversation flow, and facilitate emotional connection. They enrich our interactions and help convey complex or nuanced information that words alone might not capture.
Co-speech gestures refer to the hand and body movements accompanying spoken discourse; they supplement verbal communication by offering additional information, such as object size or shape; they emphasize and make abstract concepts tangible, like gesturing upwards to signify an increase; they control the conversation flow, signaling a speaker's intent, inviting listener interaction, or showing that the speaker is in thought or pause; and lastly, they compensate for the limitations of spoken language, especially in high-stakes or noisy environments, by providing an alternative mode of conveying complex or nuanced information.
Facial expressions during speech significantly contribute to communication by indicating the speaker's emotions, and providing insight into their feelings about the topic; they can emphasize certain aspects of the discourse, with actions like raised eyebrows signifying surprise or importance; they offer social cues, with expressions like a smile suggesting friendliness or a serious look indicating sincerity; they help clarify verbal meaning, especially in ambiguous situations, for example, a confused expression might denote misunderstanding; finally, they enhance interpersonal connection by helping to build rapport, expressing empathy, and conveying cues of understanding and engagement; altogether, facial expressions, like gestures, add complexity and depth to verbal communications.
The field of Natural Language Processing (NLP) has become highly effective in understanding language directly from text. However, understanding speech, with its imperfect and noisy signals, remains a more complex challenge. Text-based language models have proven highly scalable, thanks largely to the compressible nature of text and its abundant availability in semi-anonymous forms. Yet, these models fundamentally ignore the rich layers of meaning added by non-verbal cues, a significant aspect of human communication. This means that while we have become adept at parsing text, we are missing out on the nuanced interplay of speech and gesture that characterizes in-person communication. Despite some promising work in generating co-speech gestures from audio (Ginosar et al., 2019; Bhattacharya et al., 2021; Liu et al., 2022), these gestures are often treated as accessory to speech rather than integral components, and thus, they do not always contribute the correct or intended information. As such, an understanding and integration of non-verbal cues remain an important frontier for further exploration in NLP.
Spoken language understanding, we propose, can benefit immensely from the advances in sign language processing. We advocate for the implementation of universal automatic gesture segmentation and transcription models that can transcribe co-speech gestures into textual input. This could be a pioneering step towards integrating the richness of non-verbal cues directly into the NLP models. By including transcribed gestures, the models would bridge the blind spots in spoken language understanding. This is a bidirectional process; Just as spoken language models can learn from sign language processing, the insights from the transcription of spoken language gestures can also inform and enhance sign language processing, due to iconicity, and metaphors. Ultimately, this holistic approach would result in a more nuanced and comprehensive understanding of human communication, bringing us closer to the complexities and richness of real-world, multi-modal interactions.
## 2 Stereotypical Language Variation
Non-verbal forms of communication are subject to significant cultural variability, shaped by a complex interplay of historical, societal, and cultural factors.
In Mediterranean cultures, non-verbal communication is prevalent and vibrant. People in this region often use expressive gestures and maintain close personal space when communicating. Italian, for instance, is renowned for its extensive use of gestures. Italians often use their hands and bodies expressively to illustrate their points or emotions, and there is a broad range of specific gestures that carry particular meanings, often comprehensible even without accompanying speech.
In contrast, Japanese communication tends to incorporate fewer and more subtle non-verbal cues. A bow, a nod, or a slight tilt of the head can convey a myriad of meanings depending on the context, demonstrating respect, agreement, or understanding. Meanwhile, in Nordic cultures, such as Swedish or Finnish, non-verbal cues are typically used sparingly. The communication style tends to be direct and understated, with less emphasis on gestures and more focus on verbal content.
Overall, these stereotypical examples highlight the diverse ways in which languages around the world incorporate non-verbal cues into communication. This diversity emphasizes the importance of cultural understanding and sensitivity in interpreting and engaging in cross-cultural communication research, and data collection and annotation.
## 3 Motivating Examples
Non-verbal cues can act to affirm and reinforce the spoken words, thereby strengthening the communicated message. They can also undermine the verbal message, creating a contradiction between what is being said and the speaker's true intent or feelings. For NLP research to understand speech, it cannot rely solely on audio (or textual transcription) to understand the intent of the speaker.
For example, saying 'Perfect' while making a circle with the thumb and index finger often emphasizes approval and satisfaction. Similarly, nodding while saying 'Yes' reinforces affirmation, underscoring the speaker's understanding or agreement. On the other hand, saying 'OK' while rolling one's eyes can suggest that the speaker doesn't find the situation truly satisfactory, despite the verbal agreement. Similarly, stating 'I'm not mad' while frowning or clenching fists suggests that the speaker is indeed upset, contradicting their verbal assertion.
### Machine Translation
While meaningful gestures exist in many other languages, Italian stereotypically gives us many examples of gestures that convey meaning on their own, with the verbal part often dropped altogether, making it even more similar to signed languages.
Table 1 showcases a toy example of a conversation between two Italians using only gestures, without speech. It is transcribed using SignWriting (Sutton, 1990) to demonstrate that anonymous non-verbal transcription can be done in a low-bandwidth manner and that it can be reproduced and understood by people trained at reading SignWriting.
### Sentiment Analysis
To demonstrate the limitations of text-based sentiment analysis, consider the following hypothetical dialogue between a couple, where the man is utilizing passive-aggressive communication. In each turn, we also present the sentiment score as predicted by the Google Cloud Natural Language API
\begin{table}
\begin{tabular}{p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt}} \hline \hline I shouldn’t be saying this but... she’s cheating on him & She’s cheating on him?? & She’s cheating on him & Yeah! But it’s none of my business & I can’t believe it \\ \hline \hline \end{tabular}
\end{table}
Table 1: “How to gossip in Italian” by the Pasinis, transcribed in Sutton SignWriting by Sutthikhun Phaengphongsai [https://www.youtube.com/watch?v=7V-GniCQFkE](https://www.youtube.com/watch?v=7V-GniCQFkE), demonstrating a conversation between two Italians using only gestures, without speech.
Demo1, where scores range between \([-1,1]\).
Footnote 1: [https://cloud.google.com/natural-language](https://cloud.google.com/natural-language)
Woman: How is it going? (0)
Man: I am fine. (0.74) [crosses his arms]
Woman: Did you enjoy dinner? (0.55)
Man: It was fine. (0.92) [avoids eye contact, lips pressed tightly]
Woman: Is something wrong? You seem distant. (-0.78)
Man: No, nothing's wrong. (0.53) [shakes his head slightly, exhales loudly]
Woman: Are you sure? (0)
Man: I said I'm fine. (0) [rolls eyes, turns away]
While all the man's responses register as neutral to positive, his body language--avoiding eye contact, pressing his lips tightly together, shaking his head, exhaling loudly, rolling his eyes, and turning away--signals that he may actually be upset, frustrated, or disengaged. By neglecting body language and other contextual clues, current models miss out on a significant layer of human communication, particularly in emotionally charged or complex dialogues. Such a holistic approach could provide a more nuanced and accurate understanding of the emotional context and underlying issues, thus enriching machine-human interactions.
## 4 Methodology
Machine learning techniques that focus solely on text have gained predominance due to several key factors: the abundance of readily available text data, the potential for semi-anonymous data collection and processing, the high bandwidth-to-overhead ratio as a word consumes only a few bytes compared to kilobytes or more for a second of speech or video, and the ease with which text can be viewed, edited, and corrected.
Previous efforts have attempted to include other modalities like images (Razavi et al., 2019), videos (Yan et al., 2021), or audio through the use of techniques like VQ-VAEs (van den Oord et al., 2017). However, these approaches often significantly increase the context size, are not transferable across different systems, and generally require the original signal (like a video) to be sent for processing. In contrast, our proposal offers a more flexible, universal, and efficient way to incorporate non-verbal cues directly as text.
### Proposal
We propose adopting a universal transcription system for body language, much like the written system used for spoken languages. This system would transcribe gestures, facial expressions, and other non-verbal cues into textual form. The advantages of this approach are numerous:
**Flexibility in Transcription** Different programs can decide on their own transcription methods, taking into account local variations and context.
**Computational Efficiency** Text-based methods require significantly lower computational resources compared to image or video processing. (Notoriously, GPT-4 was released without image upload support, since inference on a single image takes upwards of 20 seconds)
**Compatibility with Existing Models** As the body language would be transcribed into discrete tokens, it can fit seamlessly into existing large language models without any modification.
**Anonymity** Transcription acts as a form of biometric anonymization, removing the need to share actual video or images.
**Explainability** The textual transcription provides a more transparent input, making the language modeling process more understandable.
**Seamless Integration** The proposed methodology does not require any significant changes to existing NLP pipelines. It simply acts as an additional layer of data for better understanding and disambiguation. You can include it, or not.
### Implementation
To successfully integrate non-verbal cues when processing spoken language, we advocate the following steps:
1. Capture both video and audio during speech.
2. Use sign language segmentation models to identify boundaries of individual gestures.
3. Transcribe these gestures into a textual notation system like SignWriting.
4. Use speech-to-text models to transcribe the spoken language, identifying the boundaries where each word is expressed.
5. If word boundaries are not directly accessible, a re-alignment model can be used to approximate these boundaries.
6. Combine both speech and gesture transcriptions into a single text string, where gestures can be used to provide additional context to the spoken words.
This approach can be thought of as analogous to incorporating additional context, such as gender, into machine translation [10]. By training on a large dataset that includes unmarked sentences, the model may develop certain biases. Introducing a smaller dataset with contextual information can help the model learn correlations between language and specific contexts. During inference, one has the option to either provide just the text for a more generalized output or include additional contextual tags for a more accurate and targeted output.
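A toy sketch of step 6 is shown below. The tuple-based data structures, the bracketed gesture markup and the placeholder gesture token are all illustrative assumptions of ours rather than a published format; in practice a SignWriting string would take the place of the placeholder.

```python
def interleave(words, gestures):
    # words: [(start_sec, end_sec, token)]; gestures: [(start_sec, end_sec, transcription)]
    events = [(s, 0, w) for s, _, w in words] + \
             [(s, 1, f"[gesture: {g}]") for s, _, g in gestures]
    # sort by onset time; a co-occurring gesture follows the spoken word
    events.sort(key=lambda e: (e[0], e[1]))
    return " ".join(text for _, _, text in events)

words = [(0.0, 0.3, "I"), (0.3, 0.5, "said"), (0.5, 0.8, "I'm"), (0.8, 1.1, "fine")]
gestures = [(0.8, 1.4, "<eye-roll>")]
print(interleave(words, gestures))
# -> I said I'm fine [gesture: <eye-roll>]
```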
## 5 Conclusions
This paper underscores the fundamental role of non-verbal cues, such as co-speech gestures and facial expressions, in human communication. While strides have been made in the realm of Natural Language Processing for understanding textual content, a holistic approach that integrates the rich layers of non-verbal information is significantly lacking. This shortfall not only hampers the comprehension of spoken language but also limits our ability to construct nuanced, context-aware NLP models.
The key to advancing in this frontier may lie in borrowing techniques and insights from sign language processing. We advocate for the adaptation and implementation of universal automatic gesture segmentation and transcription models that can transcribe co-speech gestures into textual input. Such an approach would be a pivotal step in bridging the gap between text-based and real-world, spoken interactions, thereby enriching both the scope and applicability of NLP models.
When processing spoken language content, researchers should adopt a more holistic lens, one that goes beyond words and phrases to also consider non-verbal cues. Existing universal segmentation and transcription models used in sign language can serve as invaluable resources for this purpose, as they offer the ability to transcribe gestures directly as text.
We call upon researchers in spoken language processing to contribute to the development of universal gesture transcription methods. Furthermore, we encourage the academic community to construct challenge sets specifically tailored to validate the utility of these transcription methods in capturing the complexities of non-verbal communication. These steps are not merely supplementary but are central to achieving a more comprehensive understanding of human communication in its full richness and complexity.
|
2308.08676 | Cutoff in the Bernoulli-Laplace Model With Unequal Colors and Urn Sizes | We consider a generalization of the Bernoulli-Laplace model in which there
are two urns and $n$ total balls, of which $r$ are red and $n - r$ white, and
where the left urn holds $m$ balls. At each time increment, $k$ balls are
chosen uniformly at random from each urn and then swapped. This system can be
used to model phenomena such as gas particle interchange between containers or
card shuffling. Under a reasonable set of assumptions, we bound the mixing time
of the resulting Markov chain asymptotically in $n$ with cutoff at $\log{n}$
and constant window. Among other techniques, we employ the spectral analysis of
arXiv:0906.4242 on the Markov transition kernel and the chain coupling tools of
arXiv:2203.08647 and arXiv:1606.01437. | Thomas Griffin, Bailey Hall, Jackson Hebner, David Herzog, Denis Selyuzhitsky, Kevin Wong, John Wright | 2023-08-16T21:19:51Z | http://arxiv.org/abs/2308.08676v1 | # Cutoff in the Bernoulli-Laplace Model With Unequal Colors and Urn Sizes
###### Abstract
We consider a generalization of the Bernoulli-Laplace model in which there are two urns and \(n\) total balls, of which \(r\) are red and \(n-r\) white, and where the left urn holds \(m\) balls. At each time increment, \(k\) balls are chosen uniformly at random from each urn and then swapped. This system can be used to model phenomena such as gas particle interchange between containers or card shuffling. Under a reasonable set of assumptions, we bound the mixing time of the resulting Markov chain asymptotically in \(n\) with cutoff at \(\log n\) and constant window. Among other techniques, we employ the spectral analysis of [4] on the Markov transition kernel and the chain coupling tools of [1] and [6].
###### Contents
* 1 Introduction
* 2 Lemmata
* 2.1 Spectral Lemmata
* 2.2 Hypergeometric Comparison Lemmata
* 3 Bounds under Assumption 1.1
* 3.1 Lower Bound
* 3.2 Upper Bound
* 4 Bounds when \(\gamma=h(1-h)>0\)
* 4.1 The Lower Bound
* 4.2 The Upper Bound
* 5 Conclusion
* 5.1 Generalizations
* 5.2 Numerical Results
* 6 Acknowledgements
## 1 Introduction
In the classical Bernoulli-Laplace model, there are a total of \(n\) balls split evenly between a left and a right urn. Of these balls, \(\frac{n}{2}\) are red and \(\frac{n}{2}\) are white. At each time increment, \(k\) balls are selected uniformly at random without replacement from each urn and then swapped between the urns.
We consider a generalization of this model where the left urn holds \(m\) balls and the right urn \(n-m\) balls, and where \(r\) of the balls are red and \(n-r\) of the balls are white. As in the classical case, \(k\) balls are selected from both urns uniformly and without replacement to swap. Note that, implicitly, \(m,r,k\) are sequences in \(n\), but we will suppress this dependence. We will let \(X_{t}\) denote the number of red balls in the left urn after \(t\) swaps. Transition probabilities for this process are given by
\[p(x,y)=\mathbb{P}(x-H_{1}^{x}+H_{2}^{x}=y) \tag{1}\]
where \(H_{1}^{x}\) and \(H_{2}^{x}\) are independent hypergeometric random variables corresponding to the number of red balls moved out of the left urn and into the left urn during the swap, respectively. More precisely,
\[H_{1}^{x}\sim\mathrm{Hyp}(m,x,k)\quad\text{ and }\quad H_{2}^{x}\sim\mathrm{ Hyp}(n-m,r-x,k). \tag{2}\]
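The transition rule is straightforward to simulate. The sketch below (Python/NumPy) draws one step of the chain by sampling the two hypergeometric variables in (2); the function name and parameter values are illustrative only.

```python
# Simulation sketch of one transition: X_{t+1} = X_t - H1 + H2, where
# H1 ~ Hyp(m, x, k) counts red balls leaving the left urn and
# H2 ~ Hyp(n - m, r - x, k) counts red balls entering it.
import numpy as np

rng = np.random.default_rng(0)

def step(x, n, r, m, k):
    h1 = rng.hypergeometric(x, m - x, k)                  # red balls drawn from the left urn
    h2 = rng.hypergeometric(r - x, (n - m) - (r - x), k)  # red balls drawn from the right urn
    return x - h1 + h2

n, r, m, k = 1000, 400, 400, 100
x = 0
for _ in range(50):
    x = step(x, n, r, m, k)
print(x)  # after many steps the chain concentrates near r * m / n = 160
```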
This process is an irreducible, aperiodic Markov chain on the finite state space
\[\mathcal{X}=\{\max(0,r+m-n),\max(0,r+m-n)+1,...,\min(m,r)\}\]
and so converges to a stationary distribution \(\pi\). Given an initial distribution \(\mu\) for \(X_{0}\), we define the _total variation distance_ between the law of \(X_{t}\) and the stationary distribution by
\[\|\mu P^{t}-\pi\|_{TV}=\sup_{A\subset\mathcal{X}}\left|\mu P^{t}(A)-\pi(A) \right|,\]
where \(P\) is the Markov transition kernel corresponding to the chain. We may then define the _mixing time at \(\varepsilon\)_ by
\[t_{\mathrm{mix}}^{(n)}(\varepsilon)=\sup_{x\in\mathcal{X}}\inf\{t:\|\delta_{x}P ^{t}-\pi\|_{TV}\leq\varepsilon\},\]
where \(\delta_{x}\) is the point distribution at \(x\) (meaning the chain starts at \(X_{0}=x\in\mathcal{X}\)).
We say a sequence of Markov chains _exhibits cutoff_ in total variation if
\[\lim_{n\to\infty}\frac{t_{\mathrm{mix}}^{(n)}(\varepsilon)}{t_{\mathrm{mix}}^ {(n)}(1-\varepsilon)}=1\quad\text{ for all fixed }\quad\varepsilon\in(0,1).\]
Further, we say a sequence \(a_{n}\) is a _cutoff window_ if there exists a constant \(c(\varepsilon)\) for each \(\varepsilon\in(0,1)\) such that
\[a_{n}=o\left(t_{\mathrm{mix}}^{(n)}\left(\frac{1}{2}\right)\right)\quad\text {and}\quad\left|t_{\mathrm{mix}}^{(n)}(\varepsilon)-t_{\mathrm{mix}}^{(n)}(1- \varepsilon)\right|\leq c(\varepsilon)a_{n}\quad\text{ for all }\quad n.\]
This paper analyzes the mixing times of sequences \(\{X_{t}^{(n)}\}_{n}\) of generalized Bernoulli-Laplace chains. To make this manageable, we introduce the following assumption.
**Assumption 1.1**.: _For each \(n\), assume without loss of generality that \(r,m\leq\frac{n}{2}\). Suppose \(\lim_{n\to\infty}\frac{k}{n}=\gamma\), \(\lim_{n\to\infty}\frac{m}{n}=h\) and \(\lim_{n\to\infty}\frac{r}{n}=\eta\), and that_
\[0<\gamma\leq h\leq\frac{1}{2},\quad 0<\eta\leq\frac{1}{2},\quad\gamma\neq h(1-h),\quad\text{ and }\quad\gamma\neq\frac{1}{2}\text{ if }h=\frac{1}{2}.\]
**Remark 1.2**.: _When proving asymptotic results, Assumption 1.1 allows us to make the following assumptions without loss of generality_
\[m,n-m\geq 2,\quad k\neq\frac{m(n-m)}{n},\quad\text{and}\quad\text{if }k=m\text{ then }m\neq\frac{n}{2}.\]
_Further,_
\[\mathcal{X}=\{0,\ldots,\min(m,r)\}.\]
For completeness, we explicitly provide the stationary distributions of the chains. The proof is simple and follows, for example, from the logic in [7].
**Proposition 1.3**.: _The stationary distribution of \(X_{t}^{(n)}\) is \(\pi^{(n)}\sim\mathrm{Hyp}(n,r,m)\), with probability mass function_
\[\pi^{(n)}(j)=\frac{\binom{r}{j}\binom{n-r}{m-j}}{\binom{n}{m}}.\]
We now define for each \(n\)
\[t_{n}:=\frac{-\log n}{2\log\left|1-\frac{kn}{m(n-m)}\right|}=\frac{\log n}{2 \left|\log\left|1-\frac{kn}{m(n-m)}\right|\right|}. \tag{3}\]
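For small \(n\), \(t_{\mathrm{mix}}^{(n)}(\varepsilon)\) can be computed exactly by iterating the transition kernel, which also lets one compare it with \(t_{n}\). The Python sketch below builds \(P\) from the hypergeometric pmfs in (1)-(2), uses the stationary law of Proposition 1.3, and reports both quantities; the function names and parameters are illustrative, and the computation is only feasible for modest \(n\).

```python
# Sketch: exact mixing time for small n via the transition kernel (1)-(2),
# compared with t_n from (3).  Uses scipy.stats.hypergeom(M, n_good, N_draws).
import numpy as np
from scipy.stats import hypergeom

def transition_matrix(n, r, m, k):
    states = np.arange(max(0, r + m - n), min(m, r) + 1)
    P = np.zeros((len(states), len(states)))
    for i, x in enumerate(states):
        h1 = hypergeom(m, x, k).pmf(np.arange(k + 1))          # H1 ~ Hyp(m, x, k)
        h2 = hypergeom(n - m, r - x, k).pmf(np.arange(k + 1))  # H2 ~ Hyp(n-m, r-x, k)
        for a, pa in enumerate(h1):
            for b, pb in enumerate(h2):
                if pa * pb > 0:
                    P[i, x - a + b - states[0]] += pa * pb
    return states, P

def mixing_time(n, r, m, k, eps=0.01):
    states, P = transition_matrix(n, r, m, k)
    pi = hypergeom(n, r, m).pmf(states)        # stationary law Hyp(n, r, m)
    dist = np.eye(len(states))                 # row x holds delta_x P^t
    for t in range(1, 5000):
        dist = dist @ P
        if 0.5 * np.abs(dist - pi).sum(axis=1).max() <= eps:
            return t
    return None

n, r, m, k = 200, 80, 80, 20
lam1 = 1 - n * k / (m * (n - m))
t_n = np.log(n) / (2 * abs(np.log(abs(lam1))))
print(mixing_time(n, r, m, k), round(t_n, 2))  # the two agree up to an additive constant
```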
**Notation**. Let \(a_{n}\) and \(b_{n}\) be sequences in \(\mathbb{R}\). We write \(a_{n}\lesssim b_{n}\) if there exists a constant \(C\in\mathbb{R}\) such that \(\left|a_{n}\right|\leq C\left|b_{n}\right|\) for all \(n\) large enough. If \(\lim_{n\to\infty}\frac{a_{n}}{b_{n}}=1\), we write \(a_{n}\sim b_{n}\). To denote the probability of transitioning from a state \(x\) into a set \(S\) in \(t\) steps, we write
\[P_{t}(x,S):=\sum_{y\in S}p_{t}(x,y)=\mathbb{P}_{x}(X_{t}\in S),\]
where \(\mathbb{P}_{x}\) denotes the probability measure induced by the Markov chain starting from a state \(X_{0}=x\in\mathcal{X}\). We may also sometimes write \(X_{t}^{x}\) to refer to a random variable induced by the same starting criterion, especially in the context of two chains started at different initial states.
Our main result bounds \(t_{\mathrm{mix}}^{(n)}(\varepsilon)\) within a constant distance of \(t_{n}\) and is given below.
**Theorem 1.4**.: _Let \(\{X_{t}^{(n)}\}_{n}\) be a sequence of generalized Bernoulli-Laplace chains satisfying Assumption 1.1. Then there exist constants \(N_{1},N_{2},c\) and \(C\), all depending on \(\varepsilon,\gamma,\eta,h\), such that for all \(n\geq N_{1}\),_
\[t_{n}-c\leq t_{\mathrm{mix}}^{(n)}(\varepsilon),\]
_and for all \(n\geq N_{2}\),_
\[t_{\mathrm{mix}}^{(n)}(\varepsilon)\leq t_{n}+C,\]
_with \(t_{n}\) as in (3)._
**Remark 1.5**.: _By taking the maximums of a finite number of constants, Theorem 1.4 implies that for any sequence \(\{X_{t}^{(n)}\}_{n}\) of Bernoulli-Laplace chains satisfying Assumption 1.1, there exists a constant \(C^{\prime}\) such that_
\[t_{n}-C^{\prime}\leq t_{\mathrm{mix}}^{(n)}(\varepsilon)\leq t_{n}+C^{\prime}\]
_for all \(n\). This further implies that \(t_{\mathrm{mix}}^{(n)}(\varepsilon)\) exhibits cutoff at \(t_{n}\) with constant window._
**Remark 1.6**.: _If we assume further that_
\[\left|\frac{1}{\log\left|1-\frac{\gamma}{\eta(1-\eta)}\right|}-\frac{1}{\log \left|1-\frac{kn}{m(n-m)}\right|}\right|=O\left(\frac{1}{\log n}\right),\]
_then Theorem 1.4 implies (for possibly different \(C^{\prime}\)) that_
\[\frac{-\log n}{2\log\left|1-\frac{\gamma}{\eta(1-\eta)}\right|}-C^{\prime} \leq t_{\mathrm{mix}}^{(n)}(\varepsilon)\leq\frac{-\log n}{2\log\left|1-\frac {\gamma}{\eta(1-\eta)}\right|}+C^{\prime}, \tag{4}\]
_and so \(t_{\mathrm{mix}}^{(n)}(\varepsilon)\) exhibits cutoff at order \(\log n\) with constant window._
**Remark 1.7**.: _Theorem 1.4 fails when \(\gamma=h(1-h)\). In that case, \(t_{\rm mix}^{(n)}(\varepsilon)\) is usually a bounded sequence. For details, see Section 4._
## 2 Lemmata
### Spectral Lemmata
All of the eigenvalues and eigenfunctions for the Bernoulli-Laplace model are known (see [4]), but here we only need the first two. This is similar to the approach in [1].
**Lemma 2.1**.: _The Markov transition has first two right eigenfunctions_
\[s_{1}(x)=1-\frac{n}{rm}x \tag{5}\] \[s_{2}(x)=1-\frac{2(n-1)}{rm}x+\frac{(n-1)(n-2)}{r(r-1)m(m-1)}x(x-1) \tag{6}\]
_with respective eigenvalues_
\[\lambda_{1}=1-\frac{nk}{m(n-m)} \tag{7}\] \[\lambda_{2}=1-\frac{2(n-1)k}{m(n-m)}+\frac{(n-1)(n-2)k(k-1)}{m(m- 1)(n-m)(n-m-1)}. \tag{8}\]
_That is,_
\[\mathbb{E}_{x}[s_{1}(X_{1})]=\lambda_{1}s_{1}(x)\quad\text{ and }\quad \mathbb{E}_{x}[s_{2}(X_{1})]=\lambda_{2}s_{2}(x).\]
Proof.: This can be checked by a direct calculation.
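The direct calculation can also be checked numerically. The Python sketch below computes \(\mathbb{E}_{x}[s_{i}(X_{1})]\) exactly from the hypergeometric pmfs in (2) and compares it with \(\lambda_{i}s_{i}(x)\) over the whole state space; the parameter choice is arbitrary, and both reported gaps vanish up to floating-point error if the stated formulas hold.

```python
# Numerical sanity check of Lemma 2.1 for one parameter choice.
import numpy as np
from scipy.stats import hypergeom

n, r, m, k = 60, 25, 20, 7
states = np.arange(max(0, r + m - n), min(m, r) + 1)

s1 = lambda x: 1 - n * x / (r * m)
s2 = lambda x: (1 - 2 * (n - 1) * x / (r * m)
                + (n - 1) * (n - 2) * x * (x - 1) / (r * (r - 1) * m * (m - 1)))
lam1 = 1 - n * k / (m * (n - m))
lam2 = (1 - 2 * (n - 1) * k / (m * (n - m))
        + (n - 1) * (n - 2) * k * (k - 1) / (m * (m - 1) * (n - m) * (n - m - 1)))

gap1 = gap2 = 0.0
for x in states:
    h1 = hypergeom(m, x, k).pmf(np.arange(k + 1))
    h2 = hypergeom(n - m, r - x, k).pmf(np.arange(k + 1))
    e1 = e2 = 0.0
    for a, pa in enumerate(h1):
        for b, pb in enumerate(h2):
            e1 += pa * pb * s1(x - a + b)
            e2 += pa * pb * s2(x - a + b)
    gap1 = max(gap1, abs(e1 - lam1 * s1(x)))
    gap2 = max(gap2, abs(e2 - lam2 * s2(x)))
print(gap1, gap2)   # both should be ~1e-15 if the stated formulas are correct
```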
**Corollary 2.2**.: _For all \(t\geq 0,x\in\mathcal{X}\), we have_
\[\mathbb{E}_{x}[s_{1}(X_{t})]=\lambda_{1}^{t}s_{1}(x)\] \[\mathbb{E}_{x}[s_{2}(X_{t})]=\lambda_{2}^{t}s_{2}(x)\]
_by the Markov property._
Doing some algebra, we find that
\[s_{1}^{2}(x)=b_{0}+b_{1}s_{1}(x)+b_{2}s_{2}(x),\]
where
\[b_{0}=\frac{(n-m)(n-r)}{(n-1)rm},\quad\ b_{1}=-\frac{(n-2r)(n-2m )}{(n-2)rm},\quad\ b_{2}=\frac{n^{2}(r-1)(m-1)}{(n-1)(n-2)rm}.\]
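The algebra behind this expansion is routine but tedious; a short symbolic check (Python/sympy) of the identity \(s_{1}^{2}=b_{0}+b_{1}s_{1}+b_{2}s_{2}\) is sketched below.

```python
# Symbolic check of the expansion of s_1^2 in terms of s_1 and s_2.
import sympy as sp

n, r, m, x = sp.symbols("n r m x", positive=True)
s1 = 1 - n * x / (r * m)
s2 = (1 - 2 * (n - 1) * x / (r * m)
      + (n - 1) * (n - 2) * x * (x - 1) / (r * (r - 1) * m * (m - 1)))
b0 = (n - m) * (n - r) / ((n - 1) * r * m)
b1 = -(n - 2 * r) * (n - 2 * m) / ((n - 2) * r * m)
b2 = n**2 * (r - 1) * (m - 1) / ((n - 1) * (n - 2) * r * m)
print(sp.simplify(s1**2 - (b0 + b1 * s1 + b2 * s2)))   # prints 0
```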
**Corollary 2.3**.: _For any \(x,t,\) we have_
\[\mathbb{E}_{x}[s_{1}^{2}(x)]=b_{0}+b_{1}\lambda_{1}^{t}s_{1}(x)+b_{2}\lambda_{2}^ {t}s_{2}(x).\]
**Lemma 2.4**.: _Let \(\lambda_{1},\lambda_{2}\) be as above. Then \(\lambda_{1}^{2}-\lambda_{2}=O(\frac{1}{n}).\)_
Proof.: Observe that
\[\lambda_{1}^{2}-\lambda_{2} =\left(1-\frac{nk}{m(n-m)}\right)^{2}-\left(1-\frac{2(n-1)k}{m(n- m)}+\frac{(n-1)(n-2)k(k-1)}{m(m-1)(n-m)(n-m-1)}\right)\] \[=\frac{2k}{m(n-m)}+\left(\frac{n^{2}k^{2}}{m^{2}(n-m)^{2}}-\frac{ (n-2)(n-1)k(k-1)}{(n-m)(n-m-1)m(m-1)}\right).\]
The claim follows.
**Lemma 2.5**.: _Let \(\lambda_{2}\) be as above, \(t_{n}\) as in (3), and suppose \(\gamma\neq h(1-h)\). Then \(\lambda_{2}^{t_{n}}=O(\frac{1}{n}).\)_
Proof.: We first show that
\[\lambda_{1}^{2}-\lambda_{2}=\frac{2k}{m(n-m)}+\left(\frac{n^{2}k^{2}}{m^{2}(n- m)^{2}}-\frac{(n-2)(n-1)k(k-1)}{(n-m)(n-m-1)m(m-1)}\right)\geq 0.\]
Notice that for \(n\geq 4\), we have
\[\frac{4k^{2}}{m^{2}(n-m)^{2}}\leq\frac{2k}{m(n-m)}.\]
It thus suffices to show that
\[\frac{(n^{2}+4)k^{2}}{m^{2}(n-m)^{2}}-\frac{(n-2)(n-1)k(k-1)}{(n-m)(n-m-1)m(m- 1)}\geq 0,\]
which is equivalent to
\[\frac{(n^{2}+4)k}{m(n-m)}-\frac{(n-2)(n-1)(k-1)}{(n-m-1)(m-1)}\geq 0.\]
It is easy to check that the claim holds for \(k=1\) and \(k=m\). Since the left-hand side is linear in \(k\), this implies that the claim holds for all \(k\in\{1,\ldots,m\}\).
The condition \(\gamma\neq h(1-h)\) implies \(\lim_{n\to\infty}\lambda_{2}>0\). We may thus assume without loss of generality that \(\lambda_{2}=|\lambda_{2}|\). Therefore \(\log\left(\lambda_{1}^{2}\right)-\log|\lambda_{2}|=:a_{n}\geq 0\), and so
\[\left|\lambda_{2}^{t_{n}}\right| =\exp\left(\log|\lambda_{2}|\frac{\log n}{|\log(\lambda_{1}^{2}) |}\right)\] \[=\exp\left(\left[\log\left(\lambda_{1}^{2}\right)-a_{n}\right] \frac{\log n}{|\log(\lambda_{1}^{2})|}\right)\] \[=\exp\left(-\log n\right)\exp\left(\frac{-a_{n}\log n}{|\log( \lambda_{1}^{2})|}\right)\] \[\lesssim\frac{1}{n}.\]
### Hypergeometric Comparison Lemmata
Denote the standard normal density by \(\phi(x)\). For \(l\in\mathcal{X}\), let
\[p:=\frac{l}{m},\quad q:=1-p,\quad\sigma:=1\vee\sqrt{kpq\left(1-\frac{k}{n} \right)}. \tag{9}\]
**Definition 2.6**.: _We say \(Z\) has a discrete normal distribution on \(\mathcal{X}\) with parameters \(\zeta\in\mathbb{R}\) and \(\xi>0\) if_
\[\mathbb{P}(Z=j)=\frac{1}{\xi\mathcal{N}_{\zeta,\xi}}\phi\left(\frac{j-\zeta}{ \xi}\right)\qquad\text{ where }\qquad\mathcal{N}_{\zeta,\xi}:=\sum_{x\in \mathcal{X}_{k}}\frac{1}{\xi}\phi\left(\frac{x-\zeta}{\xi}\right)\]
_for \(j\in\mathcal{X}\), and denote \(Z\sim\mathrm{dN}(\zeta,\xi)\)._
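A direct implementation of this distribution is immediate. The Python sketch below normalizes a standard normal density over a finite support (here taken as \(\{0,\ldots,k\}\), matching the \(\mathrm{dN}_{k}\) variables used later in Section 3); the function name and the parameter choice following (9) are illustrative.

```python
# Sketch of the discrete normal dN(zeta, xi): a rescaled standard normal
# density on a finite support, renormalized by the constant N_{zeta, xi}.
import numpy as np

def discrete_normal_pmf(support, zeta, xi):
    phi = np.exp(-((support - zeta) / xi) ** 2 / 2) / np.sqrt(2 * np.pi)
    weights = phi / xi
    return weights / weights.sum()        # division by the sum plays the role of 1/N

# Example with the parameters of (9) for one state l
n, r, m, k, l = 200, 80, 90, 20, 36
p = l / m
sigma = max(1.0, np.sqrt(k * p * (1 - p) * (1 - k / n)))
support = np.arange(k + 1)
pmf = discrete_normal_pmf(support, k * p, sigma)
print(pmf.sum(), pmf.argmax())            # sums to 1 (up to rounding); mode near k * p = 8
```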
Further, let \(x_{j}:=\frac{j-kp}{\sigma}\) and \(\mathcal{N}:=\mathcal{N}_{kp,\sigma}\). Finally, we can give an important technical lemma, the proof of which follows Lemma 5.2 from [1].
**Lemma 2.7**.: _Given the above parameters, supposing Assumption 1.1 holds, letting \(l=\frac{rm}{n}+O(\sqrt{n})\), we have that_
\[\mathcal{N}=1+O\left(\frac{1}{\sqrt{n}}\right).\]
Proof.: Let \(\mathcal{Z}\) be the standard normal distribution on \(\mathbb{R}\). From the proof of Lemma 5.2 from [1], we can obtain that
\[\mathcal{N}\geq 1-\mathbb{P}\left(\mathcal{Z}\leq\frac{-kp}{\sigma}\right)- \mathbb{P}\left(\mathcal{Z}\geq\frac{kq}{\sigma}\right)-\frac{2}{\sqrt{2\pi \sigma^{2}}}.\]
Letting \(l=\frac{rm}{n}+O(\sqrt{n})\), we have
\[p=\frac{r}{n}+O\left(\frac{1}{\sqrt{n}}\right)\quad\text{ and }\quad q=1-\frac{r}{n}+O\left(\frac{1}{\sqrt{n}}\right)\quad\text{ and }\quad\sigma\sim\sqrt{\gamma(1-\gamma)n\eta(1-\eta)}.\]
Applying Chebyshev's inequality, we get
\[\mathbb{P}\left(\mathcal{Z}\leq\frac{-kp}{\sigma}\right)\leq\frac{\sigma^{2}} {k^{2}p^{2}}=O\left(\frac{1}{n}\right).\]
Similarly, \(\mathbb{P}\left(\mathcal{Z}\geq\frac{kq}{\sigma}\right)=O\left(\frac{1}{n}\right)\). Of course, \(\frac{2}{\sqrt{2\pi\sigma^{2}}}=O\left(\frac{1}{\sqrt{n}}\right)\), and so the claim follows.
## 3 Bounds under Assumption 1.1
### Lower Bound
In this section, we prove the lower bound of Theorem 1.4, which we restate below.
**Proposition 3.1**.: _Let \(\{X_{t}^{(n)}\}_{n}\) be a sequence of generalized Bernoulli-Laplace chains satisfying Assumption 1.1. Then there exist constants \(N_{1}:=N_{1}(\varepsilon,\gamma,\eta,h)\) and \(c:=c(\varepsilon,\gamma,\eta,h)\) such that for all \(n\geq N_{1}\),_
\[t_{n}-c\leq t_{\mathrm{mix}}^{(n)}(\varepsilon).\]
Proof.: We define
\[s(x)=\sqrt{n-1}s_{1}(x)\]
and calculate the mean and variance of \(s(X_{t})\) both in the case where the chain is started from \(0\) and with respect to the stationary distribution. We also let \(t=t_{n}-c\), where \(c\) is to be determined later.
Clearly, \(\mathbb{E}_{\pi}[s(X_{t})]=0\) and
\[\mathrm{Var}_{\pi}(s(X_{t}))=(n-1)\frac{n^{2}}{r^{2}m^{2}}\mathrm{ Var}_{\pi}(X_{t})=\frac{(n-m)(n-r)}{rm}\leq\frac{(1-h)(1-\eta)}{\eta h}+2\]
for sufficiently large \(n\). Now,
\[|\mathbb{E}_{0}[s(X_{t})]| =|\sqrt{n-1}\lambda_{1}^{t}s_{1}(0)|\] \[=\sqrt{n-1}|\lambda_{1}|^{t_{n}}|\lambda_{1}|^{-c}\] \[=\sqrt{\frac{n-1}{n}}|\lambda_{1}|^{-c}.\]
To calculate
\[\mathrm{Var}_{0}[s(X_{t})]=(n-1)\mathbb{E}_{0}[s_{1}(X_{t})^{2}]- \mathbb{E}_{0}[s(X_{t})]^{2},\]
we first write
\[\mathbb{E}_{0}[s_{1}(X_{t})^{2}] =b_{0}+b_{1}\lambda_{1}^{t}+b_{2}\lambda_{2}^{t}\] \[=\frac{(n-m)(n-r)}{(n-1)rm}-\frac{(n-2m)(n-2r)}{(n-2)rm}\lambda_ {1}^{t_{n}}\lambda_{1}^{-c}+\frac{(m-1)(r-1)n^{2}}{rm(n-1)(n-2)}\lambda_{2}^{ t_{n}}\lambda_{2}^{-c}.\]
By Lemma 2.5, \(\lambda_{2}^{t_{n}}=O(\frac{1}{n})\). Now consider these three terms in \((n-1)\mathbb{E}_{0}[s_{1}(X_{t})^{2}]\) in the limit as \(n\to\infty\). The first,
\[\frac{(n-1)(n-m)(n-r)}{(n-1)rm}\to\frac{(1-h)(1-\eta)}{h\eta}.\]
The second vanishes, and the third is asymptotically bounded by \(1\). It follows that
\[\text{Var}_{0}[s(X_{t})]\leq\frac{(1-h)(1-\eta)}{h\eta}+2=:K\]
for sufficiently large \(n\). We now define the sets
\[A_{\alpha}=\left\{x\in\mathcal{X}:|s(x)|\leq\alpha\sqrt{K}\right\}\]
and
\[B_{d,c} =\left\{x:\min(|s(x)-\lambda_{1}^{-c}|,|s(x)+\lambda_{1}^{-c}|) \leq d\sqrt{K}\right\}\] \[=\left\{x:|s(x)-\lambda_{1}^{-c}|\leq d\sqrt{K}\right\}\cup\left\{ x:|s(x)+\lambda_{1}^{-c}|\leq d\sqrt{K}\right\}.\]
The two sets reflect the two possibilities for the mean. Using Chebyshev's inequality, we calculate
\[\pi_{n}(A_{\alpha})\geq 1-\frac{1}{\alpha^{2}}\]
For \(B_{d,c}\), we have
\[P_{t}(0,B_{d,c}) \geq P_{t}\left(0,\left\{\left|s(X_{t})-\sqrt{\frac{n}{n-1}}\mathbb{ E}_{0}[s(X_{t})]\right|\leq d\sqrt{K}\right\}\right)\] \[=P_{t}\left(0,\left\{|s(X_{t})-\mathbb{E}_{0}[s(X_{t})]|\leq d \sqrt{K}\right\}\right)\] \[\qquad+P_{t}\left(0,\left\{\mathbb{E}_{0}[s(X_{t})]-d\sqrt{K} \leq s(X_{t})<\mathbb{E}_{0}[s(X_{t})]\sqrt{\frac{n}{n-1}}-d\sqrt{K}\right\}\right)\] \[\qquad-P_{t}\left(0,\left\{\mathbb{E}_{0}[s(X_{t})]+d\sqrt{K}<s( X_{t})\leq\mathbb{E}_{0}[s(X_{t})]\sqrt{\frac{n}{n-1}}+d\sqrt{K}\right\}\right)\] \[\geq P_{t}\left(0,\left\{|s(X_{t})-\mathbb{E}_{0}[s(X_{t})]|\leq d \sqrt{K}\right\}\right)\] \[\qquad-P_{t}\left(0,\left\{\mathbb{E}_{0}[s(X_{t})]+d\sqrt{K}<s( X_{t})\leq\mathbb{E}_{0}[s(X_{t})]\sqrt{\frac{n}{n-1}}+d\sqrt{K}\right\}\right)\] \[\geq 1-\frac{1}{d^{2}}-\frac{\left|\left(\sqrt{\frac{n}{n-1}}-1 \right)\mathbb{E}_{0}[s(X_{t})]\right|}{\frac{n\sqrt{n-1}}{rm}}\] \[\geq 1-\frac{1}{d^{2}}-\frac{K^{\prime}}{\sqrt{n}}\]
eventually, where \(K^{\prime}\) is a constant. Notice that if
\[|\lambda_{1}|^{-c}-d\sqrt{K}\geq 2\alpha\sqrt{K} \tag{10}\]
we have that \(A_{\alpha}\) and \(B_{d,c}\) are disjoint, implying that \(P_{t}(0,A_{\alpha})\leq\frac{1}{d^{2}}+\frac{K^{\prime}}{\sqrt{n}}\). Choosing \(\alpha\) and \(d\) such that \(\frac{1}{\alpha^{2}}\) and \(\frac{1}{d^{2}}\) are less than \(\frac{\varepsilon}{3}\), and then \(c\) such that (10) is satisfied, we have
\[||P_{t}(0,\cdot)-\pi_{n}||_{TV} \geq|P_{t}(0,A_{\alpha})-\pi_{n}(A_{\alpha})|\] \[\geq 1-\frac{1}{\alpha^{2}}-\frac{1}{d^{2}}-\frac{K^{\prime}}{ \sqrt{n}}\] \[\geq 1-\varepsilon\]
which gives the desired result.
Observe that equality is (asymptotically) achieved in (10) when
\[c(\varepsilon,\gamma,\eta,h)=\frac{\log\left(2\alpha+d\right)+\log\sqrt{K}}{ \left|\log\left|1-\frac{\gamma}{h(1-h)}\right|\right|}\]
which does not depend on \(n\).
### Upper Bound
We now begin the proof of the upper bound in Theorem 1.4.
**Definition 3.2**.: _Define the following:_
\[\mathcal{I}_{n}(\kappa) =\left\{x\in\mathcal{X}:\left|x-\frac{rm}{n}\right|\leq\kappa \sqrt{n}\right\}\] \[F_{n}(\kappa) =\left\{(x,y)\in\mathcal{I}_{n}(\kappa)^{2}:|x-y|\leq\frac{\sqrt{ n}}{\kappa^{3}}\right\}\] \[\tau_{x,y}(\kappa) =\min\{t:(Y_{t}^{x},Y_{t}^{y})\in F_{n}(\kappa)\}.\]
#### 3.2.1 Path Coupling Contraction
We use a generalization of the coupling introduced in [6]. Let \(Y_{1,t}\) and \(Y_{2,t}\) be generalized Bernoulli-Laplace chains with parameters \(m,r,n\). We define a coupling between these chains as follows: In each chain, label the balls in the left urn with \(\{1,\ldots,m\}\) and the balls in the right urn with \(\{m+1,\ldots,n\}\) such that the red balls in any given urn have lower indices than the white balls in the same urn. Now, choose subsets \(A\subset\{1,\ldots,m\}\) and \(B\subset\{m+1,\ldots,n\}\) uniformly at random such that \(|A|=|B|=k\). Then, in both chains, move the balls with labels in \(A\) to the right urn and labels in \(B\) to the left urn. Relabel the balls and repeat.
This is a coupling because each individual chain, \(Y_{1,t}\) or \(Y_{2,t}\), has the same behavior as \(X_{t}\). However, there is a tendency for \(Y_{1,t}\) and \(Y_{2,t}\) to approach one another because the same indices are being swapped in each chain and indices are not independent of color. This is formalized below.
**Lemma 3.3**.: _Let \(x,y\in\mathcal{X}\). Then for all \(t\geq 0\),_
\[\mathbb{E}[|Y_{t}^{x}-Y_{t}^{y}|]\leq\left(1-\frac{k(n-2k)}{m(n-m)}\right)^{t}|x -y|.\]
_That is, each application of this path coupling's Markov transition is a strict contraction with coefficient_
\[1-\frac{k(n-2k)}{m(n-m)}\in(0,1).\]
Proof.: The proof is very similar to that of Lemma 3.4 in [1]. We may assume without loss of generality that \(x>y\). By the Markov property, it suffices to prove the claim for \(t=1\). Now consider the case when \(x-y=1\). Then \(Y_{1}^{x}-Y_{1}^{y}\) can only take the values \(-1,0,\text{ and }1\). We directly calculate that
\[\mathbb{P}(Y_{1}^{x}-Y_{1}^{y}=-1)=\frac{k}{m}\left(\frac{k}{n-m}\right)\]
and
\[\mathbb{P}(Y_{1}^{x}-Y_{1}^{y}=1)=\left(\frac{m-k}{m}\right)\left(\frac{n-m-k }{n-m}\right).\]
Thus
\[\mathbb{E}[|Y_{1}^{x}-Y_{1}^{y}|] =\frac{k}{m}\left(\frac{k}{n-m}\right)+\left(\frac{m-k}{m}\right) \left(\frac{n-m-k}{n-m}\right)\] \[=1-\frac{k(n-2k)}{m(n-m)}.\]
When \(x-y>1\), we use the triangle inequality to get
\[\mathbb{E}[|Y_{1}^{x}-Y_{1}^{y}|]\leq\sum_{i=0}^{x-y-1}\mathbb{E}[|Y_{1}^{y+i} -Y_{1}^{y+i+1}|]\leq(x-y)\left(1-\frac{k(n-2k)}{m(n-m)}\right).\]
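This coupling is easy to simulate, and the contraction is visible empirically. In the Python sketch below both chains swap the same index sets, with red balls always occupying the lowest labels in each urn; the function name and parameters are illustrative.

```python
# Simulation sketch of the path coupling used in Lemma 3.3.
import numpy as np

rng = np.random.default_rng(1)
n, r, m, k = 200, 80, 90, 20

def coupled_step(x, y):
    A = rng.choice(m, size=k, replace=False)        # labels swapped out of the left urn
    B = rng.choice(n - m, size=k, replace=False)    # labels swapped out of the right urn
    def advance(z):
        red_out = np.sum(A < z)          # red balls hold labels 0..z-1 in the left urn
        red_in = np.sum(B < r - z)       # red balls hold labels 0..r-z-1 in the right urn
        return z - red_out + red_in
    return advance(x), advance(y)

x, y = 60, 30
gaps = [abs(np.subtract(*coupled_step(x, y))) for _ in range(5000)]
bound = (1 - k * (n - 2 * k) / (m * (n - m))) * abs(x - y)
print(np.mean(gaps), bound)   # empirical E|Y_1^x - Y_1^y| vs. the contraction bound
```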
Now we state the following:
**Proposition 3.4**.: _Suppose Assumption 1.1 is satisfied. Then for any \(\kappa\in\mathbb{N}\) such that_
\[\kappa^{4}\left(1-\frac{\gamma(1-2\gamma)}{h(1-h)}\right)^{\kappa}\leq\frac{1 }{\kappa^{2}}, \tag{11}\]
_we have_
\[\mathbb{P}(\tau_{x,y}(\kappa)>t_{n}+\kappa)=O\left(\frac{1}{\kappa^{2}}\right).\]
Proof.: First observe that
\[\mathbb{P}(\tau_{x,y}(\kappa)>t_{n}+\kappa)\leq\mathbb{P}(X_{t_{n}}^ {x}\not\in\mathcal{I}_{n}(\kappa))+\mathbb{P}(X_{t_{n}}^{y}\not\in\mathcal{I}_{n }(\kappa))+\mathbb{P}(X_{t_{n}+\kappa}^{x}\not\in\mathcal{I}_{n}(\kappa))\] \[+\mathbb{P}(X_{t_{n}+\kappa}^{y}\not\in\mathcal{I}_{n}(\kappa))+ \mathbb{P}\left(Y_{t_{n}}^{x},Y_{t_{n}}^{y}\in\mathcal{I}_{n}(\kappa),\left|Y_ {t_{n}+\kappa}^{x}-Y_{t_{n}+\kappa}^{y}\right|>\frac{\sqrt{n}}{\kappa^{3}} \right),\]
where we may use \(X\) instead of \(Y\) for the first four terms because \(X\) and \(Y\) share the same distributions by the coupling. We now work to asymptotically bound these terms by \(\frac{1}{\kappa^{2}}\). For any constants \(t\geq 0\) and \(z\in\mathcal{X}\), we have
\[\mathbb{P}_{z}(X_{t}\not\in\mathcal{I}_{n}(\kappa)) =\mathbb{P}_{z}\left(\left|X_{t}-\frac{rm}{n}\right|>\kappa\sqrt{ n}\right)\] \[=\mathbb{P}_{z}\left(s_{1}^{2}(X_{t})>\kappa^{2}\frac{n^{3}}{r^{4 }}\right)\] \[\leq\frac{1}{\kappa^{2}}\frac{r^{4}}{n^{3}}\mathbb{E}_{z}[s_{1}^ {2}(X_{t})]\] \[=\frac{1}{\kappa^{2}}\frac{r^{4}}{n^{3}}\left(b_{0}+b_{1}\lambda_ {1}^{t}s_{1}(z)+b_{2}\lambda_{2}^{t}s_{2}(z)\right).\]
Two of the terms involve \(t=t_{n}\) and the other two \(t=t_{n}+\kappa\). Since \(\left|\lambda_{1}\right|,\left|\lambda_{2}\right|\leq 1\), it suffices to bound for \(t=t_{n}\). As \(\frac{r^{4}}{n^{3}}=O(n)\), we need only show \(b_{0},b_{1}\lambda_{1}^{t_{n}}s_{1}(z),b_{2}\lambda_{2}^{t_{n}}s_{2}(z)=O(\frac{1}{n})\). The first two follow easily because \(b_{0},b_{1}=O(\frac{1}{n})\) and \(\lambda_{1}\) and \(s_{1}(z)\) are bounded. For the final term, notice that \(b_{2}\) and \(s_{2}(z)\) are bounded. It therefore suffices to show that \(\lambda_{2}^{t_{n}}=O(\frac{1}{n})\), and this follows from Lemma 2.5. Thus
\[\mathbb{P}_{z}(X_{t}\not\in\mathcal{I}_{n}(\kappa))=O\left(\frac{1}{\kappa^{2 }}\right).\]
We now bound the last term. Observe that
\[\mathbb{P}\left(Y_{t_{n}}^{x},Y_{t_{n}}^{y}\in\mathcal{I}_{n}( \kappa),\left|Y_{t_{n}+\kappa}^{x}-Y_{t_{n}+\kappa}^{y}\right|>\frac{\sqrt{n}} {\kappa^{3}}\right) \leq\max_{x,y\in\mathcal{I}_{n}(\kappa)}\mathbb{P}\left(\left|Y_ {\kappa}^{x}-Y_{\kappa}^{y}\right|\geq\frac{\sqrt{n}}{\kappa^{3}}\right)\] \[\leq\max_{x,y\in\mathcal{I}_{n}(\kappa)}\frac{\kappa^{3}}{\sqrt{ n}}\mathbb{E}\left[\left|Y_{\kappa}^{x}-Y_{\kappa}^{y}\right|\right]\] \[\leq 2\kappa^{4}\left(1-\frac{k(n-2k)}{m(n-m)}\right)^{\kappa}\] \[=O\left(\frac{1}{\kappa^{2}}\right).\]
#### 3.2.2 Mixing Times for Chains Started Close Together
We now endeavor to prove the following:
**Proposition 3.5**.: _Suppose that Assumption 1.1 is satisfied. Then,_
\[\max_{(x,y)\in F_{n}(\kappa)}\|\delta_{x}P-\delta_{y}P\|_{TV}=O\left(\frac{1}{ \kappa^{2}}\right).\]
Proof.: Let \(H_{1}^{x}\) and \(H_{2}^{x}\) be hypergeometric random variables as in Equation (2). First, let \(Z_{a}^{b}\sim\mathrm{dN}_{k}(kp_{a}^{b},\sigma^{b})\) for \(a\in\{1,2\}\) and \(b\in\{x,y\}\), where
\[p_{1}^{x}=\frac{x}{m},\ \ p_{2}^{x}=q_{1}^{x},\ \ p_{1}^{y}=\frac{y}{m},\ \ p_{2}^{y}=q_{1}^{y},\ \ \ \text{and}\ \ \ \sigma^{b}=1\vee\sqrt{kp_{1}^{b}p_{2}^{b}\left(1-\frac{k}{n}\right)}.\]
Now observe that
\[\mu_{H_{2}^{x}-H_{1}^{x}+x} -\mu_{H_{2}^{y}-H_{1}^{y}+y}\] \[=(\mu_{H_{2}^{x}-H_{1}^{x}+x}-\mu_{H_{2}^{x}-Z_{1}^{x}+x})+(\mu_ {H_{2}^{x}-Z_{1}^{x}+x}-\mu_{Z_{2}^{x}-Z_{1}^{x}+x})+(\mu_{Z_{2}^{x}-Z_{1}^{x} +x}-\mu_{Z_{2}^{x}-Z_{1}^{y}+x})\] \[+(\mu_{Z_{2}^{x}-Z_{1}^{y}+x}-\mu_{Z_{2}^{y}-Z_{1}^{y}+y})+(\mu_ {Z_{2}^{y}-Z_{1}^{y}+y}-\mu_{H_{2}^{y}-Z_{1}^{y}+y})+(\mu_{H_{2}^{y}-Z_{1}^{y} +y}-\mu_{H_{2}^{y}-H_{1}^{y}+y}).\]
By the independence of all the hypergeometric and discrete normal random variables above, we obtain
\[\|\delta_{x}P-\delta_{y}P\|_{TV} =\|\mu_{H_{2}^{x}-H_{1}^{x}+x}-\mu_{H_{2}^{y}-H_{1}^{y}+y}\|_{TV}\] \[\leq\sum_{b\in\{x,y\}}\sum_{a=1}^{2}\|\mu_{H_{a}^{b}}-\mu_{Z_{a}^ {b}}\|_{TV}+\|\mu_{Z_{2}^{x}+x-y}-\mu_{Z_{2}^{y}}\|_{TV}+\|\mu_{Z_{1}^{x}}-\mu _{Z_{1}^{y}}\|_{TV}.\]
We now bound the first term by proving the following local limit theorem.
**Proposition 3.6** (Local limit theorem).: _Given Assumption 1.1 and using parameters in (9), if \(l=\frac{rm}{n}+O(\sqrt{n}),\)\(H\sim\mathrm{Hyp}(m,l,k),\) and \(Z\sim\mathrm{dN}_{k}(kp,\sigma)\), then_
\[\|\mu_{H}-\mu_{Z}\|_{TV}=O\bigg{(}\frac{1}{\sqrt{n}}\bigg{)}.\]
Proof.: Let \(L:=\sup\{j\in\mathcal{X}:x_{j}\geq-\delta\sigma\}\) and \(R:=\inf\{j\in\mathcal{X}:x_{j}\leq\delta\sigma\}\) for a fixed \(\delta\in(0,\frac{1}{2}]\). Observe that
\[2\|\mu_{H}-\mu_{Z}\|_{TV} =\sum_{j=L}^{R}|\mathbb{P}(H=j)-\mathbb{P}(Z=j)|\] \[\quad+\sum_{j=0}^{L-1}|\mathbb{P}(H=j)-\mathbb{P}(Z=j)|+\sum_{j=R +1}^{k}|\mathbb{P}(H=j)-\mathbb{P}(Z=j)|\] \[=:T_{1}+T_{2}+T_{3}\]
First, by triangle inequality,
\[|\mathbb{P}(H=j)-\mathbb{P}(Z=j)|\leq\mathbb{P}(H=j)+\mathbb{P}(Z=j).\]
and by applying Hoeffding's inequality from [3], we obtain
\[T_{2}+T_{3}\leq\frac{(k+1)\exp\left(-\frac{1}{2}\delta^{2}\sigma^{2}\right)}{ \mathcal{N}\sigma\sqrt{2\pi}}+\mathbb{P}(H<L)+\mathbb{P}(H>R),\]
where it is clear that
\[\frac{(k+1)\exp\left(-\frac{1}{2}\delta^{2}\sigma^{2}\right)}{\mathcal{N} \sigma\sqrt{2\pi}}\to 0.\]
For the other two terms, recall that any hypergeometric distribution is the sum of \(k\) independent Bernoulli random variables and \(H\) has mean \(kp\). Therefore, again applying Hoeffding's inequality,
\[\mathbb{P}(H<L)+\mathbb{P}(H>R) =\mathbb{P}(H-kp<L-kp)+\mathbb{P}(H-kp>R-kp)\] \[\leq 2\exp\left(-2\frac{(L-kp)^{2}}{k}\right)+2\exp\left(-2\frac{ (R-kp)^{2}}{k}\right)\] \[\leq 2\exp\left(-2\frac{\delta\sigma^{2}}{k}\right)+2\exp\left(-2 \frac{\delta\sigma^{2}}{k}\right)\] \[\to 0\]
where the last step occurs since \(\sigma^{2}\sim\gamma(1-\gamma)n\eta(1-\eta)\) and \(kp\sim\gamma n\eta\). Finally, for \(T_{1}\), observe that
\[T_{1} \leq\sum_{j=L}^{R}\left|\frac{\phi(x_{j})}{\sigma}-\frac{\phi(x_{ j})}{\mathcal{N}\sigma}\right|+\sum_{j=L}^{R}\left|\mathbb{P}(H=j)-\frac{ \phi(x_{j})}{\sigma}\right|\] \[\leq\sum_{j=L}^{R}\frac{\phi(x_{j})}{\sigma}\left|1-\frac{1}{ \mathcal{N}}\right|+\sum_{j=L}^{R}\left|\mathbb{P}(H=j)-\frac{\phi(x_{j})}{ \sigma}\right|\] \[\leq|\mathcal{N}-1|+\sum_{j=L}^{R}\left|\mathbb{P}(H=j)-\frac{ \phi(x_{j})}{\sigma}\right|.\]
The left term is \(O\left(\frac{1}{\sqrt{n}}\right)\) by Lemma 2.7. For the right term, notice that
\[\sum_{j=L}^{R}\left|\mathbb{P}(H=j)-\frac{\phi(x_{j})}{\sigma}\right|\leq 2\sum_{j=L}^{\lfloor kp\rfloor}\left|\mathbb{P}(H=j)-\frac{\phi(x_{j})}{\sigma}\right|\]
From Theorem 1 of [5], or in particular as Lemma 5.3 in [1], we have that
\[2\sum_{j=L}^{\lfloor kp\rfloor}\left|\mathbb{P}(H=j)-\frac{\phi(x_{j})}{\sigma}\right|\leq\frac{D}{\sigma}=O\left(\frac{1}{\sqrt{n}}\right)\]
since \(0<\frac{k}{n}<1\) holds for all but finitely many \(n\). This proves the result.
To complete the proof of Proposition 3.5, we must show that
\[\|\mu_{Z^{x}_{2}+x-y}-\mu_{Z^{y}_{2}}\|_{TV}+\|\mu_{Z^{x}_{1}}-\mu_{Z^{y}_{1}}\|_ {TV}=O\left(\frac{1}{\kappa^{2}}\right).\]
We solve only \(\|\mu_{Z^{x}_{2}+x-y}-\mu_{Z^{y}_{2}}\|_{TV}\lesssim\frac{1}{\kappa^{2}}\) since the proof for \(\|\mu_{Z^{x}_{1}}-\mu_{Z^{y}_{1}}\|_{TV}\) follows the same method, but without a correction for \(x-y\). First, without loss of generality assume \(x\geq y\). Then, let \(\mathcal{Y}:=\mathcal{X}\cap\{\mathcal{X}+x-y\}\), \(\mathcal{N}^{x}:=\mathcal{N}_{kp^{x}_{2},\sigma^{x}}\), \(\mathcal{N}^{y}:=\mathcal{N}_{kp^{y}_{2},\sigma^{y}}\), \(\overline{y_{j}}:=\frac{j-kp^{y}_{2}}{\sigma^{y}}\) and \(\overline{x_{j}}:=\frac{j-x+y-kp^{x}_{2}}{\sigma^{x}}\). Define \(J_{n}:=[\frac{k}{2}-4\kappa\sqrt{n},\frac{k}{2}+4\kappa\sqrt{n}]\), then
\[2\|\mu_{Z^{x}_{2}+x-y}-\mu_{Z^{y}_{2}}\|_{TV}\leq\sum_{j\in \mathcal{Y}_{n}\cap J_{n}}\left|\frac{\phi(\overline{x_{j}})}{\mathcal{N}^{x} \sigma^{x}}-\frac{\phi(\overline{y_{j}})}{\mathcal{N}^{y}\sigma^{y}}\right|+ \sum_{j\in\mathcal{Y}_{n}\cap J^{c}_{n}}\frac{\phi(\overline{x_{j}})}{ \mathcal{N}^{x}\sigma^{x}}\\ +\sum_{j\in\mathcal{Y}_{n}\cap J^{c}_{n}}\frac{\phi(\overline{y_{ j}})}{\mathcal{N}^{y}\sigma^{y}}+\sum_{\mathcal{X}\setminus(\mathcal{X}+x-y)} \mathbb{P}(Z^{y}_{2}=j)+\sum_{(\mathcal{X}+x-y)\setminus\mathcal{X}}\mathbb{P }(Z^{x}_{2}=j-x+y)\\ =:\mathcal{T}_{1}+\mathcal{T}_{2}+\mathcal{T}_{3}+\mathcal{T}_{4}+ \mathcal{T}_{5}.\]
We now work backwards. \(\mathcal{T}_{4}\) and \(\mathcal{T}_{5}\) are bounded analogously, as are \(\mathcal{T}_{2}\) and \(\mathcal{T}_{3}\).
For \(\mathcal{T}_{4}\), observe that \(x\geq y\), and by the space we are summing over, \(j\leq x-y-1\). Further, \(\sigma^{y}\sim\sqrt{\gamma(1-\gamma)n\eta(1-\eta)}\). Using Lemma 2.7 and looking at large \(n\), we obtain:
\[\mathcal{T}_{4}\leq\sum_{j=0}^{x-y-1}\mathbb{P}(Z^{y}_{2}=j)\leq(x-y)\mathbb{P}(Z^{y}_{2}=x-y-1)=\frac{x-y}{\mathcal{N}^{y}\sigma^{y}}\phi\left(\frac{x-y-1-kp^{y}_{2}}{\sigma^{y}}\right)\leq C_{0}e^{-c_{1}n}\]
for constants \(C_{0},c_{1}>0\) independent of \(n\).
For \(\mathcal{T}_{2}\), let \(\mathcal{Z}\) be the standard normal distribution over \(\mathbb{R}\). Using the definition of \(J_{n}\) with \(\kappa\geq 0\), \(n\geq 0\), \(0<\gamma<\frac{1}{2}\), and \((x,y)\in F_{n}(\kappa)\), we have that asymptotically,
\[\frac{k}{2}+4\kappa\sqrt{n}\geq\kappa\sqrt{n}+x-y+kp^{x}_{2}+1\quad\text{ and }\quad\frac{k}{2}-4\kappa\sqrt{n}\leq-\kappa\sqrt{n}+x-y-1+kp^{x}_{2}.\]
Therefore, we can use integral comparison, monotonicity of the standard normal density away from the mean, and Lemma 2.7 to obtain
\[\mathcal{T}_{2}\leq\sum_{j\in J^{c}_{n}}\frac{\phi(\overline{x_{j}})}{\mathcal{N}^{x}\sigma^{x}}\leq\int_{J^{c}_{n}}\frac{\phi(\overline{x_{j}})}{\mathcal{N}^{x}\sigma^{x}}\,dx\leq\frac{2}{\mathcal{N}^{x}}\mathbb{P}\left(\mathcal{Z}\geq\frac{\kappa\sqrt{n}}{\sigma^{x}}\right)\leq\frac{2(\sigma^{x})^{2}\mathbb{E}[\mathcal{Z}^{2}]}{n\mathcal{N}^{x}}\frac{1}{\kappa^{2}}=O\left(\frac{1}{\kappa^{2}}\right),\]
where we use that \(\sigma^{x}\sim\sqrt{n\eta(1-\eta)\gamma(1-\gamma)}\).
Now, for \(\mathcal{T}_{1}\), we can use the result of \(\left|\frac{1}{\sigma^{x}}-\frac{1}{\sigma^{y}}\right|=O(\frac{1}{n})\) from [1]. Using Lemma 2.7, observe:
\[\mathcal{T}_{1} =\sum_{j\in\mathcal{Y}_{n}\cap J_{n}}\left|\frac{\phi(\overline{x_{ j}})}{\sigma^{x}}-\frac{\phi(\overline{y_{j}})}{\sigma^{y}}\right|+O\left(\frac{1}{ \sqrt{n}}\right)\] \[\leq\sum_{j\in\mathcal{Y}_{n}\cap J_{n}}\left|\frac{\phi(\overline {x_{j}})}{\sigma^{x}}-\frac{\phi(\overline{y_{j}})}{\sigma^{x}}\right|+\sum_{ j\in\mathcal{Y}_{n}\cap J_{n}}\phi(\overline{y_{j}})\left|\frac{1}{\sigma^{x}}- \frac{1}{\sigma^{y}}\right|+O\left(\frac{1}{\sqrt{n}}\right).\] \[=:\mathcal{T}_{1}+\mathcal{T}_{2}+O\left(\frac{1}{\sqrt{n}}\right).\]
It is easy to see that, since \(\phi\) is bounded above by 1,
\[\mathcal{T}_{2}\leq 8\kappa\sqrt{n}\cdot O\left(\frac{1}{n}\right),\]
which is also \(O(\frac{1}{\sqrt{n}})\). For \(\mathcal{T}_{1}\), first observe that \(\phi\) is Lipschitz, and let \(K\) be its Lipschitz constant. Then
\[\mathcal{T}_{1} \leq\frac{K}{\sigma^{x}}\sum_{j\in\mathcal{Y}_{n}\cap J_{n}}| \overline{y_{j}}-\overline{x_{j}}|\] \[=\frac{K}{\sigma^{x}}\bigg{(}\frac{1}{\sigma^{x}}\sum_{j\in \mathcal{Y}_{n}\cap J_{n}}|j-kp_{2}^{y}-(j-x+y-kp_{2}^{x})|+\left|\frac{1}{ \sigma^{x}}-\frac{1}{\sigma^{y}}\right|\sum_{j\in\mathcal{Y}_{n}\cap J_{n}}|j -kp_{2}^{y}|\bigg{)}.\]
It can be seen that \(|j-kp_{2}^{y}-(j-x+y-kp_{2}^{x})|\) simplifies to \(|x-y+kp_{2}^{x}-kp_{2}^{y}|\), and using the triangle inequality and the definitions of \(p_{2}^{b}\), we obtain \(|x-y|\big{(}1+\frac{k}{m}\big{)}\), which is less than or equal to \(\frac{\sqrt{n}}{\kappa^{3}}\left(1+\frac{k}{m}\right)\) by definition of \(F_{n}(\kappa)\). For the right term, observe that \(|j-kp_{2}^{y}|\leq 8\kappa\sqrt{n}\) by the definition of \(J_{n}\), thus we now have \(O\left(\frac{1}{n}\right)\cdot O(n)\), and now distributing \(\frac{K}{\sigma^{x}}\) and recognizing that \(\sigma^{x}\sim O(\sqrt{n})\) concludes that the right term is also \(O\left(\frac{1}{\sqrt{n}}\right)\). Thus we now have:
\[\mathcal{T}_{1} \leq\frac{K}{(\sigma^{x})^{2}}\sum_{j\in\mathcal{Y}_{n}\cap J_{n}} \frac{\sqrt{n}}{\kappa^{3}}\left(1+\frac{k}{m}\right)\] \[\leq\frac{Kn}{(\sigma^{x})^{2}}\left(1+\frac{k}{m}\right)\cdot \frac{1}{\kappa^{2}}\] \[=O\left(\frac{1}{\kappa^{2}}\right)\]
as desired.
#### 3.2.3 Proof of Upper Bound
We now prove the upper bound in Theorem 1.4 using the results derived in the previous two subsections.
**Proposition 3.7**.: _Let \(\{X_{t}^{(n)}\}_{n}\) be a sequence of generalized Bernoulli-Laplace chains satisfying Assumption 1.1. Then there exist constants \(N_{2}:=N_{2}(\varepsilon,\gamma,\eta,h)\) and \(C:=C(\varepsilon,\gamma,\eta,h)\) such that for all \(n\geq N_{2}\),_
\[t_{\rm mix}^{(n)}(\varepsilon)\leq t_{n}+C.\]
Proof.: Let \(x,y\in\mathcal{X}\) and \(A\subset\mathcal{X}\). Select \(\kappa\in\mathbb{N}\) satisfying (11). Define \(t:=t_{n}+\kappa+1\). Then by Proposition 3.4 and the strong Markov property,
\[\begin{split}&|\mathbb{P}_{x}(X_{t}\in A)-\mathbb{P}_{y}(X_{t} \in A)|\\ &\quad=|\mathbb{P}_{x}(Y_{t}\in A)-\mathbb{P}_{y}(Y_{t}\in A)|\\ &\leq 2\mathbb{P}(\tau_{x,y}(\kappa)>t-1)+|\mathbb{P}_{x}(Y_{t}\in A,\tau_{x,y}(\kappa)\leq t-1)-\mathbb{P}_{y}(Y_{t}\in A,\tau_{x,y}(\kappa)\leq t -1)|\\ &\leq O\left(\frac{1}{\kappa^{2}}\right)+\max_{\begin{subarray}{ c}z,w\in F_{n}(\kappa)\\ s\in\{1,\ldots,t\}\end{subarray}}\hskip-1.0pt|\mathbb{P}_{z}(Y_{s}\in A)- \mathbb{P}_{w}(Y_{s}\in A)|.\end{split}\]
Observe that for \(s\in\{1,\ldots,t\}\) and \(z,w\in F_{n}(\kappa)\),
\[|\mathbb{P}_{z}(Y_{s}\in A)-\mathbb{P}_{w}(Y_{s}\in A)|\leq\|\delta_{z}P- \delta_{w}P\|_{TV}\hskip-1.0pt=O\left(\frac{1}{\kappa^{2}}\right),\]
where we use Proposition 3.5. Thus,
\[\max_{x\in\mathcal{X}}\hskip-1.0pt\|\delta_{x}P^{t}-\pi\|_{TV}\hskip-1.0pt\leq 2 \max_{x,y\in\mathcal{X}}\hskip-1.0pt\|\delta_{x}P^{t}-\delta_{y}P^{t}\|_{TV} \hskip-1.0pt=2\max_{x,y\in\mathcal{X}}\hskip-1.0pt|\mathbb{P}_{x}(X_{t}\in A) -\mathbb{P}_{y}(X_{t}\in A)|\,=O\left(\frac{1}{\kappa^{2}}\right). \tag{12}\]
Now let \(C_{3}\) be the asymptotic bounding constant in (12). Then
\[\max_{x\in\mathcal{X}}\hskip-1.0pt\|\delta_{x}P^{t}-\pi\|_{TV}\hskip-1.0pt\leq \frac{C_{3}}{\kappa^{2}}\]
for \(n\) large enough. If it is not already true, increase \(\kappa\) so that \(\frac{C_{3}}{\kappa^{2}}\leq\varepsilon\) in addition to (11). Then \(C=\kappa+1\) works as the constant referenced in the theorem statement, and this completes the proof.
## 4 Bounds when \(\gamma=h(1-h)>0\)
We now investigate the critical case when \(\gamma=h(1-h)\). We will ultimately show that the asymptotic behavior of \(t_{\rm mix}^{(n)}(\varepsilon)\) differs.
**Assumption 4.1**.: _For each \(n\), assume without loss of generality that \(r,m\leq\frac{n}{2}\). Suppose \(\lim_{n\to\infty}\frac{k}{n}=\gamma\), \(\lim_{n\to\infty}\frac{m}{n}=h\) and \(\lim_{n\to\infty}\frac{r}{n}=\eta\), and that \(\gamma=h(1-h)\). Assume further that_
\[|\lambda_{1}|=\left|1-\frac{nk}{m(n-m)}\right|\lesssim\frac{1}{\sqrt{n}}. \tag{13}\]
**Theorem 4.2**.: _Let \(\{X_{t}^{(n)}\}_{n}\) be a sequence of Bernoulli-Laplace chains satisfying Assumption 4.1. Then there exist constants \(N:=N(\varepsilon,\gamma,\eta,h)\) and \(C:=C(\varepsilon,\gamma,\eta,h)\) such that for all \(n\geq N\),_
\[2\leq t_{\mathrm{mix}}^{(n)}(\varepsilon)\leq C. \tag{14}\]
**Remark 4.3**.: _Under Assumption 4.1, we no longer have cutoff at a multiple of \(\log n\)._
We split the proof of Theorem 4.2 across two subsections.
### The Lower Bound
Proof.: We bound the total variation at time \(t=1\):
\[||\delta_{0}P-\pi^{(n)}||_{TV} \geq|\delta_{0}P(\{k\})-\pi^{(n)}(\{k\})|\] \[=1-\pi^{(n)}(\{k\})\] \[=1-\frac{\binom{r}{k}\binom{n-r}{m-k}}{\binom{n}{m}}\] \[=1-\frac{r!\,m!\,(n-m)!\,(n-r)!}{k!\,n!\,(r-k)!\,(m-k)!\,(n-m-r+k)!}.\]
Since \(n,k,r,m\) are all of the same order, it is easy to see that the \(n!\) term in the denominator dominates the fraction and sends its limit to \(0\). Therefore, for any \(\varepsilon\in(0,1)\), we have for sufficiently large \(n\) that
\[||\delta_{0}P-\pi^{(n)}||_{TV}\geq\varepsilon.\]
This implies \(t_{\mathrm{mix}}^{(n)}(\varepsilon)\geq 2\).
### The Upper Bound
For this subsection, we introduce an alternative to \(t_{n}\). Notice that the definition given in (3) is invalid under Assumption 4.1.
**Definition 4.4**.: _When \(\lambda_{2}\neq 0\), let_
\[q_{n}=\frac{\log(n)}{|\log|\lambda_{2}||}=\frac{-\log(n)}{\log\left|1-\frac{2(n -1)}{m(n-m)}k+\frac{(n-1)(n-2)k(k-1)}{m(n-m)(m-1)(n-m-1)}\right|}.\]
_When \(\lambda_{2}=0\), let \(q_{n}=1\)._
We will now prove an analogue of Proposition 3.4 under Assumption 4.1.
**Proposition 4.5**.: _Suppose Assumption 4.1 is satisfied. Then for any \(\kappa\in\mathbb{N}\) such that_
\[\kappa^{4}\left(1-\frac{\gamma(1-2\gamma)}{h(1-h)}\right)^{\kappa}\leq\frac{1}{ \kappa^{2}}, \tag{15}\]
_we have_
\[\mathbb{P}(\tau_{x,y}(\kappa)>q_{n}+\kappa)=O\left(\frac{1}{\kappa^{2}}\right).\]
Proof.: The proof is identical to that of Proposition 3.4, except that we bound
\[\mathbb{P}_{z}(X_{q_{n}}\not\in\mathcal{I}_{n}(\kappa))\leq\frac{1}{\kappa^{2}}\frac{r^{4}}{n^{3}}\left(b_{0}+b_{1}\lambda_{1}^{q_{n}}s_{1}(z)+b_{2}\lambda_{2}^{q_{n}}s_{2}(z)\right).\]
We need that \(\lambda_{2}^{q_{n}}=O\left(\frac{1}{n}\right)\). If \(\lambda_{2}=0\), then any asymptotic bound is trivially satisfied. If \(\lambda_{2}\neq 0\), then
\[\lambda_{2}^{q_{n}} \leq|\lambda_{2}|^{q_{n}}\] \[=\exp\left(\log\lvert\lambda_{2}\rvert\frac{\log(n)}{\lvert\log \lvert\lambda_{2}\rvert\rvert}\right)\] \[=\frac{1}{n}.\]
This proves the proposition.
Following the argument of Section 3.2, we arrive at an analogue of Proposition 3.7.
**Proposition 4.6**.: _Let \(\{X_{t}^{(n)}\}_{n}\) be a sequence of generalized Bernoulli-Laplace chains satisfying Assumption 4.1. Then there exist constants \(N:=N(\varepsilon,\gamma,\eta,h)\) and \(C:=C(\varepsilon,\gamma,\eta,h)\) such that for all \(n\geq N\),_
\[t_{\mathrm{mix}}^{(n)}(\varepsilon)\leq q_{n}+C.\]
We now employ this proposition with the rate of convergence assumption (13) to prove the upper bound of Theorem 4.2.
Proof.: It suffices to show that \(q_{n}\) is bounded when Assumption 4.1 holds. This is trivially satisfied for all \(n\) such that \(\lambda_{2}=0\), so assume \(\lvert\lambda_{2}\rvert\neq 0\). Using Lemma 2.4, we see that
\[\lambda_{2} =\lambda_{1}^{2}+O\left(\frac{1}{n}\right)\] \[=O\left(\frac{1}{\sqrt{n}}\right)^{2}+O\left(\frac{1}{n}\right)\] \[=O\left(\frac{1}{n}\right).\]
That is to say, there exists a \(K\) such that \(|\lambda_{2}|\leq\frac{K}{n}\). Thus
\[\log|\lambda_{2}|\leq\log(K)-\log(n).\]
Consider only \(n\) large enough such that \(\log(K)-\log(n)<0.\) Then
\[|\text{log}|\lambda_{2}||\geq\log(n)-\log(K).\]
This implies that
\[q_{n}=\frac{\log(n)}{|\text{log}|\lambda_{2}||}\leq\frac{\log(n)}{\log(n)-\log (K)},\]
which is clearly bounded.
## 5 Conclusion
### Generalizations
Theorem 1.4 generalizes pre-existing work on the two-color, two-urn Bernoulli-Laplace model. In particular, we extend the work from [6] on the lower bound and [1] on the upper bound to a model with uneven distributions of colors and urn sizes. We also show that mixing time is bounded under certain conditions in Theorem 4.2, which we believe to be a new result.
Another possible generalization comes from letting there be more than 2 colors. The complete spectrum of the Markov transition in this case is still known (see [4]), but the more complicated state space is harder to work with, and many proofs (e.g. that of Proposition 3.5) break down.
Another case still unproven (though analogous to [2]) is when \(\gamma=0\). We put forth the following conjecture, which is the analogue of the result in [6] and notably lacks constant window.
**Conjecture 5.1**.: _Let \(\{X_{t}^{(n)}\}_{n}\) be a sequence of Bernoulli-Laplace chains. For each \(n\), assume without loss of generality that \(r,m\leq\frac{n}{2}\). Suppose \(\lim_{n\to\infty}\frac{k}{n}=0\), \(\lim_{n\to\infty}\frac{m}{n}=h\) and \(\lim_{n\to\infty}\frac{r}{n}=\eta\). Then there exist constants \(N,c,c^{\prime},\) and \(c^{\prime\prime}\), all depending on \(\varepsilon,\eta,\gamma,\) and \(h\), such that for all \(n\geq N\),_
\[\frac{n\log n}{c(\varepsilon,\eta,\gamma,h)k}-\frac{k}{n}c^{\prime}(\varepsilon,\eta,\gamma,h)\leq t_{mix}(\varepsilon)\leq t_{n}+\frac{c^{\prime\prime}( \varepsilon,\eta,\gamma,h)n}{k}\log\log n.\]
The last generalization we propose is to let \(k\) and \(r\) be random variables such that \(\frac{k}{n}\to\gamma\) and \(\frac{r}{n}\to\eta\) in distribution. We are not aware of any previous literature on such a model.
### Numerical Results
The asymptotic behavior proven in Theorems 1.4 and 4.2 is readily observed in numeric data, which throughout this section is exact. The mixing times for two sequences of Bernoulli-Laplace chains satisfying Assumption 1.1 are plotted in Figures 1 and 2. Notice that they resemble a constant multiple of \(\log n\).
Figure 3 displays the mixing times for a sequence of Bernoulli-Laplace chains satisfying Assumption 4.1. Notice that they appear constant.
Figure 2: \(t_{\rm mix}^{(n)}(0.01)\) when \(\frac{k}{n}=0.10,\frac{r}{n}=0.40,m=r\)
In Tables 1 and 2, we vary \(\frac{k}{n}\) and \(\frac{r}{n}\), respectively, while fixing \(m=r\). In Table 3, we vary \(\frac{m}{n}\).
Notice in Table 1 that the mixing times appear to reach a minimum at \(\frac{k}{n}=\frac{r}{n}(1-\frac{r}{n})\). This makes sense in light of Theorems 1.4 and 4.2. For very low values of \(\frac{k}{n}\) (closer to \(0\)) and very high values of \(\frac{k}{n}\) (closer to \(\frac{r}{n}\)), the mixing times quickly increase.
However, as shown in Tables 2 and 3, mixing times increase monotonically in \(\frac{m}{n}\). This makes sense given the behavior of \(t_{n}\) under the same conditions. Tables 2 and 3 are so similar because the corresponding rows in each table differ only by their values of \(\eta\), and as Theorem 1.4 proves, this allows for at most bounded difference in the mixing times. The data does consistently show, though, that mixing occurs faster for \(\eta=0.50\) than for \(\eta<0.50\).
Table 1: \(t_{\rm mix}^{(n)}(0.01)\) when \(\frac{k}{n}\in\{0.02,0.04,\ldots,0.48,0.50\}\), \(\frac{r}{n}=0.50\), \(m=r\), for \(n\in\{50,100,\ldots,1000\}\)

Table 2: \(t_{\rm mix}^{(n)}(0.01)\) when \(\frac{k}{n}=0.02\), \(\frac{r}{n}\in\{0.02,0.04,\ldots,0.48,0.50\}\), \(m=r\), for \(n\in\{50,100,\ldots,1000\}\)

Table 3: \(t_{\rm mix}^{(n)}(0.01)\) when \(\frac{m}{n}\) varies, for \(n\in\{50,100,\ldots,1000\}\)
## 6 Acknowledgements
All authors were supported by NSF grant 1950583 for work done during the 2023 Iowa State Mathematics REU. |
2310.12464 | Lidar Panoptic Segmentation and Tracking without Bells and Whistles | State-of-the-art lidar panoptic segmentation (LPS) methods follow bottom-up
segmentation-centric fashion wherein they build upon semantic segmentation
networks by utilizing clustering to obtain object instances. In this paper, we
re-think this approach and propose a surprisingly simple yet effective
detection-centric network for both LPS and tracking. Our network is modular by
design and optimized for all aspects of both the panoptic segmentation and
tracking task. One of the core components of our network is the object instance
detection branch, which we train using point-level (modal) annotations, as
available in segmentation-centric datasets. In the absence of amodal (cuboid)
annotations, we regress modal centroids and object extent using
trajectory-level supervision that provides information about object size, which
cannot be inferred from single scans due to occlusions and the sparse nature of
the lidar data. We obtain fine-grained instance segments by learning to
associate lidar points with detected centroids. We evaluate our method on
several 3D/4D LPS benchmarks and observe that our model establishes a new
state-of-the-art among open-sourced models, outperforming recent query-based
models. | Abhinav Agarwalla, Xuhua Huang, Jason Ziglar, Francesco Ferroni, Laura Leal-Taixé, James Hays, Aljoša Ošep, Deva Ramanan | 2023-10-19T04:44:43Z | http://arxiv.org/abs/2310.12464v1 | # Lidar Panoptic Segmentation and Tracking without Bells and Whistles
###### Abstract
State-of-the-art lidar panoptic segmentation (LPS) methods follow "bottom-up" segmentation-centric fashion wherein they build upon semantic segmentation networks by utilizing clustering to obtain object instances. In this paper, we re-think this approach and propose a surprisingly simple yet effective detection-centric network for both LPS and tracking. Our network is modular by design and optimized for all aspects of both the panoptic segmentation and tracking task. One of the core components of our network is the object instance detection branch, which we train using point-level (modal) annotations, as available in segmentation-centric datasets. In the absence of amodal (cuboid) annotations, we regress modal centroids and object extent using trajectory-level supervision that provides information about object size, which cannot be inferred from single scans due to occlusions and the sparse nature of the lidar data. We obtain fine-grained instance segments by learning to associate lidar points with detected centroids. We evaluate our method on several 3D/4D LPS benchmarks and observe that our model establishes a new state-of-the-art among open-sourced models, outperforming recent query-based models.
## I Introduction
Lidar panoptic segmentation (LPS) is the task of labeling all 3D points with distinct semantic classes and instance IDs. This is directly relevant to online streaming robot operation, as robots need to be aware of both scene semantics and surrounding dynamic objects in order to navigate safely.
While state-of-the-art 3D detection and tracking methods detect objects in top-down fashion (Fig. 1, _center_) and regress full object extent and orientation/velocity [1, 2, 3, 4], lidar instance and panoptic segmentation methods (Fig. 1, _left_) follow a "bottom-up" segmentation-centric philosophy [5, 6, 7, 8, 9], which does not require reasoning about the full extent of 3D bounding boxes. Instead, these _segmentation-centric_ methods first perform per-point semantic classification and then learn to group points corresponding to _thing_ classes into instances in a bottom-up fashion.
In this paper, we question the established narrative that bottom-up grouping is the only design pattern for LPS and propose _MOST_ (_MOdal Segmentation and Tracking_), a surprisingly simple yet effective _detection-centric_ approach for _lidar panoptic segmentation_[10] and tracking [5]. Concretely, we base our method on CenterPoint [1], designed for top-down lidar-based 3D object detection. As in CenterPoint, we first encode a point cloud sequence using a sparse 3D convolutional backbone [2, 11, 1, 12] and flatten the bottleneck layer into a bird's eye view (BEV) representation of the point cloud that we use to detect objects as points. To obtain dense, per-point semantic interpretation and instance interpretation of point clouds, we add a 3D decoder head to our network that un-projects this representation and up-samples it back to the original resolution to perform per-voxel semantic classification. While 3D bounding boxes (_c.f._, [1]) can directly be regressed from per-point bottleneck features, necessary fine-grained details needed for point-wise classification and instance segmentation are lost. Therefore, inspired by the instance segmentation branch of modern two-stage instance segmentation methods, we add a second-stage instance segmentation network that determines the _membership_ of points to their respective instance centers provided by the instance recognition branch. Finally, we obtain spatio-temporal instance labels by additionally learning to regress modal offset vectors used for scan-to-scan instance association [1].
The **motivation** for this design is two-fold. Firstly, LPS methods should maximize all aspects of the panoptic segmentation and tracking task, _i.e._, (i) object recognition, (ii) instance segmentation, (iii) per-point semantic classification, and (iv) tracking, all explicitly captured via different modules in our network, supervised with corresponding loss functions. Second, this network is modular by design - while object detection, point classification, instance segmentation, and velocity regression components all share a common feature extractor, different components are disentangled. This, in principle, allows us to investigate the performance of each module separately, which is important for model interpretability that is crucial in robotics applications. Importantly, such a model can be trained in the future on multiple datasets with different levels of supervision (as densely-labeled data is expensive to obtain), or replace different
Fig. 1: State-of-the-art LPS methods (_left_) learn to group points in a bottom-up fashion, while state-of-the-art 3D object detectors (_center_) detect objects as _amodal_ centers in the bird’s-eye representation of the scene, followed by _amodal_ 3D bounding box regression. In this paper, we re-purpose the _latter_ for the _former_. Our method in parallel classifies points (_semantic segmentation_), detect _modal_ instance centers (_modal instance recognition_) and their velocities (_modal instance tracking_).
modules with stronger counterparts to boost the performance further.
Importantly, 3D detectors, such as CenterPoint, used as a base for our model, rely on _amodal_ 3D bounding box supervision that enclose the full extent of the object, not only the visible portion. Such labels are not necessarily available in segmentation-centric semantic/panoptic segmentation datasets [13, 10].1 To remedy this, we show we can leverage track-level (temporal) information to reason about the full extent of objects during the network training, and, as our experiments confirm, we alleviate the need for _amodal_ labels. From this perspective, our method _MOST marries_ object instance recognition and semantic segmentation in a single modular network suitable for 3D and 4D lidar panoptic segmentation and can be trained solely from temporal point-level (modal) supervision. This makes our method versatile enough for a thorough evaluation on multiple benchmarks for 3D/4D LPS on Panoptic nuScenes [14] and SemanticKITTI [13, 10] datasets.
Footnote 1: With exception of nuScenes, which also includes _amodal_ 3D object detection labels.
In **summary**, (i) we propose a 3D/4D lidar segmentation network that unifies per-point semantic segmentation with modal object recognition and tracking in a single network. We (ii) detect instances via _modal_ point-based temporal supervision and segment them with our novel binary instance segmentation network that determines point-to-detection membership based on BEV and per-point semantic features. Finally, we (iii) show the effectiveness of our method on various benchmarks for 3D/4D LPS. This confirms that our top-down approach based on modal recognition is highly effective for both 3D and 4D lidar panoptic segmentation and may directly impact design patterns used in future developments in this field of research. Our code, along with experimental data, is available at [https://mostlps.github.io](https://mostlps.github.io).
## II Related Work
In this section, we summarize relevant work in 3D object detection, tracking, and semantic and panoptic segmentation for lidar point clouds.
**Semantic segmentation.** Advances in deep representation learning on unordered point sets [15] enable direct encoding of raw, unstructured point clouds to estimate fine-grained per-point semantic labels [15, 16, 17, 18]. Alternatively, several methods [19, 20, 21, 22, 23, 24] operate on a spherical projection of point cloud (_i.e._, range images), and provide an excellent trade-off between accuracy and efficiency, important in robotics scenarios. State-of-the-art methods rely on voxel grids in conjunction with sparse convolutions [25, 26]. To efficiently encode sparse lidar point clouds, Cylinder3D [12] performs a cylindrical partition and proposes asymmetrical 3D convolution networks, followed by point refinement. We similarly adopt a sparse voxel grid-based backbone for point-based classification, a sub-task of panoptic segmentation.
**Panoptic segmentation.** Seminal methods for lidar panoptic segmentation follow a top-down approach inspired by the early image-based baselines [27]. These approaches train separate networks for semantic segmentation and object detection, followed by heuristic result fusion [10]. However, recent trends show that in the lidar domain, bottom-up methods [28, 29, 30, 31, 5, 6, 7, 8, 32], including recent query-based networks [33], obtain state-of-the-art results. Are bottom-up methods based on point grouping and cross-attention de-facto go-to approaches for lidar panoptic segmentation? We suggest this is not necessarily the case.
**4D lidar panoptic segmentation.** Recently introduced 4D lidar panoptic segmentation [5, 14] extends lidar panoptic segmentation to the temporal domain, which requires sequence-level understanding. 4D-PLS [5] poses this task as bottom-up spatio-temporal point grouping, while MOPT [34] and CA-Net [35] segment instances in individual scans and associate them across time. Our proposed method is flexible and can generalize to utilize either single-scan or a multi-scan lidar sweep for both 3D and 4D lidar panoptic segmentation in one single unified network.
**(A)modal object localization.** Amodal bounding boxes (in 2D or 3D) encapsulate the full extent of the object, regardless of whether the full object is visible or not. This approach has origins in object detection [36] and requires the hallucination of bounding boxes during the annotation process. Recent works on 3D object detection ([1, 3, 11, 2, 37]) specifically utilize amodal bounding boxes. Boxes can be hallucinated by annotators [36], obtained via linear interpolation in sequences [38] or in 3D using SLAM/structure from motion [39]. Alternatively, the recognition task can be posed as localization of the visible portion of the object (modal recognition), common in segmentation-centric tasks [40, 41, 13]. Modal annotations do not require hallucination of unobserved regions and are thus less sensitive to localization errors and less expensive in terms of annotation costs. In this paper, we demonstrate that using modal annotations can achieve competitive performance compared to existing works built on amodal annotations.
## III MOST: Modal Segmentaton and Tracking
In this section, we present _MOST_ (_MOdal Segmentation and Tracking_) for lidar panoptic segmentation of point clouds and point cloud sequences. Lidar panoptic segmentation methods must predict semantic class and a unique instance identity label for each point in a point cloud (sequence). This task is especially challenging in the temporal domain because objects may become occluded or may exit or (re)-enter the sensing area. We first cover an overview of _MOST_, followed by a discussion of all key components.
**Overview.** A visual overview of _MOST_ is presented in Fig. 2. We base our network for lidar panoptic segmentation on an encoder-decoder-based U-net architecture [42]. In particular, we build on a sparse voxel grid-based backbone [11], which encodes points with a shared multi-layer perceptron (MLP), and accumulates encoded points in voxels. We then
apply several 3D sparse convolution layers to obtain a downsampled BEV representation of the scene (Fig. 2_a_), _i.e._, the bottleneck layer. Next, we detect object instances via the modal instance detection branch on top of the BEV representation. In parallel, our decoder upsamples the bottleneck layer back to the original voxel grid resolution via a series of 3D upsampling layers to obtain voxel-level semantic predictions (Fig. 2_b_). _Finally_, our instance segmentation network, PointSegMLP (Fig. 2_c_), determines which points belong to detected instances. To this end, PointSegMLP classifies points within instance-specific regions of interest (RoIs) centered around detected instances leading to panoptic segmentation predictions.
**Network architecture.** The input to our network is a point cloud \(\mathcal{I}^{t}=\{(x,y,z,intensity),\dots\}\in N\times\mathbb{R}^{4}\), where \(N\) denotes the number of points. We accumulate input point clouds over a time window \([t-\delta t,t]\) to obtain a _4D point cloud_ \(\mathcal{I}^{[t-\delta t,t]}\in N^{t}\times\mathbb{R}^{5}\), wherein the last dimension encodes the relative time \(\delta t\). We encode points using an MLP to obtain per-point features \(F^{point}\), which we accumulate in a regular 4D voxel grid \(\mathbb{R}^{C\times H\times W\times D}\), where \(H\), \(W\), \(D\) are the dimensions of the bounding volume and \(C\) is the channel dimension. This 4D multichannel feature grid is processed with 3D convolutional encoders and decoders, following the sparse convolutional backbone of VoxelNet [11], which has proven successful for 3D object detection [2, 11, 1] and semantic segmentation [12].
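To make the point-to-voxel encoding concrete, a minimal sketch is given below; the grid resolution, voxel size, feature width, and the choice of mean pooling are illustrative assumptions rather than the exact configuration used, and a dense grid is used here purely for brevity (the full model relies on sparse 3D convolutions).

```python
import torch
import torch.nn as nn

class PointVoxelEncoder(nn.Module):
    """Shared per-point MLP followed by accumulation of point features in a voxel grid.
    Grid size, voxel size, feature width, and mean pooling are illustrative choices."""
    def __init__(self, in_dim=5, feat_dim=16, grid=(128, 128, 16), voxel=(0.4, 0.4, 0.5)):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU(),
                                 nn.Linear(feat_dim, feat_dim), nn.ReLU())
        self.grid, self.voxel = grid, voxel

    def forward(self, pts):                        # pts: (N, 5) = (x, y, z, intensity, dt)
        f_point = self.mlp(pts)                    # (N, C) per-point features F^point
        idx = (pts[:, :3] / pts.new_tensor(self.voxel)).floor().long()
        for a, size in enumerate(self.grid):       # assumes points already shifted to the grid origin
            idx[:, a].clamp_(0, size - 1)
        flat = (idx[:, 0] * self.grid[1] + idx[:, 1]) * self.grid[2] + idx[:, 2]
        n_vox, c = self.grid[0] * self.grid[1] * self.grid[2], f_point.shape[1]
        vox_sum = f_point.new_zeros(n_vox, c)
        vox_cnt = f_point.new_zeros(n_vox, 1)
        vox_sum.index_add_(0, flat, f_point)       # accumulate encoded points per voxel
        vox_cnt.index_add_(0, flat, f_point.new_ones(len(pts), 1))
        vox_feat = (vox_sum / vox_cnt.clamp(min=1)).view(*self.grid, c)
        return f_point, vox_feat                   # per-point and voxel-level features
```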
For the _modal detection_ branch, we flatten voxel features along their height to obtain a BEV feature map \(F^{bev}\in\mathbb{R}^{C^{\prime}\times W^{\prime}\times D^{\prime}}\). We then apply 2D convolutional layers to reduce the channel dimension to \(K\) output classes, followed by ReLU activation function to obtain _modal_ center heatmaps \(O_{c}^{bev}\in\mathbb{R}^{K\times W^{\prime}\times D^{\prime}}\), followed by non-maxima suppression to obtain a set of detected instances.
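The detection branch can be sketched roughly as follows; channel widths, kernel sizes, and the simple max-pooling peak extraction are assumptions made for illustration, with the ReLU output following the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BEVCenterHead(nn.Module):
    """Collapse voxel features along the height axis and predict K class-wise center heatmaps."""
    def __init__(self, c_vox=16, height=16, k_classes=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(c_vox * height, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, k_classes, 3, padding=1), nn.ReLU())   # ReLU output as stated in the text

    def forward(self, vox):                  # vox: (B, C, H_height, W, D)
        b, c, h, w, d = vox.shape
        bev = vox.reshape(b, c * h, w, d)    # flatten height into channels -> (B, C*H, W', D')
        return self.net(bev)                 # (B, K, W', D') modal center heatmaps

def peak_nms(heatmap, kernel=3, top_k=50):
    """Keep local maxima of each class heatmap and return the top_k peak locations."""
    pad = kernel // 2
    hmax = F.max_pool2d(heatmap, kernel, stride=1, padding=pad)
    peaks = heatmap * (hmax == heatmap).float()
    scores, flat_idx = peaks.flatten(2).topk(top_k, dim=2)       # (B, K, top_k)
    ys, xs = flat_idx // heatmap.shape[3], flat_idx % heatmap.shape[3]
    return scores, ys, xs
```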
To obtain point-precise _semantic segmentation_, we upsample the BEV feature map \(F^{bev}\) via upsampling layers back to a voxel-grid representation to obtain voxel-level logits \(O^{vox}\). In a U-net fashion, we add skip connections from downsampling layers to capture fine-grained features. We then classify voxels via the softmax classifier.
Finally, to segment instances corresponding to predicted centers, we train an instance segmentation network (_PointSegMLP_) that predicts point-to-center _memberships_. Given a predicted center \(\hat{c}\in\mathbb{R}^{3}\), we compute a binary membership for all points \(p\in\mathit{RoI}(\hat{c})\). To this end, we concatenate per-point features \(F^{point}\) and predicted center BEV features \(F^{bev}\) for each point-center pair. Next, we use the concatenated features to determine instance memberships \(O^{mem}(p,\hat{c})\in[0,1]\) for all pairs. We detail all components of our network in the following paragraphs.
**Semantic segmentation.** For voxel-level supervision, we obtain the supervisory signal from the current sweep \(\mathcal{I}^{t}\) via majority voting to obtain \(Y^{vox}\in\mathcal{K}^{H\times W\times D}\), where \(\mathcal{K}\) denotes the set of all classes. Next, we apply a per-voxel cross-entropy (CE) loss on top of the voxel logits \(O^{vox}\):
\[L_{seg}=CE(Y^{vox},O^{vox}). \tag{1}\]
We note that even though we accumulate raw point clouds as \(\mathcal{I}^{[t-\delta t,t]}\) as input to the encoder, the loss is only applied to voxels corresponding to \(\mathcal{I}^{t}\). This is done by simply masking out the loss corresponding to voxels in \(\mathcal{I}^{[t-\delta t,t]}\) that do not belong to the current sweep \(\mathcal{I}^{t}\). For point-level supervision, we utilize a point-refinement network [12]. We obtain voxel features \(O^{vox}\) at the point level, point features \(F^{point}\), and BEV features \(O^{bev}\), and train a linear layer using cross-entropy loss. During inference, we assign point-level predictions \(O^{point}\) to all points within the voxel.
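A minimal sketch of the masked voxel loss, where only voxels containing points of the current sweep \(\mathcal{I}^{t}\) contribute to the cross-entropy; the flattened-tensor layout and the ignore-index mechanism are illustrative choices.

```python
import torch
import torch.nn.functional as F

def masked_voxel_ce(vox_logits, vox_labels, current_sweep_mask, ignore_index=255):
    """Cross-entropy over voxels, restricted to voxels of the current sweep I^t.
    vox_logits: (V, K) flattened voxel logits, vox_labels: (V,) long majority-vote labels,
    current_sweep_mask: (V,) bool, True where the voxel contains current-sweep points."""
    labels = vox_labels.clone()
    labels[~current_sweep_mask] = ignore_index        # masked voxels contribute no loss
    return F.cross_entropy(vox_logits, labels, ignore_index=ignore_index)
```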
**Modal object instance recognition.** Assuming access to only per-point semantic class and instance IDs for objects, we represent objects via statistics computed from observed points. More precisely, for a visible set of points \(\mathcal{P}\) representing an instance, we define _modal_ center \(c\in\mathbb{R}^{3}\) as
Fig. 2: _MOST_ overview. We accumulate a point cloud sequence and encode it using a voxel grid-based encoder-decoder backbone (_cf._ [11, 12]). After accumulating encoded points in a 3D voxel grid, (_a_) we down-sample the volume (via sparse 3D convolutions and pooling layers), and flatten the bottleneck layer along the height axis to obtain a BEV representation (followed by 2D convolutional layers, similar to [1]). We use this representation to detect objects as (modal) points and regress (modal) offsets for temporal association. Our decoder (_b_) consists of several up-sampling layers to obtain fine-grained, voxel-level semantic predictions. Our instance segmentation network, _PointSegMLP_ (_c_), performs binary classification within regions of interest (RoI) centered around predicted centers to obtain object instances. (_d_) _PointSegMLP_ utilizes point and center features as input to produce panoptic segmentation results.
the mean of \(\mathcal{P}\), and _modal_ extent \(r\in\mathbb{R}^{3}\) as the maximum distance of a point \(p\) from \(c\). Intuitively, the modal extent encodes the visible extent of an object. Following [1], we obtain the BEV supervisory signal by constructing \(K\) class-wise center heatmaps \(Y_{c}^{bev}\in\mathbb{R}^{W^{\prime}\times D^{\prime}}\). We project modal centers and modal extents from 3D into the 2D BEV plane. Then, we place a 2D Gaussian centered at the projected modal centers with the projected radius as the variance. Since the projection collapses the center height information, we set an additional regression target \(Y_{h}^{bev}\in\mathbb{R}^{W^{\prime}\times D^{\prime}}\) to localize the object in 3D. We then apply the focal loss [43] for _thing_ classes, as in CenterPoint [1], to the modal center heatmaps \(O_{c}^{bev}\). In addition, we apply an L1 regression loss on the height \(O_{h}^{bev}\):
\[L_{det}=FocalLoss(Y_{c}^{bev},O_{c}^{bev})+|Y_{h}^{bev}-O_{h}^{bev}|. \tag{2}\]
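The construction of the BEV targets and the loss of Eq. (2) can be sketched as follows. The Gaussian splatting follows the description above, while the penalty-reduced focal-loss form and the hyper-parameters \(\alpha,\beta\) are the usual CenterPoint-style choices and should be read as assumptions; the sketch also assumes centers snapped to pixel locations so the target peak equals 1.

```python
import numpy as np

def draw_modal_center(heatmap, cx, cy, radius_px):
    """Splat a 2D Gaussian at the projected modal center (cx = column, cy = row);
    the projected modal radius sets the spread, as described for Y_c^bev."""
    h, w = heatmap.shape
    sigma = max(radius_px, 1.0)
    ys, xs = np.ogrid[:h, :w]
    g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
    np.maximum(heatmap, g, out=heatmap)            # keep the max where objects overlap
    return heatmap

def center_losses(pred_heat, gt_heat, pred_h, gt_h, alpha=2.0, beta=4.0, eps=1e-6):
    """Penalty-reduced focal loss on the center heatmaps plus L1 regression on height."""
    pred = np.clip(pred_heat, eps, 1.0 - eps)
    pos = gt_heat >= 1.0 - 1e-4                    # center pixels (targets equal to 1)
    pos_loss = -np.log(pred[pos]) * (1.0 - pred[pos]) ** alpha
    neg_loss = -np.log(1.0 - pred[~pos]) * pred[~pos] ** alpha * (1.0 - gt_heat[~pos]) ** beta
    n_pos = max(pos.sum(), 1)
    focal = (pos_loss.sum() + neg_loss.sum()) / n_pos
    l1_height = np.abs(gt_h - pred_h)[pos].mean() if pos.any() else 0.0
    return focal + l1_height
```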
**Estimating modal extent \(r\).** We first compute the extent for instances at each time step via shrink-wrapping (SW). To this end, we estimate tight axis-aligned bounding boxes that enclose observed points. We compute SW axis-aligned box at time \(t\) as: \(r_{t}=\max\left\{|p-c|,p\in\mathcal{P}\right\}\), where \(\mathcal{P}\) represents the set of observed points for this instance.
Intuitively, humans reason about the object's extent by fusing information from multiple viewpoints. A sensor mounted on an autonomous vehicle similarly observes objects from different viewpoints over time. Therefore, we can derive more accurate object extent estimates \(r\) by reasoning about object size over time. We utilize unique instance IDs, available in 4D panoptic segmentation datasets [10, 14] to obtain refined object extent estimates through _temporal supervision_. To this end, we simply compute the maxima of all per-frame extent estimates for an object across time (MAX) to obtain a more precise estimate compared to naive SW: \(r=\max\left\{r_{t}\right\}_{t=0}^{T}\). These are then used as target extents during the modal instance recognition branch training. This approach is especially beneficial for instances that contain only a few points--for example, cases where only a vehicle's front or rear bumper is visible.
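A short sketch of the shrink-wrap (SW) statistics and the temporal MAX refinement; the per-axis form of the extent follows the definition above, and the helper names are illustrative.

```python
import numpy as np

def modal_center_extent(points):
    """Shrink-wrap statistics for one instance in one sweep: the modal center is the
    mean of the observed points, the modal extent the per-axis maximum deviation from it."""
    c = points.mean(axis=0)                # modal center, shape (3,)
    r = np.abs(points - c).max(axis=0)     # per-axis modal extent r_t, shape (3,)
    return c, r

def temporal_max_extent(extents_per_sweep):
    """MAX supervision: per-axis maximum of the per-sweep extents over the instance track,
    which tightens the size estimate for sparsely observed objects."""
    return np.max(np.stack(extents_per_sweep), axis=0)
```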
**PointSegMLP for instance segmentation.** The modal recognition branch provides object center predictions, while the semantic decoder independently provides point-wise semantic predictions. The next step is to obtain _instance-level_ segmentation, _i.e._, modal point-precise segmentation masks for detected instances. We tackle instance segmentation by training a _point membership_ function, _PointSegMLP_, that determines for each point in the scene _which_ points correspond to _which_ detected centroids. This is analogous to image-based two-stage instance segmentation networks [44], that segment instances via binary classification for each anchor box.
Consider a detected object instance \(D\) with center \(\hat{\mu}\) and class \(\hat{k}\). We take all points \(\mathcal{P}=\left\{p\in RoI(\hat{\mu},\hat{k})\right\}\) as _in-range_ points, wherein each \(RoI\) is constructed from the predicted modal extents of the detected object. Next, we obtain features for the detected center and for all in-range points \(p\) (see Fig. 2\(d\)). The center features are comprised of its 3D position \(\hat{\mu}\), semantic class \(\hat{k}\), and BEV features \(F_{\hat{\mu}}^{bev}\). The BEV features \(F_{\hat{\mu}}^{bev}\) are obtained by first projecting \(\hat{\mu}\) onto the BEV plane, followed by linear interpolation of the \(F^{bev}\) features at the projected point. Similarly, we compute point features using the 3D position \(p\in\mathcal{P}\), the predicted semantic label \(O_{p}^{point}\), and the BEV features \(F_{p}^{bev}\) for point \(p\). In addition, we append the \(F_{p}^{point}\) features obtained from the backbone. The obtained center features are concatenated with the point features to obtain a feature representation for a point-center pair. Our per-point _PointSegMLP_, shared across all points, utilizes the obtained point-center features to determine the per-point-to-center instance membership \(O^{mem}\in[0,1]\). The architecture is a lightweight MLP composed of fully connected layers with batch normalization and ReLU activation. For supervision, we construct a binary ground-truth membership \(Y^{mem}\): points belonging to an instance \(D\) are assigned 1, and all others 0. We train _PointSegMLP_ using the binary cross-entropy loss (BCE):
\[L_{mem}=BCE(Y^{mem},O^{mem}). \tag{3}\]
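A minimal PyTorch sketch of _PointSegMLP_; the layer count and widths are illustrative, and the exact input dimension depends on the backbone feature sizes.

```python
import torch
import torch.nn as nn

class PointSegMLP(nn.Module):
    """Binary point-to-center membership classifier operating on concatenated
    (point, center) features; widths and depth are illustrative assumptions."""
    def __init__(self, in_dim=64, hidden=64, depth=3):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.BatchNorm1d(hidden), nn.ReLU()]
            d = hidden
        layers += [nn.Linear(d, 1)]
        self.mlp = nn.Sequential(*layers)

    def forward(self, pair_feat):                 # (P, in_dim), one row per (point, center) pair
        return torch.sigmoid(self.mlp(pair_feat)).squeeze(-1)   # membership scores in [0, 1]

# Training against the binary membership targets of Eq. (3):
# loss = nn.functional.binary_cross_entropy(model(pair_feat), y_mem.float())
```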
**Modal panoptic tracking.** We follow CenterPoint [1] and concatenate point clouds before encoding them. Such spatio-temporal representations can be used to regress offset vectors \(v\in\mathbb{R}^{2}\) and to obtain sequence-level lidar panoptic segmentation through greedy association. The difference, however, is that we estimate offset vectors using modal labels only. We construct ground truth velocity offsets \(Y_{v}^{bev}\) for input point cloud \(\mathcal{I}^{t}\) using \(\mathcal{I}^{t-\delta t}\) and \(\mathcal{I}^{t+\delta t}\). For each object, ground truth velocity offsets are computed through a centered difference between modal centers _i.e._, \((\mu^{t+\delta t}-\mu^{t-\delta t})/(2\delta t)\). The velocity offset predictions \(O_{v}^{bev}\) when combined with per-point 3D panoptic segmentation leads to a unified, single-network top-down approach to 4D lidar panoptic segmentation. We train the velocity offset regression head using L1 loss:
\[L_{track}=|Y_{v}^{bev}-O_{v}^{bev}|. \tag{4}\]
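The greedy association used for tracking can be sketched as follows; the distance gate and its value are illustrative assumptions, and the routine simply warps current centers back by their predicted offsets before nearest-neighbor matching.

```python
import numpy as np

def greedy_associate(prev_centers, prev_ids, curr_centers, curr_vel, dt, max_dist=2.0):
    """Greedy tracklet association: warp current centers back by the predicted velocity
    offsets and match each to the closest unclaimed previous-sweep center within max_dist."""
    warped = curr_centers[:, :2] - curr_vel * dt           # predicted BEV position at t - dt
    new_ids, used = [], set()
    next_id = (max(prev_ids) + 1) if len(prev_ids) else 0
    for w in warped:
        if len(prev_centers):
            d = np.linalg.norm(prev_centers[:, :2] - w, axis=1)
            j = int(np.argmin(d))
            if d[j] < max_dist and j not in used:
                used.add(j)
                new_ids.append(prev_ids[j])                 # continue an existing tracklet
                continue
        new_ids.append(next_id)                             # unmatched detection starts a new tracklet
        next_id += 1
    return new_ids
```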
**Putting everything together.** We train our network by minimizing the overall training objective, that is composed of modal detection loss \(L_{det}\), semantic segmentation loss \(L_{seg}\), instance segmentation loss \(L_{mem}\), and optionally for sequences, modal velocity regression loss \(L_{track}\):
\[L_{total}=L_{det}+L_{seg}+L_{mem}+L_{track}. \tag{5}\]
**Inference.** During inference, we fuse segmentation branch predictions \(O^{vox}\), modal center heatmaps \(O_{c}^{bev}\), and point-center memberships \(O^{mem}\) to obtain 4D panoptic predictions. We utilize segmentation labels predicted by the segmentation branch, and instance labels predicted by the modal centroid membership branch. We provide pseudocode in the supplementary, but summarize it here: using the segmentation branch predictions \(O^{vox}\), we assign point-level predictions \(O^{point}\) to all points within the voxel. Similarly, we apply non-maximum suppression (NMS) over the predicted center heatmaps \(O_{c}^{bev}\) to generate predicted modal centers \(\hat{\mu}\). We then compute the membership of each point \(p\) within the RoI of each center using _PointSegMLP_, resolving overlapping
RoIs by the most confident center \(\hat{\mu}^{*}=\operatorname*{argmax}_{\hat{\mu}}(O^{mem}(p,\hat{\mu}))\). Next, we assign the predicted center label to all points that are its members, and assign a unique instance id. For all _stuff_ points, we directly utilize the predicted semantic label. To extend our method to panoptic tracking, we associate instances across sweeps by using predicted center velocities \(O_{v}^{bev}\), following the approach in [1]: we greedily form _tracklets_ by matching previous-sweep centers to current-sweep centers with subtracted velocity offsets. Finally, all the instances of an object belonging to a _tracklet_ are assigned a temporally consistent unique ID.
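The overlap-resolution and ID-assignment step of this fusion might look roughly like the sketch below; it is an illustrative simplification of the supplementary pseudocode, and it assumes membership scores are zero outside a center's RoI and uses an assumed threshold.

```python
import numpy as np

def fuse_panoptic(point_sem, center_cls, memberships, thr=0.5):
    """point_sem: (N,) per-point semantic predictions O^point,
    center_cls: (M,) classes of the detected modal centers,
    memberships: (N, M) PointSegMLP scores, assumed 0 outside a center's RoI.
    A point falling in several RoIs is resolved by its most confident center."""
    n_pts = point_sem.shape[0]
    sem = point_sem.copy()
    inst = np.zeros(n_pts, dtype=np.int64)             # 0 = no instance (stuff / unclaimed)
    if memberships.size:
        best = memberships.argmax(axis=1)               # most confident center per point
        score = memberships[np.arange(n_pts), best]
        claimed = score > thr
        inst[claimed] = best[claimed] + 1                # one unique id per detected center
        sem[claimed] = center_cls[best[claimed]]         # members inherit the center's class
    return sem, inst                                     # stuff points keep their semantic label
```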
**Implementation details.** We train our network in two stages. In the first stage, we optimize the modal detection and segmentation branch using \(L_{det}\), \(L_{seg}\), and \(L_{track}\). Next, we freeze the first-stage network and only train the second stage using the per-point \(L_{mem}\) loss. The first-stage network is trained with the Adam optimizer with a learning rate of \(1e^{-3}\) and a batch size of 8. For the second stage of training of _PointSegMLP_, we use the SGD optimizer with a learning rate of \(5e^{-4}\). The network is trained for a total of 20 epochs. Architecture: the per-point feature extraction layer and _PointSegMLP_ are simple 4-layer MLPs with BatchNorm and ReLU layers. The voxel grid encoder downsamples the input point cloud by a factor of 8, while the decoder upsamples the bottleneck layer back to the original resolution. The modal detection branch comprises two 3\(\times\)3 convolution layers with ReLU activation layers. We accumulate the previous 10 frames. We do not employ any test-time augmentation when reporting our results. Please refer to the supplementary for additional details.
## IV Experimental Evaluation
In this section, we first summarize our evaluation test-bed, including the datasets, benchmarks, and evaluation metrics used to conduct our experiments (Sec. IV-A). We next ablate various stages and design decisions of _MOST_'s network architecture for joint lidar panoptic segmentation and tracking (Sec. IV-B). Finally, we outline and discuss official benchmark results obtained on single-scan and multi-scan (4D) lidar panoptic segmentation (Sec. IV-C).
### _Evaluation Test-Bed_
**Datasets.** We evaluate our work using the SemanticKITTI [13, 10] and Panoptic nuScenes [14] datasets, which contain per-point, temporally consistent semantic and instance labels. SemanticKITTI [13, 10] contains \(1.5\,h\) of lidar scans, recorded using a 64-beam sensor, and labels for 28 semantic classes. Panoptic nuScenes [14] contains \(1,000\) short scenes recorded with a 32-beam sensor. It contains labels for 32 semantic classes, with a labeling frequency of 2 Hz. For both datasets, we follow the official splits for training/validation and evaluate our final models on the hidden test set.
**Tasks and evaluation metrics.** For _single-scan (3D) lidar panoptic segmentation_, we report well-established panoptic quality **PQ** metric [27], a soft version of the F1-score that treats _thing_ and _stuff_ classes in a unified manner. Following the official evaluation procedure, we set the minimum number of points on an instance to be 15 for nuScenes and 50 for SemanticKITTI. We additionally report mean intersection-over-union (**mIoU**) [36] that evaluates per-point semantic segmentation. For _multi-scan (4D) lidar panoptic segmentation_, we report (lidar) segmentation and tracking quality **LSTQ**[47, 5], evaluated as the geometric mean between mIoU and point association quality (**AQ**). AQ evaluates whether a point was associated with the correct instance in space and time. For Panoptic nuScenes, we additionally report the recently introduced panoptic tracking (**PAT**) metric, which combines PQ and LSTQ.
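For reference, a bare-bones version of the class-wise PQ computation is sketched below; it omits the per-dataset minimum-point rules and the void handling of the official evaluation scripts, and segments are represented simply as point-index sets.

```python
def panoptic_quality(pred_segments, gt_segments, iou_thr=0.5):
    """Minimal PQ for one class: pred_segments and gt_segments are lists of sets of point
    indices. A prediction matches a ground-truth segment when IoU > iou_thr; matches are
    unique by construction for iou_thr >= 0.5."""
    tp_iou, matched_gt, matched_pred = 0.0, set(), set()
    for i, p in enumerate(pred_segments):
        for j, g in enumerate(gt_segments):
            if j in matched_gt:
                continue
            iou = len(p & g) / len(p | g)
            if iou > iou_thr:
                tp_iou += iou
                matched_gt.add(j)
                matched_pred.add(i)
                break
    tp = len(matched_pred)
    fp = len(pred_segments) - tp
    fn = len(gt_segments) - tp
    denom = tp + 0.5 * fp + 0.5 * fn
    return tp_iou / denom if denom else 0.0
```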
### _Ablations_
**Modal recognition or center-offset regression?** We first study the impact of our top-down modal recognition-based approach to panoptic segmentation and compare it to a bottom-up center-offset regression approach by DS-Net [7], for which code is available. This method predicts _center
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline Method & PQ & mIoU & Membership Acc (\%) \\ \hline Nearest Neighbor (baseline) & 70.9 & 77.2 & 76.3 \\ Ours (semantic + geometric) & 72.9 & 77.5 & **95.4** \\ + BEV feat. & 72.6 & 77.4 & 95.0 \\ + Point feat. & **73.8** & **79.7** & **95.4** \\ \hline \hline \end{tabular}
\end{table} TABLE II: **Different membership functions:** We compare our _PointSegMLP_ with a simple yet surprisingly effective nearest neighbor heuristic (_NN-baseline_). As can be seen, our learning-based _PointSegMLP_ based on 3D positions and semantic predictions already significantly outperforms the heuristic. While instance bird’s-eye-view (BEV) features in isolation do not benefit our model, they further improve the performance when combined with per-point semantic features.
\begin{table}
\begin{tabular}{l|c c|c|c c c} \hline \hline Method & PQ & PQ\({}^{Th}\) & PQ\({}^{St}\) & mIoU & mIoU\({}^{Th}\) & mIoU\({}^{St}\) \\ \hline Offset reg. (DS-Net) & 68.2 & 66.1 & 71.6 & 75.5 & 71.8 & 81.7 \\ \hline Modal Det. (SW), w/o sharing & 72.2 & 71.7 & 72.9 & 77.2 & 74.5 & 81.7 \\ Modal Det. (SW), w/ sharing & 73.8 & 74.4 & 73.1 & 79.7 & 78.7 & 81.3 \\ \hline Modal Det. (DSB) & 70.1 & 68.4 & 72.9 & 77.6 & 75.2 & 81.3 \\ Modal Det. (CWM) & 74.2 & 74.6 & 73.5 & 79.7 & 78.7 & 81.3 \\ Modal Det. (MAX) & 77.1 & 79.3 & **73.6** & **80.3** & **79.4** & **81.7** \\ \hline \hline \end{tabular}
\end{table} TABLE I: **Different strategies for amodalization:** We compare different methods using the same segmentation backbone. We start with bottom-up centroid regression, as done in DS-Net. Naively creating a shrink-wrapped (SW) modal cuboid and training a top-down object-centric modal detector already improves performance, with or without sharing weights between the detection and segmentation branches. We also explore other ways of generating modal cuboids, including using a class-wise mean (CWM) cuboid dimension or taking the max dimension value across time (MAX). We see MAX performing the best when only modal annotations are available, while closing the gap with amodal annotations. The top results are **bolded**, while the second best are underlined.
_offsets_ followed by mean-shift clustering to obtain object instances. For comparison, we take the identical semantic segmentation network (Cylinder3D [12]), but instead of offset regression and mean-shift clustering, we train a _separate_ modal instance recognition network, followed by our proposed instance segmentation network. We refer to this variant as _Modal Det. (SW) w/o sharing_ in Tab. I. As the semantic segmentation networks are identical, this experiment highlights the effectiveness of our modal detection branch. In this setting, we regress tightly-fitting "shrink-wrap (SW)" bounding boxes derived from segmentation labels. With this simple approach, we improve by \(+4\) PQ points.
**Joint training.** Next, we train a single network for _joint_ semantic segmentation and modal instance recognition and segmentation (as explained in Sec. III), _i.e._, _Modal Det. (SW) w/ sharing_ in Tab. I. This yields a _PQ_ score of 73.8 (\(+0.6\)), confirming the benefits of joint training of the segmentation and modal recognition networks.
annotation cost. _This begs the question, can we close the gap using only segmentation-level labels?_
**Closing the gap.** While regressing the full extent of the object is beneficial, we do not have access to this information (_e.g._, in SemanticKITTI [13]). Can we do better than the naive _shrink-wrapping_ baseline (73.8 PQ)? First, we observe that this gap is due to the sensitivity of the modal recognition head to occlusions and decreasing sensor resolution, resulting in a large number of instances containing only a few points. An obvious remedy is to simply drop small boxes (DSB), _i.e._, to exclude small tightly-fitting bounding boxes from training. However, this approach yields 70.1 PQ, likely due to the exclusion of a large portion of the training data. A better strategy is to replace small boxes with _class-wise mean_ (CWM) box sizes, which yields 74.2 PQ. Finally, we utilize the sequential information and compute tightly-fitting boxes throughout the instance trajectory. Taking the max dimension over time for supervision (MAX) produces a PQ of 77.1, which is reasonably close to full _amodal_ supervision. This implies that, using sequential point-level labels, we can obtain better estimates of object extent, resulting in better performance.
**PointSegMLP.** Next, we justify design decisions behind our instance segmentation network, _PointSegMLP_, that predicts binary point-center memberships for each detected object instance. In addition to reporting standard _PQ_ and _mIoU_, we also evaluate _membership accuracy_ (_mem. acc._), which computes the percentage of points within a given RoI that have been assigned to the correct center.
We first evaluate the performance of a simple geometric baseline, denoted as _NN-baseline_ in Tab. II. This method assigns each point to its nearest _semantically-compatible_ detected center within the RoI of the detected center. We conclude that this simple baseline works surprisingly well, obtaining a _PQ_ of 70.9. However, clearly, there is space for improvement in terms of _mem. acc._ (76.3). While most points can be unambiguously assigned to the nearest instance, a certain percentage of _fuzzy_ points could be assigned to two or more instances. This motivates the usage of data-driven _PointSegMLP_ to perform segmentation.
Next, we compare this baseline to our _PointSegMLP_ that learns to segment points. In the first variant (denoted as _semantic + geometric_), we only utilize the 3D point coordinates of points and instance centers, along with their semantic predictions. We observe that just using geometrical features significantly outperforms the _NN-baseline_, with an improvement of \(+19.1\) in terms of _mem. acc._, which translates into a \(+2\) point improvement in _PQ_. Adding the bird's-eye-view (BEV) feature alone (_w.r.t._ the detected instance) does not improve the performance; however, adding both the instance _BEV_ feature and fine-grained _point_ features leads to a \(+0.9\) increase in _PQ_. We also visualize the results in Fig. 3.
### _Benchmark results_
This section compares our method to published state-of-the-art methods on standard benchmarks for 3D and 4D lidar panoptic segmentation datasets [13, 14].
**Lidar panoptic segmentation.** We report the results for panoptic segmentation in Tab. III (Panoptic nuScenes) and Tab. IV (SemanticKITTI). We utilize temporal supervision (MAX) for obtaining object targets for the benchmark submission. We focus this discussion on the test set results and show results on the validation set for completeness. On the nuScenes dataset, _MOST_ is the second-best method with 76.1 _PQ_. Note that we only utilize standard convolution layers, as opposed to the proprietary Transformer-based Panoptic-PHNet [9], so there is potential to replace our lightweight components with stronger Transformer-based counterparts to achieve better performance. The end-to-end latency of our system is 169.9 ms. Moreover, Panoptic-PHNet [9] cannot be easily extended to sequence-level scene understanding (_i.e._, tracking), while, as we will show next, _MOST_ achieves competitive performance on tracking by simply appending a greedy association module. _MOST_ outperforms other approaches by a large margin (\(+13.5\)_PQ_). On SemanticKITTI, _MOST_ is a close second, obtaining 61.0 _PQ_, with the state of the art at 61.5 _PQ_. This highlights that _MOST_ generalizes well across different datasets. _MOST_ also performs favorably against the recent query-based network [33], which extends the state-of-the-art image-based approach Mask2Former [51] to the lidar domain.
**Lidar panoptic tracking.** We report the results for 4D lidar panoptic tracking on the SemanticKITTI and nuScenes datasets in Tab. V. Being a top-down method, _MOST_ easily extends to 4D panoptic segmentation through greedy association of predicted velocity offsets. On nuScenes, _MOST_ obtains 73.2 LSTQ and 74.9 PAT on the test set, establishing a new state of the art on this benchmark. _MOST_ improves by \(+6.8\) LSTQ and \(+4.5\) PAT points over the second-best approach, _Efficient-LPT_ [45]. On SemanticKITTI, _MOST_ obtains competitive results (60.3 LSTQ) with a simple greedy approach. These results affirm that _MOST_ is a versatile approach that performs consistently across different benchmarks for 3D and 4D panoptic segmentation on multiple datasets. We refer the reader to the accompanying video for qualitative results.
## V Conclusions
This paper presents a _top-down_ approach to lidar panoptic segmentation and tracking using only _modal_ annotations.
\begin{table}
\begin{tabular}{l l|c c c c c c} \hline \hline
 & Method & _LSTQ_ & _PAT_ & _S\({}_{\text{assoc}}\)_ & _S\({}_{\text{cls}}\)_ & _PTQ_ & _PQ_ \\ \hline
\multirow{6}{*}{SemanticKITTI} & RangeNet++ [21] + PP + MOT & 35.5 & - & 24.1 & 52.4 & - & - \\
 & KPConv [17] + PP + SFP & 38.5 & - & 26.6 & 55.9 & - & - \\
 & 4D-PLS [5] & 56.9 & - & 56.4 & 57.4 & - & - \\
 & Contrastive Association [35] & 63.1 & - & 65.7 & 60.6 & - & - \\
 & 4D-SOP [50] & **63.9** & - & **69.5** & 58.8 & - & - \\
 & **Ours** & 60.3 & - & 57.8 & **62.8** & - & - \\ \hline \hline
\multirow{5}{*}{nuScenes} & TemplateTrackNet [34] & & & & & & \\
 & 4D-PLS [5] & 57.8 & & & & & \\
 & E-LPS [45] & & & & & & \\
 & E-LPT [45] & & & & & & \\
 & **Ours** & **73.2** & **74.9** & **66.6** & **80.4** & **72.0** & **76.0** \\ \hline \hline
\end{tabular}
\end{table} TABLE V: **4D Lidar Panoptic Segmentation Benchmarks**. Our method is \(1^{st}\) on nuScenes and \(3^{rd}\) on SemanticKITTI. The top results are **bolded**, while the second best are underlined. MOT: tracking-by-detection [48], SFP: scene flow based propagation [49], PP: PointPillars 3D detector [3].
Our unified network jointly detects objects as modal points and classifies voxels to obtain per-point panoptic segmentation predictions. Instances are associated across 4D spatio-temporal data using learned modal velocity offsets to obtain panoptic tracking predictions. Our method establishes a new state-of-the-art on Panoptic nuScenes 4D panoptic segmentation benchmark. We hope that this work will inspire future developments in recognition-centric methods for lidar panoptic segmentation and tracking.
###### Acknowledgements.
This project was funded, in part, by the Sofja Kovalevskaja Award of the Humboldt Foundation.
|
2301.04547 | A Scale-Dependent Distance Functional between Past Light Cones in
Cosmology | We discuss a rigorous procedure for quantifying the difference between our
past lightcone and the past lightcone of the fiducial
Friedmann-Lemaitre-Robertson-Walker spacetime modeling the large-scale
description of cosmological data in the standard $\Lambda\mathrm{CDM}$
scenario. This result is made possible by exploiting the scale-dependent
distance functional between past lightcones recently introduced by us. We
express this harmonic map type functional in terms of the physical quantities
that characterize the actual measurements along our past lightcone, namely the
area distance and the lensing distortion, also addressing the very delicate
problem of the presence of lightcone caustics. This analysis works beautifully
and seems to remove several of the difficulties encountered in comparing the
actual geometry of our past lightcone with the geometry of the fiducial FLRW
lightcone of choice. We also discuss how, from the point of view of the FLRW
geometry, this distance functional may be interpreted as a scale-dependent
effective field, the pre-homogeneity field, that may be of relevance in
selecting the FLRW model that best fits the observational data. | Mauro Carfora, Francesca Familiari | 2023-01-11T16:17:59Z | http://arxiv.org/abs/2301.04547v1 | # A scale-dependent distance functional between past-lightcones in cosmology
###### Abstract.
We discuss a rigorous procedure for quantifying the difference between our past lightcone and the past lightcone of the fiducial Friedmann-Lemaitre-Robertson-Walker spacetime modeling the large scale description of cosmological data in the standard \(\Lambda\)CDM scenario. This result is made possible by exploiting the scale-dependent distance functional between past lightcones recently introduced by us in [12]. We express this harmonic map type functional in terms of the physical quantities that characterize the actual measurements along our past lightcone, namely the area distance and the lensing distortion, also addressing the very delicate problem of the presence of lightcone caustics. This analysis works beautifully and seems to remove several of the difficulties encountered in comparing the actual geometry of our past lightcone with the geometry of the fiducial FLRW lightcone of choice. We also discuss how, from the point of view of the FLRW geometry, this distance functional may be interpreted as a scale-dependent effective field, the pre-homogeneity field, that may be of relevance in selecting the FLRW model that best fits the observational data.
Footnote 1: Characterized by the actual temperature of the cosmic microwave background \(T_{CMB}\,=\,2.725\,K\) as measured in the frame centered on us but stationary with respect to the CMB.
Footnote 2: The actual averaging scale marking the statistical onset of isotropy and homogeneity is still much debated. For the sake of the argument presented in this paper, we adopt the rather conservative estimate of the scales over which an average isotropic expansion is seen to emerge, namely \(70-120\,h^{-1}Mpc\), and ideally extending to a few times this scale [54].
Footnote 3: At the Hubble scale, the problem of _cosmic variance_ may alter the statistical significance of the data samples we gather.
## 1. Introduction
_It is a pleasure to dedicate this paper to Maurizio Gasperini who has always liked it best on the past light cone even if the routes are tough, but in such a rugged landscape that is to be expected_
The \(\Lambda\)CDM model and the Friedmann-Lemaitre-Robertson-Walker (FLRW) spacetimes provide a rather accurate physical and geometrical representation of the universe in the present era\({}^{1}\) and over spatial scales ranging from\({}^{2}\)\(\approx\,100\,h^{-1}\) Mpc to the visual horizon of our past light cone [27], [34], [50], where \(h\) is the dimensionless parameter describing the relative uncertainty of the true value of the present-epoch Hubble-Lemaitre constant. Within such observational range, and on scales significantly smaller than the Hubble scale\({}^{4}\), we have a testable ground for statistical isotropy in the distribution of the dark and visible matter components on our past light cone. Homogeneity of this distribution is difficult to test directly via astronomical surveys, but a number of observational results [41] and in particular the kinematic Sunyaev-Zeldovich effect [55], [18] imply that fluctuations around spatial homogeneity cannot be too large. Thus, without resorting to an axiomatic use of the Copernican principle, we have an observational ground for assuming that spatial homogeneity holds, in a statistically averaged sense, over large scales. It must be stressed that it is in a statistical sense and only over large scales that this weak form of the cosmological principle provides observational support for best fitting the description of spacetime geometry in terms of a member of the FLRW family of solutions of the Einstein equations. In particular, to whatever degree one accepts this FLRW scenario, one has to address the fact that the role of FLRW spacetime geometry becomes delicate to interpret when past light cone data are gathered in our
R. Maartens, W. Stoeger, and A. Whitman [20] (see also [21]) by characterizing the set of cosmological observables on the past lightcone which, together with the Einstein field equations, allows to reconstruct the spacetime geometry in a way adapted to the process of observation [20], [15], [16].
In this paper we address an important step in this cosmographical framework. In particular we discuss a rigorous procedure for quantifying the difference between our past lightcone and the reference past lightcone that, for consistency, we associate with the fiducial large-scale FLRW spacetime. This result is made possible by exploiting the scale-dependent (harmonic map type) distance functional between past lightcones recently introduced by us in [12], which extended the light-cone theorem [14]. We express this functional in terms of the physical quantities that characterize measurements along our past lightcone, namely the area distance and the lensing distortion, also briefly addressing the very delicate problem of the presence of lightcone caustics. This analysis works beautifully and seems to remove several of the difficulties encountered in comparing the actual geometry of our past lightcone with the geometry of a fiducial FLRW lightcone of choice. We also discuss how, from the point of view of the FLRW geometry, this distance functional may be interpreted as a scale-dependent effective field that may be of relevance in selecting the FLRW model that best fits the observational data. In this connection, and in line with the introductory remarks above, it is worthwhile to stress that our choice of a reference FLRW spacetime is strictly related to the prevalence of this family of metrics in discussing the \(\Lambda\)CDM model. The results presented here can be easily extended to more general reference metrics. It is also important to make clear that in this paper we are not addressing the extremely delicate averaging problem on the past lightcone, a problem to which Maurizio Gasperini has significantly contributed with the seminal paper [24], and that has seen important recent progress in [7]... _but the past lightcone routes are still tough and the landscape rugged..._.
## 2. The past light cone and the celestial sphere
Throughout this paper \((M,g)\) denotes a cosmological spacetime where \(g\) is a Lorentzian metric, and where \(M\) is a smooth 4-dimensional manifold which for our purposes we can assume diffeomorphic to \(\mathbb{R}^{4}\) (or to \(V^{3}\times\mathbb{R}\), for some smooth compact or complete 3-manifold \(V^{3}\)). In local coordinates \(\{x^{i}\}_{i=1}^{4}\), we write \(g=g_{ik}dx^{i}\otimes dx^{k}\), where the metric components \(g_{ik}\,:=\,g(\partial_{i},\partial_{k})\) in the coordinate basis \(\{\partial_{i}:=\partial/\partial x^{i}\}_{i=1}^{4}\) have the Lorentzian signature \((+,+,+,-)\), and the Einstein summation convention is in effect\({}^{5}\). We assume that \((M,g)\) is associated with the evolution of a universe which is (statistically) isotropic and homogeneous on sufficiently large scales \(L>L_{0}\) where, according to the introductory remarks, we indicatively assume \(L_{0}\,\cong\,100h^{-1}\) Mpc, and let local inhomogeneities dominate for \(L\,<\,L_{0}\). The matter content in \((M,g)\) is phenomenologically described by a (multi-component) energy-momentum tensor \(T\,=\,T_{ik}\,dx^{i}\otimes dx^{k}\), (typically in the form of a perfect fluid, dust, and radiation). If not otherwise stated, the explicit expression of \(T\) is not needed for our analysis. We assume that in \((M,g)\) the motion of the matter components characterizes a _phenomenological Hubble flow_ that generates a family of preferred world-lines parametrized by proper time \(\tau\)
Footnote 5: If not otherwise stated we adopt geometrical units, \(c\,=\,1\,=\,G\).
\[\gamma_{s}\,:\,\mathbb{R}_{>0}\,\longrightarrow\,(M,g)\,,\qquad\tau\,\longmapsto\,\gamma_{s}(\tau)\;, \tag{1}\]
and labeled by suitable comoving (Lagrangian) coordinates \(s\) adapted to the flow. We denote by \(\dot{\gamma}_{s}\,:=\,\frac{d\gamma_{s}(\tau)}{d\tau}\), \(g(\dot{\gamma}_{s},\dot{\gamma}_{s})\,=\,-1\), the corresponding 4-velocity field. For simplicity, we assume that at the present era these worldlines are geodesics, _i.e._\(\nabla_{\dot{\gamma}_{s}}\,\dot{\gamma}_{s}\,=\,0\). This phenomenological Hubble flow is strongly affected by the peculiar motion of the astrophysical sources and by the complex spacetime geometry that dominates on the pre-homogeneity scales. In particular, it exhibits a
complex pattern of fluctuations with respect to the linear _FLRW Hubble flow_ that sets in, relatively to the standard of rest provided by the cosmic microwave background (CMB), when we probe the homogeneity scales, \(L\,\gtrsim\,100h^{-1}\) Mpc. Again, we stress that the transitional region between the phenomenological Hubble flow and the statistical onset of the large-scale FLRW linear Hubble flow is quite uncertain and still actively debated [54]. If we adopt the weak form of the cosmological principle described in the introduction, \((M,g,\gamma_{s})\) can be identified with the phenomenological background spacetime or _Phenomenological Background Solution (PBS)_[39] associated with the actual cosmological data gathered from our past lightcone observations. In the same vein, we define _Phenomenological Observers_ the collection of observers \(\{\gamma_{s}\}\) comoving with the phenomenological Hubble flow (1). Since in our analysis we fix our attention on a given observer, we drop the subscript \(s\) in (1), and describe a finite portion of the observer's world-line with the timelike geodesic segment \(\tau\longmapsto\,\gamma(\tau),-\delta<\tau<\delta,\,\,\) for some \(\delta>0,\,\,\) where \(p\,:=\,\gamma(\tau=0)\) is the selected event corresponding to which the cosmological data are gathered. To organize and describe these data in the local rest frame of the observer \(p\,:=\,\gamma(\tau=0)\), let \(\big{(}T_{p}M,\,g_{p},\,\{E_{(i)}\}\big{)}\) be the tangent space to \(M\) at \(p\) endowed with a \(g\)-orthonormal frame \(\{E_{(i)}\}_{i=1,\ldots,4}\), \(g_{p}\,\big{(}E_{(i)},E_{(k)}\big{)}=\eta_{ik}\), where \(\eta_{ik}\) is the Minkowski metric, and where we identify \(E_{(4)}\) with the observer \(4\)-velocity \(\dot{\gamma}(\tau)|_{\tau=0}\), _i.e._\(E_{(4)}\,:=\,\dot{\gamma}(\tau)|_{\tau=0}\). Thus, if we denote by \(\{\breve{E}^{\,(i)}\}_{i=1,\ldots,4}\), the \(1\)-forms basis dual to \(\{E_{(i)}\}_{i=1,\ldots,4}\), we write
\[g_{p}\,=\,\eta_{ik}\,\breve{E}^{\,(i)}\,\otimes\,\breve{E}^{\,(k)}\,. \tag{2}\]
Since we have the distinguished choice \(E_{(4)}\,:=\,\dot{\gamma}(\tau)|_{\tau=0}\) for the timelike basis vector \(E_{(4)}\), we can also introduce in \(\big{(}T_{p}M,\,\{E_{(i)}\}\big{)}\) a reference positive definite metric \(g_{p}^{(\delta)}\) associated with the frame \(\{E_{(i)}\}_{i=1,\ldots,4}\) by setting
\[g_{p}^{(\delta)}\,:=\,\delta_{ik}\,\breve{E}^{\,(i)}\,\otimes\,\breve{E}^{\, (k)}\,, \tag{3}\]
where \(\delta_{ik}\) denote the components of the standard Euclidean metric. As discussed in detail by Chen and LeFloch [13], this reference metric comes in handy in the characterization of the functional Lipschitz and Banach space norms of tensor fields defined on the past lightcone\({}^{6}\).
Footnote 6: The indefinite character of a Lorentzian metric makes it unsuitable for defining integral norms of tensor fields, and for such a purpose one is forced to introduce a reference positive definite metric. In particular, by exploiting the Nash embedding theorem, one typically uses the Euclidean metric and the associated definitions of the functional space of choice, say a Sobolev space of tensor fields. Different choices of reference metrics, as long as they are of controlled geometry, induce equivalent Banach space norms. In our case, we can exploit the natural choice provided by (3) by using normal coordinates and identifying \((T_{p}M,\,\{E_{(i)}\},\,g_{p}^{\delta})\) with the Euclidean space \((\mathbb{R}^{4},\,g_{p}^{\delta})\).
### The celestial sphere
Let
\[C^{-}\,\big{(}T_{p}M,\,\{E_{(i)}\}\big{)}\,:=\,\big{\{}X\,=\,\mathbb{X}^{i}E_ {(i)}\,\neq\,0\,\in\,T_{p}M\mid g_{p}(X,X)\,=\,0,\,\mathbb{X}^{4}+r=0\big{\}} \,\,, \tag{4}\]
\[\overline{C^{-}\,\big{(}T_{p}M,\,\{E_{(i)}\}\big{)}}\,:=\,\big{\{}X\,=\, \mathbb{X}^{i}E_{(i)}\,\neq\,0\,\in\,T_{p}M\mid g_{p}(X,X)\,\leq\,0,\,\mathbb{ X}^{4}+r\,\leq\,0\big{\}}\,\,, \tag{5}\]
respectively denote the set of past-directed null vectors and the set of past-directed causal vectors in \((T_{p}M,\,\{E_{(i)}\})\), where
\[r:=(\sum_{a=1}^{3}(\mathbb{X}^{a})^{2})^{1/2}\,, \tag{6}\]
is the radial coordinate in the hyperplane \(\mathbb{X}^{4}\,=\,0\,\subset\,T_{p}M\) parametrizing the one-parameter family of \(2\)-spheres
\[\mathbb{S}_{r}^{2}(T_{p}M)\,:=\,\{X\in C^{-}\,\big{(}T_{p}M,\{E_{(i)}\}\big{)} \mid\mathbb{X}^{4}\,=\,-\,r,\,\,\sum_{a=1}^{3}(\mathbb{X}^{a})^{2}=r^{2},\,\,r \in\,\mathbb{R}_{>0}\}\,, \tag{7}\]
that foliates \(C^{-}\left(T_{p}M,\{E_{(i)}\}\right)/\{p\}\). The sphere \(\mathbb{S}^{2}_{r}(T_{p}M)\) can be thought of as providing a representation of the sky directions, at a given value of \(r\), in the rest space \(\left(T_{p}M,\{E_{(i)}\}\right)\) of the (instantaneous) observer \((p,\dot{\gamma}(0))\). In particular, the 2-sphere \(\left.\mathbb{S}^{2}_{r}(T_{p}M)\right|_{r=1}\) or, equivalently, its projection on the hyperplane \(\mathbb{X}^{4}\,=\,0\) in \(T_{p}M\),
\[\mathbb{S}^{2}\left(T_{p}M\right)\,:=\,\left\{X\,=\,\mathbb{X}^{i}E_{(i)}\, \neq\,0\,\in\,T_{p}M\,\mid\,\mathbb{X}^{4}=0,\,\,\sum_{a=1}^{3}(\mathbb{X}^{a} )^{2}=1\right\}\,\,, \tag{8}\]
can be used to parametrize the (spatial) past directions of sight constituting the field of vision of the observer \((p,\,\dot{\gamma}(0))\). In the sense described by R. Penrose [46], this is a representation of the abstract sphere \(\mathcal{S}^{-}(p)\) of past null directions parameterizing the past-directed null geodesics through \(p\). Explicitly, let
\[n(\theta,\phi)\,:=\,\sum_{a=1}^{3}\,n^{a}(\theta,\phi)\,E_{(a)}\,=\,\cos\phi\sin\theta\,E_{(1)}\,+\,\sin\phi\sin\theta\,E_{(2)}\,+\,\cos\theta\,E_{(3)}\,,\qquad 0\leq\theta\leq\pi,\ \ 0\leq\phi<2\pi\,, \tag{9}\]
denote the spatial direction in \(T_{p}M\) associated with the point \((\theta,\phi)\,\in\,\mathbb{S}^{2}\left(T_{p}M\right)\), (by abusing notation, we often write \(n(\theta,\phi)\,\in\,\mathbb{S}^{2}\left(T_{p}M\right)\)). Any such spatial direction characterizes a corresponding past-directed null vector \(\ell(\theta,\phi)\,\in\,\left(T_{p}M,\{E_{(i)}\}\right)\),
\[\ell(\theta,\phi)\,=\,(n(\theta,\,\phi),\,-\,\dot{\gamma}(\tau)|_{\tau=0})\,= \,\sum_{a=1}^{3}\,n^{a}(\theta,\phi)E_{(a)}\,-\,E_{(4)}\,, \tag{10}\]
normalized according to
\[g_{p}\left(\ell(\theta,\phi),\dot{\gamma}(\tau)|_{\tau=0}\right)\,=\,g_{p} \left(\ell(\theta,\phi),E_{(4)}\right)\,=\,1\,. \tag{11}\]
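As a simple numerical illustration of Eqs. (10)-(11): in the orthonormal frame, with Minkowski components \(\eta_{ik}=\mathrm{diag}(1,1,1,-1)\), the vector \(\ell(\theta,\phi)=n(\theta,\phi)-E_{(4)}\) is null and satisfies \(g_{p}(\ell,E_{(4)})=1\). The short sketch below (illustrative, not part of the original paper) checks these two relations numerically.

```python
import numpy as np

eta = np.diag([1.0, 1.0, 1.0, -1.0])          # Minkowski components in the frame {E_(i)}

def null_direction(theta, phi):
    n = np.array([np.cos(phi) * np.sin(theta),
                  np.sin(phi) * np.sin(theta),
                  np.cos(theta),
                  0.0])                         # spatial direction of sight
    e4 = np.array([0.0, 0.0, 0.0, 1.0])         # observer 4-velocity E_(4)
    ell = n - e4                                # past-directed null vector, Eq. (10)
    return n, e4, ell

n, e4, ell = null_direction(0.3, 1.2)
assert abs(ell @ eta @ ell) < 1e-12             # g_p(ell, ell) = 0 (null vector)
assert abs(ell @ eta @ e4 - 1.0) < 1e-12        # g_p(ell, E_(4)) = 1, Eq. (11)
```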
The corresponding past-directed null rays
\[\mathbb{R}_{\geq 0}\,\ni\,r\,\longmapsto\,r\,\ell(n(\theta,\phi))\,,\,\,\, \,\,\,\,\,(\theta,\phi)\,\in\,\mathbb{S}^{2}\left(T_{p}M\right)\,, \tag{12}\]
generate \(C^{-}\left(T_{p}M,\{E_{(i)}\}\right)\). Note that in such a kinematical setup for the instantaneous rest space \(\left(T_{p}M,\,\{E_{(a)}\}\right)\) of the observer \((p,\,\dot{\gamma}(0))\), a photon reaching \(p\) from the past-directed null direction \(\ell(\theta,\phi)\), is characterized by the (future-pointing) wave vector
\[k(\theta,\phi)\,:=\,-\,\nu\,\ell(\theta,\phi)\,\in\,T_{p}M\,, \tag{13}\]
where \(\nu\,=\,-\,g_{p}\left(k,\,E_{(4)}\right)\) is the photon frequency as measured by the instantaneous observer \(\gamma(\tau)|_{\tau=0}\). The spherical surface \(\mathbb{S}^{2}\left(T_{p}M\right)\) endowed with the standard round metric
\[\widetilde{h}(\mathbb{S}^{2})\,=\,d\theta^{2}\,+\,\sin^{2}\theta\,d\phi^{2}\,, \tag{14}\]
and the associated area form \(d\mu_{\mathbb{S}^{2}}\,=\,\sqrt{\det(\widetilde{h}(\mathbb{S}^{2}))}\,d \theta d\phi\,=\,\sin\theta\,d\theta d\phi\), defines [46] the _celestial sphere_
\[\mathbb{C}\,\mathbb{S}(p)\,:=\,\left(\mathbb{S}^{2}\left(T_{p}M\right),\, \widetilde{h}(\mathbb{S}^{2})\right) \tag{15}\]
providing, in the instantaneous rest space \(\left(T_{p}M,\{E_{(i)}\}\right)\), the geometrical representation of the set of all directions towards which the observer can look at astrophysical sources from her instantaneous location in \((M,g)\). In this connection, \(d\mu_{\mathbb{S}^{2}}\) can be interpreted as the element of solid angle subtended on the celestial sphere \(\mathbb{C}\,\mathbb{S}(p)\) by the observed astrophysical sources. It is also useful to keep track of the radial coordinate\({}^{7}\)\(r\) as a possible parametrization of the past-directed null
geodesics, and introduce a celestial sphere that provides also this information according to
\[\mathbb{C}\,\mathbb{S}_{r}(p)\,:=\,\left(\mathbb{S}_{r}^{2}\left(T_{p}M\right),\, r^{2}\widetilde{h}(\mathbb{S}^{2})\right)\,. \tag{16}\]
Lacking a better name, we shall refer to \(\mathbb{C}\,\mathbb{S}_{r}(p)\) as the _celestial sphere at radius \(r\)_ in \(\left(T_{p}M,\{E_{(i)}\}\right)\). The celestial sphere \(\mathbb{C}\,\mathbb{S}(p)\) plays a basic role in what follows since it provides the logbook where astrophysical data are recorded.
Let \(m_{(\alpha)}(\theta,\phi)\in\,T_{p}M\), with \(\alpha\,=\,2,3\), denote two spatial \(g_{p}\)-orthonormal vectors spanning the tangent space \(T_{(\theta,\phi)}\mathbb{S}^{2}\left(T_{p}M\right)\) to \(\mathbb{S}^{2}\left(T_{p}M\right)\) at the point \((\theta,\phi)\), _i.e._,
\[g_{p}\left(m_{(\alpha)},n\right)\,=\,0\,=\,g_{p}\left(m_{(\alpha)},E_{(4)} \right),\,\,g_{p}\left(m_{(\alpha)},m_{(\beta)}\right)\,=\,\delta_{\alpha\beta }\,. \tag{17}\]
The tetrad
\[\left(n,m_{(2)},m_{(3)},\ell(n)\right) \tag{18}\]
provides a basis for \(T_{p}M\) (the Sachs basis), and the pair \(\left(T_{(\theta,\phi)}\mathbb{S}^{2}\left(T_{p}M\right),\,m_{(\alpha)}(\theta,\phi)\right)\) defines the _screen plane_\(T_{n}\mathbb{C}\,\mathbb{S}(p)\) associated with the direction of sight \(n(\theta,\phi)\in\,\mathbb{C}\,\mathbb{S}(p)\) in the celestial sphere \(\mathbb{C}\,\mathbb{S}(p)\), _i.e._
\[T_{n}\mathbb{C}\,\mathbb{S}(p)\,:=\,\left(T_{(\theta,\phi)}\mathbb{S}^{2}\left( T_{p}M\right),\,\,m_{(\alpha)}(\theta,\phi)\right)\,. \tag{19}\]
In the instantaneous rest space of the observer, the screen \(T_{(\theta,\phi)}\mathbb{C}\,\mathbb{S}(p)\) is the (spatial) 2-plane on which the apparent image of the astrophysical source, pointed by the direction \(n\in\mathbb{C}\,\mathbb{S}(p)\), is by convention displayed.
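For definiteness, one admissible choice of the screen vectors in (17) (they are fixed only up to a rotation in the screen plane) is given by the coordinate unit vectors of the spherical parametrization of \(n(\theta,\phi)\),
\[m_{(2)}(\theta,\phi)\,=\,\partial_{\theta}\,n(\theta,\phi)\,=\,\cos\phi\cos\theta\,E_{(1)}\,+\,\sin\phi\cos\theta\,E_{(2)}\,-\,\sin\theta\,E_{(3)}\,,\] \[m_{(3)}(\theta,\phi)\,=\,\frac{1}{\sin\theta}\,\partial_{\phi}\,n(\theta,\phi)\,=\,-\,\sin\phi\,E_{(1)}\,+\,\cos\phi\,E_{(2)}\,,\]
for which the orthonormality relations (17) are readily checked away from the poles \(\sin\theta=0\).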
### Sky sections and observational coordinates on the past light cone
We transfer the above kinematical setup from \(T_{p}M\) to \((M,g)\) by using the exponential map based at \(p\),
\[\begin{array}{rcl}\exp_{p}\,:\,W_{p}\,\subseteq\,T_{p}M&\longrightarrow&M\\ X&\longmapsto&\exp_{p}\left(X\right)\,:=\,\lambda_{X}(1)\,,\end{array} \tag{20}\]
where \(\lambda_{X}\,:\,I_{W}\,\longrightarrow\,(M,g)\), for some maximal interval \(I_{W}\subseteq\mathbb{R}_{\geq 0}\), is the past-directed causal geodesic emanating from the point \(p\) with initial tangent vector \(\dot{\lambda}_{X}(0)\,=\,X\in W_{p}\), and where \(W_{p}\,\subseteq\,T_{p}M\) is the maximal domain of \(\exp_{p}\). Thus, the past lightcone \(\mathscr{C}^{-}(p,g)\,\in\,(M,g)\) with the vertex at \(p\), _i.e._ the set of all events \(q\in(M,g)\) that can be reached from \(p\) along the past-pointing null geodesics \(r\,\longmapsto\,\exp_{p}(r\ell(n(\theta,\phi))),r\in\,I_{W}\), \((\theta,\phi)\in\mathbb{C}\,\mathbb{S}(p)\), can be represented as
\[\mathscr{C}^{-}(p,g)\,:=\,\exp_{p}\left[W_{p}\cap C^{-}\left(T_{p}M,g_{p} \right)\right]\,, \tag{21}\]
and the portion of \(\mathscr{C}^{-}(p,g)\) accessible to observations for a given value \(r_{0}\,\in\,I_{W}\) of the affine parameter \(r\) is given by
\[\mathscr{C}^{-}(p,g;\,r_{0})\,:=\,\left\{\,q\,\in\,M\,|\,\,\,q\,=\,\exp_{p}(r \ell(n(\theta,\phi))),\,\,0\leq r<r_{0},\,\,(\theta,\phi)\in\mathbb{C}\, \mathbb{S}(p)\right\}\,.\]
The exponential map representation, on the celestial spheres \(\mathbb{C}\,\mathbb{S}(p)\) and \(\mathbb{C}\,\mathbb{S}_{r}(p)\), provides a natural setup for a description of observational data gathered from \(\mathscr{C}^{-}(p,g)\). It emphasizes the basic role of past-directed null geodesics and provides the framework for interpreting the physical data in the local rest frame of the observer at \(p\). In particular, it allows us to represent on \(\mathbb{C}\,\mathbb{S}(p)\) and \(\mathbb{C}\,\mathbb{S}_{r}(p)\) the actual geometry of the observed sky at a given length scale. This role is quite effective in a neighborhood of \(p\), where we can introduce normal coordinates associated with \(\exp_{p}\), but it is delicate to handle in regions where \(\exp_{p}\) is not a diffeomorphism of \(W_{p}\cap C^{-}\left(T_{p}M,g_{p}\right)\) onto its image. To set notation, our strategy is to start with the standard description [20], [21] of observational coordinates on \(\mathscr{C}^{-}(p,g)\) associated with the usual assumption that the exponential map is a diffeomorphism8 in a sufficiently small neighborhood of \(p\), and then we move to the more
general, low regularity, Lipschitz case. In this connection, it is worthwhile to stress that the standard normal coordinates description is strictly associated with the assumption that the metric of \((M,g)\) is sufficiently regular, with components \(g_{ij}(x^{\ell})\) which are at least twice continuously differentiable, _i.e._\(g_{ij}(x^{\ell})\,\in\,C^{k}(\mathbb{R}^{4},\mathbb{R}),\) for \(k\geq 2\). Under this hypothesis, there is a star-shaped neighborhood \(N_{0}(g)\) of \(0\) in \(W_{p}\subseteq T_{p}M\) and a corresponding geodesically convex neighborhood of \(p\), \(U_{p}\subseteq\,(M,g)\), restricted to which \(\exp_{p}\,:\,N_{0}\,\subseteq\,T_{p}M\,\longrightarrow\,\,U_{p}\,\subseteq\,M\) is a diffeomorphism. In such \(U_{p}\) we can introduce geodesic normal coordinates \((x^{i})\) according to
\[x^{i}\,:\,=\,\mathbb{X}^{i}\,\circ\,\exp_{p}^{-1}\,:\,M\cap\,U_{p} \longrightarrow \mathbb{R}^{4}\] \[q \longmapsto x^{i}(q)\,:\,=\,\mathbb{X}^{i}\left(\exp_{p}^{-1}(q)\right) \tag{22}\]
where \(\mathbb{X}^{i}\left(\exp_{p}^{-1}(q)\right)\) are the components, in the \(g\)-orthonormal frame \(\{E_{(i)}\}\), (or with respect to the corresponding basis (18)), of the vector \(\exp_{p}^{-1}(q)\,\in\,W_{p}\subseteq T_{p}M\). Thus, in \(\mathscr{C}^{-}(p,g)\,\cap\,U_{p}\) we can write,
\[\exp_{p}\ :\,C^{-}\left(T_{p}M,\,\{E_{(i)}\}\right)\cap\,N_{0}(g) \longrightarrow \mathscr{C}^{-}(p,g)\,\cap\,U_{p}\] \[r\ell(n(\theta,\phi))\,=\,r\left(n^{a}(\theta,\phi)E_{(a)}\,-\,E _{(4)}\right) \longmapsto \exp_{p}(r\ell(n))\,=\,q\] \[\Longrightarrow\,q \longmapsto \left\{x^{i}(q)\,:=\,\exp_{p}^{-1}(q)\,=\,(r\,n^{a}(\theta,\phi),\,-\,r)\right\}\,. \tag{23}\]
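In particular, along each generator one has \(x^{a}(q)\,=\,r\,n^{a}(\theta,\phi)\) and \(x^{4}(q)\,=\,-\,r\), so that in these normal coordinates the portion \(\mathscr{C}^{-}(p,g)\,\cap\,U_{p}\) is the familiar coordinate cone
\[x^{4}\,=\,-\,\left(\sum_{a=1}^{3}\left(x^{a}\right)^{2}\right)^{1/2}\,,\]
and the affine parameter \(r\) coincides with the Euclidean radial coordinate of the chart.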
According to (21) and to the Gauss lemma applied to \(\exp_{p}\,:\,C^{-}\left(T_{p}M,\,\{E_{(i)}\}\right)\cap\,N_{0}(g)\,\longrightarrow \mathscr{C}^{-}(p,g)\,\cap\,U_{p}\), the past light cone region \(\mathscr{C}^{-}(p,g)\,\cap\,U_{p}\setminus\{p\}\) is foliated by the \(r\)-dependent family of \(2\)-dimensional surfaces \(\Sigma(p,r)\), the _cosmological sky sections_, defined by
\[\Sigma(p,r)\,:=\,\exp_{p}\left[\mathbb{C}\,\mathbb{S}_{r}(p)\right]\,=\,\left\{ \exp_{p}\left(r\,\ell(n(\theta,\phi))\right)\big{|}\,\,\left(\theta,\phi\right) \,\in\,\mathbb{C}\,\mathbb{S}(p)\right\}\,, \tag{24}\]
and \(g\)-orthogonal to all null geodesics originating at \(p\), _i.e._
\[g\left(d_{(r,\theta,\phi)}\exp_{p}(\ell(r,\underline{n})),\,d_{(r,\theta,\phi )}\exp_{p}(\underline{v})\right)\big{|}_{\exp_{p}(\ell(r,\underline{n}))}\,=\, 0\,. \tag{25}\]
Here \(d_{(r,\theta,\phi)}\exp_{p}(...)\) denotes the tangent mapping associated to \(\exp_{p}\) evaluated at the point \((\theta,\phi)\,\in\,\mathbb{S}_{r}^{2}(p)\), and \(\underline{v}\,\in\,T_{\theta,\phi}\,\mathbb{S}_{r}^{2}(p)\) is the generic vector tangent to \(\mathbb{S}_{r}^{2}(p)\). In \(\mathscr{C}^{-}(p,g)\cap U_{p}\setminus\{p\}\), each surface \(\Sigma(p,r)\) is topologically a \(2\)-sphere endowed with the \(r\)-dependent two-dimensional Riemannian metric
\[g|_{\Sigma(p,r)}\,:\,=\,\iota_{r}^{*}\,\,g|_{\mathscr{C}^{-}(p,g)} \tag{26}\]
induced by the inclusion \(\iota_{r}:\Sigma(p,r)\,\hookrightarrow\,\mathscr{C}^{-}(p,g)\) of \(\Sigma(p,r)\) into \(\mathscr{C}^{-}(p,g)\,\cap\,U_{p}\setminus\{p\}\). We can pull back this metric to the celestial sphere \(\mathbb{C}\,\mathbb{S}_{r}(p):=\,\left(\mathbb{S}_{r}^{2}\left(T_{p}M\right), \,r^{2}\widetilde{h}(\mathbb{S}^{2})\right)\) by using the exponential map according to
\[h(r,\theta,\phi)\,:=\,\left(\exp_{p}^{*}\,g|_{\Sigma(p,r)}\right)_{\alpha\beta }\,dx^{\alpha}dx^{\beta}\Big{|}_{r}\,,\,\,\,\alpha,\,\beta\,=\,2,3,\,\,\,\,x ^{2}:=\theta,\,x^{3}:=\phi\,. \tag{27}\]
This metric can be profitably compared with the pre-existing round metric \(r^{2}\widetilde{h}(\mathbb{S}^{2})\) on \(\mathbb{C}\,\mathbb{S}_{r}(p)\) (see (14) and (16)). To this end, let \(r\,n(\theta,\phi)\,\in\,\mathbb{C}\,\mathbb{S}_{r}(p)\) be the direction of sight pointing, in the celestial sphere \(\mathbb{C}\,\mathbb{S}_{r}(p)\), to the (extended) astrophysical source located around the point \(q\,\in\,\Sigma(p,r)\). If \(r\ell(n(\theta,\phi))\,=\,r\left(n^{a}(\theta,\phi)E_{(a)}\,-\,E_{(4)}\right)\) is the corresponding null direction in \(C^{-}\left(T_{p}M,\,\{E_{(i)}\}\right)\), then according to (23) we have \(\exp_{p}(r\ell(n))\,=\,q\) and, via the exponential map along the past-directed null geodesic reaching the observer located at \(p\) from the astrophysical source located at \(q\), we can pull-back the area element of \(\left(\Sigma(p,r),\,g|_{\Sigma(p,r)}\right)\) on the celestial sphere \(\mathbb{C}\,\mathbb{S}_{r}(p)\) of the observer at \(p\). We have
\[d\mu_{h(r)}(p,n(\theta,\phi),r)\,:=\,\exp_{p}^{*}d\mu_{g|_{\Sigma(p,r)}}\,\circ \,\exp_{p}(r\ell(n))\,=\,\sqrt{\det(h(r,\theta,\phi))}\,d\theta d\phi\,. \tag{28}\]
This defines the area element associated with the metric (27), and can be interpreted [21] as the cross-sectional area element at the source location as seen by the observer at \(p\). Since the round
measure \(d\mu_{\mathbb{S}^{2}_{r}}=r^{2}\,d\mu_{\mathbb{S}^{2}}=r^{2}\,\sin\theta\,d\theta\,d\varphi\) and the actual physical measure \(d\mu_{h(r)}\) are both defined over the celestial sphere \(\mathbb{C}\,\mathbb{S}_{r}(p)\in\,T_{p}M\), we can introduce the relative density of \(d\mu_{h(r)}\) with respect to the Euclidean solid angle measure \(d\mu_{\mathbb{S}^{2}}\), _viz._ the function \(D(r,\theta,\phi)\) defined by the relation
\[d\mu_{h(r)}\,=\,D^{2}(r,\theta,\phi)\,d\mu_{\mathbb{S}^{2}}\,, \tag{29}\]
or equivalently, \(\sqrt{\det(h(r,\theta,\phi))}\,=\,D^{2}(r,\theta,\phi)\,\sqrt{\det(\widetilde{ h}(\mathbb{S}^{2}))}\). The function \(D(r,\theta,\phi)\) is _the observer area distance_[20], [21], [33]. By definition, it provides the ratio of an object's cross sectional area to its (apparent) angular size as seen on the celestial sphere \(\mathbb{S}^{2}(p)\,\subset\,T_{p}M\). Roughly speaking, it converts the angular separations as seen in the images of an astrophysical source, gathered by the observer at \(p\), into proper separations at the source. In general, \(D(r)\,:=\,D(r,\theta,\phi)|_{\theta,\phi=const.}\) cannot be used as an affine parameter along the past-directed null geodesic \(r\,\mapsto\,\exp_{p}(k(r,\underline{n}))\) since it is not a monotonic function of \(r\), (for instance in FLRW models, monotonicity fails around \(z\,\sim\,1\)). However, if we have an accurate knowledge of the brightness and of the spectrum of the astrophysical source seen at the past light cone location \(q\,:=\,\exp_{p}(\ell(r,\underline{n}))\,\in\,\mathscr{C}^{-}(p,g)\), then \(D(r,\theta,\phi)\) is, at least in principle, a measurable quantity (see paragraph 4.3 of [20] and 7.4.3 of [21] for a discussion of this point9). As stressed above, we can also compare the physical metric (27), \(h(r,\theta,\phi)\,:=\,\left(\exp_{p}^{*}\,g|_{\Sigma(p,r)}\right)_{\alpha\beta }\,dx^{\alpha}dx^{\beta}\Big{|}_{r}\), with the round metric \(r^{2}\widetilde{h}(\mathbb{S}^{2})\) pre-existing on the celestial sphere \(\mathbb{C}\,\mathbb{S}_{r}(p)\), and introduce [20], [21] the set of functions \(\mathcal{L}_{\alpha\beta}(r,\theta,\phi)\), \(\alpha,\,\beta\,=\,2,3\), implicitly defined by representing (27) in the distorted polar form
Footnote 9: Beware that in [20], the observer area distance \(D^{2}(r,\theta,\phi)\) is denoted by \(r\), whereas our \(r\) corresponds to their \(y\).
\[h_{\alpha\beta}|_{\mathbb{S}^{2}_{r}}\,=\,D^{2}(r,\theta,\phi)\left( \widetilde{h}_{\alpha\beta}(\mathbb{S}^{2})\,+\,\mathcal{L}_{\alpha\beta} \right)\,. \tag{30}\]
We normalize this representation by imposing [20] that, in the limit \(r\,\searrow\,0\), the distortion, \(\mathcal{L}_{\alpha\beta}(r,\theta,\phi)=\frac{h_{\alpha\beta}(r,\theta,\phi)}{D^{2}(r,\theta,\phi)}\,-\,\widetilde{h}_{\alpha\beta}(\mathbb{S}^{2}),\) of the normalized metric \(h(r)/D^{2}(r)\) with respect to the round metric \(\widetilde{h}(\mathbb{S}^{2})\) goes to zero uniformly, _i.e._,
\[\lim_{r\,\searrow\,0}\,\mathcal{L}_{\alpha\beta}(r,\theta,\phi)\,=\,0\,,\qquad\mbox{uniformly on}\ \,\mathbb{C}\,\mathbb{S}(p)\,. \tag{31}\]
A natural way of quantifying how the distortion \(\mathcal{L}_{\alpha\beta}\) and the area distance \(D\) develop along the past light cone is to consider the rate
of variation of the metric tensor \(h(r)\) as \(r\) varies. Dropping the angular dependence for notational ease, we get
\[\Theta_{\alpha\beta}\,:=\,\frac{d}{dr}\,h_{\alpha\beta}(r) = \frac{d}{dr}\,\left[D^{2}(r)\left(\widetilde{h}_{\alpha\beta}( \mathbb{S}^{2})\,+\,\mathcal{L}_{\alpha\beta}(r)\right)\right]\] \[= 2h_{\alpha\beta}(r)\,\frac{d}{dr}\,\ln D(r)\,+\,D^{2}(r)\,\frac {d}{dr}\,\mathcal{L}_{\alpha\beta}(r)\,, \tag{35}\]
where we exploited \(d\widetilde{h}_{\alpha\beta}(\mathbb{S}^{2})/dr\,=\,0\) and rewrote \(D(r)dD(r)/dr\) as \(D^{2}(r)d\ln\,D(r)/dr\). Similarly, from the defining relation \(\sqrt{\det(h(r,\theta,\phi))}\,=\,D^{2}(r,\theta,\phi)\,\sqrt{\det(\widetilde {h}(\mathbb{S}^{2}))}\), (see (29)), we compute
\[\frac{d}{dr}\,\sqrt{\det(h(r))} = \frac{d}{dr}\,\left(D^{2}(r)\,\sqrt{\det(\widetilde{h}(\mathbb{ S}^{2}))}\right)\,=\,2\,\sqrt{\det(h(r))}\,\frac{d}{dr}\,\ln D(r)\] \[\Rightarrow\,\,\frac{d}{dr}\,\ln\sqrt{\det(h(r))} = 2\,\frac{d}{dr}\,\ln D(r)\,. \tag{36}\]
Inserting this relation in (35) we obtain
\[\Theta_{\alpha\beta}\,:=\,h_{\alpha\beta}(r)\,\frac{d}{dr}\,\ln\sqrt{\det(h(r ))}\,+\,D^{2}(r)\,\frac{d}{dr}\,\mathcal{L}_{\alpha\beta}(r)\,. \tag{37}\]
The shear \(\widetilde{\sigma}_{\alpha\beta}\) is the trace-free part of this expression, \(\widetilde{\sigma}_{\alpha\beta}:=\Theta_{\alpha\beta}-\frac{1}{2}h_{\alpha \beta}\,h^{\mu\nu}\Theta_{\mu\nu}\). Since
\[\frac{1}{2}h_{\alpha\beta}\,h^{\mu\nu}\Theta_{\mu\nu}\,=\,\frac{1}{2}h_{\alpha \beta}\,h^{\mu\nu}\frac{d}{dr}h_{\mu\nu}\,=\,h_{\alpha\beta}\,\frac{d}{dr}\, \ln\sqrt{\det(h(r))}\,, \tag{38}\]
we eventually get
\[\widetilde{\sigma}_{\alpha\beta}\,=\,D^{2}(r)\,\frac{d\mathcal{L}_{\alpha \beta}(r)}{dr}\;, \tag{39}\]
as might have been expected. Note that, in contrast to \(\mathcal{L}_{\alpha\beta}\), \(\widetilde{\sigma}_{\alpha\beta}\) is trace-free (but with respect to the physical metric \(h_{\alpha\beta}\)). Now, let us introduce the other basic player of our narrative.
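Before doing so, it is perhaps useful to record the elementary flat-space consistency check: in Minkowski space the sky sections \(\Sigma(p,r)\) are round spheres of radius \(r\), so that
\[h(r)\,=\,r^{2}\,\widetilde{h}(\mathbb{S}^{2})\,,\qquad D(r)\,=\,r\,,\qquad\mathcal{L}_{\alpha\beta}\,=\,0\,,\qquad\Theta_{\alpha\beta}\,=\,\frac{2}{r}\,h_{\alpha\beta}\,,\qquad\widetilde{\sigma}_{\alpha\beta}\,=\,0\,,\]
in agreement with (29), (30), (37), and (39).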
## 3. The background FLRW past light cone.
As already pointed out, the standard \(\Lambda\)CDM model is built on the assumption that over scales \(L\,>\,100\,h^{-1}\,\mathrm{Mpc}\), the phenomenological background spacetime \((M,g,\gamma_{s})\) follows on average the dynamics of a FLRW model with a (linear) Hubble expansion law. It is also assumed that below the scale of statistical homogeneity, deviations from this average scenario can be described by FLRW perturbation theory. Since there is no smooth transition between the large-scale FLRW Hubble flow and the phenomenological Hubble flow, this latter assumption rests on quite delicate ground. For instance, the field of peculiar velocities \(\{\dot{\gamma}_{s}(\tau)\}\) of the phenomenological observers \(\{\tau\longrightarrow\gamma_{s}(\tau)\}\) shows a significant statistical variance [53] with respect to the average FLRW Hubble flow and the standard of rest provided by the cosmic microwave background (CMB). This remark has an important effect on the relation between the celestial sphere \(\mathbb{C}\,\mathbb{S}_{r}(p)\) of the phenomenological observer (\(p\), \(\dot{\gamma}(0)\)) and the corresponding celestial sphere \(\widehat{\mathbb{C}}\widehat{\mathbb{S}}_{\widehat{r}}(p)\) of the idealized FLRW observer (\(p\), \(\widehat{\dot{\gamma}}(0)\)). They cannot be identified and must be connected by a Lorentz boost that takes into account the origin of this statistical variance. The actual scenario is significantly constrained by the coupling of the matter inhomogeneities with a spacetime geometry that is no longer Friedmannian. As a consequence, the peculiar velocity field of the phenomenological observer may have a rather complex origin, and its variance with respect to the FLRW average expansion may become a variable of relevance in cosmography. This scenario naturally calls into play a delicate comparison between the geometry of \(\mathcal{C}^{-}(p,g)\) and the geometry of the associated FLRW past light cone that sets in at scales \(L\,>\,100\,h^{-1}\,\mathrm{Mpc}\). For this purpose, along with the physical metric \(g\), we consider on the spacetime manifold \(M\) a reference FLRW metric \(\hat{g}\) and the associated family of global Friedmannian observers \(\hat{\tau}\longmapsto\dot{\gamma}_{s}(\hat{\tau})\). Strictly speaking, the FLRW model \((M,\hat{g},\dot{\gamma}_{s}(\hat{\tau}))\) should be used only over
the scales \(L\,>\,L_{0}\simeq\,100\,h^{-1}\,{\rm Mpc}\). We need to consider it also over the inhomogeneity scales \(L\,<\,L_{0}\), where it plays the role of the geometrical background used to interpret the data according to the standard perturbative FLRW point of view recalled above. In such an extended role, the chosen FLRW model is the _Global Background Solution_ (GBS, according to [39]) that we need to check against the physical metric \(g\) representing the phenomenological background solution. In this section, we set up the kinematical aspects of such a comparison. First, some standard verbiage introducing the FLRW model \((M,\hat{g},\hat{\gamma}_{s}(\hat{\tau}))\). In terms of the radial and angular FLRW coordinates \(y^{\alpha}\,:=\,\left(\hat{r},\hat{\theta},\hat{\varphi}\right)\), and of the proper time of the comoving fundamental observers \(y^{4}\,:=\,\hat{\tau}\), we set
\[\widehat{g} := \,-d\hat{\tau}^{2}\,+\,a^{2}(\hat{\tau})\,\left[d\hat{r}^{2}\,+\,f ^{2}(\hat{r})\,\left(d\hat{\theta}^{2}\,+\,\sin^{2}\hat{\theta}\,d\hat{ \varphi}^{2}\right)\right]\,,\quad\quad\widehat{\gamma}^{h}\,=\,\delta^{h}_{4}, \tag{40}\]
where \(a(\hat{\tau})\) is the time-dependent scale factor, \(k\) is the normalized dimensionless spatial curvature constant entering the radial profile \(f(\hat{r})\) (with \(f(\hat{r})\,=\,\sin\hat{r},\ \hat{r},\ \sinh\hat{r}\) according as \(k\,=\,1,0,-1\)), and \(\widehat{\gamma}^{h}\) are the components of the 4-velocity \(\widehat{\gamma}\) of the fundamental FLRW observers. According to the above remarks, the geodesics \(\tau\longmapsto\gamma(\tau)\) and \(\hat{\tau}\longmapsto\hat{\gamma}(\hat{\tau})\), \(-\delta<\,\tau,\,\hat{\tau}<\delta\), associated with the corresponding Hubble flow in \((M,g,\gamma)\) and \((M,\hat{g},\hat{\gamma})\), are assumed to be distinct, but in line with the scale-dependent cosmographic approach adopted here we assume that they share a common observational event \(p\,\in\,M\). We denote by \(\widehat{\mathscr{C}}^{-}(p,\hat{g})\) the associated FLRW past light cone, and normalize the proper times \(\tau\) and \(\hat{\tau}\) along \(\gamma(\tau)\) and \(\hat{\gamma}(\hat{\tau})\) so that at \(\tau\,=\,0\,=\,\hat{\tau}\) we have \(\gamma(0)\,=\,p\,=\,\hat{\gamma}(0)\). As stressed, the two instantaneous observers \((p,\dot{\gamma}(0))\) and \((p,\widehat{\dot{\gamma}}(0))\) have different 4-velocities, \(\dot{\gamma}(0)\neq\widehat{\dot{\gamma}}(0)\), and their respective celestial spheres, \(\mathbb{C}\,\mathbb{S}(p)\) and \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}(p)\), are quite distinct. They are related by a Lorentz transformation describing the aberration of the sky mapping of one instantaneous observer with respect to the other. This mapping will play a basic role in our analysis, and to provide an explicit description of its properties, we start by adapting to the FLRW instantaneous observer \((p,\dot{\widehat{\gamma}}(0))\,\in\,(M,\hat{g},\hat{\gamma})\) the setup characterizing the celestial spheres \(\mathbb{C}\,\mathbb{S}(p)\) and \(\mathbb{C}\,\mathbb{S}_{r}(p)\) of the instantaneous observer \((p,\dot{\gamma}(0))\,\in\,(M,g,\gamma)\).
### The FLRW celestial sphere and the associated sky sections
Let \(\left(\widehat{T}_{p}M,\,\widehat{g}_{p},\,\{\widehat{E}_{(i)}\}\right)\) be the tangent space to \((M,\hat{g},\hat{\gamma})\) at \(p\) endowed with a \(\widehat{g}\)-orthonormal frame \(\{\widehat{E}_{(i)}\}_{i=1,\ldots,4}\), \(\widehat{g}_{p}\left(\widehat{E}_{(i)},\widehat{E}_{(k)}\right)=\eta_{ik}\), where \(\eta_{ik}\) is the Minkowski metric, and where we identify \(\widehat{E}_{(4)}\) with the FLRW-observer's 4-velocity \(\widehat{\dot{\gamma}}(\tau)|_{\tau=0}\), _i.e._\(\widehat{E}_{(4)}\,:=\,\widehat{\dot{\gamma}}(\tau)|_{\tau=0}\). For ease of notation, we shall often use the shorthand \(\widehat{T}_{p}M\) when referring to the tangent space to \((M,\hat{g},\hat{\gamma})\) at \(p\). Let
\[C^{-}\left(\widehat{T}_{p}M,\,\{\widehat{E}_{(i)}\}\right)\,:=\,\left\{Y\,=\, \mathbb{Y}^{i}\widehat{E}_{(i)}\,\neq\,0\,\in\,\widehat{T}_{p}M\mid\widehat{ g}_{p}(Y,Y)\,=\,0,\,\,\mathbb{Y}^{4}+\widehat{r}=0\right\}\;, \tag{41}\]
\[\overline{C^{-}}\left(\widehat{T}_{p}M,\,\{\widehat{E}_{(i)}\}\right)\,:=\,\left\{Y\,=\,\mathbb{Y}^{i}\widehat{E}_{(i)}\,\neq\,0\,\in\,\widehat{T}_{p}M\mid\widehat{g}_{p}(Y,Y)\,\leq\,0,\,\,\mathbb{Y}^{4}+\widehat{r}\,\leq\,0\right\}\;, \tag{42}\]
respectively denote the set of past-directed null vectors and the set of past-directed causal vectors in \((\widehat{T}_{p}M,\,\{\widehat{E}_{(i)}\})\), where \(\widehat{r}:=(\sum_{a=1}^{3}(\mathbb{Y}^{a})^{2})^{1/2}\) is the radial coordinate (see (6)) in the hyperplane \(\mathbb{Y}^{4}\,=\,0\,\subset\,\widehat{T}_{p}M\) parametrizing the one-parameter family of 2-spheres
\[\mathbb{S}^{2}_{\widehat{r}}(\widehat{T}_{p}M)\,:=\,\{Y\in C^{-}\left(\widehat {T}_{p}M,\{\widehat{E}_{(i)}\}\right)\,\mid\,\mathbb{Y}^{4}\,=\,-\,\widehat{r},\,\,\,\sum_{a=1}^{3}(\mathbb{Y}^{a})^{2}=\widehat{r}^{2},\,\,\widehat{r}\in \,\mathbb{R}_{>0}\}\,, \tag{43}\]
that foliate \(C^{-}\left(\widehat{T}_{p}M,\{\widehat{E}_{(i)}\}\right)/\{p\}\). The 2-spheres \(\mathbb{S}^{2}_{\widehat{r}}(\widehat{T}_{p}M)\), endowed with the round metric
\[\widehat{\widehat{h}}(\mathbb{S}^{2})\,=\,\widehat{\widehat{h}}_{\alpha\beta}( \mathbb{S}^{2})dy^{\alpha}dy^{\beta}\,=\,d\widehat{\theta}^{2}\,+\,\sin^{2} \widehat{\theta}\,d\widehat{\phi}^{2}\,,\,\,\,\,0\leq\widehat{\theta}\leq\pi, \,\,0\leq\widehat{\phi}<2\pi \tag{44}\]
can be thought of as providing a representation of the sky, at a given value of the radial coordinate \(\widehat{r}\), in the instantaneous rest space \(\left(\widehat{T}_{p}M,\{\widehat{E}_{(i)}\}\right)\) of the FLRW observer. In analogy with the characterization (8) of the celestial sphere \(\mathbb{C}\,\mathbb{S}(p)\), we use the projection of \(\mathbb{S}^{2}_{\widehat{r}}(\widehat{T}_{p}M)\Big{|}_{\widehat{r}=1}\) on the hyperplane \(\mathbb{Y}^{4}\,=\,0\) in \(\widehat{T}_{p}M\), to define the FLRW _celestial sphere_
\[\widehat{\mathbb{C}}\,\mathbb{S}(p)\,:=\,\left(\left.\mathbb{S}^{2}_{\widehat{r}}(\widehat{T}_{p}M)\right|_{\widehat{r}=1},\widehat{\widehat{h}}(\mathbb{S}^{2})(p)\right)\,:=\,\left\{Y\,=\,\mathbb{Y}^{i}\widehat{E}_{(i)}\,\neq\,0\,\in\,\widehat{T}_{p}M\,\,|\,\,\mathbb{Y}^{4}=0,\,\,\sum_{a=1}^{3}(\mathbb{Y}^{a})^{2}=1\right\}\,\,, \tag{45}\]
parametrizing the directions of sight
\[\widehat{n}(\widehat{\theta},\widehat{\phi})\,:=\,(\cos\widehat{\phi}\sin\widehat{\theta},\,\sin\widehat{\phi}\sin\widehat{\theta},\,\cos\widehat{\theta})\,,\,\,\,\,0\leq\widehat{\theta}\leq\pi,\,\,0\leq\widehat{\phi}<2\pi \tag{46}\]
in the instantaneous rest space \(\left(\widehat{T}_{p}M,\{\widehat{E}_{(i)}\}\right)\) of the FLRW observer. In full analogy with (16), we define the FLRW _celestial sphere at radius \(\widehat{r}\)_ in \(\left(\widehat{T}_{p}M,\{\widehat{E}_{(i)}\}\right)\) according to
\[\widehat{\mathbb{C}}\,\mathbb{S}_{\widehat{r}}(p)\,:=\,\left(\mathbb{S}^{2}_{ \widehat{r}}\left(\widehat{T}_{p}M\right),\,\widehat{r}^{2}\widehat{\widehat{h }}(\mathbb{S}^{2}(p))\right)\,. \tag{47}\]
With a straightforward adaptation to the FLRW geometry of the definitions (10), (18), and (19), we also introduce in \(\widehat{T}_{p}M\) the tetrad
\[\left(\widehat{n},\widehat{m}_{(2)},\widehat{m}_{(3)},\widehat{\ell}(\widehat {n})\right) \tag{48}\]
and associate with the pair \(\left(\widehat{T}_{(\widehat{\theta},\widehat{\phi})}\mathbb{S}^{2}\left( \widehat{T}_{p}M\right),\,\widehat{m}_{(\alpha)}(\widehat{\theta},\widehat{ \phi})\right)\) the _screen plane_\(T_{\widehat{n}}\widehat{\mathbb{C}}\,\mathbb{S}(p)\) associated with the direction of sight \(\widehat{n}(\widehat{\theta},\widehat{\phi})\) in the FLRW celestial sphere \(\widehat{\mathbb{C}}\,\mathbb{S}(p)\),
\[T_{\widehat{n}}\widehat{\mathbb{C}}\,\mathbb{S}(p)\,:=\,\left(T_{(\widehat{ \theta},\widehat{\phi})}\mathbb{S}^{2}\left(\widehat{T}_{p}M\right),\,\widehat {m}_{(\alpha)}(\widehat{\theta},\widehat{\phi})\right)\,. \tag{49}\]
Together with the observational normal coordinates \(\{X^{i}\}\) in \((M,g,\gamma)\), describing the local geometry on the past lightcone \(\mathscr{C}^{-}(p,g)\,\cap\,U_{p}\), we introduce corresponding (normal) coordinates \(\{Y^{k}\}\) on the past light cone \(\widehat{\mathscr{C}}^{-}(p,\hat{g})\) in the reference FLRW spacetime \((M,\hat{g},\hat{\gamma})\). To begin with, let \(\widehat{\exp}_{p}\) denote the exponential mapping based at the event \(p=\hat{\gamma}(0)\), _i.e._
\[\begin{array}{rcl}\widehat{\exp}_{p}\,:\,\widehat{W}_{p}\,\subseteq\,\widehat{T}_{p}M&\longrightarrow&(M,\hat{g}),\\ \mathbb{Y}&\longmapsto&\widehat{\exp}_{p}\left(\mathbb{Y}\right)\,:=\,\lambda_{\mathbb{Y}}(1)\,,\end{array} \tag{50}\]
where \(\widehat{W}_{p}\) is the maximal domain of \(\widehat{\exp}_{p}\). To keep on with the notation set by (21) and (22), we characterize the past lightcone \(\widehat{\mathscr{C}}^{-}(p,\hat{g})\,\in\,(M,\widehat{g})\), with vertex at \(p\), according to
\[\widehat{\mathscr{C}}^{-}(p,\hat{g})\,:=\,\widehat{\exp}_{p}\left[\widehat{W}_ {p}\cap C^{-}\left(\widehat{T}_{p}M,\widehat{g}_{p}\right)\right]\,, \tag{51}\]
and we denote by
\[\widehat{\mathscr{C}}^{-}(p,\widehat{g};\,\widehat{r}_{0})\,:=\,\left\{\,q\, \in\,M\,|\,\,\,q\,=\,\widehat{\exp}_{p}(\widehat{r}\widehat{\ell}(\widehat{n}( \widehat{\theta},\widehat{\phi}))),\,\,0\leq\widehat{r}<\widehat{r}_{0},\,\,( \widehat{\theta},\widehat{\phi})\in\widehat{\mathbb{C}}\,\mathbb{S}(p)\right\}\,,\]
the portion of \(\widehat{\mathscr{C}}^{-}(p,\hat{g})\) accessible to observations for a given value \(\widehat{r}_{0}\) of the radial parameter \(\widehat{r}\). That said, if \(\hat{U}_{p}\,\subset\,(M,\hat{g})\) denotes the region of injectivity of \(\widehat{\exp}_{p}\), then normal coordinates are
defined by
\[y^{i}\,:=\,\mathbb{Y}^{i}\,\circ\,\widehat{\exp}_{p}^{-1}\,:\,(M,\widehat{g})\, \cap\,\widehat{U}_{p}\,\longrightarrow\,\mathbb{R}\,, \tag{52}\]
where \(\mathbb{Y}^{i}\) are the components of the vectors \(\mathbb{Y}\in\,\widehat{T}_{p}M\) with respect to a \(\hat{g}\)-orthonormal frame \(\{\hat{E}_{(i)}\}_{i=1,\dots,4}\) with \(\hat{E}_{(4)}\,:=\,\hat{\gamma}(0)\). We can parametrize \(\widehat{\mathscr{C}}^{-}(p,\widehat{g})\,\cap\,\widehat{U}_{p}\) in terms of the \(2\)-dimensional FLRW sky sections
\[\widehat{\Sigma}(p,\hat{r})\,:=\,\widehat{\exp}_{p}\left[\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{\widehat{r}}(p)\right]\,=\,\left\{\widehat{\exp}_{p}\left(\widehat{r}\,\widehat{\ell}(\widehat{n}(\widehat{\theta},\widehat{\phi}))\right)\Big{|}\,\,\,(\widehat{\theta},\widehat{\phi})\,\in\,\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}(p)\right\}\,\,, \tag{53}\]
endowed with the metric induced by the inclusion of \(\widehat{\Sigma}(p,\hat{r})\) into \(\widehat{\mathscr{C}}^{-}(p,\hat{g})\), _i. e._
\[\widehat{g}|_{\widehat{\Sigma}(p,\hat{r})}\,:=\,\,(\widehat{g})_{\alpha\beta} \,dy^{\alpha}dy^{\beta}\Big{|}_{\hat{r}}\,=\,a^{2}(\widehat{\tau}(\widehat{r} ))\,f^{2}\,(\widehat{r})\left(d\widehat{\theta}^{2}\,+\,\sin^{2}\widehat{ \theta}d\widehat{\phi}^{2}\right)\,, \tag{54}\]
where \(a(\widehat{\tau}(\widehat{r}))\) is the FLRW expansion factor \(a(\widehat{\tau})\) (see (40)) evaluated in correspondence of the given value of the radial coordinate \(\widehat{r}\in\widehat{T}_{p}M\). We proceed as in Subsection 2.2, and exploit the exponential map \(\widehat{\exp}_{p}\) to pull back \(\widehat{g}|_{\widehat{\Sigma}(p,\hat{r})}\) on the celestial sphere \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{\widehat{r}}(p)\),
\[\widehat{h}(\widehat{r},\widehat{\theta},\widehat{\phi})\,:=\,\left(\widehat {\exp}_{p}^{\ast}\,\widehat{g}|_{\widehat{\Sigma}(p,\widehat{r})}\right)_{ \alpha\beta}\,dy^{\alpha}dy^{\beta}\Big{|}_{\widehat{r}}\,,\,\,\,\alpha,\, \beta\,=\,2,3,\quad y^{2}:=\widehat{\theta},\,y^{3}:=\widehat{\phi}\,. \tag{55}\]
This pull-back can be explicitly computed. To wit, let \(y^{i}_{q}=(\widehat{r}_{q},\widehat{\theta}_{q},\widehat{\phi}_{q},\widehat{\tau}_{q})\) be the normal coordinates of the event \(q\in\,\widehat{\mathscr{C}}^{-}(p,\hat{g})\) associated with the observation of a given astrophysical source. The equation for the radial, past-directed, null geodesic connecting \(q\) to the observation event \(p\) reduces in the FLRW case to [19]
\[d\widehat{r}\,=\,-\,\frac{d\widehat{\tau}}{a(\widehat{\tau})}\,,\,\,\,\, \widehat{\tau}(p)\,=\,0\,=\,\widehat{r}(p)\,, \tag{56}\]
that integrates to the expression providing the (matter-comoving) radial coordinate distance between \(p\) and \(q\)
\[\widehat{r}_{q}\,=\,\int_{0}^{\widehat{\tau}_{q}}\frac{d\widehat{\tau}}{a( \widehat{\tau})}\,. \tag{57}\]
Thus, the metric (55), evaluated at \(\widehat{\exp}_{p}^{-1}(q)\), can be written in terms of \(\widehat{\tau}_{q}\) as
\[\widehat{h}_{q}\,:=\,\widehat{h}(\widehat{r}_{q},\widehat{\theta}_{q},\widehat {\phi}_{q})\,=\,a^{2}(\widehat{\tau}_{q})\,f^{2}\,(\widehat{r}_{q})\left(d \widehat{\theta}_{q}^{2}\,+\,\sin^{2}\widehat{\theta}_{q}d\widehat{\phi}_{q}^{2 }\right)\,, \tag{58}\]
If we introduce the dimensionless FLRW cosmological redshift corresponding to the event \(q\),
\[z_{q}\,:=\,z\,(\widehat{\tau}_{q})\,=\,\frac{a_{0}}{a(\widehat{\tau}_{q})}\,- \,1\,, \tag{59}\]
where \(a_{0}\,:=\,a(\widehat{\tau}=0)\), then we can rewrite \(\widehat{h}(\widehat{r}_{q},\widehat{\theta}_{q},\widehat{\phi}_{q})\) as
\[\widehat{h}_{q}\,=\,\frac{a_{0}^{2}}{(1\,+\,z_{q})^{2}}\,f^{2}\,(\widehat{r}_{ q})\left(d\widehat{\theta}_{q}^{2}\,+\,\sin^{2}\widehat{\theta}_{q}d\widehat{\phi}_{q}^{2 }\right)\,\,. \tag{60}\]
Note that the area element associated with the metric \(\widehat{h}_{q}\),
\[d\mu_{\widehat{h}_{q}}\,=\,\frac{a_{0}^{2}}{(1\,+\,z_{q})^{2}}\,f^{2}\,( \widehat{r}_{q})\,\,d\mu_{\mathbb{S}^{2}} \tag{61}\]
characterizes the FLRW _observer area distance_ (see (29)) of the event \(q\in\,\widehat{\mathscr{C}}^{-}(p,\hat{g})\) according to
\[\widehat{D}(\widehat{r}_{q})\,=\,\frac{a_{0}}{1\,+\,z_{q}}\,f\,(\widehat{r}_{ q})\,\,. \tag{62}\]
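For orientation, it may be useful to record the familiar redshift form of these relations (a routine rewriting, stated here only as an illustration; the spatially flat choice \(f(\widehat{r})\,=\,\widehat{r}\) below is an assumption of the example, not of the text). Writing \(\widehat{H}\,:=\,a^{-1}\,da/d\widehat{\tau}\) and using (59), the radial relation (57) becomes
\[\widehat{r}_{q}\,=\,\frac{1}{a_{0}}\,\int_{0}^{z_{q}}\frac{dz}{\widehat{H}(z)}\,,\]
so that, in the spatially flat case,
\[\widehat{D}(\widehat{r}_{q})\,=\,\frac{1}{1\,+\,z_{q}}\,\int_{0}^{z_{q}}\frac{dz}{\widehat{H}(z)}\,\simeq\,\frac{z_{q}}{\widehat{H}_{0}}\quad\mbox{for}\quad z_{q}\,\ll\,1\,,\qquad\widehat{H}_{0}\,:=\,\widehat{H}(z=0)\,,\]
recovering the linear Hubble law at small redshift.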
## 4. Comparing the celestial spheres \(\mathbb{C}\,\mathbb{S}(p)\) and \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}(p)\)
As stressed in the previous Section, the celestial sphere \(\mathbb{C}\,\mathbb{S}(p)\) of the phenomenological observer \((p,\dot{\gamma}(0))\), and the celestial sphere \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}(p)\) of the FLRW ideal observer \((p,\widehat{\dot{\gamma}}(0))\) cannot be directly identified as they stand. The velocity fields \(\dot{\gamma}(0)\) and \(\widehat{\dot{\gamma}}(0)\) are distinct and, to compensate for the induced aberration, the celestial spheres \(\mathbb{C}\,\mathbb{S}(p)\) and \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}(p)\) can be identified only up to Lorentz boosts. In the standard FLRW view, this is the familiar global boost taking care of the kinematical dipole component in the CMB spectrum due to our peculiar motion with respect to the standard of rest provided by the CMB. However, in a cosmographic setting, and in the presence of a complex pattern of local inhomogeneities coupled with a non-FLRW spacetime geometry over scales \(\lesssim 100h^{-1}\) Mpc, the peculiar motion of the phenomenological observer has a dynamical origin, driven by the gravitational interaction and not just by a kinematical velocity effect. Even if we factor out the effect of coherent bulk flows due to the non-linear local gravitational dynamics, and average the rate of expansion over spherical shells at increasing distances from \((p,\,\dot{\gamma}(0))\), the variance in the peculiar velocity of \((p,\,\dot{\gamma}(0))\) with respect to the average rate of expansion is significant [54]. These remarks imply that the Lorentz boosts connecting \(\mathbb{C}\,\mathbb{S}(p)\) and \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}(p)\) acquire a dynamical meaning that plays a basic role in what follows. As a first step, we describe the Lorentz boost in the idealized pure kinematical situation where we need to compensate for a well-defined velocity field of the celestial sphere \(\mathbb{C}\,\mathbb{S}(p)\) with respect to the celestial sphere \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}(p)\) taken as providing a well-defined standard of rest. As a second step, we move to the more general setting required in the pre-homogeneity region where we sample scales \(\lesssim 100h^{-1}\) Mpc. In this latter case, a pure kinematical Lorentz boost will not suffice: the large fluctuations in the source distribution require a suitable localization of the Lorentz boosts to compare the data on \(\mathbb{C}\,\mathbb{S}(p)\) with those on \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}(p)\).
### The kinematical setting
To describe a kinematical Lorentz boost acting between \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}(p)\) and \(\mathbb{C}\,\mathbb{S}(p)\), we find it convenient to use in this section the well-known correspondence between the restricted Lorentz group and the six-dimensional projective special linear group \(\mathrm{PSL}(2,\mathbb{C})\) describing the automorphisms of the Riemann sphere \(\mathbb{S}^{2}\,\simeq\,\mathbb{C}\,\cup\,\{\infty\}\). More expressively, \(\mathrm{PSL}(2,\mathbb{C})\) can be viewed as the group of the conformal transformations of the celestial spheres that correspond to the restricted Lorentz transformations connecting \(\mathbb{C}\,\mathbb{S}(p)\) to \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}(p)\). In order to set notation, let us recall that the elements of \(\mathrm{PSL}(2,\mathbb{C})\) can be identified with the set of the Mobius transformations of the Riemann sphere \(\mathbb{S}^{2}\,\simeq\,\mathbb{C}\,\cup\,\{\infty\}\), _i.e._ the fractional linear transformations of the form
\[\zeta\,:\,\mathbb{C}\,\cup\,\{\infty\} \longrightarrow \mathbb{C}\,\cup\,\{\infty\}\] \[w \longmapsto \zeta(w)\,:=\,\frac{aw+b}{cw+d}\,,\ \ a,b,c,d\,\in\,\mathbb{C}\,,\ ad\,-\,bc\,\neq\,0\,, \tag{63}\]
where, to avoid a notational conflict with the redshift parameter \(z\), we have labeled the complex coordinate in \(\mathbb{C}\,\cup\,\{\infty\}\) with \(w\) rather than with the standard \(z\). Let \(Y\,=\,\widehat{n}(\widehat{\theta},\widehat{\phi})\) denote a point on the celestial sphere \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}(p)\), and let \(\widehat{w}\) denote its stereographic projection10 on the Riemann sphere \(\mathbb{C}\,\cup\,\{\infty\}\), _i.e._,
Footnote 10: From the north pole \(\theta\,=\,0\in\,\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}(p)\).
\[\mathcal{P}_{\mathbb{S}^{2}}\,:\,\widehat{\mathbb{C}}\,\widehat{ \mathbb{S}}(p)\,\longrightarrow\,\mathbb{C}\cup\{\infty\}\] \[\mathbb{Y}^{\alpha}\,\longmapsto\,\mathcal{P}_{\mathbb{S}^{2}}( \mathbb{Y}^{\alpha})\,=\,\widehat{w}\,:=\,\frac{\mathbb{Y}^{1}+i\,\mathbb{Y}^ {2}}{1-\mathbb{Y}^{3}}\,=\,\frac{\cos\widehat{\phi}\sin\widehat{\theta}\,+\,i \,\sin\widehat{\phi}\sin\widehat{\theta}}{1\,-\,\cos\widehat{\theta}}\,, \tag{64}\]
with \(\,0<\theta\leq\pi,\ 0\leq\phi<2\pi\). It is worthwhile to stress once more that the celestial spheres \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}(p)\) and \(\mathbb{C}\,\mathbb{S}(p)\) play the role of a mapping frame, a celestial globe where astrophysical positions are registered, and where the Lorentz boost \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}(p)\,\longrightarrow\,\mathbb{C}\, \mathbb{S}(p)\) must be interpreted actively as affecting only the recorded astrophysical data. In other words, the Lorentz boost affects the null directions
in \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}(p)\), mapping them in the corresponding directions in \(\mathbb{C}\,\mathbb{S}(p)\). To quote a few illustrative examples [46] of the \(\operatorname{PSL}(2,\mathbb{C})\) transformations associated to the Lorentz group action between the celestial spheres \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}(p)\) and \(\mathbb{C}\,\mathbb{S}(p)\), let \(v\) denote the modulus of the relative 3-velocity of the FLRW ideal observer \((p,\,\widehat{\gamma}(0))\) with respect to the phenomenological observer \((p,\,\dot{\gamma}(0))\), (where \(E^{4}\) is identified with the observer's 4-velocity \(\dot{\gamma}(0)\)). If the map between \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}(p)\) and \(\mathbb{C}\,\mathbb{S}(p)\) is a pure Lorentz boost in a common direction, say \(E^{3}\), then the associated \(\operatorname{PSL}(2,\mathbb{C})\) transformation is provided by
\[\operatorname{PSL}(2,\mathbb{C})\times\widehat{\mathbb{C}}\, \widehat{\mathbb{S}}(p) \longrightarrow \mathbb{C}\,\mathbb{S}(p)\] \[(\zeta_{boost},\,\widehat{w}) \longmapsto \zeta(\widehat{w})\,=\,w\,=\,\sqrt{\frac{1\,+\,v}{1\,-\,v}}\, \widehat{w}\,, \tag{65}\]
where \(\sqrt{\frac{1\,+\,v}{1\,-\,v}}\) is the relativistic Doppler factor and \(w\) is the point in the Riemann sphere corresponding, under stereographic projection, to the direction \(n(\theta,\phi)\in\mathbb{C}\,\mathbb{S}(p)\). Similarly, if \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}(p)\) and \(\mathbb{C}\,\mathbb{S}(p)\) differ by a pure rotation through an angle \(\alpha\) about the \(E^{3}\) direction, then the associated \(\operatorname{PSL}(2,\mathbb{C})\) transformation is given by
\[\operatorname{PSL}(2,\mathbb{C})\times\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}(p) \longrightarrow \mathbb{C}\,\mathbb{S}(p) \tag{66}\] \[(\zeta_{rot},\,\widehat{w}) \longmapsto \zeta(\widehat{w})\,=\,w\,=\,e^{i\,\alpha}\,\widehat{w}\,. \tag{67}\]
By composing them, _e. g._ by considering a rotation through an angle \(\alpha\) about the \(E^{3}\) direction, followed by a boost with rapidity \(\beta\,:=\,\log\,\sqrt{\frac{1\,+\,v}{1\,-\,v}}\) along the \(E^{3}\) axis, we get
\[\operatorname{PSL}(2,\mathbb{C})\times\widehat{\mathbb{C}}\, \widehat{\mathbb{S}}(p) \longrightarrow \mathbb{C}\,\mathbb{S}(p)\] \[(\zeta,\,\widehat{w}) \longmapsto \zeta(\widehat{w})\,=\,w\,=\,\sqrt{\frac{1\,+\,v}{1\,-\,v}}\,e^{ i\,\alpha}\,\widehat{w}\,, \tag{68}\]
describing the general fractional linear transformation mapping \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}(p)\) and \(\mathbb{C}\,\mathbb{S}(p)\). From the physical point of view, this corresponds to the composition of the adjustment of the relative orientation of the spatial bases \(\{E_{(\alpha)}\}\) with respect to \(\{\widehat{E}_{(\alpha)}\},\,\alpha=1,2,3,\,\) followed by a Lorentz boost adjusting for the relative velocity of \((p,\dot{\gamma}(0))\) with respect to \((p,\widehat{\gamma}(0))\). Since the spatial directions \(n(\theta,\phi)\,\in\,\mathbb{C}\,\mathbb{S}(p)\) and \(\widehat{n}(\widehat{\theta},\widehat{\phi})\,\in\,\widehat{\mathbb{C}}\, \widehat{\mathbb{S}}(p)\) characterize corresponding past-directed null vectors \(\ell(\theta,\phi)\,\in\,\big{(}T_{p}M,\{E_{(i)}\}\big{)}\) and \(\widehat{\ell}(\widehat{\theta},\widehat{\phi})\,\in\,\Big{(}\widehat{T}_{p}M,\{\widehat{E}_{(i)}\}\big{)}\) (see (10) and (48)), we can associate with the spatial directions \(\{E_{(\alpha)}\}\) and \(\{\widehat{E}_{(\alpha)}\}\) the respective null directions
\[\ell_{(\alpha)} = E_{(\alpha)}\,-\,E_{(4)}\,=\,E_{(\alpha)}\,-\,\dot{\gamma}(0)\,,\] \[\widehat{\ell}_{(\alpha)} = \widehat{E}_{(\alpha)}\,-\,\widehat{E}_{(4)}\,=\,\widehat{E}_{( \alpha)}\,-\,\widehat{\gamma}(0)\,. \tag{69}\]
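Purely as an illustrative numerical sketch closing this kinematical digression (not part of the formal development), the stereographic dictionary (64) and the aberration maps (65)-(68) are straightforward to implement. The function names, the restriction to a rotation and a boost along \(E_{(3)}\), and the units \(c=1\) are choices of the example only.

```python
import numpy as np

def stereo(theta, phi):
    """Stereographic projection from the north pole, as in eq. (64)."""
    # Undefined at theta = 0 (the north pole is sent to infinity).
    return np.sin(theta) * np.exp(1j * phi) / (1.0 - np.cos(theta))

def stereo_inv(w):
    """Inverse stereographic projection back to celestial coordinates."""
    r2 = np.abs(w) ** 2
    theta = np.arccos((r2 - 1.0) / (r2 + 1.0))   # since |w|^2 = (1+cos t)/(1-cos t)
    phi = np.angle(w) % (2.0 * np.pi)
    return theta, phi

def aberrate(theta_hat, phi_hat, v, alpha=0.0):
    """Apply the PSL(2,C) element of eq. (68): rotation by alpha about E_(3),
    followed by a boost of speed v along E_(3)."""
    w_hat = stereo(theta_hat, phi_hat)
    w = np.sqrt((1.0 + v) / (1.0 - v)) * np.exp(1j * alpha) * w_hat
    return stereo_inv(w)

# Example: with this sign convention a direction at theta_hat = pi/2 is displaced
# towards the pole theta = 0 (the E_(3) axis) for v > 0.
theta, phi = aberrate(np.pi / 2, 0.0, v=1.0e-3)
```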
### The pre-homogeneity setting
From the above remarks, it follows that the Lorentz mapping from \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}(p)\) to \(\mathbb{C}\,\mathbb{S}(p)\) is fully determined if we specify the three distinct null directions on the FLRW celestial sphere \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}(p)\) that are the images, under the \(\operatorname{PSL}(2,\mathbb{C})\)-transformation, of three chosen distinct sources on \(\mathbb{C}\,\mathbb{S}(p)\). The selection of these three distinct sources of choice and of the corresponding null directions on \(\mathbb{C}\,\mathbb{S}(p)\) will depend on the scale \(L\) we are probing in our cosmological observations. This is a particularly delicate matter when looking at the pre-homogeneity scales \(L\,\lesssim\,100\,h^{-1}\,\text{Mpc}\), where astrophysical sources are characterized by a complex distribution of peculiar velocities with respect to the assumed Hubble flow. To keep track of this scale dependence, let us consider the celestial spheres \(\mathbb{C}\,\mathbb{S}_{r}(p)\,\) and \(\,\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{\widehat{r}}(p)\) defined by (16) and (47), respectively. For
\(L\,>\,0\), let \(\widehat{r}(L)\) be the value of \(\widehat{r}\) such that the FLRW sky section (53)
\[\widehat{\Sigma}(p,\hat{r}(L))\,:=\,\widehat{\exp}_{p}\left[\widehat{\mathbb{C} }\,\mathbb{S}_{\widehat{r}(L)}(p)\right]\,=\,\left\{\,\widehat{\exp}_{p}\left( \widehat{r}(L)\,\widehat{\ell}(\widehat{n}(\widehat{\theta},\widehat{\phi})) \right)\,\right|\,\,(\widehat{\theta},\widehat{\phi})\,\in\,\widehat{\mathbb{ C}}\,\mathbb{S}(p)\right\}\, \tag{70}\]
probes the length scale \(L\). Similarly, we let \(r(L)\) denote the value of \(r\) such that the physical sky section (71)
\[\Sigma(p,r(L))\,:=\,\exp_{p}\left[\mathbb{C}\,\mathbb{S}_{r(L)}(p)\right]\,=\, \left\{\,\exp_{p}\left(r(L)\,\ell(n(\theta,\phi))\right)\,\right|\,\,(\theta, \phi)\,\in\,\mathbb{C}\,\mathbb{S}(p)\right\}\,, \tag{71}\]
probes the length scale \(L\). Since the FLRW area distance (62),
\[\widehat{D}(\widehat{r})\,=\,\frac{a_{0}}{1\,+\,z}\,f\left(\widehat{r}\right)\,, \tag{72}\]
is isotropic and may be directly expressed in terms of \(z\), we may well use the redshift parameter \(z\) as the reference \(L\). Given \(z\), we denote by \(L(z)\) the corresponding length-scale of choice. As long as \(\widehat{D}(\widehat{r})\) is an increasing function, we can identify \(L(z)\) with the area distance \(\widehat{D}(\widehat{r})\), but in general, we leave the selection of the most appropriate \(L(z)\) to the nature of the cosmographical observations one wants to perform. Given \(\zeta\in\operatorname{PSL}(2,\mathbb{C})\) and a value of the redshift \(z\), we have a corresponding relation between the "radial" variables \(\widehat{r}(L(z))\) and \(r(L(z))\) in (70) and (71). We can take advantage of this relation to simplify the notation for the celestial spheres and the associated sky sections according to
\[\widehat{\mathbb{C}}\,\mathbb{S}_{z}(p)\,:=\,\widehat{\mathbb{C}}\,\mathbb{S }_{\widehat{r}(L(z))}(p)\,\Longrightarrow\,\widehat{\Sigma}_{z}\,:=\,\widehat {\Sigma}(p,\hat{r}(L(z)))\,:=\,\widehat{\exp}_{p}\left[\widehat{\mathbb{C}}\, \mathbb{S}_{z}(p)\right]\,, \tag{73}\]
and
\[\mathbb{C}\,\mathbb{S}_{z}(p)\,:=\,\mathbb{C}\,\mathbb{S}_{r(L(z))}(p)\, \Longrightarrow\,\Sigma_{z}\,:=\,\Sigma(p,r(L(z)))\,:=\,\exp_{p}\left[ \mathbb{C}\,\mathbb{S}_{z}(p)\right]\,, \tag{74}\]
a notation that, if not otherwise stated, we adopt henceforth. Since in the pre-homogeneity region \(L(z)\,\lesssim\,100\,h^{-1}\,\mathrm{Mpc}\), the large variance in peculiar velocities of the astrophysical sources implies a great variability in the selection of the three reference null directions that fix the \(\operatorname{PSL}(2,\mathbb{C})\) action, we localize this action according to the following construction.
* We assume that there is a finite collection of points \(\{y_{(I)}\}\,\in\,\widehat{\mathbb{C}}\,\mathbb{S}_{z}(p)\) and a corresponding collection of open disks \(\{\widehat{B}(y_{(I)},\delta)\}\) of radius \(\delta\), centered at the points \(\{y_{(I)}\}\), and defined by (75) \[\widehat{B}(y_{(I)},\delta)\,:=\,\{y^{\prime}\in\widehat{\mathbb{C}}\, \mathbb{S}_{z}(p)\,|\,d_{\mathbb{S}^{2}}(y^{\prime},y_{(I)})\,\leq\,\delta\} \subset\widehat{\mathbb{C}}\,\mathbb{S}_{z}(p)\] where \(d_{\mathbb{S}^{2}}(y^{\prime},y_{(I)})\) denotes the distance in the round unit metric on \(\mathbb{S}^{2}\). We also assume that any such \(\widehat{B}(y_{(I)},\delta)\) contains the images of three reference astrophysical sources of choice, call them \(A_{(I,\,k)},\;k=1,2,3,\), with celestial coordinates in \(\widehat{\mathbb{C}}\,\mathbb{S}_{z}(p)\) given by \(y_{(I,\,k)}\,=:\,\widehat{n}_{(I,\,k)}(\widehat{\theta},\widehat{\phi})\).
* We adopt a similar partition on the celestial sphere \(\mathbb{C}\,\mathbb{S}_{z}(p)\), to the effect that associated with each disk \(\widehat{B}(y_{(I)},\delta)\) there is, in \(\mathbb{C}\,\mathbb{S}_{z}(p)\), a corresponding metric disk (76) \[B(x(y_{(I)}),\,\delta)\,=\,\{x^{\prime}\in\mathbb{C}\,\mathbb{S}_{z}(p)\ |\,d_{\mathbb{S}^{2}}(x^{\prime},x(y_{(I)}))\leq\delta\}\,\subset\,\mathbb{C} \,\mathbb{S}_{z}(p)\,.\] We require that the images \(A_{(I,\,k)}\) of the three reference astrophysical sources of choice, that in \(\widehat{B}(y_{(I)},\delta)\) have celestial coordinates \(y_{(I,\,k)}\), are represented in \(B(x(y_{(I)}),\,\delta)\) by three distinct points with celestial coordinates \(x_{(I,\,k)}\,=:\,n_{(I,\,k)}(\theta,\phi)\).
* We further assume that the past null directions \(\widehat{\ell}_{(I,\,k)}\,=\,\widehat{n}_{(I,\,k)}(\widehat{\theta},\widehat{\phi})-\widehat{\dot{\gamma}}(0)\), associated with the location of the reference sources \(A_{(I,\,k)}\) in the portion of the celestial sphere \(\widehat{B}(y_{(I)},\delta)\,\cap\)
\(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{z}(p)\), are related to the corresponding null directions \(\ell_{(I,\,k)}\,=\,n_{(I,\,k)}(\theta,\phi)-\dot{\gamma}(0)\), locating the sources \(A_{(I,\,k)}\) in \(B(x(y_{(I)}),\delta)\cap\mathbb{C}\,\mathbb{S}_{z}(p)\), by the \(\mathrm{PSL}(2,\mathbb{C})\) map
\[\zeta_{(I)}\,:\,\widehat{B}(y_{(I)},\delta)\cap\widehat{\mathbb{C }}\,\widehat{\mathbb{S}}_{z}(p) \longrightarrow B(x(y_{(I)}),\delta)\cap\mathbb{C}\,\mathbb{S}_{z}(p)\] \[\widehat{w} \longmapsto \zeta_{I}(\widehat{w})\,=\,w\,=\,\sqrt{\frac{1\,+\,v}{1\,-\,v}} \,e^{i\,\alpha(A_{(I,\,k)})}\,\widehat{w}\,, \tag{77}\]
where \(\sqrt{\frac{1\,+\,v}{1\,-\,v}}\,e^{i\,\alpha(A_{(I,\,k)})}\) is the composition of the Lorentz boost (\(v\) being the relative \(3\)-velocity of \(\dot{\gamma}(0)\) with respect to \(\widehat{\dot{\gamma}}(0)\)) and of the spatial rotation that, at the given scale \(L(z)\), allow us to align the portion of the celestial sphere \(\mathbb{C}\,\mathbb{S}_{z}(p)\) described by \(B(x(y_{(I)}),\delta)\) with the portion of the FLRW celestial sphere \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{z}(p)\) described by \(\widehat{B}(y_{(I)},\delta)\).
* Finally, we require that the finite collections of _celestial coordinate bins_\(\{\widehat{B}(y_{(I)},\delta)\}\) and \(\big{\{}B(x(y_{(I)}),\delta)\big{\}}\) cover the respective celestial spheres \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{z}(p)\) and \(\mathbb{C}\,\mathbb{S}_{z}(p)\).
It is worthwhile to stress that the collections of bins \(\{\widehat{B}(y_{(I)},\delta)\}\) and \(\big\{B(x(y_{(I)}),\delta)\big\}\) can be chosen in many distinct ways, according to the cosmographic observations one wishes to carry out (we use disks for mathematical convenience). Whatever choice of the above type we make, we can extend the localized \(\mathrm{PSL}(2,\mathbb{C})\) maps (77) by using a smooth partition of unity \(\big\{\chi_{(I)}\big\}\) subordinated to the finite covering \(\{\widehat{B}(y_{(I)},\delta)\}\) of \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{z}(p)\), _i.e._ a collection of smooth functions \(\chi_{(I)}\,:\,\widehat{B}(y_{(I)},\delta)\longrightarrow\,[0,1]\) whose support is such that \(\mathrm{supp}\,\chi_{(I)}\subseteq\,\widehat{B}(y_{(I)},\delta)\) and such that \(\sum_{I}\,\chi_{(I)}(y)\,=\,1\) for every \(y\,\in\,\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{z}(p)\). We define the _localized \(\mathrm{PSL}(2,\mathbb{C})\) map_ connecting, at scale \(L(z)\), the celestial spheres \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{z}(p)\) and \(\mathbb{C}\,\mathbb{S}_{z}(p)\), decorated with the respective coordinate bins \(\{\widehat{B}(y_{(I)},\delta)\}\) and \(\{B(x(y_{(I)}),\delta)\}\), according to
\[\zeta_{(z)}\,:\,\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{z}(p) \longrightarrow \mathbb{C}\,\mathbb{S}_{z}(p)\] \[\widehat{w} \longmapsto \zeta_{(z)}(\widehat{w})\,:=\,\sum_{I}\,\chi_{(I)}(y)\,\zeta_{(I)}(w)\,, \tag{78}\]
where \(\zeta_{(I)}(w)\) is provided by (77). Note that, when necessary, this localized \(\mathrm{PSL}(2,\mathbb{C})\) map can be further generalized by completing it in the Sobolev space of maps which together with their derivatives are square-summable over \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{z}(p)\). This completion requires some care which we do not enter here (see [12] for details), and it is needed when discussing the distance between the FLRW and the cosmographic lightcones.
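One concrete choice of such a partition of unity, recorded here only for definiteness (any other smooth choice subordinated to the covering works equally well), is the normalized bump construction
\[\chi_{(I)}(y)\,=\,\frac{\varphi_{(I)}(y)}{\sum_{J}\,\varphi_{(J)}(y)}\,,\qquad\varphi_{(I)}(y)\,:=\,\begin{cases}\exp\left(-\,\dfrac{\delta^{2}}{\delta^{2}\,-\,d^{2}_{\mathbb{S}^{2}}(y,y_{(I)})}\right)\,,&d_{\mathbb{S}^{2}}(y,y_{(I)})\,<\,\delta\,,\\[2mm] 0\,,&\text{otherwise}\,,\end{cases}\]
which is smooth, satisfies \(\mathrm{supp}\,\chi_{(I)}\subseteq\widehat{B}(y_{(I)},\delta)\), and sums to one wherever the open disks \(\{d_{\mathbb{S}^{2}}(y,y_{(I)})<\delta\}\) already cover \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{z}(p)\) (which can always be arranged by slightly enlarging \(\delta\)).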
It is worthwhile to stress that in the pre-homogeneity region \(L(z)\,\lesssim\,100\,h^{-1}\,\mathrm{Mpc}\), the large variance in peculiar velocities of the astrophysical sources implies a great variability in the selection of the three reference null directions that fix the local \(\mathrm{PSL}(2,\mathbb{C})\) action characterizing the map \(\zeta_{(z)}\). This implies that \(\zeta_{(z)}\) may vary considerably with \(L(z)\). Recall that the role of the celestial spheres \(\mathbb{C}\,\mathbb{S}_{z}(p)\) and \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{z}(p)\) is simply that of representing past null directions at the observational event \(p\,\in\,M\), directions that respectively point to the astrophysical sources on the sky section \(\Sigma_{z}\), as seen by \((p,\dot{\gamma}(0))\), and on \(\widehat{\Sigma}_{z}\), as seen according to \((p,\widehat{\dot{\gamma}}(0))\). These data are transferred from these sky sections to the respective celestial spheres through null geodesics, thus we can associate with the localized \(\mathrm{PSL}(2,\mathbb{C})\) action the map between the sky sections \(\widehat{\Sigma}_{z}\) and \(\Sigma_{z}\) given by
\[\psi_{(z)}\,:\,\widehat{\Sigma}_{z} \longrightarrow \Sigma_{z}\] \[q \longmapsto \psi_{(z)}(q)\,:=\,\exp_{p}\circ\zeta_{(z)}\,\circ\,\widehat{ \exp}_{p}^{-1}(q)\,, \tag{79}\]
for any point \(q\,\in\,\widehat{\Sigma}_{z}\).
### The comparison between the screen planes \(T_{\widehat{n}}\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{z}(p)\) and \(T_{n}\mathbb{C}\,\mathbb{S}_{z}(p)\)
The localized \(\mathrm{PSL}(2,\mathbb{C})\) map \(\zeta_{(z)}\) induces a corresponding map between the _screen plane_\(T_{\widehat{n}}\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}(p)_{z}\) associated with the direction of sight \(\widehat{n}(\widehat{\theta},\widehat{\phi})\) in the FLRW celestial sphere \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{z}(p)\) (see (49)), and the _screen plane_\(T_{n}\mathbb{C}\,\mathbb{S}_{z}(p)\) associated with the direction of sight \(n(\theta,\phi)\,=\,\zeta_{(z)}\left(\widehat{n}(\widehat{\theta},\widehat{ \phi})\right)\) in the celestial sphere \(\mathbb{C}\,\mathbb{S}_{z}(p)\) (see (19)). The geometry of this correspondence is quite sophisticated since it is strictly related to harmonic map theory and it will be described here in some detail. To begin with, we denote by \(T\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{z}\) and by \(T\mathbb{C}\,\mathbb{S}_{z}\) the _screen bundles_ associated with the screen planes on \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{z}(p)\) and \(\mathbb{C}\,\mathbb{S}_{z}(p)\), respectively. These are just two copies of the usual tangent bundle \(T\widehat{\mathbb{S}}^{2}\) of the 2-sphere. If there is no danger of confusion, we use both notations in what follows. Under such notational assumptions, we can associate with the map (78),
\[\zeta_{(z)}\,:\,\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{z}(p)\,\longrightarrow \,\mathbb{C}\,\mathbb{S}_{z}(p)\,, \tag{80}\]
the pull-back bundle \(\zeta_{(z)}^{-1}\,T\mathbb{C}\,\mathbb{S}_{z}\) whose sections \(\,\mathrm{v}\,\equiv\zeta_{(z)}^{-1}V:=V\circ\zeta_{(z)},\ V\in C^{\infty}( \mathbb{C}\,\mathbb{S}_{z}(p),T\mathbb{C}\,\mathbb{S}_{z})\), are the vector fields over \(\mathbb{C}\,\mathbb{S}_{z}(p)\) covering the map \(\zeta_{(z)}\). In physical terms, the vectors \(v\) are the tangent vector on the celestial sphere \(\mathbb{C}\,\mathbb{S}_{z}(p)\) that describe the (active) effect of the combination of rotation and Lorentz boost induced by \(\zeta_{(z)}\) on the null direction \(\widehat{\ell}(\widehat{n})\). More expressively, let us remark that for a given direction of sight \(\zeta_{(z)}(\widehat{n})\,=\,n(\theta,\phi)\,\in\,\mathbb{C}\,\mathbb{S}_{z}(p)\), the vectors \(V\in T_{n}\mathbb{C}\,\mathbb{S}_{z}(p)\) can be used to describe the geometrical characteristics of the astrophysical images on the screen \(T_{n}\mathbb{C}\,\mathbb{S}_{z}(p)\), for instance, the apparent diameters of the source. Thus, the vectors \(\mathrm{v}\,\equiv\zeta_{(z)}^{-1}V:=V\circ\zeta_{(z)}\), sections of the pull-back bundle \(\zeta_{(z)}^{-1}\,T\mathbb{C}\,\mathbb{S}_{z}\), can be interpreted as transferring the "images" of the screens in \(T\mathbb{C}\,\mathbb{S}_{z}\) back to \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{z}(p)\) so as to be able to compare them with the reference screen-shots in \(T\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{z}\). In terms of the local coordinates \(y^{a}\,:=\,\left(\widehat{\theta},\widehat{\phi}\right)\), \(a=1,2,\ \text{on}\ \widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{z}(p)\) (see (52))11, we can write the section \(\mathrm{v}\,\equiv\zeta_{(z)}^{-1}V:=V\circ\zeta_{(z)}\) as12
Footnote 11: In what follows the \((\widehat{\theta},\widehat{\phi})\), corresponding to \((y^{2},y^{2})\) in the normal coordinates string \(\{y^{\alpha}\}\), are relabelled as \(\{y^{a}\}\), with \(a=1,2\); a similar relabeling is also adopted for the normal coordinates \((\theta,\phi)\) on \(\mathbb{C}\,\mathbb{S}_{z}(p)\).
Footnote 12: In what follows we freely refer to the excellent [22], [32], and [36] for a detailed analysis of the geometry of the computations involved in harmonic map theory.
\[\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{z}(p)\,\ni\,y^{a}\,\longmapsto\,\mathrm{v}(y^{a})\,=\,\mathrm{v}^{b}(y)\,\frac{\partial}{\partial\zeta_{(z)}^{b}(y)}\,\in\,\zeta_{(z)}^{-1}T\mathbb{C}\,\mathbb{S}_{z}\Big{|}_{y}\, \tag{81}\]
where \(\zeta_{(z)}^{b}(y),\ b\,=1,2\), are the coordinates of the point (direction of sight) in \(\zeta_{(z)}(y)\,\in\mathbb{C}\,\mathbb{S}_{z}(p)\) given, in terms of the \(y^{a}\) by (64). In particular, if \(T^{*}\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{z}\) denotes the cotangent bundle to \(\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{z}(p)\), we can locally introduce the differential
\[d\zeta_{(z)}\,=\,\frac{\partial\zeta_{(z)}^{b}}{\partial y^{a}}dy^{a}\otimes \frac{\partial}{\partial\zeta_{(z)}^{b}}\,, \tag{82}\]
and interpret it as a section of the product bundle \(T^{*}\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{z}\otimes\zeta_{(z)}^{-1}\,T\mathbb{C}\,\mathbb{S}_{z}\). To set up the comparison of the geometrical information gathered from the astrophysical data, let us recall that on the screens \(T\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{z}\) and \(T\mathbb{C}\,\mathbb{S}_{z}\) we have the inner products respectively defined by the pull-back metrics (55) and (27), _i.e._
\[\widehat{h}(\widehat{r}(L(z)),\widehat{\theta},\widehat{\phi})\,:=\,\left(\widehat{\exp}_{p}^{*}\,\widehat{g}|_{\widehat{\Sigma}_{z}}\right)_{ab}dy^{a}dy^{b}\Big{|}_{\widehat{r}(L(z))}\,,\ \ a,b\,=\,1,2,\ \ \ \ y^{1}:=\widehat{\theta},\,y^{2}:=\widehat{\phi}\,. \tag{83}\]
and
\[h(r(L(z)),\theta,\phi)\,:=\,\left(\exp_{p}^{*}\,g|_{\Sigma_{z}}\right)_{ab}\,dx^{ a}dx^{b}\Big{|}_{r(L(z))}\,,\ \ a,b\,=\,1,2,\ \ \ \ x^{1}:=\theta,\,x^{2}:=\phi\,. \tag{84}\]
The Riemannian metric in the pull-back screen \(\left(\zeta_{(z)}^{-1}\,T\mathbb{C}\,\mathbb{S}_{z}\right)_{y}\) over \(y\in\widehat{\mathbb{C}}\,\mathbb{S}_{z}(p)\) is provided by \(h(\zeta_{(z)}(y))\), hence the tensor bundle \(T^{*}\widehat{\mathbb{C}}\,\mathbb{S}_{z}\otimes\zeta_{(z)}^{-1}\,T\mathbb{C} \,\mathbb{S}_{z}\) over the celestial sphere \(\widehat{\mathbb{C}}\,\mathbb{S}_{z}(p)\) is endowed with the pointwise inner product
\[\langle\cdot,\cdot\rangle_{T^{*}\widehat{\mathbb{C}}\,\widehat{\mathbb{S}}_{z}\otimes\zeta_{(z)}^{-1}\,T\mathbb{C}\,\mathbb{S}_{z}}\,:=\,\widehat{h}^{-1}(y)\otimes h(\zeta_{(z)}(y))(\cdot,\cdot)\, \tag{85}\]
where \(\widehat{h}^{-1}(y)\,:=\,\widehat{h}^{ab}(y)\,\partial_{a}\otimes\partial_{b}\) is the metric tensor in \(T^{*}_{y}\widehat{\mathbb{C}}\,\mathbb{S}_{z}\). The corresponding Levi-Civita connection will be denoted by \(\nabla^{\langle\cdot\rangle}\). Explicitly, if \(W\,=\,W^{b}_{a}\,dy^{a}\otimes\frac{\partial}{\partial\zeta_{(z)}^{b}}\) is a section of \(T^{*}\widehat{\mathbb{C}}\,\mathbb{S}_{z}\otimes\zeta_{(z)}^{-1}\,T\mathbb{C} \,\mathbb{S}_{z}\), the covariant derivative of \(W\) in the direction \(\frac{\partial}{\partial y^{b}}\) is provided by
\[\nabla^{\langle\cdot\rangle}_{b}\,W\,=\,\nabla^{\langle\cdot \rangle}_{b}\,\left(W^{c}_{a}\,dy^{a}\otimes\frac{\partial}{\partial\zeta_{(z) }^{c}}\right)\] \[\,=\,\frac{\partial}{\partial y^{b}}\,W^{c}_{a}\,dy^{a}\otimes \frac{\partial}{\partial\zeta_{(z)}^{c}}+W^{c}_{a}\left(\widehat{\nabla}_{b} \,dy^{a}\right)\otimes\frac{\partial}{\partial\zeta_{(z)}^{c}}\] \[\,+\,W^{c}_{a}\,dy^{a}\otimes\,\nabla^{*}_{b}\left(\frac{ \partial}{\partial\zeta_{(z)}^{c}}\right)\, \tag{86}\]
where \(\widehat{\nabla}\) denotes the Levi-Civita connection on \((\widehat{\mathbb{C}}\,\mathbb{S}_{z}(p),\widehat{h})\), and \(\nabla^{*}\) is the pull back on \(\zeta_{(z)}^{-1}\,T\mathbb{C}\,\mathbb{S}_{z}\) of the Levi-Civita connection of \((\mathbb{C}\,\mathbb{S}_{z},h)\). If \(\widehat{\Gamma}^{a}_{bc}(\widehat{h})\) and \(\Gamma^{a}_{bc}(h)\) respectively denote the Christoffel symbols of \((\widehat{\mathbb{C}}\,\mathbb{S}_{z}(p),\widehat{h})\) and \((\mathbb{C}\,\mathbb{S}_{z}(p),h)\), then \(\widehat{\nabla}_{b}\,dy^{a}\,=\,-\,\widehat{\Gamma}^{a}_{bc}(\widehat{h})\, dy^{c}\) and \(\nabla^{*}_{b}\left(\frac{\partial}{\partial\zeta_{(z)}^{c}}\right)\,=\,\frac{ \partial\zeta_{(z)}^{i}}{\partial y^{b}}\,\Gamma^{k}_{ci}(h)\,\frac{\partial}{ \partial\zeta_{(z)}^{k}}\), and one computes
\[\nabla^{\langle\cdot\rangle}_{b}\,W\,=\,\left(\frac{\partial}{\partial y^{b} }\,W^{i}_{a}\,-\,W^{i}_{c}\widehat{\Gamma}^{c}_{ba}(\widehat{h})\,+\,W^{k}_{a }\frac{\partial\zeta_{(z)}^{j}}{\partial y^{b}}\,\Gamma^{i}_{kj}(h)\right)\, dy^{a}\otimes\frac{\partial}{\partial\zeta_{(z)}^{i}}. \tag{87}\]
These remarks on the geometry of the map (80) allow us to compare the data on the screens \(T\mathbb{C}\,\mathbb{S}_{z}\) and \(T\widehat{\mathbb{C}}\,\mathbb{S}_{z}\). For this purpose, the relevant quantity is the norm, evaluated with respect to the inner product (85), of the differential (82) of the \(\mathrm{PSL}(2,\mathbb{C})\) map \(\zeta_{(z)}\). Direct computation provides
\[e(\widehat{h},\zeta_{(z)};h) := \langle d\zeta_{(z)},d\zeta_{(z)}\rangle_{T^{*}\widehat{[\mathbb{C }\,\mathbb{S}]_{z}\otimes\zeta_{(z)}^{-1}}\,T\mathbb{C}\,\mathbb{S}_{z}}\] \[= \widehat{h}^{ab}(y)\,\frac{\partial\zeta_{(z)}^{i}(y)}{\partial y ^{a}}\frac{\partial\zeta_{(z)}^{j}(y)}{\partial y^{b}}\,h_{ij}(\zeta_{(z)}(y)) \,=\,tr_{\widehat{h}(y)}\,(\zeta_{(z)}^{*}\,h)\,, \tag{88}\]
where \(tr_{\widehat{h}(y)}\,(\zeta_{(z)}^{*}\,h)\) denotes the trace, with respect to the metric \(\widehat{h}\), of the pull-back metric \(\zeta_{(z)}^{*}\,h\). In other words, at any point \(y\), \(e(\widehat{h},\zeta_{(z)};h)(y)\) is the sum of the eigenvalues of the metric \(\zeta_{(z)}^{*}\,h\), thus providing the sum of the squares of the length stretching generated by (the pull-back of) the physical metric \(\zeta_{(z)}^{*}\,h\) along the orthogonal directions \((\widehat{\theta},\widehat{\phi})\). To such stretching, we can associate the _tension field_ of the map \(\zeta_{(z)}\), defined by
\[\tau^{i}(\zeta_{(z)})\,:=\,\Delta_{(\widehat{h})}\,\zeta_{(z)}^{i}\,+\, \widehat{h}^{kj}\,\Gamma^{i}_{ab}(h)\,\frac{\partial\zeta_{(z)}^{a}}{\partial y ^{k}}\frac{\partial\zeta_{(z)}^{b}}{\partial y^{j}}\,. \tag{89}\]
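To make the geometric content of (88) and (89) concrete, it may help to record an elementary special case (standard in harmonic map theory, and added here only as an illustration). If, at a point \(y\), the pull-back metric is diagonalized in an \(\widehat{h}\)-orthonormal frame as \(\left(\zeta_{(z)}^{*}\,h\right)_{ab}\,=\,\mathrm{diag}(\lambda_{1}^{2},\lambda_{2}^{2})\), then

\[e(\widehat{h},\zeta_{(z)};h)\,=\,\lambda_{1}^{2}\,+\,\lambda_{2}^{2}\,,\]

the sum of the squared principal stretches of the map. In particular, if \(\zeta_{(z)}\) acts as an isometry, then \(\lambda_{1}=\lambda_{2}=1\), so \(e=2\) and the tension field \(\tau^{i}(\zeta_{(z)})\) vanishes, since isometries are harmonic.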
To provide some intuition on these geometrical quantities, we can adapt to our case a nice heuristic remark by J. Eells and L. Lemaire described in their classical paper on harmonic map theory [22]. Let us imagine the FLRW celestial sphere \((\widehat{\mathbb{C}}\,\mathbb{S}_{z}(p),\widehat{h})\) as a rubber balloon, decorated with dots
representing the astrophysical sources recorded from the sky section \(\widehat{\Sigma}_{z}\). This balloon has the geometry described by the round metric \(\widehat{h}(z,\widehat{\theta},\widehat{\phi})\) defined by (83), explicitly (see (60))
\[\widehat{h}(\widehat{r}(z),\widehat{\theta},\widehat{\phi})\,=\,\frac{a_{0}^{2 }}{(1\,+\,z_{L})^{2}}\,f^{2}\left(\widehat{r}(z)\right)\left(d\widehat{\theta} ^{2}\,+\,\sin^{2}\widehat{\theta}d\widehat{\phi}^{2}\right)\;, \tag{90}\]
where \(z_{L}\) is the redshift associated with the length scale \(L\). Conversely, let us imagine the physical celestial sphere \((\mathbb{C}\,\mathbb{S}_{z}(p),\,h)\) as a rigid surface with the geometry induced by the metric \(h(r(z),\theta,\phi)\) defined by (84), _i.e._, (see (30)),
\[h\left(r(z),\theta,\phi\right)\] \[=\,D^{2}(r(z),\theta,\phi)\left(d\theta^{2}\,+\,\sin^{2}\theta d \phi^{2}\,+\,\mathcal{L}_{ab}(r(z),\theta,\phi)\,dx^{a}dx^{b}\right)\,,\;\;\;x ^{1}:=\theta,\,x^{2}:=\phi\,, \tag{91}\]
providing the geometric landscape of the astrophysical sources reaching us along null geodesics from the physical sky section \(\Sigma_{z}\). We can think of the \(PSL(2,\mathbb{C})\) map \(\zeta_{(z)}\) as stretching the elastic surface \((\widehat{\mathbb{C}\,\mathbb{S}_{z}}(p),\widehat{h})\) on the rigid surface \((\mathbb{C}\,\mathbb{S}_{z}(p),\,h)\). The purpose of this stretching is to overlap the images of the astrophysical sources recorded on \((\widehat{\mathbb{C}\,\mathbb{S}_{z}}(p),\widehat{h})\) with the images of the corresponding sources as registered on \((\mathbb{C}\,\mathbb{S}_{z}(p),h)\). In general, this overlap is not successful without stretching the surface, and to any point \(y\,\in\,(\widehat{\mathbb{C}\,\mathbb{S}_{z}}(p),\widehat{h})\) we can associate a corresponding vector measuring the stretch necessary for connecting the images of the same source on the two celestial spheres13\((\widehat{\mathbb{C}\,\mathbb{S}_{z}}(p),\widehat{h})\) and \((\mathbb{C}\,\mathbb{S}_{z}(p),h)\). To leading order, the required stretching is provided by the tension vector \(\tau^{i}(\zeta_{(z)},y)\) at \(y\). Both the Hilbert-Schmidt norm (88) and the tension vector field (89) of the map \(\zeta_{(z)}\) are basic quantities in harmonic map theory, and to understand the strategy we will follow in comparing, at a given length scale \(L\), the FLRW past light cone \(\widehat{\mathcal{C}}(p,\widehat{g})\) with the physical observational past light cone \(\mathcal{C}(p,g)\) we need to look into the harmonic map theory associated with \(\zeta_{(z)}\). Let us start by associating with \(\langle d\zeta_{(z)},d\zeta_{(z)}\rangle_{T^{*}[\mathbb{C}\,\mathbb{S}]_{z} \otimes\zeta_{(z)}^{-1}T\mathbb{C}\,\mathbb{S}_{z}}\) the density
Footnote 13: This is not to be confused with the phenomenon of strong gravitational lensing that occurs in a given celestial sphere. It is simply a mismatch due to the comparison between the description of the same astrophysical source on two distinct celestial spheres.
\[e(\widehat{h},\zeta_{(z)},h)\,\,d\mu_{\widehat{h}}\,:=\,\langle d\zeta_{(z)},d\zeta_{(z)}\rangle_{T^{*}[\mathbb{C}\,\mathbb{S}]_{z}\otimes\zeta_{(z)}^{-1 }T\mathbb{C}\,\mathbb{S}_{z}}\,d\mu_{\widehat{h}}\,=\,tr_{\widehat{h}(y)}\,( \zeta_{(z)}^{*}\,h)\,d\mu_{\widehat{h}}\;, \tag{92}\]
where \(d\mu_{\widehat{h}}\) is the volume element defined by the metric \(\widehat{h}\) on the FLRW celestial sphere \(\widehat{\mathbb{C}\,\mathbb{S}_{z}}(p)\). An important property of the density \(e(\widehat{h},\zeta_{(z)};h)\,\,d\mu_{\widehat{h}}\) is that it is invariant under the two-dimensional conformal transformations
\[(\widehat{\mathbb{C}\,\mathbb{S}_{z}}(p),\,\widehat{h}_{ab})\,\longmapsto\, (\widehat{\mathbb{C}\,\mathbb{S}_{z}}(p),\,e^{-f}\,\widehat{h}_{ab})\;, \tag{93}\]
where \(f\) is a smooth function on \(\widehat{\mathbb{C}\,\mathbb{S}_{z}}(p)\). In this connection, it is worthwhile to recall that conformal invariance is strictly related to the action of the Lorentz group on the celestial spheres (and it is ultimately the rationale for the relation between Lorentz transformations and the fractional linear transformations of \(\mathrm{PSL}(2,\mathbb{C})\)).
The expression \(\frac{1}{2}\,e(\widehat{h},\zeta_{(z)};h)\,\,d\mu_{\widehat{h}}\) characterizes the harmonic map energy functional associated to the map \(\zeta_{(z)}\), _viz._
\[E[\widehat{h},\zeta_{(z)},\,h]\,:=\,\frac{1}{2}\,\int_{\widehat{\mathbb{C} \,\mathbb{S}_{z}}}\,e(\widehat{h},\zeta_{(z)},h)\,\,d\mu_{\widehat{h}}\;. \tag{94}\]
It is worthwhile to put forward a more explicit characterization of the nature of the harmonic map functional \(E[\widehat{h},\zeta_{(z)};\,h]\) by making explicit, together with the celestial spheres \(\widehat{\mathbb{C}\,\mathbb{S}_{z}}(p)\) and \(\mathbb{C}\,\mathbb{S}_{z}(p)\)
the role of the corresponding sky sections \(\widehat{\Sigma}_{z}\) and \(\Sigma_{z}\). To this end, let us consider the map (79) acting between the sky sections \(\widehat{\Sigma}_{z}\) and \(\Sigma_{z}\),
\[\psi_{(z)}\,:\,(\widehat{\Sigma}_{z},\,\widehat{g}|_{\widehat{\Sigma}_{z}}) \longrightarrow (\Sigma_{z},\,\,g|_{\Sigma_{z}})\] \[y \longmapsto \psi_{(z)}(y)\,:=\,\exp_{p}\circ\zeta_{(z)}\,\circ\,\widehat{\exp}_{p}^{-1}(y)\,. \tag{95}\]
The corresponding harmonic map functional is provided by
\[E\left[\widehat{g}_{(z)},\,\psi_{(z)},\,g_{(z)}\right]\,:=\,\frac{1}{2}\,\int _{\widehat{\Sigma}_{z}}\,(\widehat{g}_{(z)})^{ab}\,\frac{\partial\psi_{(z)}^{ i}(y)}{\partial y^{a}}\frac{\partial\psi_{(z)}^{k}(y)}{\partial y^{b}}\,(g_{(z)})_{ik} \,d\mu_{\widehat{g}_{(z)}} \tag{96}\]
where, for notational ease, we have set \(\widehat{g}_{(z)}\,:=\,\widehat{g}|_{\widehat{\Sigma}_{z}}\) and \(g_{(z)}\,:=\,g|_{\Sigma_{z}}\). We can equivalently write \(E\left[\widehat{g}_{(z)},\,\psi_{(z)},\,g_{(z)}\right]\) in terms of pull-backs of the relevant maps, and get the following chain of relations
\[E\left[\widehat{g}_{(z)},\,\psi_{(z)},\,g_{(z)}\right] = \frac{1}{2}\,\int_{\widehat{\Sigma}_{z}}\,(\widehat{g}_{(z)})^{ ab}\,\left(\psi_{(z)}^{*}g_{(z)}\right)_{ab}\,d\mu_{\widehat{g}_{(z)}}\] \[= \frac{1}{2}\,\int_{\widehat{\exp}_{p}(\widehat{\mathbb{C}}\widehat {\mathbb{S}}_{z})}\,(\widehat{g}_{(z)})^{ab}\,\left(\psi_{(z)}^{*}g_{(z)} \right)_{ab}\,d\mu_{\widehat{g}_{(z)}}\] \[= \frac{1}{2}\,\int_{\widehat{\mathbb{C}}\widehat{\mathbb{S}}_{z}} \,\widehat{\exp}_{p}\,^{*}\left[(\widehat{g}_{(z)})^{ab}\,\left(\psi_{(z)}^{* }g_{(z)}\right)_{ab}\right]\,\widehat{\exp}_{p}^{*}(d\mu_{\widehat{g}_{(z)}})\] \[= \frac{1}{2}\,\int_{\widehat{\mathbb{C}}\widehat{\mathbb{S}}_{z}} \,\widehat{h}^{ab}\,\left(\widehat{\exp}_{p}\,^{*}\,\left(\psi_{(z)}^{*}g_{(z) }\right)\right)_{ab}\,d\mu_{\widehat{h}}\] \[= \frac{1}{2}\,\int_{\widehat{\mathbb{C}}\widehat{\mathbb{S}}_{z}} \,\widehat{h}^{ab}\,\left(\widehat{\exp}_{p}\,^{*}\,\left(\exp_{p}\circ\zeta_ {(z)}\,\circ\,\widehat{\exp}_{p}^{-1}\right)^{*}g_{(z)}\right)_{ab}\,d\mu_{ \widehat{h}}\] \[= \frac{1}{2}\,\int_{\widehat{\mathbb{C}}\widehat{\mathbb{S}}_{z}} \,\widehat{h}^{ab}\,\left(\zeta_{(z)}^{*}h\right)_{ab}\,d\mu_{\widehat{h}}\,= \,E[\widehat{h},\,\zeta_{(z)},\,h]\,,\]
from which it follows that the harmonic map energy functional associated with the localized \(\mathrm{PSL}(2,\mathbb{C})\) map \(\zeta_{(z)}\) and with the map \(\psi_{(z)}\), defined by (95), can be identified. This is not surprising since \(\psi_{(z)}\,:=\,\exp_{p}\circ\zeta_{(z)}\,\circ\,\widehat{\exp}_{p}^{-1}\) can be seen as the representation of \(\zeta_{(z)}\) on the sky sections \(\widehat{\Sigma}_{z}:=\widehat{\exp}_{p}\left(\widehat{\mathbb{C}}\widehat{ \mathbb{S}}_{z}(p)\right)\) and \(\Sigma_{z}:=\exp_{p}\left(\mathbb{C}\,\mathbb{S}_{z}(p)\right)\). From the conformal nature of the map \(\zeta_{(z)}:\widehat{\mathbb{C}}\widehat{\mathbb{S}}_{z}(p)\longrightarrow \mathbb{C}\,\mathbb{S}_{z}(p)\), it follows that \(\psi_{(z)}\) acts as a conformal diffeomorphism between \(\hat{\Sigma}_{z}\) and \(\Sigma_{z}\) as long as the exponential maps are diffeomorphisms from \(\widehat{\mathbb{C}}\widehat{\mathbb{S}}_{z}(p)\) and \(\mathbb{C}\,\mathbb{S}_{z}(p)\) onto their respective images \(\widehat{\Sigma}_{z}\) and \(\Sigma_{z}\). Later we shall see how this result can be extended, under suitable hypotheses, to the less regular case of Lipschitzian exponential map. Here, we restrict our attention to the stated regularity assumptions on the exponential maps \(\widehat{\exp}_{p}\) and \(\exp_{p}\). They imply that the sky sections \(\hat{\Sigma}_{z}\) and \(\Sigma_{z}\) have the topology of a 2-sphere. Moreover, we can take advantage of the fact that \((\widehat{\Sigma}_{z},\,\widehat{g}_{(z)})\) is a (rescaled) round sphere, thus we can apply the Poincare-Koebe uniformization theorem, to the effect that there is a positive scalar function \(\Phi_{\hat{\Sigma}\,\Sigma}\,\in\,C^{\infty}(\widehat{\Sigma}_{z},\mathbb{R}_{> 0})\) such that
\[\left(\psi_{(z)}^{*}g_{(z)}\right)_{ab}\,=\,\frac{\partial\psi_{(z)}^{i}(y)}{ \partial y^{a}}\frac{\partial\psi_{(z)}^{k}(y)}{\partial y^{b}}\,(g_{(z)})_{ ik}\,=\,\Phi_{\widehat{\Sigma}\,\Sigma}^{2}\,(\widehat{g}_{(z)})_{ab}\,. \tag{98}\]
The required conformal factor \(\Phi_{\widehat{\Sigma}\,\Sigma}\,\in\,C^{\infty}(\widehat{\Sigma}_{z},\mathbb{R} _{>0})\) is the solution, (unique up to the \(\mathrm{PSL}(2,\mathbb{C})\) action on \((\widehat{\Sigma}_{z},\,\widehat{g}_{(z)})\)), of the elliptic partial differential equation on \((\widehat{\Sigma}_{z},\widehat{g}_{(z)})\) defined by [2]
\[-\,\Delta_{\widehat{g}_{(z)}}\,\ln(\Phi_{\widehat{\Sigma}\Sigma}^{2})\,+\,R( \widehat{g}_{(z)})\,=\,R(\psi_{(z)}^{*}g_{(z)})\,\Phi_{\widehat{\Sigma}\Sigma} ^{2}\,, \tag{99}\]
where \(\Delta_{\widehat{g}_{(z)}}\,:=\,\widehat{g}_{(z)}^{ab}\nabla_{a}\nabla_{b}\) is the Laplace-Beltrami operator on \((\widehat{\Sigma}_{z},\hat{g}_{(z)})\), and where we respectively denoted by \(R(\widehat{g}_{(z)})\) and \(R(\psi_{(z)}^{*}g_{(z)})\) the scalar curvature of the metrics \(\widehat{g}_{(z)}\) and \(\psi_{(z)}^{*}g_{(z)}\). Notice that
the scalar curvature \(R(\hat{g}_{(z)})\) is associated with the metric (60) evaluated for \(\widehat{r}\,=\,\widehat{r}(L)\) and hence is given by the constant \(R(\hat{g}_{(z)})\,=\,\left[\frac{a_{0}^{2}}{(1\,+\,z)^{2}}\,f^{2}\left(\widehat{ r}\right)\right]^{-1}\). Similarly, \(R(g_{(z)})\) is associated with the metric (30) evaluated for \(r\,=\,r(z)\), and as such it depends on the area distance \(D^{2}(r(z),\theta,\phi)\) and the lensing distortion \(\mathcal{L}_{ab}\).
By tracing (98) with respect to \(\widehat{g}_{(z)}^{ab}\), we get \(tr_{\widehat{g}_{(z)}(y)}\,\left(\psi_{(z)}^{*}g_{(z)}\right)\,=\,2\Phi_{ \widehat{\Sigma}\,\Sigma}^{2}\), and we can write
\[\Phi_{\widehat{\Sigma}\Sigma}^{2}\,=\,\frac{1}{2}\,tr_{\widehat{g}_{(z)}(y)} \,\left(\psi_{(z)}^{*}g_{(z)}\right)\,=\,\frac{1}{2}\,\widehat{g}_{(z)}^{ab} \,\frac{\partial\psi_{(z)}^{i}(y)}{\partial y^{a}}\frac{\partial\psi_{(z)}^{k }(y)}{\partial y^{b}}\,(g_{(z)})_{ik}\;. \tag{100}\]
From (98) we also get \(\det\left(\psi_{(z)}^{*}g_{(z)}\right)\,=\,\Phi_{\widehat{\Sigma}\Sigma}^{4} \,\det(\widehat{g}_{(z)})\), hence we can equivalently express the conformal factor \(\Phi_{\widehat{\Sigma}\Sigma}^{2}\) as the Radon-Nikodym derivative of the Riemannian measure \(d\mu_{\psi^{*}g_{(z)}}:=\psi_{(z)}^{*}d\mu\) of the pulled back metric \(\psi_{(z)}^{*}g_{(z)}\) on the sky section \(\widehat{\Sigma}_{z}\), with respect to the Riemannian measure \(d\mu_{\widehat{g}_{(z)}}\) of the round metric \(\widehat{g}_{(z)}\) on \(\widehat{\Sigma}_{z}\), _i.e._,
\[\Phi_{\widehat{\Sigma}\Sigma}^{2}\,=\,\frac{d\mu_{\psi_{(z)}^{*}g_{(z)}}}{d \mu_{\widehat{g}_{(z)}}}\,=\,\frac{\psi_{(z)}^{*}d\mu_{g_{(z)}}}{d\mu_{\widehat {g}_{(z)}}}\;. \tag{101}\]
Directly from this latter relation and from \(E\left[\widehat{g}_{(z)},\,\psi_{(z)},\,g_{(z)}\right]\,=\,E\left[\widehat{h },\,\zeta_{(z)},\,h\right]\) (see (97)), we get
\[E\left[\widehat{h},\,\zeta_{(z)},\,h\right]\,=\,\int_{\widehat{\Sigma}_{z}}\, \Phi_{\widehat{\Sigma}\Sigma}^{2}\,d\mu_{\hat{h}}\,, \tag{102}\]
which expresses the harmonic map functional \(E\left[\widehat{h},\,\zeta_{(z)},\,h\right]\) in terms of the conformal factor \(\Phi_{\widehat{\Sigma}\Sigma}^{2}\). As the \(\mathrm{PSL}(2,\mathbb{C})\)-localized map \(\zeta_{(z)}\) varies with the scale \(L(z)\), relation (102) shows that \(E\left[\widehat{h},\,\zeta_{(z)},\,h\right]\) describes the \(\zeta_{(z)}\)-dependent total "energy" associated with the conformal stretching of \((\widehat{\mathbb{C}}\,\widehat{\mathrm{S}}_{z}(p),\,\widehat{h})\) over \((\mathbb{C}\,\mathbb{S}_{z}(p),\,h)\).
### A local expression for \(\Phi_{\widehat{\Sigma}\Sigma}^{2}\)
It is worthwhile to provide a local expression for \(\Phi_{\widehat{\Sigma}\Sigma}^{2}\) showing the explicit dependence on the celestial coordinates \((\theta,\phi)\), the area distances \(\widehat{D}(\widehat{r}(L))\),
\(D(r(L),\theta,\phi)\), and the distortion tensor \(\mathcal{L}\) (see (30)). We proceed as follows. Let us consider one of the coordinate bins \(\widehat{B}(y_{(I)},\delta)\) (see (75)) in the celestial sphere \(\widehat{\mathbb{C}}\,\widehat{\mathrm{S}}_{z}(p)\). For \(y=(r(z),\widehat{\theta},\widehat{\phi})\in\widehat{B}(y_{(I)},\delta)\) let \(q:=\widehat{\mathrm{exp}}_{p}(y)\) be the point in the sky section \(\widehat{\Sigma}_{z}\) reached, at the scale \(L(z)\), along the past-directed null geodesic associated with the observational direction \(y=(\widehat{\theta},\widehat{\phi})\). From the expression (101) of the conformal factor \(\Phi_{\widehat{\Sigma}\Sigma}^{2}\) in terms of the measure \(\psi_{(z)}^{*}d\mu_{g_{(z)}}\) we get, by massaging pull-backs,
\[\Phi_{\widehat{\Sigma}\Sigma}^{2}\,d\mu_{\widehat{g}_{(z)}}(q) = \psi_{(z)}^{*}d\mu_{g_{(z)}}(q)\] \[= \left(\exp_{p}\circ\zeta_{(z)}\,\circ\,\widehat{\exp}_{p}^{-1} \right)^{*}d\mu_{g_{(z)}}\] \[= (\widehat{\mathrm{exp}}_{p}^{-1})^{*}(\zeta_{(z)}^{*}d\mu_{h})\] \[\Rightarrow \widehat{\mathrm{exp}}_{p}^{*}\left(\Phi_{\widehat{\Sigma}\Sigma} ^{2}\,d\mu_{\widehat{g}_{(z)}}(q)\right)\,=\,\zeta_{(z)}^{*}d\mu_{h}(y)\] \[\Phi_{\widehat{\Sigma}\Sigma}^{2}(y)\,d\mu_{\widehat{h}}(y) = \zeta_{(z)}^{*}d\mu_{h}(y)\,. \tag{103}\]
Hence, on \((\widehat{\mathbb{C}\,\mathbb{S}}_{z}(p),\,\widehat{h})\), we need to compute the Radon-Nikodym derivative
\[\Phi^{2}_{\widehat{\Sigma}\Sigma}(y)\,=\,\frac{\zeta^{*}_{(z)}d\mu_{h}}{d\mu_{ \widehat{h}}}(y)\,. \tag{104}\]
If we take into account the characterization \(\sqrt{\det(h(r(z),\theta,\phi))}\,=\,D^{2}(r(z),\theta,\phi)\,\sqrt{\det( \widetilde{h}(\mathbb{S}^{2}))}\) of the area distance \(D^{2}(r(z),\theta,\phi)\) (see (29)), we compute
\[\zeta^{*}_{(z)}d\mu_{h}(y)\,=\,\left|\mathrm{Jac}_{y}(\zeta_{(z)})\right|\,D^ {2}(y)\,d\mu_{\mathbb{S}^{2}}\,, \tag{105}\]
where \(|\mathrm{Jac}_{y}(\zeta_{(z)})|\) is the Jacobian determinant associated with the localized \(\mathrm{PSL}(2,\mathbb{C})\) map \(\zeta_{(z)}\), and where \(D^{2}(y)\) is a shorthand notation for the area distance \(D^{2}(\zeta_{(z)}(\widehat{r}(z),\widehat{\theta},\widehat{\phi}))\) pulled back at \(y\in(\widehat{\mathbb{C}\,\mathbb{S}}_{z}(p),\,\widehat{h})\) by the localized \(\zeta_{(z)}\). Similarly, from (61) we compute \(d\mu_{\widehat{h}}(y)\,=\,\frac{a_{0}^{2}}{(1\,+\,z_{L})^{2}}\,f^{2}\left( \widehat{r}(L)\right)\,d\mu_{\mathbb{S}^{2}}\,.\) Thus, we can write
\[\Phi^{2}_{\widehat{\Sigma}\Sigma}(\widehat{r}(z),\widehat{\theta},\widehat{ \phi})\,=\,\left|\mathrm{Jac}\left(\zeta_{(z)}(\widehat{r}(z),\widehat{\theta },\widehat{\phi})\right)\right|\,\frac{D^{2}(\zeta_{(z)}\left(\widehat{r}(z), \widehat{\theta},\widehat{\phi}\right))\,(1\,+\,z)^{2}}{a_{0}^{2}\,f^{2}\left( \widehat{r}(z)\right)}\,. \tag{106}\]
In terms of the FLRW area distance
\[\widehat{D}(\widehat{r}(z))\,=\,\frac{a_{0}}{1\,+\,z}\,f\left(\widehat{r} \right)\,, \tag{107}\]
we can equivalently write (106) in the simpler form (where, to have the formula handy for later use, we have taken the square root)
\[\Phi_{\widehat{\Sigma}\Sigma}(\widehat{r}(z),\widehat{\theta},\widehat{\phi}) \,=\,\left|\mathrm{Jac}\left(\zeta_{(z)}(\widehat{r}(z),\widehat{\theta}, \widehat{\phi})\right)\right|^{\frac{1}{2}}\,\frac{D\left(\zeta_{(z)}( \widehat{r}(z),\widehat{\theta},\widehat{\phi})\right)}{\widehat{D}(\widehat{r }(z))}\,. \tag{108}\]
This clearly shows that the conformal factor \(\Phi_{\widehat{\Sigma}\Sigma}\) is an explicit and, at least in principle, measurable quantity associated with the local Lorentz mapping (described by the localized \(\mathrm{PSL}(2,\mathbb{C})\) map \(\zeta_{(z)}\)) needed for adjusting the three reference null directions in the chosen celestial coordinates bin \(\widehat{B}(y_{(I)},\delta)\) in the celestial sphere \(\widehat{\mathbb{C}\,\mathbb{S}}_{z}(p)\). This adjustment allows us to transfer to \(\widehat{B}(y_{(I)},\delta)\) the actual area distance, namely, to compute \(D(\zeta_{(z)}(\widehat{r}(z),\widehat{\theta},\widehat{\phi}))\), and to compare its distribution on the FLRW celestial sphere \(\widehat{\mathbb{C}\,\mathbb{S}}_{z}(p)\) with the isotropic FLRW area distance \(\widehat{D}(\widehat{r}(z))\). The anisotropies in the angular distribution with respect to \(\widehat{D}(\widehat{r}(z))\) give rise to fluctuations in \(\Phi_{\widehat{\Sigma}\Sigma}\). It may appear somewhat surprising that, after all, the conformal factor does not also depend explicitly on the distortion tensor \(\mathcal{L}_{ab}\) defined by (30). This dependence is implicit in the definition of the area distance (29) and of the coordinate parametrization (30) characterizing \(\mathcal{L}_{ab}\). These definitions give rise to the relation (34) that, as can be easily checked, removes the explicit \(\mathcal{L}_{ab}\) dependence from \(\Phi_{\widehat{\Sigma}\Sigma}\). As we shall see, this fact will turn to our advantage when extending our analysis to the more general case of fractal-like sky sections.
## 6. The sky section comparison functional at scale \(L\)
The harmonic energy \(E\left[\widehat{h},\,\zeta_{(z)},\,h\right]\), or equivalently \(E\left[\widehat{g}_{(z)},\,\psi_{(z)},\,g_{(z)}\right]\), associated with the maps \(\zeta_{(z)}\) and \(\psi_{(z)}\), cannot be used directly as a comparison functional between the sky sections \((\widehat{\Sigma}_{z},\,\widehat{g}_{(z)})\) and \((\Sigma_{z},\,g_{(z)})\). This follows directly as a consequence of the conformal invariance (93), which implies
\[E\left[\widehat{g}_{(z)},\,\psi_{(z)},\,g_{(z)}\right]\,=\,\frac{1}{ 2}\,\int_{\widehat{\Sigma}_{z}}\left(\widehat{g}_{(z)}\right)^{ab}\frac{\partial \psi^{i}_{(z)}(y)}{\partial y^{a}}\frac{\partial\psi^{k}_{(z)}(y)}{\partial y^{b }}\,(g_{(z)})_{ik}\,d\mu_{\widehat{g}_{(z)}}\] \[=\frac{1}{2}\,\int_{\widehat{\Sigma}_{z}}\,\left[\frac{a_{0}^{2} \,f^{2}\left(\widehat{r}_{(z)}\right)}{(1\,+\,z_{L})^{2}}\right]^{-1}( \widehat{h}(\mathbb{S}^{2}))^{ab}\,\frac{\partial\psi^{i}_{(z)}(y)}{\partial y ^{a}}\frac{\partial\psi^{k}_{(z)}(y)}{\partial y^{b}}\,(g_{(z)})_{ik}\,\left[ \frac{a_{0}^{2}\,f^{2}\left(\widehat{r}_{(z)}\right)}{(1\,+\,z_{L})^{2}} \right]\,d\mu_{\mathbb{S}^{2}}\] \[=\frac{1}{2}\,\int_{\widehat{\Sigma}_{z}}\,(\widehat{\widetilde{h }}(\mathbb{S}^{2}))^{ab}\,\frac{\partial\psi^{i}_{(z)}(y)}{\partial y^{a}} \frac{\partial\psi^{k}_{(z)}(y)}{\partial y^{b}}\,(g_{(z)})_{ik}\,d\mu_{ \mathbb{S}^{2}}\,, \tag{109}\]
where, as usual, \(\widehat{\widetilde{h}}(\mathbb{S}^{2})\) is the round metric on the unit 2-sphere \(\mathbb{S}^{2}\). From the above relation it follows that \(E\left[\widehat{g}_{(z)},\,\psi_{(z)},\,g_{(z)}\right]\), (and similarly for \(E\left[\widehat{h},\,\zeta_{(z)},\,h\right]\)), does not depend on the area distance \(\frac{a_{0}^{2}}{(1\,+\,z)^{2}}\,f^{2}\left(\widehat{r}(z)\right)\) on the FLRW past lightcone \(\widehat{\mathcal{C}}^{-}(p)\). Thus, \(E\left[\widehat{g}_{(z)},\,\psi_{(z)},\,g_{(z)}\right]\) cannot be a good candidate for the role of the functional that compares the sky sections \((\widehat{\Sigma}_{z},\,\widehat{g}_{(z)})\) and \((\Sigma_{z},\,g_{(z)})\). For this role, we introduced in [12] a functional whose structure was suggested by the rich repertoire of functionals used in the problem of comparing shapes of surfaces in relation to computer graphics and visualization problems (see _e.g._[35] and [28], to quote two relevant papers in a vast literature). In particular, we were inspired by an energy functional introduced, under the name of _elastic energy_, in a remarkable paper by J. Hass and P. Koehl [29], who use it as a powerful means of comparing the shapes of genus-zero surfaces in problems relevant to surface visualization.
In the more complex framework addressed in cosmography, we found it useful to define the sky section comparison functional at scale \(L(z)\) according to
\[E_{\widehat{\Sigma}\Sigma}[\psi_{(z)}]\,:=\,\int_{\widehat{\Sigma}_{z}}\,( \Phi_{\widehat{\Sigma}\Sigma}\,-\,1)^{2}\,d\mu_{\hat{g}_{(z)}}\,, \tag{110}\]
that can be, more expressively, rewritten as (see (108))
\[E_{\widehat{\Sigma}\Sigma}[\psi_{(z)}]\,:=\,\int_{\widehat{\Sigma}_{z}}\, \left[\frac{\left|\operatorname{Jac}\left(\zeta_{(z)}(\widehat{r}(z))\right) \right|^{\frac{1}{2}}\,D\left(\zeta_{(z)}(\widehat{r}(z),\widehat{\theta}, \widehat{\phi})\right)\,-\,\widehat{D}(\widehat{r}(z))}{\widehat{D}(\widehat{ r}(z))}\right]^{2}\,d\mu_{\hat{g}_{(z)}}\,. \tag{111}\]
Thus, from the physical point of view, \(E_{\widehat{\Sigma}\Sigma}[\psi_{(z)}]\) describes the mean square fluctuations of the physical area distance \(D\left(\zeta_{(z)}(\widehat{r}(z),\widehat{\theta},\widehat{\phi})\right)\) (biased by the localized \(\operatorname{PSL}(2,\mathbb{C})\) mapping \(\zeta_{(z)}\)) with respect to the reference FLRW isotropic area distance \(\widehat{D}(\widehat{r}(z))\).
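A simple special case (added here only for illustration) makes this reading explicit: if the localized map has unit Jacobian and the physical area distance differs from the FLRW one only by an anisotropic fluctuation,

\[\left|\mathrm{Jac}\left(\zeta_{(z)}\right)\right|\,=\,1\,,\qquad D\left(\zeta_{(z)}(\widehat{r}(z),\widehat{\theta},\widehat{\phi})\right)\,=\,\widehat{D}(\widehat{r}(z))\,\left(1\,+\,\delta(\widehat{\theta},\widehat{\phi})\right)\,,\]

then (108) gives \(\Phi_{\widehat{\Sigma}\Sigma}\,=\,1\,+\,\delta\) and (110) reduces to

\[E_{\widehat{\Sigma}\Sigma}[\psi_{(z)}]\,=\,\int_{\widehat{\Sigma}_{z}}\,\delta^{2}\,d\mu_{\hat{g}_{(z)}}\,,\]

i.e. precisely the (unnormalized) mean square of the area-distance fluctuations; in the exactly isotropic case \(\delta=0\) the functional vanishes.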
Notice that, whereas the harmonic map energy \(E\left[\widehat{g}_{(z)},\,\psi_{(z)},\,g_{(z)}\right]\) is a conformally invariant quantity, the functional \(E_{\widehat{\Sigma}\Sigma}[\psi_{(z)}]\) is not conformally invariant. Under a conformal transformation \(\hat{h}\,\longrightarrow\,e^{2f}\,\hat{h}\) we get
\[\int_{\widehat{\Sigma}_{z}}\,\left(e^{\,-\,f}\Phi_{\widehat{\Sigma}\Sigma}\,- \,1\right)^{2}\,e^{2f}\,d\mu_{\hat{h}}\,\,. \tag{112}\]
Since we can also write
\[\Phi_{\widehat{\Sigma}\Sigma}\,=\,\left[\frac{\psi^{*}_{(z)}d\mu_{g_{(z)}}}{d \mu_{\widehat{g}_{(z)}}}\right]^{\frac{1}{2}}\,, \tag{113}\]
(see (101)), it is also clear from its definition that, for large linear "stretches" in the conformal mapping of \(\psi^{*}_{(z)}g_{(z)}\) onto \(\widehat{g}_{(z)}\), \(E_{\widehat{\Sigma}\Sigma}[\psi_{(z)}]\) tends to the harmonic map energy.
In our particular framework, the functional \(E_{\widehat{\Sigma}\Sigma}[\psi_{(z)}]\) has many important properties that make it a natural candidate for comparing, at the given length scale \(L\), the sky sections \((\widehat{\Sigma}_{z},\,\widehat{g}_{(z)})\) and \((\Sigma_{z},\,g_{(z)})\) and, as the length-scale \(L\) varies, the physical lightcone region \(\mathcal{C}_{L}^{-}(p,g)\) with the FLRW reference region \(\mathcal{C}_{L}^{-}(p,\hat{g})\). These properties are discussed in detail in [12] (see Lemma 8 and Theorem 9); here we collect them, without presenting their proof, in the following14
Footnote 14: In [12], the general notation is somewhat at variance with the one adopted here, since we address the analysis of \(E_{\widehat{\Sigma}\Sigma}\) directly on the surfaces \(\widehat{\Sigma}\) and \(\Sigma\). In particular, we refer to \(\widehat{\Sigma}\) and \(\Sigma\) as celestial spheres rather than sky sections.
**Theorem 1**.: _The functional \(E_{\widehat{\Sigma}\Sigma}[\psi_{(z)}]\) is symmetric_
\[E_{\widehat{\Sigma}\Sigma}[\psi_{(z)}]\,=\,E_{\Sigma\widehat{\Sigma}}[\psi_{ (z)}^{-1}]\;, \tag{114}\]
_where_
\[E_{\Sigma\widehat{\Sigma}}[\psi_{(z)}^{-1}]\,:=\,\int_{\Sigma_{z}}(\Phi_{ \Sigma\widehat{\Sigma}}\,-\,1)^{2}\,d\mu_{g_{(z)}}\;, \tag{115}\]
_is the comparison functional associated with the inverse map \(\psi_{(z)}^{-1}\,:\,\Sigma_{z}\,\longrightarrow\,\widehat{\Sigma}_{z}\), and \(\Phi_{\Sigma\widehat{\Sigma}}\) is the corresponding conformal factor. Let \((M,\,\widetilde{g})\) be another member of the FLRW family of spacetimes, distinct from \((M,\,\hat{g})\), that we may wish to use as a control in a best-fitting procedure for the physical spacetime \((M,g)\). Let \((\widetilde{\Sigma}_{z},\,\widetilde{g}_{(z)})\) denote the sky section on the past lightcone \(\widetilde{\mathcal{C}}_{L_{0}}^{-}(p,\tilde{g})\), with vertex at \(p\), and let \(\widetilde{\psi}_{(z)}\,:\,\Sigma_{z}\,\longmapsto\,\widetilde{\Sigma}_{z}\), and \(\,\Phi_{\Sigma\widetilde{\Sigma}}\) respectively denote the corresponding diffeomorphism and conformal factor. Then to the composition of maps_
\[\widehat{\Sigma}_{z}\,\underset{\psi_{(z)}}{\longrightarrow}\,\Sigma_{z}\, \underset{\widetilde{\psi}_{(z)}}{\longrightarrow}\,\widetilde{\Sigma}_{z} \tag{116}\]
_we can associate the triangular inequality_
\[E_{\widehat{\Sigma}\Sigma}[\psi_{(z)}]\,+\,E_{\Sigma\widetilde{\Sigma}}[\widetilde{\psi}_{(z)}]\,\geq\,E_{\widehat{\Sigma}\widetilde{\Sigma}}[(\widetilde{\psi}_{(z)}\circ\psi_{(z)})]\,, \tag{117}\]
_where_
\[E_{\widehat{\Sigma}\widetilde{\Sigma}}[(\widetilde{\psi}_{(z)}\circ\psi_{(z)})]\,:=\,\int_{\widehat{\Sigma}_{z}}(\Phi_{\widehat{\Sigma}\widetilde{\Sigma}}\,-\,1)^{2}\,d\mu_{\widehat{g}_{(z)}}\;. \tag{118}\]
_Moreover,_
\[E_{\widehat{\Sigma}\Sigma}[\psi_{(z)}]\,=\,0 \tag{119}\]
_iff the sky sections \((\widehat{\Sigma},\,\hat{g}_{(z)})\) and \((\Sigma,\,g_{(z)})\) are isometric. Finally, if we denote by \(\mathrm{W}^{1,2}_{\zeta_{(z)}}(\widehat{\mathbb{C}}\mathbb{S}_{z}(p),\, \mathbb{C}\,\mathbb{S}_{z}(p))\) the space of localized \(\mathrm{PSL}(2,\mathbb{C})\)- maps \(\zeta_{(z)}\) which are of Sobolev class \(\mathrm{W}^{1,2}\), (i.e. square summable together with their first derivatives), then_
\[d_{(z)}\left[\widehat{\Sigma}_{z},\,\Sigma_{z}\right]\,:=\,\inf_{\zeta_{(z)} \in\mathrm{W}^{1,2}_{\zeta_{(z)}}(\widehat{\mathbb{C}}\mathbb{S}_{z}(p),\, \mathbb{C}\,\mathbb{S}_{z}(p))}\,E_{\widehat{\Sigma}\Sigma}[\psi_{(z)}] \tag{120}\]
_defines a scale-dependent distance between the sky sections \((\widehat{\Sigma}_{z},\,\hat{g}_{(z)})\) and \((\Sigma_{z},\,g_{(z)})\) on the lightcone regions \(\mathcal{C}_{L}^{-}(p,\hat{g})\) and \(\mathcal{C}_{L}^{-}(p,g)\)._
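In practice, once the Jacobian of a candidate localized map \(\zeta_{(z)}\) and the two area distances are sampled on a grid of celestial coordinates, the functional \(E_{\widehat{\Sigma}\Sigma}[\psi_{(z)}]\) of (110)-(111) can be approximated by a straightforward quadrature; minimizing the result over the localized \(\mathrm{PSL}(2,\mathbb{C})\) maps would then approximate the distance (120). The following sketch is purely illustrative and is not part of the original analysis: the grid resolution, the synthetic area-distance data, and the unit-Jacobian choice are all assumptions, and a constant overall factor of \(\widehat{D}^{2}\) in the measure is dropped (it only rescales the unnormalized functional).

```python
import numpy as np

# Grid on the FLRW celestial sphere (hypothetical resolution).
n_theta, n_phi = 90, 180
theta = np.linspace(0.0, np.pi, n_theta, endpoint=False) + np.pi / (2 * n_theta)
phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
TH, PH = np.meshgrid(theta, phi, indexing="ij")

# Hypothetical inputs per bin: FLRW area distance D_hat (isotropic),
# physical area distance D, and Jacobian determinant of the localized map.
D_hat = 1.0
D = D_hat * (1.0 + 0.05 * np.cos(3 * TH) * np.sin(2 * PH))  # mock anisotropies
jac = np.ones_like(TH)                                       # mock |Jac(zeta_(z))|

# Conformal factor, eq. (108): Phi = |Jac|^(1/2) * D / D_hat.
Phi = np.sqrt(jac) * D / D_hat

# Round measure on the unit sphere (constant radius factor omitted).
dmu = np.sin(TH) * (np.pi / n_theta) * (2.0 * np.pi / n_phi)

# Comparison functional, eq. (110): integral of (Phi - 1)^2.
E_comparison = np.sum((Phi - 1.0) ** 2 * dmu)
area_hat = np.sum(dmu)

print("E_comparison =", E_comparison)
print("normalized   =", E_comparison / area_hat)  # mean-square fluctuation
```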
We need to conclude our long lightcone journey by addressing the real nature of the physical sky section \(\Sigma_{z}\). This forces us to leave the comfort zone of the assumed smoothness of the physical past lightcone \(\mathcal{C}^{-}(p,g)\).
## 7. The Lipschitz geometry of the cosmological sky sections \(\Sigma_{z}\)
The celestial sphere description of the sky sections \(\Sigma_{z}\) discussed above is inherently vulnerable to the vagaries of the local distribution of astrophysical sources, and the associated strong gravitational lensing phenomena15 imply that the actual past light cone \(\mathscr{C}^{-}(p,g)\) is not smooth as we have assumed16. In particular, \(\mathscr{C}^{-}(p,g)\) may fail to be the boundary \(\partial\,\mathrm{I}^{-}(p,g)\) of the chronological past \(\mathrm{I}^{-}(p,g)\) of \(p\), (the set of all events \(q\in M\) that can be connected to \(p\) by a past-directed timelike curve), because past-directed null geodesics generators of \(\mathscr{C}^{-}(p,g),\ \lambda\,:\,[0,\delta)\,\longrightarrow\,(M,g)\), with \(\lambda(0)\,=\,p\), may leave \(\partial\mathrm{I}^{-}(p,g)\) and, under the action of the local spacetime curvature, plunge into the interior \(\mathrm{I}^{-}(p,g)\). A spacetime description of this behaviour in connection with the phenomenology of gravitational lensing is discussed in detail in [45], with a rich repertoire of examples of the possible singular structure that \(\mathscr{C}^{-}(p,g)\) may induce on the cosmological sky sections \(\Sigma(p,r)\). As a matter of fact, the sections \(\Sigma_{z}\) may evolve into fractal-like surfaces, and to describe them from the point of view of geometric analysis, we need to introduce a framework tailored to the low-regularity landscape generated by the local inhomogeneities.
Footnote 15: See [45] for a thorough analysis of the geometry of gravitational lensing.
Footnote 16: The restrictive nature of the smoothness assumption on the metric \(g\), typically represented by functions \(g_{ab}\in C^{k}(\mathbb{R}^{4},\mathbb{R}),k\geq 2\), and of the associated light cone, has been pointed out by many authors, mainly in the context of the proof of singularity theorems and in causality theory, (see _e.g._[13], [17], [38], [42], [51]).
### The Lipschitz landscape
Given a past-directed null geodesic \(I_{W}\,\ni\,r\longmapsto\exp_{p}(rk(n(\theta,\phi)))\), issued from \(p\in\,M\) in the direction \(n(\theta,\phi)\,\in\,\mathbb{C}\,\mathbb{S}_{z}\), we follow [37] and define its _terminal point_ as the last point
\[q(r_{*},n(\theta,\phi)):=\exp_{p}(r_{*}k(n(\theta,\phi))) \tag{121}\]
that lies on the boundary \(\partial\mathrm{I}^{-}(p,g)\) of the chronological past of \(p\). Any such terminal point \(q(r_{*},n(\theta,\phi))\) is said to be: _i)_ a _conjugate terminal point_ if the exponential map \(\exp_{p}\) is singular at \((r_{*},\,n(\theta,\phi))\); _ii)_ a _cut locus terminal point_ if the exponential map \(\exp_{p}\) is non-singular at \((r_{*},\,n(\theta,\phi))\) and there exists another null geodesic, issued from \(p\), passing through \(q(r_{*},n(\theta,\phi))\), (see also [1], [45]). We denote [37] by \(\mathcal{T}^{-}(p)\) the set of all terminal points associated with the past null geodesic flow issuing from \(p\). In presence of cut points, \(\mathscr{C}^{-}(p,g)\) fails to be an embedded submanifold of \((M,g)\). Failure to be an immersed manifold is more directly related to conjugate points along the generators of \(\mathscr{C}^{-}(p,g)\) and of the associated conjugate locus [45]. It follows that in presence of terminal points the mapping
\[\exp_{p}\bigl{|}_{\mathscr{C}^{-}(p,g)}\,:\,\mathbb{C}\,\mathbb{S}_{z}\, \longrightarrow\,\Sigma_{z}\,:=\,\exp_{p}\left[\mathbb{C}\,\mathbb{S}_{z}\right] \tag{122}\]
is no longer one-to-one, and the cosmological sky section \(\Sigma_{z}\) fails to be a smooth surface. From the physical point of view, this is the geometrical setting associated with the generation of multiple images of astrophysical sources17 in the observer celestial sphere \(\mathbb{C}\,\mathbb{S}_{z}\). The mathematical framework for handling such a scenario is to assume that the past null cone \(\mathscr{C}^{-}(p,g)\) has the regularity of a Lipschitz manifold, characterized by a maximal atlas \(\mathcal{A}\,=\,\{(U_{\alpha},\varphi_{\alpha})\}\) such that all transition maps between the coordinate charts \((U_{\alpha},\varphi_{\alpha})\) of \(\mathscr{C}^{-}(p,g)\),
Footnote 17: If the sources are not pointlike, we also have the more complex ring patterns typical of strong gravitational lensing.
\[\varphi_{\alpha\beta}:=\varphi_{\beta}\circ\varphi_{\alpha}^{-1}\,:\,\varphi_ {\alpha}\,(U_{\alpha}\cap\,U_{\beta})\,\longrightarrow\,\varphi_{\beta}\,(U_ {\alpha}\cap\,U_{\beta})\,, \tag{123}\]
are locally Lipschitz maps between domains of the Euclidean space \((\mathbb{R}^{3},\delta)\). On \(\mathscr{C}^{-}(p,g)\), the condition of being Lipschitz can be viewed as a weakened version of the differentiability. In particular, if \(f:\,\mathscr{C}^{-}(p,g)\,\ni\,U\longrightarrow\mathbb{R}^{3}\) is a continuous map between open sets, then \(f\) is Lipschitz if and only if it admits distributional partial derivatives that are in \(L^{\infty}(U)\) with respect to the
Lebesgue measure. This statement of Rademacher's theorem [23], [48] implies that the transition maps \(\varphi_{\alpha\beta}\) on \(\mathscr{C}^{-}(p,g)\) have differentials \(d\varphi_{\alpha\beta}\) that are defined almost everywhere, and which are locally bounded and measurable on their domains. In such a low-regularity setting the exponential map is quite delicate to handle. However, a key result, geometrically proved by M. Kunzinger, R. Steinbauer, M. Stojkovic [40], (based on work by B.-L. Chen and P. LeFloch [13]), and by E. Minguzzi [42], implies that the exponential map associated with a \(C^{1,\,1}\) metric can still be defined as a local bi-Lipschitz homeomorphism, namely a bijective map which along with its inverse is Lipschitz continuous in a sufficiently small neighborhood of \(p\). Thus, the exponential map retains an appropriate form of regularity in the sense that locally, for each point \(p\in\,M\), there exist open star-shaped neighborhoods, \(N_{0}(p)\) of \(0\in\,T_{p}M\) and \(U_{p}\,\subset\,(M,g)\), such that \(\exp_{p}\,:\,N_{0}(p)\,\longrightarrow\,U_{p}\) is a bi-Lipschitz homeomorphism [40]. In particular, each point \(p\in(M,g)\) possesses a basis of totally normal neighborhoods. It is worthwhile to stress that geodesic normal coordinates (see (22)) can still be defined, but the transition from the current smooth coordinate systems18 used around \(p\in M\) to the normal coordinates associated with \(\exp_{p}\) is only continuous.
Footnote 18: Recall that \(M\) is a smooth manifold, and that the low Lipschitz \(C^{1,\,1}\) regularity is caused by the metric \(g\), and not by the differentiable structure of \(M\).
### The fractal-like sky section \(\Sigma_{z}\)
We are interested in the geometry that such a past light cone scenario induces on the cosmological sky section \(\Sigma_{z}\,:=\,\exp_{p}\left[\mathbb{C}\,\mathbb{S}_{z}\right]\) of \(\mathcal{C}^{-}(p,g)\). As long as \(\exp_{p}\) is bi-Lipschitz, the sky sections \(\Sigma_{z}\) are topological 2-spheres, and the results above seem to suggest that after all there is no strong motivation to abandon the comforts of the smooth framework in favor of a Lipschitzian rugged landscape. However, as the length scale \(L\) varies, the development of caustics in \(\mathscr{C}^{-}(p,g)\) generates cusps and crossings in the surfaces \(\Sigma_{z}\), to the effect that they are no longer homeomorphic to 2-spheres. In such a setting, the restriction of the exponential map to the celestial sphere \(\mathbb{C}\,\mathbb{S}_{z}\), characterizing the surface \(\Sigma_{z}\), (see (71)),
\[\exp_{p}\,:\,\mathbb{C}\,\mathbb{S}_{z}\,\subset\,T_{p}M\,\longrightarrow\, \Sigma_{z}\,:=\,\exp_{p}\left[\mathbb{C}\,\mathbb{S}_{z}\right]\,\subset\, \mathscr{C}^{-}(p,g)\,, \tag{124}\]
is only a Lipschitz map between the metric spaces \(\left(\mathbb{C}\,\mathbb{S}_{z},\,d_{\mathbb{S}_{r}^{2}}\right)\) and \(\left(\Sigma_{z},\,d_{g|\Sigma}\right)\), where \(d_{\mathbb{S}_{r}^{2}}\) is the standard distance function on the round 2-sphere \(\mathbb{S}_{r}^{2}\) of radius \(r\), and \(d_{g|\Sigma}\) is the distance function induced (almost everywhere) on \(\Sigma_{z}\) by the metric \(g|_{\Sigma_{z}}\) defined19 by (26). In general, the sky section \(\Sigma_{z}\) can be topologically very complex since it may contain terminal points of the exponential map \(\exp_{p}\), giving rise to cusps and swallow-tail points associated with self-intersections of \(\Sigma_{z}\). Even if this may evolve into a very complex picture of \(\Sigma_{z}\), we still have considerable geometric control over its metric structure. The Lipschitz regularity of \(\exp_{p}\) implies that there is a constant \(c_{r}\), depending on the parameter \(r\), such that
Footnote 19: In presence of cut points the inclusion map \(\iota_{r}:\Sigma_{z}\,\hookrightarrow\,\mathscr{C}^{-}(p,g)\) of the sky section \(\Sigma_{z}\) into \(\mathscr{C}^{-}(p,g)\) is Lipschitz, thus Rademacher’s theorem allows us to define the pull-back metric \(g|_{\Sigma_{z}}\,:=\,\iota_{r}^{*}\,\,g|_{\mathscr{C}^{-}(p,g)}\) only almost-everywhere
\[d_{\Sigma(p,r)}\left(\exp_{p}(x),\exp_{p}(y)\right)\,\leq\,c_{r}\,d_{\mathbb{ S}_{r}^{2}}(x,y),\,\,\,\,\forall\,x,\,y\,\in\,\mathbb{S}_{r}^{2}\,, \tag{125}\]
and we can define the pull-back on the celestial sphere \(\mathbb{C}\,\mathbb{S}_{z}\,\subset\,T_{p}M\) of the distance function \(d_{\Sigma_{z}}\) according to
\[\left(\exp_{p}^{*}d_{\Sigma_{z}}\right)(x,y)\,=\,d_{g|\Sigma}\left(\exp_{p}(x),\exp_{p}(y)\right)\,,\,\,\,\,\forall\,x,\,y\,\in\,\mathbb{C}\,\mathbb{S}_{z}\,. \tag{126}\]
We can also pull back the metric \(g|_{\Sigma_{z}}\) to \(\mathbb{C}\,\mathbb{S}_{z}\). By Rademacher's theorem \(\exp_{p}\) is differentiable almost everywhere, and
\[h(\theta,\phi)\,:=\,\left(\exp_{p}^{*}\,g|_{\Sigma_{z}}\right)_{\alpha\beta}\, dx^{\alpha}dx^{\beta}\,, \tag{127}\]
is a metric defined almost everywhere on the celestial sphere \(\mathbb{C}\,\mathbb{S}_{z}\) (by a slight abuse of language, we have used the same notation as for the smooth version (27)). We can also define almost everywhere
the volume element \(d\mu_{h}\) associated with the metric (127), _i.e._
\[d\mu_{h}\,:=\,\exp_{p}^{*}\,d\mu_{g|_{\Sigma_{z}}}\,=\,\sqrt{\det(h(r(z),\theta, \phi))}\,d\theta d\varphi\,, \tag{128}\]
in full analogy with its smooth version (28). All this implies that, with the proviso of the almost-everywhere meaning, the characterization (29) of the angular diameter distance \(D(r,\theta,\phi)\) and of the shear-inducing distortion \(L_{\alpha\beta}\) defined by (30) carry over to the bi-Lipschitz case.
To put these geometrical remarks to work, let us stress that we cannot have reasonable control over the very complex topological structure of the sky section \(\Sigma_{z}\) induced by a cascade of (strong) lensing events. Moreover, the corresponding caustics and singularities at the terminal points on \(\Sigma_{z}\) provide a level of detail that is not relevant to the present analysis. Thus, as a reasonable compromise, we assume that the exponential map \(\exp_{p}\) is bi-Lipschitz, that \(\Sigma_{z}\) is topologically a 2-sphere, and we mimic the effect of the many lensing events that may affect \(\Sigma_{z}\) by assuming that the sky section \(\Sigma_{z}\) has the irregularities of a metric surface with the fractal geometry of a 2-sphere, endowed with the locally-finite Hausdorff 2-measure associated with (128). Under such assumptions, it can be shown that our smooth analysis can be safely extended, (in particular, we can still exploit the Poincare-Koebe uniformization theorem [44]), and the results obtained hold also in the more general setting of a Lipschitz description of the cosmographic past lightcone \(\mathcal{C}^{-}(p,g)\).
## 8. Concluding remarks: \(d_{(z)}\left[\widehat{\Sigma}_{z},\,\Sigma_{z}\right]\) as a scale-dependent field
According to the physical characterization (111) of \(E_{\widehat{\Sigma}\Sigma}[\psi_{(z)}]\), and the results described in Theorem 1, the distance function \(d_{(z)}\left[\widehat{\Sigma}_{z},\,\Sigma_{z}\right]\), (for simplicity, one may work with the \(E_{\widehat{\Sigma}\Sigma}[\psi_{(z)}]\) realizing the minimum), can be interpreted as defining a \(z\)-dependent field on the FLRW past light cone \(\widehat{\mathcal{C}}^{-}(p,\widehat{g})\) describing the mean square fluctuations of the anisotropies of the physical area distance \(D(\zeta_{(z)})\) with respect to the reference FLRW area distance \(\widehat{D}(\widehat{r}(z))\). These fluctuations provide information on how much the local area element on the physical sky section \(\Sigma_{z}\) differs from the corresponding (round) area element on the reference FLRW sky section \(\widehat{\Sigma}_{z}\). Since for 2-dimensional surfaces the local Riemannian geometry is fully described by the area element, the fluctuations in \(D(\zeta_{(z)})\) give information on how much the geometries of the sky sections \(\widehat{\Sigma}_{z}\) and \(\Sigma_{z}\) differ. When we reach the scale of homogeneity, the physical area distance \(D(\zeta_{(z)})\) becomes isotropic and can be identified with the reference FLRW \(\widehat{D}(\widehat{r}(z))\). The localized null-directions alignment between the corresponding celestial spheres \(\mathbb{C}\,\mathbb{S}_{z}(p)\) and \(\widehat{\mathbb{C}}\,\mathbb{S}_{z}(p)\) reduces to a global kinematical Lorentz boost (and a rotation). Thus, at this homogeneity scale, the distance field \(d_{(z)}\left[\widehat{\Sigma}_{z},\,\Sigma_{z}\right]\) vanishes.
Thus, we have an interesting scenario whereby it is possible to associate with the distance functional \(d_{(z)}\left[\widehat{\Sigma}_{z},\,\Sigma_{z}\right]\) a scale-dependent field that describes a global effect that the reference FLRW past lightcone \(\widehat{\mathcal{C}}^{-}(p,\widehat{g})\) misses in describing the pre-homogeneity anisotropies of the actual past lightcone \(\mathcal{C}^{-}(p,g)\). This _pre-homogeneity field_ is, in principle, measurable since it is the mean-square variation of the physical area distance \(D(\zeta_{(z)})\). The delicate question concerns its possible role in selecting the FLRW model that best fits the cosmological observations on large scales. A few qualitative indications in this direction, mainly of a perturbative nature, are discussed in [12]. The results presented here are however more precise since they directly connect the distance functional \(d_{(z)}\left[\widehat{\Sigma}_{z},\,\Sigma_{z}\right]\) to the area distance \(D(\zeta_{(z)})\). To describe an important consequence of these results, let us consider the light cone regions \(\mathcal{C}_{L}^{-}(p,\hat{g})\) and \(\mathcal{C}_{L}^{-}(p,g)\) over a sufficiently small length scale \(L(z)\). If \(\zeta_{(z)}\) and the corresponding \(\psi_{(z)}\) denote the minimizing maps characterized in Theorem
1, then we can write [12]
\[E_{\widehat{\Sigma}\Sigma}[\psi_{(z)}] = \int_{\widehat{\Sigma}_{z}}(\Phi_{\widehat{\Sigma}\Sigma}\,-\,1)^{2 }\,d\mu_{\widehat{g}_{(z)}}\,=\,\int_{\widehat{\Sigma}_{z}}\Phi_{\widehat{ \Sigma}\Sigma}^{2}\,d\mu_{\widehat{g}_{(z)}}\,+\,\int_{\widehat{\Sigma}_{z}}\,d \mu_{\widehat{g}_{(z)}}\,-\,2\int_{\widehat{\Sigma}_{z}}\Phi_{\widehat{\Sigma} \Sigma}\,d\mu_{\widehat{g}_{(z)}}\] \[= \int_{\widehat{\Sigma}_{z}}\frac{\psi_{g_{(z)}}^{*}d\mu_{g_{(z)}} }{d\mu_{\widehat{g}_{(z)}}}\,d\mu_{\widehat{g}_{(z)}}\,+\,A\left(\widehat{ \Sigma}_{z}\right)\,-\,2\int_{\widehat{\Sigma}_{z}}\Phi_{\widehat{\Sigma}\Sigma} \,d\mu_{\widehat{g}_{(z)}}\] \[= \int_{\psi_{(z)}(\widehat{\Sigma}_{z})}d\mu_{g_{(z)}}\,+\,A\left( \widehat{\Sigma}_{z}\right)\,-\,2\int_{\widehat{\Sigma}_{z}}\Phi_{\widehat{ \Sigma}\Sigma}\,d\mu_{\widehat{g}_{(z)}}\] \[= A(\Sigma_{z})\,+\,A\left(\widehat{\Sigma}_{z}\right)\,-\,2\int_ {\widehat{\Sigma}_{z}}\Phi_{\widehat{\Sigma}\Sigma}\,d\mu_{\widehat{g}_{(z)}}\;, \tag{129}\]
where we have exploited the Radon-Nikodym characterization of \(\Phi_{\widehat{\Sigma}\Sigma}^{2}\), (see (101)), the identification \(\psi_{(z)}(\widehat{\Sigma}_{z})\,=\,\Sigma_{z},\) and the relation
\[\int_{\widehat{\Sigma}_{z}}\frac{\psi_{(z)}^{*}d\mu_{g_{(z)}}}{d\mu_{\widehat {g}_{(z)}}}\,d\mu_{\widehat{g}_{(z)}}=\int_{\widehat{\Sigma}_{z}}\psi_{(z)}^{* }d\mu_{g_{(z)}}=\int_{\psi(\widehat{\Sigma}_{z})}d\mu_{g_{(z)}}=\int_{\Sigma_{ z}}d\mu_{g_{(z)}}\,=\,A(\Sigma_{z})\;, \tag{130}\]
where \(A(\Sigma_{z})\) and \(A\left(\widehat{\Sigma}_{z}\right)\) respectively denote the areas of the sky sections \((\Sigma_{z},\,g_{(z)})\) and \((\widehat{\Sigma}_{z},\,\widehat{g}_{(z)})\). Thus,
\[d_{(z)}\left[\widehat{\Sigma}_{L},\,\Sigma_{L}\right]\,=\,E_{\widehat{\Sigma} \Sigma}[\psi_{(z)}]\,:=\,A\left(\widehat{\Sigma}_{z}\right)\,+\,A(\Sigma_{z}) \,-\,2\int_{\widehat{\Sigma}_{z}}\Phi_{\widehat{\Sigma}\Sigma}\,d\mu_{\widehat {g}_{(z)}}\,. \tag{131}\]
To simplify matters, we assume that at the given length scale \(L(z)\) the corresponding region \({\mathcal{C}}_{L}^{-}(p,g)\) is caustic free. Let us rewrite \(\Phi_{\widehat{\Sigma}\Sigma}\) as
\[\Phi_{\widehat{\Sigma}\Sigma} = \left(\Phi_{\widehat{\Sigma}\Sigma}\,-\,1\right)\,+\,1\] \[= \frac{\left|\mbox{Jac}\left(\zeta_{(z)}\right)\right|^{\frac{1}{2}}\,D(\zeta_{(z)})\,-\,\widehat{D}(\widehat{r}(z))}{\widehat{D}(\widehat{r}(z))}\,+\,1\,, \tag{132}\]
where we have simplified the notation used in (108). By introducing this in (131) we get
\[d_{L}\left[\widehat{\Sigma}_{z},\,\Sigma_{z}\right]\,=\,A\left(\Sigma_{z} \right)\,-\,A(\widehat{\Sigma}_{z})\,-\,2\int_{\widehat{\Sigma}_{z}}\,\left[ \frac{\left|\mbox{Jac}\left(\zeta_{(z)}\right)\right|^{\frac{1}{2}}\,D(\zeta_ {(z)})\,-\,\widehat{D}(\widehat{r}(z))}{\widehat{D}(\widehat{r}(z))}\right]\,d \mu_{\widehat{g}_{(z)}}\,. \tag{133}\]
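For completeness, the intermediate step (spelled out here for the reader's convenience) is simply that, by (132),

\[\int_{\widehat{\Sigma}_{z}}\Phi_{\widehat{\Sigma}\Sigma}\,d\mu_{\widehat{g}_{(z)}}\,=\,\int_{\widehat{\Sigma}_{z}}\,\left[\frac{\left|\mathrm{Jac}\left(\zeta_{(z)}\right)\right|^{\frac{1}{2}}\,D(\zeta_{(z)})\,-\,\widehat{D}(\widehat{r}(z))}{\widehat{D}(\widehat{r}(z))}\right]\,d\mu_{\widehat{g}_{(z)}}\,+\,A\left(\widehat{\Sigma}_{z}\right)\,,\]

so that in (131) the term \(-\,2A(\widehat{\Sigma}_{z})\) coming from the constant part of \(\Phi_{\widehat{\Sigma}\Sigma}\) combines with \(+\,A(\widehat{\Sigma}_{z})\) to produce the difference \(A(\Sigma_{z})\,-\,A(\widehat{\Sigma}_{z})\) appearing in (133).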
This expression can be further specialized if we exploit the asymptotic expressions of the area \(A\left(\widehat{\Sigma}_{z}\right)\) and \(A\left(\Sigma_{z}\right)\) of the two surfaces \((\widehat{\Sigma}_{z},\,\widehat{g}_{(z)}),\,\,(\Sigma_{z},\,g_{(z)})\) on the corresponding lightcones \({\mathcal{C}}_{L}^{-}(p,\widehat{g})\) and \({\mathcal{C}}_{L}^{-}(p,g)\). These asymptotic expressions can be obtained if we consider the associated causal past regions \({\mathcal{J}}_{L}^{-}(p,\widehat{g})\) and \({\mathcal{J}}_{L}^{-}(p,g)\) sufficiently near the (common) observation point \(p\), in particular when the length scale \(L(z)\) we are probing is small with respect to the "cosmological" curvature scale. Under such assumption, there is a unique maximal 3-dimensional region \(V_{L}^{3}(p)\), embedded in \({\mathcal{J}}_{L}^{-}(p,g)\), having the surface \((\Sigma_{z},\,h)\) as its boundary. This surface intersects the world line \(\gamma(\tau)\) of the observer \(p\) at the point \(q=\gamma(\tau_{0}\,=\,-\,L(z))\) defined by the given length scale \(L(z)\). For the reference FLRW the analogous set up is associated to the constant-time slicing of the FLRW spacetime \((M,\widehat{g})\) considered. The corresponding 3-dimensional region \(\widehat{V}_{L}^{3}(p)\), embedded in \({\mathcal{J}}_{L}^{-}(p,\widehat{g})\), has the surface \((\widehat{\Sigma}_{z},\,\widehat{h})\) as its boundary. The FLRW observer \(\widehat{\gamma}(\widehat{\tau})\) will intersect \(\widehat{V}_{L}^{3}(p)\) at the point \(\widehat{q}=\widehat{\gamma}(\widehat{\tau}_{0}\,=\,-\,L(z))\). By introducing geodesic normal coordinates \(\{X^{i}\}\) in \({\mathcal{J}}_{L}^{-}(p,g)\) and \(\{Y^{k}\}\) in \({\mathcal{J}}_{L}^{-}(p,\widehat{g})\), respectively based at the point \(q\) and \(\widehat{q}\), we can pull back the metric tensors \(g\) and \(\widehat{g}\) to \(T_{q}M\) and \(T_{\widehat{q}}M\), and obtain the classical normal coordinate development of the metrics
and \(\widehat{g}\) valid in a sufficiently small convex neighborhood of \(q\) and \(\widehat{q}\). Explicitly, for the (more relevant case of the) metric \(g\), we have (see _e. g._ Lemma 3.4 (p. 210) of [49] or [47])
\[\left((\exp_{q})^{*}\,g\right)_{ef}\,=\,\eta_{ef}\,-\,\frac{1}{3}\,{\rm R}_{eabf }|_{q}X^{a}X^{b}\,-\,\frac{1}{6}\,\nabla_{c}{\rm R}_{eabf}|_{q}X^{a}X^{b}X^{c}\]
\[+\,\left(-\,\frac{1}{20}\,\nabla_{c}\nabla_{d}{\rm R}_{eabf}\,+\,\frac{2}{45} \,{\rm R}_{eabm}\,{\rm R}^{m}_{fcd}\right)_{q}\,X^{a}X^{b}X^{c}X^{d}\,+\,\ldots\,\]
where \({\rm R}_{abcd}\) is the Riemann tensor of the metric \(g\) (evaluated at the point \(q\)). The induced expansion in the pulled-back measure \(\left((\exp_{s(\eta)})^{*}d\mu_{g}\right)\) provides the Lorentzian analog of the familiar Bertrand-Puiseux formulas associated with the geometrical interpretation of the sectional, Ricci and scalar curvature for a Riemannian manifold in terms of the length, area, and volume measures of small geodesic balls. In the Lorentzian case the relevant formulas are more delicate to derive, [3], [25], [26], [43]. This asymptotics provides [25], to leading order in \(L(z)\), the following expressions for the area of \((\Sigma_{z},\,g_{(z)})\) and \((\widehat{\Sigma}_{z},\,\widehat{g}_{(z)})\),
\[A\,(\Sigma_{z})\,=\,\pi\,L^{2}(z)\,\left(1\,-\,\frac{1}{72}\,L^{2}(z)\,{\rm R} (q)\,+\,\ldots\right)\, \tag{134}\]
and
\[A\left(\widehat{\Sigma}_{z}\right)\,=\,\pi\,L^{2}(z)\,\left(1\,-\,\frac{1}{7 2}\,L^{2}(z)\,\widehat{\rm R}(\widehat{q})\,+\,\ldots\right)\, \tag{135}\]
Introducing these expressions in (133) we eventually get
\[\widehat{\rm R}(\hat{q})\,=\,{\rm R}(q)\,+\,\frac{72}{\pi}\frac{d_{(z)}\, \Big{[}\widehat{\Sigma}_{z},\,\Sigma_{z}\Big{]}}{L^{4}(z)}\,+\,\frac{144}{\pi L ^{4}(z)}\,\int_{\widehat{\Sigma}_{z}}\,\left[\frac{\big{|}{\rm Jac}\left( \zeta_{(z)}\right)\big{|}^{\frac{1}{2}}\,D(\zeta_{(z)})\,-\,\widehat{D}( \widehat{r}(z))}{\widehat{D}(\widehat{r}(z))}\right]\,d\mu_{\widehat{g}_{(z)}} \,+\,\ldots. \tag{136}\]
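Spelling out this step (added for the reader's convenience): inserting (134) and (135) into (133) gives, to leading order,

\[A\left(\Sigma_{z}\right)\,-\,A\left(\widehat{\Sigma}_{z}\right)\,=\,\frac{\pi\,L^{4}(z)}{72}\,\left(\widehat{\mathrm{R}}(\widehat{q})\,-\,\mathrm{R}(q)\right)\,+\,\ldots\,,\]

so that (133) reads \(d_{L}\left[\widehat{\Sigma}_{z},\,\Sigma_{z}\right]\,=\,\frac{\pi\,L^{4}(z)}{72}\left(\widehat{\mathrm{R}}(\widehat{q})-\mathrm{R}(q)\right)\,-\,2\int_{\widehat{\Sigma}_{z}}[\cdots]\,d\mu_{\widehat{g}_{(z)}}\,+\,\ldots\), and solving for \(\widehat{\mathrm{R}}(\widehat{q})\) yields (136).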
Notice that the integral is, up to the normalization \(A(\widehat{\Sigma}_{z})\), the average value over the sky section \((\widehat{\Sigma}_{z},\widehat{g}_{(z)})\) of the fluctuations of \(\big{|}{\rm Jac}\left(\zeta_{(z)}\right)\big{|}^{\frac{1}{2}}\,D(\zeta_{(z)})\) with respect to \(\widehat{D}(\widehat{r}(z))\), an average that for notational ease we write as
\[\Big{\langle}\,D(\zeta_{(z)})\big{|}\,\widehat{D}(\widehat{r}(z))\Big{\rangle} _{\widehat{\Sigma}_{z}}\ :=\,A^{-1}(\widehat{\Sigma}_{z})\,\int_{\widehat{\Sigma}_{z}}\, \left[\frac{\big{|}{\rm Jac}\left(\zeta_{(z)}\right)\big{|}^{\frac{1}{2}}\,D( \zeta_{(z)})\,-\,\widehat{D}(\widehat{r}(z))}{\widehat{D}(\widehat{r}(z))} \right]\,d\mu_{\widehat{g}_{(z)}}\,, \tag{137}\]
while, as we have already stressed, the distance functional is (up to the \(A(\widehat{\Sigma}_{z})\) normalization) the mean square deviation of these fluctuations, _i. e._,
\[\Big{\langle}\Big{(}\,D(\zeta_{(z)})\big{|}\,\widehat{D}(\widehat{r}(z))\Big{)}^{2}\Big{\rangle}_{\widehat{\Sigma}_{z}}\,:=\,A^{-1}(\widehat{\Sigma}_{z})\,\int_{\widehat{\Sigma}_{z}}\,\left[\frac{\big{|}{\rm Jac}\left(\zeta_{(z)}\right)\big{|}^{\frac{1}{2}}\,D(\zeta_{(z)})\,-\,\widehat{D}(\widehat{r}(z))}{\widehat{D}(\widehat{r}(z))}\right]^{2}\,d\mu_{\widehat{g}_{(z)}}\,=\,A^{-1}(\widehat{\Sigma}_{z})\,d_{(z)}\left[\widehat{\Sigma}_{z},\,\Sigma_{z}\right]\,. \tag{138}\]
To put these results to work, let us assume the conservative and quite reasonable scenario where the fluctuations in the area distance \(D(\zeta_{(z)})\), even if locally large in the various celestial coordinates bins, average out to zero over \(\widehat{\Sigma}_{z}\). However, the corresponding mean square deviation of the fluctuations \(\left\langle\Big{(}\,D(\zeta_{(z)})\big{|}\,\widehat{D}(\widehat{r}(z))\Big{)}^{2}\right\rangle_{\widehat{\Sigma}_{z}}\,=\,A^{-1}(\widehat{\Sigma}_{z})\,d_{(z)}\,\Big{[}\widehat{\Sigma}_{z},\,\Sigma_{z}\Big{]}\) can be significantly different from
zero, and from (136) we get
\[\widehat{\mathrm{R}}(\hat{q})\,=\,\mathrm{R}(q)\,+\,\frac{72}{\pi}\frac{d_{(z)} \left[\widehat{\Sigma}_{z},\,\Sigma_{z}\right]}{L^{4}(z)}\,+\,\ldots\,. \tag{139}\]
The physical scalar curvature we measure (however hard that is!) in such a scenario is \(\mathrm{R}(q)\), and if we decide to model with an FLRW solution a cosmological spacetime that is homogeneous on large scales but highly inhomogeneous at smaller scales, then (139) shows that we cannot identify \(\mathrm{R}(q)\) with the corresponding FLRW scalar curvature \(\widehat{\mathrm{R}}(\hat{q})\). Such an identification is possible, with a rigorous level of scale-dependent precision, only if we take into account the term
\[\frac{72}{\pi}\frac{d_{(z)}\left[\widehat{\Sigma}_{z},\,\Sigma_{z}\right]}{L^{ 4}(z)}\,. \tag{140}\]
According to Theorem 1, this term vanishes once \(L(z)\) probes the homogeneity scales; conversely, it is clear from (139) that in the pre-homogeneity region its presence is forced on us and plays the role of a scale-dependent effective positive contribution to the cosmological constant. As long as the local inhomogeneities give rise to significant fluctuations in the area distance \(D(\zeta_{(z)})\), this contribution cannot be considered a priori negligible in high-precision cosmology.
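As a rough numerical illustration of the order of magnitude of this term (a sketch under purely synthetic assumptions; none of the numbers below come from observations or from the analysis above), one may estimate (140) from sampled fractional fluctuations of the area distance on a pixelized celestial sphere:

```python
import numpy as np

# Pixelized FLRW celestial sphere (hypothetical resolution).
n_theta, n_phi = 90, 180
theta = np.linspace(0.0, np.pi, n_theta, endpoint=False) + np.pi / (2 * n_theta)
phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
TH, PH = np.meshgrid(theta, phi, indexing="ij")

L = 100.0      # probing length scale L(z), arbitrary units (assumption)
D_hat = L      # crude stand-in for the FLRW area distance at that scale

# Synthetic fractional fluctuations delta = (|Jac|^(1/2) D - D_hat) / D_hat,
# assumed to average to zero over the sky but to have nonzero dispersion.
rng = np.random.default_rng(0)
delta = 0.02 * rng.standard_normal(TH.shape)
delta -= np.average(delta, weights=np.sin(TH))   # enforce a zero sky average

# Measure on the FLRW sky section (round metric of radius D_hat).
dmu = (D_hat ** 2) * np.sin(TH) * (np.pi / n_theta) * (2.0 * np.pi / n_phi)

# Distance functional, eq. (111), used as a stand-in for its minimum value.
d_z = np.sum(delta ** 2 * dmu)

# Scale-dependent curvature correction, eq. (140).
correction = 72.0 / np.pi * d_z / L ** 4

print("d_(z)                   =", d_z)
print("curvature correction    =", correction)
print("mean-square fluctuation =", d_z / np.sum(dmu))
```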
|
2308.10039 | Do We Price Happiness? Evidence from Korean Stock Market | This study explores the potential of internet search volume data,
specifically Google Trends, as an indicator for cross-sectional stock returns.
Unlike previous studies, our research specifically investigates the search
volume of the topic 'happiness' and its impact on stock returns in the aspect
of risk pricing rather than as sentiment measurement. Empirical results
indicate that this 'happiness' search exposure (HSE) can explain future
returns, particularly for big and value firms. This suggests that HSE might be
a reflection of a firm's ability to produce goods or services that meet
societal utility needs. Our findings have significant implications for
institutional investors seeking to leverage HSE-based strategies for
outperformance. Additionally, our research suggests that, when selected
judiciously, some search topics on Google Trends can be related to risks that
impact stock prices. | HyeonJun Kim | 2023-08-19T14:55:49Z | http://arxiv.org/abs/2308.10039v1 | # Do We Price Happiness? Evidence from Korean Stock Market
###### Abstract
This study explores the potential of internet search volume data, specifically Google Trends, as an indicator for cross-sectional stock returns. Unlike previous studies, our research specifically investigates the search volume of the topic 'happiness' and its impact on stock returns in the aspect of risk pricing rather than as sentiment measurement. Empirical results indicate that this 'happiness' search exposure (HSE) can explain future returns, particularly for big and value firms. This suggests that HSE might be a reflection of a firm's ability to produce goods or services that meet societal utility needs. Our findings have significant implications for institutional investors seeking to leverage HSE-based strategies for outperformance. Additionally, our research suggests that, when selected judiciously, some search topics on Google Trends can be related to risks that impact stock prices.
**keywords:** [Asset Pricing], [Sentiment], [Google Trend]
## 1 Introduction
Market sentiment has been shown to predict cross-sectional stock returns, even when adjusted for size, value, and momentum effects. [1] This initial way of measuring market sentiment by Baker et al. (2006) was based on market-based measures, while others used survey-based indices. [2] Later, an alternative was suggested that uses internet search volume such as Google Trends [5]. This method is quite appealing for financial economists because of the transparency of models built upon it, while also providing high-frequency data. There were some concerns about the quality of Google Trends data [3]; however, it is clear that Google Trends data can be used as "early warning signs of stock market moves" [8]. Here the natural question arises: could Google search queries detect fundamentals and the risk associated with them?
Usually, search volume or related metrics (like return sensitivity) are far from fundamental information and are rather sentiment proxies. This is because search volume, by its nature, captures collective social attention. However, it has been shown that exposure, or factor loading, to this sentiment proxy can be used for a variety of asset pricing applications. There are attempts to use this exposure by Chen, Kumar and Zhang (2020), but that study focused on the relationship between the factor loading on a search volume and abnormal returns [4], which falls within the area of mispricing or irrational asset pricing. In this paper we suggest the possibility that internet search
volume data could be used for rational asset pricing by extracting the common risk across assets. Empirical analysis shows that a stock return's exposure to the search volume of the topic 'happiness' (or HSE) has explanatory power for future returns even when BE/ME and size are controlled. When subsampling the firms by size and BE/ME, the predictive power of HSE is higher in big and value firms, supporting the theory that HSE is a proxy for a firm's capacity to generate goods or services that satisfy society's utility, or happiness.
## 2 What is Fundamental about Sentiment?
As any search query could be chosen as a measure of demand pressure, it will at least be interesting, if not useful, to measure the demand pressure regarding abstract subjects. Fundamentally, search engines are used to find information or solutions to some needs. For example, query volume for 'jobs' has a straightforward economic explanation, whereas search queries like "Happiness" do not have a clear economic meaning. Usually, users cannot use a search engine to relieve their demand for abstract concepts; rather, such queries should be interpreted as general sentiment about those concepts. We can then intuitively interpret high or low sentiment for these concepts in the society's own context and check whether or not the sentiment proxies for certain economic activities. High sentiment for "Happiness", for instance, means there is an overall lack of happiness in the society. Therefore, search volume for "Happiness" could be regarded as demand for happiness.
The time series format of this search volume intensity enables us to measure the quantity of exposure to these sentiments using Ross (1976)'s method. [9] The question that should be addressed is whether or not this exposure (or Happiness Sentiment Exposure, HSE) can be considered a risk. This question can be settled by the empirical evidence presented later in this paper, but that alone does not give confidence at the theoretical level. Here we focus on the fact that, as customer utility is generated by happiness, the overall demand for happiness in the society influences the customer demand for the products that a company serves, and thus influences the fundamentals of the company. Therefore, we hypothesize that Happiness Sentiment Exposure, measured as the factor loading on the search volume intensity for "Happiness", is theoretically adequate to be regarded as a risk variable. We use the exposure to the search volume intensity of the topic 'happiness' provided by Google Trends to estimate the risk proxy for each stock. This risk proxy quantifies the tendency of the firm to be more related to society's utility supply.
## 3 Empirical Analysis
We use the cross-sectional analysis method of Fama and French (1992) [7] for identifying cross-sectional excess return patterns related to HSE. Although later studies identified a few more market anomalies, such as momentum, profitability, and investment, we control only for size and BE/ME, following the original framework.
### Data Preparation
Daily returns including dividends, daily market cap of common stocks, equity, and preferred stock data were collected from DataGuide. Return data is converted from daily to monthly frequency, and BE/ME at year \(t\) was calculated by dividing equity minus preferred stock by the market cap on the last trading day of December of year \(t\).
The model for extracting the exposure \(\beta_{SVI}\) is presented in equation 1.
\[R_{t}^{(i)}=\alpha+\beta_{SVI}\Delta{SVI_{t}}+\epsilon_{t} \tag{1}\]
where \(R_{t}^{(i)}\) is the monthly return of stock \(i\) at time \(t\) and \(\Delta{SVI_{t}}\) is the log difference of the monthly search volume intensity of the query related to "happiness" between time \(t\) and \(t-1\). Also, although we mentioned the model of Ross (1976), equation 1 does not include a market proxy, for a few reasons. First, as we assume that Happiness Sentiment Exposure is systematic but noisy, we were concerned that it would be misestimated in the presence of the market proxy, which has a stronger relationship with the returns and is a noise-free independent variable. Second, if the market proxy is added to the analysis, then the factor loading on Happiness Sentiment measures the relationship between Happiness Sentiment and abnormal returns, which is not this paper's topic. The returns are winsorized at the 99th and 1st percentile threshold values, and the estimation window is 72 months, but data lengths down to 24 months were also accepted.
For portfolio analysis and Fama-MacBeth regression, the returns were symmetrically trimmed by one observation each month to exclude extreme returns.
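The following minimal sketch (not the code used in this study; the input names are hypothetical) illustrates the estimation of equation 1: for each stock, monthly returns are winsorized at the 1st/99th percentiles and regressed by OLS on the log difference of the search volume intensity, using up to 72 months of data and requiring at least 24 observations.

```python
import numpy as np
import pandas as pd

def estimate_hse(returns: pd.DataFrame, svi: pd.Series,
                 window: int = 72, min_obs: int = 24) -> pd.Series:
    """OLS slope of each stock's monthly return on Delta SVI, as in eq. (1)."""
    d_svi = np.log(svi).diff().dropna()              # Delta SVI_t
    hse = {}
    for stock in returns.columns:
        r = returns[stock].dropna()
        lo, hi = r.quantile(0.01), r.quantile(0.99)  # winsorization thresholds
        r = r.clip(lo, hi)
        sample = pd.concat([r, d_svi], axis=1, join="inner").tail(window)
        if len(sample) < min_obs:
            continue
        y = sample.iloc[:, 0].to_numpy()
        x = sample.iloc[:, 1].to_numpy()
        X = np.column_stack([np.ones_like(x), x])    # [alpha, beta_SVI]
        _, beta = np.linalg.lstsq(X, y, rcond=None)[0]
        hse[stock] = beta
    return pd.Series(hse, name="HSE")
```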
### Preliminary
Here we address the local characteristics of the Korean stock market. While it is not thoroughly researched, the two main stock markets in Korea, the KOSPI market and KOSDAQ, seem unlikely to be stochastically segmented or equally valuated. Unlike on NYSE and NASDAQ, KOSDAQ stocks tend to transfer to KOSPI when possible, and the index value of KOSDAQ, which started at 1000 in 1997, has still not fully recovered its initial value, while the KOSPI market grew by more than 200%. Considering these facts, it is wise to choose one of the two markets for asset pricing research in order to avoid inconsistent results. We choose KOSPI, which has the larger market capitalization.
### Informal Tests
Here we first report portfolio analysis results to present non-parametric evidence and identify some discussion points. The univariate sort result is shown in Table 1.
Here we can see some expected and also some interesting results. First, as expected, we see a somewhat monotonic increase in average excess return as the HSE value increases. The t-values are not statistically significant (the highest decile being 1.67 standard deviations away from 0), but we can argue that this is because of noisy returns, not because of the true average return value. From the 8th decile onward, the annualized expected return is over 9.3%, and the annualized excess return difference between the 10th decile (Happy) and the 1st decile (Unhappy) is over 17%. We
did not report the equally weighted portfolio excess returns, but we note that for the equally weighted portfolios, the excess return of the high-decile portfolio is robust and statistically significant, while the monotonic increase of returns across higher deciles becomes less apparent. We assume that this is due to the size and value effects observed in the Korean stock market, i.e., the high deciles have low BE/ME and the low deciles are small in size, so the monotonic pattern is flattened.
We can also observe unusual results for the average stock market cap of each decile, which is non-linear. In fact, the middle deciles (around the 4th-7th) have the highest BE/ME and market cap. This leads to an interesting interpretation: low-HSE stocks and high-HSE stocks are both small-cap and low-value firms, while middle-HSE stocks are big-cap and high-value firms. According to previous studies arguing that KOSPI exchange stocks have little or a reversed size effect [6], the middle HSE portfolios should be the highest performing ones. However, the sorting result shows that HSE has its own market anomaly regardless of pre-existing anomalies. Because of the previously reported non-linear relationship between BE/ME or size and HSE, we also report the 5 by 5 size-HSE and BE/ME-HSE double sort results to show HSE's predictive power for excess returns when size or BE/ME is controlled. The specific method is the same as the beta-size sort of Fama & French (1992) [7]. Stocks in each size or BE/ME quintile portfolio are sorted again according to HSE quintile breakpoints. Here, we find that in the big-firm or high-value-firm subsamples, the result more strongly supports our hypothesis that
\begin{table}
\begin{tabular}{l|c c c c} \hline & **HSE** & **BE/ME(+)** & **Size** & **Excess Return** \\ \hline
**Unhappy** & -0.19 & 1.25 & 847.3 & -1.04 \\ & & & & (-1.81) \\
**2** & -0.11 & 1.76 & 1094.46 & 0.25 \\ & & & & (0.54) \\
**3** & -0.07 & 1.81 & 1464.26 & 0.26 \\ & & & & (0.62) \\
**4** & -0.05 & 1.94 & 1931.27 & 0.47 \\
**5** & -0.03 & 1.76 & 1839.11 & 0.37 \\ & & & & (0.78) \\
**6** & -0.01 & 1.75 & 1362.77 & -0.10 \\ & & & & (-0.23) \\
**7** & 0 & 1.71 & 2167.74 & 0.65 \\
**8** & 0.03 & 1.69 & 1576.52 & 0.27 \\
**9** & 0.05 & 1.47 & 1262.08 & 0.89 \\
**Happy** & 0.12 & 1.37 & 1446.29 & 0.86 \\ & & & & (1.84) \\
**Happy-Unhappy** & 0.31 & 0.12 & 598.99 & 0.90 \\ \hline \end{tabular}
\end{table}
Table 1: Univariate sort result of Happiness Sentiment Exposure (HSE). The portfolios are formed by sorting the stocks into cross-sectional deciles of HSE each year. We report the time-series averages of the average BE/ME and market capitalization (market cap) of the stocks in each portfolio, where market cap is reported in billions of Korean won. Portfolio returns are calculated by value-weighting the returns with market cap and subtracting the risk-free rate; t-statistics are reported in parentheses. We use the monthly investment return of the 1-year Korean Government treasury bill as the risk-free rate. The portfolio is formed in June of year \(t\) using HSE data from July of year \(t-4\) to June of year \(t\) (with at least 24 observations) and rebalanced in July of year \(t+1\).
HSE is a risk proxy, or at least a persisting anomaly unrelated to size or BE/ME.
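For reference, the following sketch (with hypothetical column names, not the original code) outlines the value-weighted HSE decile sort behind Table 1: stocks are assigned to HSE deciles each formation month and monthly value-weighted excess returns are averaged over time.

```python
import pandas as pd

def hse_decile_returns(df: pd.DataFrame) -> pd.Series:
    """Average value-weighted monthly excess return per HSE decile.

    `df` is assumed to have one row per (month, stock) with columns
    ["month", "ret", "mktcap", "hse", "rf"], where "hse" is the exposure
    estimated as of the most recent June.
    """
    df = df.copy()
    # decile 1 = "Unhappy" (lowest HSE), decile 10 = "Happy" (highest HSE)
    df["decile"] = (df.groupby("month")["hse"]
                      .transform(lambda x: pd.qcut(x, 10, labels=False) + 1))

    def vw_excess(g: pd.DataFrame) -> float:
        w = g["mktcap"] / g["mktcap"].sum()
        return float((w * g["ret"]).sum() - g["rf"].iloc[0])

    monthly = df.groupby(["month", "decile"]).apply(vw_excess)
    return monthly.groupby(level="decile").mean()
```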
### Regression Analysis
Table 4 shows the Fama-MacBeth regression results for several variables. The regression results provide some insights into HSE's predictability, and also into its conditional predictability when the other variables are controlled.
Here we can see that HSE does explain some cross-sectional returns in the univariate setting. The t-value of the HSE coefficient is statistically significant only by a margin, but considering that HSE is estimated with error, this result can be seen as robust. The bivariate regression of HSE and \(log(BE/ME)\) (more precisely \(log(BE/ME)_{i}^{+}\), HSE, and \(BE_{Dummy,i}\)) shows a substantial decrease in HSE's explanatory power.
An interesting observation emerges when we consider the relationship between HSE and \(log(ME)\). When comparing the 4th and 7th regressions, the significance of ME changes dramatically, from 2.18 standard deviations away from 0 to 3.17 standard deviations away from 0, with an increase in the risk premium as well. This is also shown in the 1st and 5th regressions. This result can be understood through the HSE-size sort result shown previously. If HSE can
\begin{table}
\begin{tabular}{l r r r r r r} \hline \hline
 & **Unhappy** & **2** & **3** & **4** & **Happy** & **Happy-Unhappy** \\ \hline
**Small** & 1.74 & 2.21 & 2.15 & 1.69 & 1.84 & 0.10 \\
 & (3.24) & (4.22) & (4.37) & (3.12) & (3.51) & (0.24) \\
**2** & 1.22 & 1.33 & 0.97 & 1.45 & 1.15 & -0.07 \\
 & (2.46) & (2.91) & (2.11) & (3.12) & (2.24) & (-0.24) \\
**3** & 0.71 & 0.63 & 0.88 & 1.12 & 0.90 & 0.20 \\
 & (1.41) & (1.37) & (1.88) & (2.34) & (1.82) & (0.73) \\
**4** & -0.03 & 0.53 & 0.86 & 0.73 & 0.93 & 0.96 \\
 & (-0.05) & (1.12) & (2.01) & (1.81) & (1.94) & (3.12) \\
**Big** & 0.09 & 0.45 & 0.91 & 0.64 & 0.85 & 0.76 \\
 & (0.18) & (1.01) & (2.28) & (1.59) & (2.07) & (2.23) \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Sort result of stocks in KOSPI with quintiles of size and HSE. For each size quintile, HSE quintile breakpoints are used to form HSE quintile portfolios within the same size quintile; t-statistics are reported in parentheses below each average excess return. Size and HSE are observed in June of year \(t\). The portfolios are formed and rebalanced on the last day of June of year \(t\).
\begin{table}
\begin{tabular}{l r r r r r r} \hline \hline
 & **Unhappy** & **2** & **3** & **4** & **Happy** & **Happy-Unhappy** \\ \hline
**Low** & 0.16 & 0.54 & 0.34 & 0.57 & 0.57 & 0.41 \\
 & (0.31) & (1.23) & (0.78) & (1.26) & (1.22) & (1.18) \\
**2** & 0.69 & 0.79 & 0.95 & 0.88 & 0.73 & 0.03 \\
 & (1.48) & (1.77) & (2.07) & (1.98) & (1.59) & (0.10) \\
**3** & 1.11 & 1.24 & 1.17 & 0.91 & 1.44 & 0.32 \\
 & (2.30) & (2.80) & (2.80) & (1.89) & (2.89) & (1.09) \\
**4** & 1.09 & 1.28 & 1.48 & 1.52 & 1.44 & 0.36 \\
 & (2.28) & (2.71) & (3.08) & (3.25) & (2.93) & (1.25) \\
**High** & 1.20 & 1.39 & 1.36 & 1.84 & 1.66 & 0.46 \\
 & (2.51) & (3.05) & (3.05) & (3.80) & (3.28) & (1.51) \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Sort result of stocks in KOSPI with quintiles of BE/ME and HSE. For each BE/ME quintile, HSE quintile breakpoints are used to form HSE quintile portfolios within the same BE/ME quintile; t-statistics are reported in parentheses below each average excess return. Size and HSE are observed in June of year \(t\). The portfolios are formed in June of year \(t\) and rebalanced in June of year \(t+1\).
explain big firms' abnormally high performance in the Korean stock market [6], the size effect becomes more apparent. In contrast, HSE does not seem to sufficiently explain high-value firms' performance, though the risk premium for BE/ME lessens. The regression that includes all variables shows that HSE does have explanatory power when size and BE/ME are controlled.
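The following sketch (with hypothetical column names, not the original code) outlines the Fama-MacBeth procedure behind Table 4 (and, on subsamples, Tables 5 and 6): a cross-sectional OLS regression is run each month, and the time-series averages and t-values of the coefficients are reported.

```python
import numpy as np
import pandas as pd

def fama_macbeth(panel: pd.DataFrame, y_col: str, x_cols: list) -> pd.DataFrame:
    """Monthly cross-sectional OLS, then time-series means and t-statistics."""
    rows = []
    for _, month_df in panel.groupby("month"):
        month_df = month_df.dropna(subset=[y_col] + x_cols)
        if len(month_df) <= len(x_cols) + 1:
            continue
        X = np.column_stack([np.ones(len(month_df))]
                            + [month_df[c].to_numpy() for c in x_cols])
        y = month_df[y_col].to_numpy()
        rows.append(np.linalg.lstsq(X, y, rcond=None)[0])
    coefs = np.array(rows)                    # shape: (#months, 1 + #regressors)
    mean = coefs.mean(axis=0)
    se = coefs.std(axis=0, ddof=1) / np.sqrt(len(coefs))
    return pd.DataFrame({"coef": mean, "t": mean / se}, index=["a"] + x_cols)

# e.g. the full specification of Table 4 (column names are hypothetical):
# fama_macbeth(panel, "excess_ret", ["log_ME", "log_BEME_plus", "BE_dummy", "HSE"])
```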
\begin{table}
\begin{tabular}{l l l l l} \hline
**a** & **s** & **h** & \(h_{dummy}\) & **hs** \\ \hline
3.69 & -0.22 & & & \\
3.49 & -2.79 & & & \\
0.77 & & 0.70 & -2.22 & \\
1.84 & & 6.75 & -2.43 & \\
0.88 & & & & 2.20 \\
2.13 & & & & 2.01 \\
2.83 & -0.17 & 0.60 & -2.47 & \\
0.92 & -2.18 & 5.34 & -2.77 & \\
4.19 & -0.26 & & & 2.08 \\
4.05 & -3.94 & & & 1.97 \\
0.77 & & 0.45 & -2.61 & 2.02 \\
1.90 & & 3.75 & -1.92 & 1.82 \\
3.52 & -0.21 & 0.32 & -2.83 & 2.12 \\
3.41 & -3.17 & 2.68 & -2.07 & 2.00 \\ \hline \end{tabular}
\end{table}
Table 4: Each month, we conducted the following regression.
\(R_{i}-RF=a+s\cdot log(ME_{i})+h\cdot log(BE/ME)_{i}^{+}+h_{dummy}\cdot BE_{Dummy,i}+hs\cdot HSE_{i}+\epsilon_{i}\).
\(log(BE/ME)_{i}^{+}\) is the BE/ME value in which observations with BE less than 0 are filled in, and \(BE_{Dummy,i}\) is 1 when BE is less than 0 and 0 otherwise. We then calculated the time-series average and its t-value for each coefficient and the intercept; each pair of rows in the table reports the average coefficient and, below it, its t-value. Returns are observed from July of year \(t\) to June of year \(t+1\), BE/ME is observed in December of year \(t-1\), and HSE and ME are observed in June of year \(t\).
\begin{table}
\begin{tabular}{l l l l l l} \hline
**Sub Sample** & **a** & **s** & **h** & \(h_{dummy}\) & **hs** \\ \hline Big & 11.37 & -0.74 & 0.10 & -0.94 & 3.58 \\ & (8.08) & (-8.13) & (0.71) & (-0.60) & (2.42) \\ Small & 23.40 & -2.06 & 0.69 & -4.23 & 2.08 \\ & (13.06) & (-13.45) & (4.96) & (-4.21) & (1.46) \\ \hline \end{tabular}
\end{table}
Table 5: The big- and small-firm subsamples are determined by the yearly median of size (the log value of market cap). If the firm's size at year \(t\) is bigger than the size median at time \(t\), it is classified as a big firm, and vice versa. Each month, we conducted the regression of Table 4 within each subsample.
## 4 Conclusion
Here we conclude the paper by exploring alternative interpretations of the results and applications of the results for investors.
### What is HSE?
Here, we need to consider interpretations of HSE other than as a proxy of demand pressure sensitivity for economic demand. We can hypothesize two theories. First, HSE could be some sort of behavioral error predictor driven by sentiment, such that the predictor predicts short-term errors or long-term market correction dynamics. The second theory is that HSE is fundamentally related to the individual firm, capturing the company's ability to satisfy society's utility demand. As big firms make up most of the market and economic activity, society's happiness demand will be more dependent on big firms than on small firms. The same logic can be applied to value firms as well. The first hypothesis is questionable for many reasons. The most prominent one is that if that theory were correct, the coefficient should be negative, not positive. Even if the behavioral error cannot be adjusted immediately, the phenomenon is unlikely to persist for a year so as to make the coefficient positive. The big- and small-firm subsample and the value- and growth-firm subsample Fama-MacBeth regressions support the latter hypothesis.
Here we can see that there is a nonlinear relationship between size or BE/ME and HSE, where HSE explains big and value firms' cross-sectional return differences better than those of small and growth firms. This is supported by the fact that most of society's utility demand is served by big firms and mature industries.
### Application
As HSE shows explanatory power for big and value firms, this has substantial implications for institutional investors. Since institutional investors have outperformance demands and liquidity constraints, they are drawn to big and value firms. By applying HSE-related strategies, institutional investors can seek outperformance while maintaining their investment portfolio style.
\begin{table}
\begin{tabular}{l l l l l l} \hline
**Sub Sample** & **a** & **s** & **h** & \(h_{dummy}\) & **hs** \\ \hline Value & 5.58 & -0.37 & -0.06 & 0.00 & 3.94 \\ & 4.68 & -4.35 & -0.33 & 1.48 & 2.58 \\ Growth & 2.24 & -0.11 & 0.40 & 0.00 & 0.96 \\ & 1.97 & -1.49 & 2.43 & -0.89 & 0.79 \\ \hline \end{tabular}
\end{table}
Table 6: The value- and growth-firm subsamples are determined by the yearly median of BE/ME (including negative BE/ME). If the firm's BE/ME at year \(t\) is bigger than the BE/ME median at time \(t\), it is classified as a value firm, and vice versa. Each month, we conducted the following regression. \(R_{i}-RF=a+s\cdot log(ME_{i})+h\cdot log(BE/ME)_{i}^{+}+h_{dummy}\cdot BE_{Dummy,i}+hs\cdot HSE_{i}+\epsilon_{i}\). \(log(BE/ME)_{i}^{+}\) is the BE/ME value in which observations with BE less than 0 are filled in, and \(BE_{Dummy,i}\) is 1 when BE is less than 0 and 0 otherwise. We then calculated the time-series average and its t-value for each coefficient and the intercept. Returns are observed from July of year \(t\) to June of year \(t+1\), BE/ME is observed in December of year \(t-1\), and HSE and ME are observed in June of year \(t\).
Because of technical issues, the current HSE based on Google Trends data is unstable and has some estimation error. To apply it to real-world problems, there should be a more robust way of estimating HSE. From an academic perspective, this research has shown that Google Trends can be more than a sentiment proxy: when chosen wisely, some search topics can be related to risks that are priced.
|
2302.12675 | Integrable Quantum Circuits from the Star-Triangle Relation | The star-triangle relation plays an important role in the realm of exactly
solvable models, offering exact results for classical two-dimensional
statistical mechanical models. In this article, we construct integrable quantum
circuits using the star-triangle relation. Our construction relies on families
of mutually commuting two-parameter transfer matrices for statistical
mechanical models solved by the star-triangle relation, and differs from
previously known constructions based on Yang-Baxter integrable vertex models.
At special value of the spectral parameter, the transfer matrices are mapped
into integrable quantum circuits, for which infinite families of local
conserved charges can be derived. We demonstrate the construction by giving two
examples of circuits acting on a chain of $Q-$state qudits: $Q$-state Potts
circuits, whose integrability has been conjectured recently by Lotkov et al.,
and $\mathbb{Z}_Q$ circuits, which are novel to our knowledge. In the first
example, we present for $Q=3$ a connection to the Zamolodchikov-Fateev
19-vertex model. | Yuan Miao, Eric Vernier | 2023-02-24T14:55:52Z | http://arxiv.org/abs/2302.12675v4 | # Integrable Quantum Circuits from the Star-Triangle Relation
###### Abstract
The star-triangle relation plays an important role in the realm of exactly solvable models, offering exact results for classical two-dimensional statistical mechanical models. In this article, we construct integrable quantum circuits using the star-triangle relation. Our construction relies on families of mutually commuting two-parameter transfer matrices for statistical mechanical models solved by the star-triangle relation, and differs from previously known constructions based on Yang-Baxter integrable vertex models. At special value of the spectral parameter, the transfer matrices are mapped into integrable quantum circuits, for which infinite families of local conserved charges can be derived. We demonstrate the construction by giving two examples of circuits acting on a chain of \(Q-\)state qudits: \(Q\)-state Potts circuits, whose integrability has been conjectured recently by Lotkov et al., and \(\mathbb{Z}_{Q}\) circuits, which are novel to our knowledge. In the first example, we present for \(Q=3\) a connection to the Zamolodchikov-Fateev 19-vertex model.
## I Introduction
Quantum circuits, built from a sequence of local operations acting on a system of qubits (or, more generally, qudits), have attracted an increasing interest over the past few years. First, they furnish a new playground for the investigation of many-body quantum physics, in particular for the study of out-of-equilibrium phenomena [1; 2; 3; 4; 5]. Second, they can be implemented in a quantum computer and form the building blocks of digital quantum simulation [6; 7]. They can also be used to generate periodically-driven (Floquet) many-body systems, leading to exotic new phases of matter [8; 9; 10].
For many-body systems governed by continuous Hamiltonian evolution, the existence of integrable models has proven an invaluable tool in order to study physical properties both at equilibrium [11; 12; 13; 14; 15], and out-of-equilibrium [16]. Quantum integrability usually refers to one-dimensional quantum Hamiltonians related to exactly solvable two-dimensional statistical mechanical models through the transfer matrix formalism and the Yang-Baxter equation, whose spectrum or correlation functions can typically be calculated exactly using tools such as the Bethe ansatz [13; 14]. Beyond the possibility of exact results that it offers, integrability also comes with rich physical consequences. The existence of an extensive number of conserved quantities in integrable models constrains their late-time relaxation, yielding new equilibrium states known as Generalized Gibbs Ensembles [17; 18; 19]. For inhomogeneous systems integrability also constrains the transport properties, leading to Generalized Hydrodynamics [20; 21]. It has therefore quickly become a natural question, whether one could similarly construct and study integrable models of quantum circuits, corresponding to dynamical models for one-dimensional quantum systems with discrete space and time.
It has long been known how to adapt the transfer matrix-mediated correspondence between integrable two-dimensional vertex models and quantum Hamiltonians to a circuit-like geometry [22; 23; 24; 25], in relation with the lattice regularisation of (1+1)-dimensional integrable quantum field theories. In the recent years this fact has been used to construct integrable Floquet dynamics [26; 27; 28; 29], and recently the effect of integrability on the late-time relaxation of digital quantum simulations has also been investigated [30]. However, a systematic understanding of the condition
when quantum circuits can be solved using quantum integrability is still missing. It is worth noting that most of the exact results obtained lately in fact concern quantum circuits which are solvable while escaping the traditional framework of Yang-Baxter integrability, namely, random [1, 2, 3] and dual-unitary circuits [4, 5]. There are also other examples on how to use quantum circuits to study quantum integrability that are different from our approach, see [31, 32, 33, 34, 35, 36].
In this work, we describe the construction of integrable quantum circuits based on \(Q\)-states spins with \(\mathbb{Z}_{Q}\) symmetry. Those arise as generalizations of the Ising model (corresponding to \(Q=2\)), and can be realized with Rydberg atoms [37, 38]. Furthermore, they have very rich physical properties, relating to quantum phase transitions and parafermions [39, 40, 41]. Our construction uses a framework analogous to that of [23], namely inhomogeneous transfer matrices are used to generate a circuit-like dynamics, however in contrast with previous constructions the primary role for integrability is played here, rather than the Yang-Baxter equation, by the closely related Star-Triangle Relation (STR) [42, 12, 43]. Using known solutions of the star-triangle relation for \(Q\)-state spins, we construct two-parameter families of mutually commuting transfer matrices acting on a chain of \(L\) spins. At some special value of their parameters the transfer matrices become the generator of the circuit dynamics, while varying the parameters around their special value allows to construct local charges which are conserved by the dynamics.
In practice, we focus in this work on two families of \(Q\)-states circuits, associated with two families of solutions of the STR: the so-called Potts circuits, whose integrability was conjectured in [44] (and for which the first few conserved charges were constructed by hand), and the so-called \(\mathbb{Z}_{Q}\) circuits. The constructed circuits are in general interacting yet solvable, as guaranteed by the STR, and therefore go beyond some known results for driven Ising models that are solved using free fermionic techniques [45, 46, 47, 48]. We would like to emphasize that, while most of this work is concerned with some particular \(Q\)-states models, our procedure works in principle for any solution of the Star-Triangle relation, and could be used to construct more generic integrable quantum circuits.
The paper is organized as follows. In Section II, we present some generic properties of the \(Q\)-states quantum circuits constructed in this work, and how they can be seen as emerging from the stroboscopic evolution of periodically driven (Floquet) systems. In Section III, we present a generic procedure to construct quantum circuits from two-dimensional statistical mechanics model satisfying the Star-Triangle Relation. While this construction is not specific to \(Q\)-states systems and could in principle be applied more generically, in the rest of the paper we specify again to \(Q\)-states systems and construct two families of integrable quantum circuits. The first family, studied in Section IV, is that of \(Q\)-states Potts circuits, where the \(\mathbb{Z}_{Q}\) symmetry is enhanced to the symmetric group \(S_{Q}\). We construct integrable circuits from previously known \(S_{Q}\)-symmetric solutions of the star-triangle relation [43], and express the discrete time evolution operator as well as the conserved charges in terms of generators of the affine Temperley-Lieb algebra [49]. The resulting dynamics is unitary, and can be thought of as the Floquet dynamics of a quantum Potts Hamiltonian. It recovers the circuit considered in [44], and we also point out an interesting connection with the Zamolodchikov-Fateev 19-vertex model [50] and the Onsager algebra [51]. The second family of models, which is the object of Section V, is based on \(\mathbb{Z}_{Q}\)-symmetric solutions of the star-triangle relation [52]. For \(Q=3\), the resulting circuit coincides with the \(S_{3}\)-symmetric circuit of the first family. For general \(Q>3\) however the constructed models differ from the previous ones, in particular they are not unitary. For \(Q=4\), in particular, a relation is found with the critical Ashkin-Teller model [53, 54, 55].
\(Q\)-states quantum circuits
Before discussing the general framework for constructing integrable quantum circuits through the STR, which will be presented in Section III, we start with a brief overview of the \(Q\)-states circuits which will be constructed from explicit solutions in Sections IV and V.
One way to view those circuits is as stroboscopic (Floquet) evolution operators, motivated by the known results on periodically driven Ising models [45; 46; 47; 48]. Such circuits were solved exactly by free fermionic techniques, and we consider in this work more generic cases which are intrinsically interacting. We therefore consider a chain of \(L\) consecutive \(Q\)-level spins ("qudits"), where \(Q\) is some integer \(\geq 2\). The total Hilbert space is the tensor product of the local \(Q\)-level spins, i.e. \((\mathbb{C}^{Q})^{\otimes L}\). The quantum circuits that we study in this paper can be seen as a stroboscopic (Floquet) evolution of time-dependent quantum Hamiltonian \(\mathbf{H}(t)\) such that
\[\mathbf{H}(t)=\left\{\begin{aligned} &\mathbf{H}_{2},\quad 0\leq t< \tau,\\ &\mathbf{H}_{1},\quad\tau\leq t<2\tau,\end{aligned}\right. \tag{1}\]
which is periodic in time, i.e. \(\mathbf{H}(t+2n\tau)=\mathbf{H}(t)\), \(n\in\mathbb{Z}\). Furthermore, we assume that two parts \(\mathbf{H}_{1}\) and \(\mathbf{H}_{2}\) consist of terms acting on one or two consecutive sites of the \(Q\)-level spins respectively,
\[\mathbf{H}_{1}=\sum_{m=1}^{L}\mathbf{h}_{m}^{(1)},\quad\mathbf{H}_{2}=\sum_{m =1}^{L}\mathbf{h}_{m,m+1}^{(2)}. \tag{2}\]
Periodic boundary condition is used here (\(\mathbf{h}_{L,L+1}^{(2)}=\mathbf{h}_{L,1}^{(2)}\)). We also assume that
\[\left[\mathbf{h}_{m}^{(1)},\mathbf{h}_{n}^{(1)}\right]=0,\quad\left[\mathbf{h }_{m,m+1}^{(2)},\mathbf{h}_{n,n+1}^{(2)}\right]=0,\quad\forall m,n. \tag{3}\]
In this case, the Floquet evolution operator \(\mathbf{U}_{\mathrm{F}}(\tau)=\mathcal{P}\exp[-\mathrm{i}\int_{0}^{2\tau}\mathrm{d}t\,\mathbf{H}(t)]\), describing the stroboscopic time evolution of the time-dependent Hamiltonian \(\mathbf{H}(t)\)1, becomes
Footnote 1: We can equivalently use a “kicked” time dependent Hamiltonian that gives the same stroboscopic time evolution. This will not change the quantum circuits that we study.
\[\mathbf{U}_{\mathrm{F}}(\tau)=\exp\left(-\mathrm{i}\mathbf{H}_{1}\tau\right) \exp\left(-\mathrm{i}\mathbf{H}_{2}\tau\right)=\mathbf{U}_{1}\mathbf{U}_{2}, \tag{4}\]
Hence we can rewrite the stroboscopic time evolution \(\mathbf{U}_{\mathrm{F}}^{M}(\tau)\), for an integer \(M\in\mathbb{Z}_{>0}\), as a quantum circuit, as shown in Fig. 1. In particular, the stroboscopic time evolution of the kicked Ising model [44; 27; 48] is of this type.
Moreover, we would like to concentrate on models with a \(\mathbb{Z}_{Q}\) "clock" symmetry, which generalizes the \(\mathbb{Z}_{2}\) symmetry of the Ising model and connects with a number of interesting physical realizations [37; 38; 39; 40; 41]. For this sake we introduce the local operators \(\mathbf{X}_{m}\), \(\mathbf{Z}_{m}\) satisfying the following algebra
\[\mathbf{X}_{m}^{\dagger}=\mathbf{X}_{m}^{Q-1},\mathbf{Z}_{m}^{\dagger}= \mathbf{Z}_{m}^{Q-1}\quad\mathbf{X}_{m}^{Q}=\mathbf{Z}_{m}^{Q}=\mathbb{1}, \quad\mathbf{X}_{m}\mathbf{Z}_{m}=\omega\mathbf{Z}_{m}\mathbf{X}_{m}\,, \tag{5}\]
where \(\omega=\exp\left(\frac{2\mathrm{i}\pi}{Q}\right)\) is the \(Q\)-th root of unity, while operators acting on different spins commute: \(\mathbf{X}_{m}\mathbf{Z}_{n}=\mathbf{Z}_{n}\mathbf{X}_{m}\) for \(m\neq n\) (see Eq. (50) for an explicit representation). Requiring the assumption (3), we focus on the cases where the Floquet evolution operator \(\mathbf{U}_{\mathrm{F}}=\mathbf{U}_{1}\mathbf{U}_{2}\) decomposes as
\[\mathbf{U}_{1} =\prod_{j=1}^{L}\left(\sum_{a=0}^{Q-1}u_{a}(\mathbf{X}_{j})^{a}\right) \tag{6}\] \[\mathbf{U}_{2} =\prod_{j=1}^{L}\left(\sum_{a=0}^{Q-1}v_{a}(\mathbf{Z}_{j}^{ \dagger}\mathbf{Z}_{j+1})^{a}\right)\,.\]
Written in the above form, the evolution generators \(\mathbf{U}_{1}\) and \(\mathbf{U}_{2}\) are manifestly \(\mathbb{Z}_{Q}\)-symmetric, namely invariant under the operation \(\mathbf{Z}_{j}\rightarrow\omega\mathbf{Z}_{j}\), \(\mathbf{X}_{j}\rightarrow\mathbf{X}_{j}\) applied simultaneously on all spins. Moreover, in all examples considered in the following they will turn out to enjoy another symmetry encoded in the fact that \(u_{Q-a}=u_{a}\) and \(v_{Q-a}=v_{a}\) for all \(a\), namely they are invariant under the charge conjugation operation \(\mathbf{Z}_{j}\leftrightarrow\mathbf{Z}_{j}^{\dagger}\), \(\mathbf{X}_{j}\leftrightarrow\mathbf{X}_{j}^{\dagger}\). For \(Q=3\), the \(\mathbb{Z}_{3}\) symmetry and charge conjugation together generate a \(S_{3}\) symmetry group. For \(Q\geq 4\) the \(\mathbb{Z}_{Q}\) (\(+\) charge conjugation) and \(S_{Q}\) symmetries cease to be equivalent, and we will consider both types of models, invariant under the \(S_{Q}\) and \(\mathbb{Z}_{Q}\) symmetry respectively.
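As a concrete illustration, the following minimal numerical sketch (ours, not part of the original construction) builds the clock operators of Eq. (5) for \(Q=3\) on a short chain in the standard representation (written out explicitly in Eq. (50) below), assembles \(\mathbf{U}_{1}\) and \(\mathbf{U}_{2}\) of the form (6) for an arbitrary choice of coefficients \(u_{a}\), \(v_{a}\), and verifies the \(\mathbb{Z}_{Q}\) symmetry, i.e. that both commute with the global generator \(\prod_{j}\mathbf{X}_{j}\).

```python
import numpy as np
from functools import reduce

Q, L = 3, 4
omega = np.exp(2j * np.pi / Q)
X1 = np.roll(np.eye(Q), -1, axis=0)      # clock shift: X|k> = |k-1 mod Q>, so X Z = w Z X
Z1 = np.diag(omega ** np.arange(Q))      # clock phase

def site_op(op, j):
    """Embed a single-site operator at site j (0-based) in the L-site chain."""
    ops = [np.eye(Q, dtype=complex)] * L
    ops[j] = op
    return reduce(np.kron, ops)

X = [site_op(X1, j) for j in range(L)]
Z = [site_op(Z1, j) for j in range(L)]

# arbitrary coefficients u_a, v_a (a = 0, ..., Q-1); integrability requires
# special choices, but the Z_Q symmetry checked below holds for any of them
u = np.array([0.7, 0.2 + 0.1j, 0.2 + 0.1j])
v = np.array([0.5, 0.3 - 0.2j, 0.3 - 0.2j])

U1 = reduce(np.matmul, [sum(u[a] * np.linalg.matrix_power(X[j], a) for a in range(Q))
                        for j in range(L)])
U2 = reduce(np.matmul, [sum(v[a] * np.linalg.matrix_power(Z[j].conj().T @ Z[(j + 1) % L], a)
                            for a in range(Q)) for j in range(L)])

G_ZQ = reduce(np.matmul, X)              # global Z_Q generator, prod_j X_j
for U in (U1, U2):
    assert np.allclose(U @ G_ZQ, G_ZQ @ U)
```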
**Remarks.** For generic choices of the parameters \(u_{a}\) and \(v_{a}\), the resulting quantum circuits are not integrable (or exactly solvable). As we shall explain in the latter sections, certain choices of the parameters \(u_{a}\) and \(v_{a}\) will lead to the integrable quantum circuits that commute with transfer matrices. One notable example is when \(u_{a}=v_{b}\) for arbitrary \(a,b\in\mathbb{Z}_{Q}\), which has been conjectured in [44]. We shall prove the conjecture in Sec. IV and provide a different example in Sec. V. Another crucial remark is about the unitarity of the Floquet evolution operator \(\mathbf{U}_{\mathrm{F}}\) (or subsequently the operators \(\mathbf{U}_{1}\) and \(\mathbf{U}_{2}\)). In fact, arbitrary choices of the parameters \(u_{a}\) and \(v_{a}\) will not lead to a unitary time evolution. An exception occurs with the Potts circuits explained in Sec. IV, cf. (53).
## III Two-parameter transfer matrices from the star-triangle relation
### The Star-triangle relation
The star-triangle relation (STR) [42; 43; 12] is a powerful tool to solve 2-dimensional statistical mechanical models exactly. Several renowned statistical mechanical models can be solved by the STR, such as classical Ising model, classical (chiral) Potts models on a square lattice, etc...
Figure 1: Generic structure of the circuits considered in this paper. The discrete time evolution is comprised of two steps, \(\mathbf{U}_{1}\) which is the product of local one-site operations, and \(\mathbf{U}_{2}\) which is the product of two-site gates. The two-site gates commute with one another and can be multiplied in arbitrary order. However, the two steps do not commute with each other, hence generating a non-trivial dynamics.
Generically, the star-triangle relation is defined for a statistical model of "heights", or "spins" taking values in some set \(\mathcal{S}\subset\mathbb{Z}\). For the moment we do not need to specify further the nature of \(\mathcal{S}\), but turning to explicit solutions of the star-triangle relation in Sections IV and V, it will taken to be \(\{1,\ldots Q\}\), with \(Q\) some positive integer (in other terms the heights are defined modulo \(Q\)). The heights sit at the vertices of a two-dimensional lattice and the weight of a given height configuration is the product over all edges of a function \(K(\theta;i,j)\) of the adjacent heights \(i,j\), where \(\theta\in\mathbb{C}\) is an additional parameter called spectral parameter. The star-triangle relation then reads [12]
\[\begin{split}\sum_{m\in\mathcal{S}}K(\theta_{1};i,m)K(\theta_{2}; j,m)K(\theta_{3};k,m)\\ =f(\theta_{1},\theta_{2},\theta_{3})K(\pi-\theta_{1};j,k)K(\pi- \theta_{2};k,i)K(\pi-\theta_{3};i,j),\\ \theta_{1}+\theta_{2}+\theta_{3}=\pi\,,\end{split} \tag{7}\]
where \(f(\theta_{1},\theta_{2},\theta_{3})\) is some normalization function which does not depend on the heights \(i,j,k\). A pictorial illustration of (7) is given in Fig. 2.
In the following we will assume that the function \(K(\theta;i,j)\) satisfies the following additional properties:
\[K(\theta;i,j)=K(\theta;i-j)=K(\theta;j-i)\,, \tag{8}\]
While there exist solutions of the star-triangle relation which do not satisfy Eq. (8), the latter is satisfied in many cases of physical relevance, and in particular by the solutions considered in this work. Furthermore, all solutions of the star-triangle relation considered in this work allow for two special values of the spectral parameter, \(\theta=0,\pi\), for which the function \(K(\theta;i,j)\) takes a particularly simple form:
\[K(0;i,j)=\delta_{i,j}\,,\qquad K(\pi;i,j)=\kappa\,,\qquad\forall i,j\in \mathcal{S}\,, \tag{9}\]
where the parameter \(\kappa\) entering the second equation is independent of the indices \(i,j\).
### Two-parameter transfer matrices
From the star-triangle relation (7), we can construct a set of mutually commuting transfer matrices, which can conveniently be recast as the row-to-row transfer matrices of a vertex model.
Figure 2: Graphical illustration of the star-triangle relation (7).
To achieve this, we follow the route of [56]. We start by grouping the interactions along the edges surrounding a given "plaquette" into the following R matrix (see Figure 3)
\[\mathbf{R}_{ab}(\lambda,\mu,\phi)=\sum_{i,j,k,l\in\mathcal{S}}K(\lambda;i,k)K( \pi-\lambda-\phi;k,j)K(\mu;j,l)K(\pi-\mu+\phi;l,i)\mathbf{E}_{a}^{i,j}\otimes \mathbf{E}_{b}^{k,l}\,, \tag{10}\]
where the Kronecker matrices \(\mathbf{E}_{a}^{i,j}\), \(\mathbf{E}_{b}^{k,l}\) act in vector spaces \(a\) and \(b\) whose basis states are indexed by the states in \(\mathcal{S}\).
As detailed in App. A.1, it can be shown using the star-triangle relation that the R matrix obeys the Yang-Baxter equation
\[\mathbf{R}_{ab}(\lambda_{12},\mu_{12},\phi^{\prime})\mathbf{R}_{ac}(\lambda_{ 1},\mu_{1},\phi)\mathbf{R}_{bc}(\lambda_{2},\mu_{2},\phi)=\mathbf{R}_{bc}( \lambda_{2},\mu_{2},\phi)\mathbf{R}_{ac}(\lambda_{1},\mu_{1},\phi)\mathbf{R}_ {ab}(\lambda_{12},\mu_{12},\phi^{\prime})\,, \tag{11}\]
where
\[\lambda_{12}=\lambda_{1}-\lambda_{2},\quad\mu_{12}=\mu_{1}-\mu_{2},\quad\phi^{ \prime}=\phi+\lambda_{1}-\mu_{1}. \tag{12}\]
The pictorial interpretation of the Yang-Baxter equation in terms of plaquettes is given in Fig. 4.
Figure 3: Pictorial illustration of the R matrix of Eq. (10).
Using the R matrix, we can group the weights of all plaquettes along a horizontal row of the rotated square lattice into the following matrix product operator called transfer matrix
\[\mathbf{T}(\lambda,\mu,\phi,\{\zeta_{j}\})=\mathbf{T}\mathbf{r}_{a}\left[\prod_ {j=1}^{L}\mathbf{R}_{aj}(\lambda-\zeta_{j},\mu-\zeta_{j},\phi)\right], \tag{13}\]
where the trace \(\mathbf{T}\mathbf{r}_{a}\) follows from the choice of periodic boundary conditions in the horizontal direction, and where \(\{\zeta_{j}\}\) are arbitrary spectral parameters, which can generically be taken to be inhomogeneous. In the literature, one usually considers the case with \(\phi=0\) and inhomogeneities \(\zeta_{j}=0\), which has been used as the transfer matrix for the quantum Potts chain or clock Hamiltonians [56]. In contrast, in the present case, we will need the parameter \(\phi\neq 0\) to establish a connection with integrable quantum circuits. The transfer matrix is depicted pictorially in Fig. 5, where our convention is that it transfers the heights of the top row to the bottom row.
From the Yang-Baxter equation (11), it can be shown that the transfer matrices with the same \(\phi\) and inhomogeneities \(\{\zeta_{j}\}\) but different horizontal spectral parameters \(\lambda,\mu\) commute :
\[[\mathbf{T}(\lambda_{1},\mu_{1},\phi,\{\zeta_{j}\}),\mathbf{T}(\lambda_{2}, \mu_{2},\phi,\{\zeta_{j}\})]=0,\quad\lambda_{1},\lambda_{2},\mu_{1},\mu_{2} \in\mathbb{C}\,. \tag{14}\]
Therefore, we will often call these "two-parameter transfer matrices", meaning that for a given model \(\phi\) and \(\{\zeta_{j}\}\) are fixed while \(\lambda\) and \(\mu\) are allowed to vary. In the remainder of the article, we will focus on the homogeneous case where all the \(\zeta_{j}\to 0\), and will therefore omit the latter from our notations.
From the star-triangle relation (7), the two-parameter transfer matrix satisfies a "self-duality" relation, i.e.
\[\mathbf{T}(\lambda,\mu,\phi)=\mathbf{T}(\mu-\phi,\lambda+\phi,\phi). \tag{15}\]
A diagrammatic derivation of the self-dual relation is demonstrated in Fig. 8 in App. A.2.
In addition, considering the product of two transfer matrices, and applying the star-triangle relation (7), we have
\[\mathbf{T}(\lambda_{1},\mu_{1},\phi)\mathbf{T}(\lambda_{2},\mu_{2},\phi)= \mathbf{T}(\mu_{2}-\phi,\mu_{1},\phi)\mathbf{T}(\lambda_{1},\lambda_{2}+\phi, \phi). \tag{16}\]
The proof is analogous to the "self-dual" property and the diagrammatic demonstration is shown in Fig. 9 in App. A.3.
Combining with the "self-duality" of the transfer matrix (15), we show the factorisation of the two-parameter transfer matrix,
\[\begin{split}\mathbf{T}(\lambda_{1},\mu_{1},\phi)\mathbf{T}( \lambda_{2},\mu_{2},\phi)&=\mathbf{T}(\mu_{2}-\phi,\mu_{1},\phi) \mathbf{T}(\lambda_{1},\lambda_{2}+\phi,\phi)\\ &=\mathbf{T}(\mu_{1}-\phi,\mu_{2},\phi)\mathbf{T}(\lambda_{2}, \lambda_{1}+\phi,\phi)\\ &=\mathbf{T}(\lambda_{1},\mu_{2},\phi)\mathbf{T}(\lambda_{2},\mu_ {1},\phi).\end{split} \tag{17}\]
Therefore, we define two operators \(\mathbf{Q}(\lambda)\) and \(\mathbf{P}(\mu)\), such that
\[\mathbf{Q}(\lambda)=\mathbf{T}(\lambda,0,\phi),\quad\mathbf{P}(\mu)=\mathbf{ T}(0,\mu,\phi)\mathbf{T}^{-1}(0,0,\phi). \tag{18}\]
We have assumed that \(\mathbf{T}(0,0)\) is invertible, which is the case for the examples below. The two operators commute, i.e.
\[[\mathbf{Q}(\lambda),\mathbf{Q}(\mu)]=[\mathbf{P}(\lambda),\mathbf{P}(\mu)]=[ \mathbf{Q}(\lambda),\mathbf{P}(\mu)]=0,\quad\forall\lambda,\mu\in\mathbb{C}. \tag{19}\]
In this way, the two-parameter transfer matrix is factorised into two parts,
\[\mathbf{T}(\lambda,\mu,\phi)=\mathbf{Q}(\lambda)\mathbf{P}(\mu), \tag{20}\]
by using the factorisation property (17).
In the meantime, the self-duality implies
\[\mathbf{T}(\lambda,\mu,\phi)=\mathbf{Q}(\mu-\phi)\mathbf{P}(\lambda+\phi). \tag{21}\]
We notice the resemblance to the two-parameter transfer matrix of the 6-vertex model at root of unity, which can be used to construct Baxter's Q operator [57].
### Derivation of local commuting charges
When the function \(K(\theta;i,j)\) satisfies
\[\begin{split}& K(0;i,j)=\delta_{i,j},\\ & K(\pi-\phi;i,j)K(\pi+\phi;i,j)=f(\phi),\quad\forall i,j\in \mathcal{S},\end{split} \tag{22}\]
as in the case of all examples considered below in Sections IV and V, we have
\[\mathbf{R}_{a,b}(0,0,\phi)=f(\phi)\mathbf{P}_{a,b}, \tag{23}\]
where the operator \(\mathbf{P}_{a,b}\) is the permutation operator such that \(\mathbf{P}_{a,b}\mathbf{O}_{a}\mathbf{P}_{a,b}=\mathbf{O}_{b}\).
In this case, the two-parameter transfer matrix becomes
\[\mathbf{T}(0,0,\phi)=\mathrm{Tr}_{a}\left(\prod_{j=1}^{L}\mathbf{P}_{a,j} \right)=\prod_{j=L-1}^{1}\mathbf{P}_{j,j+1}=\mathbf{G}^{-1}, \tag{24}\]
where the operator \(\mathbf{G}=\prod_{j=1}^{L-1}\mathbf{P}_{j,j+1}\) is the one site translation operator.
In this scenario,
\[\mathbf{Q}(0)=\mathbf{G}^{-1},\quad\mathbf{P}(0)=\mathbb{1}. \tag{25}\]
and a family of mutually commuting local conserved charges can be constructed by taking the logarithmic derivatives of the transfer matrix around the point \(\lambda=0\), \(\mu=0\),
\[\mathbf{I}_{m,n}=\left.\partial_{\lambda}^{m}\partial_{\mu}^{n}\log\mathbf{T}( \lambda,\mu,\phi)\right|_{\lambda=0,\mu=0},\quad m,n\in\mathbb{Z}_{>0}. \tag{26}\]
Due to the factorised form of the two-parameter transfer matrix (20), we have
\[\mathbf{I}_{m,n}=0,\quad m\neq 0,\,n\neq 0. \tag{27}\]
There are therefore two sets of independent conserved quantities (when \(\phi\neq 0\)), namely
\[\mathbf{I}_{m,0},\ \mathbf{I}_{0,n},\quad m,n\in\mathbb{Z}_{>0}. \tag{28}\]
Note that when \(\phi=0\), \(\mathbf{I}_{m,0}=\mathbf{I}_{0,m}\).
### Circuit geometry
In order to recover a circuit-like geometry, we introduce another way of decomposing the two-parameter transfer matrix,
\[\mathbf{T}(\lambda,\mu,\phi)=\mathbf{V}(\mu,\phi)\mathbf{W}(\lambda,\phi)\,, \tag{29}\]
where the matrices \(\mathbf{V}(\mu,\phi)\) and \(\mathbf{W}(\lambda,\phi)\) encode the weights of the two lower (resp. upper) edges of each plaquette, as illustrated in Fig. 5. More precisely, they have the following matrix elements
\[\mathbf{V}^{c_{1},c_{2},\cdots c_{L}}_{b_{1},b_{2},\cdots b_{L}}( \mu,\phi) =K(\mu;c_{1},b_{2})K(\pi-\mu+\phi;c_{2},b_{2})\ldots K(\mu;c_{L},b _{1})K(\pi-\mu+\phi;c_{1},b_{1})\,, \tag{30}\] \[\mathbf{W}^{b_{1},b_{2},\cdots b_{L}}_{a_{1},a_{2},\cdots a_{L}}( \lambda,\phi) =K(\pi-\lambda-\phi;b_{2},a_{1})K(\lambda;b_{2},a_{2})\ldots K(\pi -\lambda-\phi;b_{1},a_{L})K(\lambda;b_{1},a_{1})\,. \tag{31}\]
This decomposition is different from the factorisation (17), in particular
\[[\mathbf{W}(\lambda_{1},\phi),\mathbf{W}(\lambda_{2},\phi)]\neq 0, \quad[\mathbf{V}(\lambda_{1},\phi),\mathbf{V}(\lambda_{2},\phi)]\neq 0, \tag{32}\]
for generic \(\lambda_{1}\), \(\lambda_{2}\). We can therefore rewrite
\[\mathbf{V}(\phi,\phi)=\mathbf{G}^{-1}\mathbf{U}_{1}(\phi),\quad\mathbf{W}(0, \phi)=\mathbf{U}_{2}(\phi), \tag{33}\]
where \(\mathbf{U}_{1}(\phi)\) and \(\mathbf{U}_{2}(\phi)\) are products of single-site operators and double-site operators, respectively.
Let us now specify the spectral parameters to \(\lambda=0\), \(\mu=\phi\). In this case, using the special values (9) of the function \(K(\theta;a,b)\), we find:
\[\mathbf{V}^{c_{1},c_{2},\cdots c_{L}}_{b_{1},b_{2},\cdots b_{L}}( \phi,\phi) =\kappa^{L}K(\phi;c_{1},b_{2})\ldots K(\phi;c_{L},b_{1})\,, \tag{34}\] \[\mathbf{W}^{b_{1},b_{2},\cdots b_{L}}_{a_{1},a_{2},\cdots a_{L}}( 0,\phi) =\delta_{b_{1},a_{1}}\ldots\delta_{b_{L},a_{L}}K(\pi-\phi;a_{2},a_{ 1})\ldots K(\pi-\phi;a_{1},a_{L})\,. \tag{35}\]
We can therefore rewrite :
\[\mathbf{V}(\phi,\phi)=\mathbf{G}^{-1}\mathbf{U}_{1}(\phi),\quad\mathbf{W}(0, \phi)=\mathbf{U}_{2}(\phi), \tag{36}\]
where \(\mathbf{G}^{-1}\) is the inverse translation operator introduced in the previous section, and \(\mathbf{U}_{1}(\phi)\) and \(\mathbf{U}_{2}(\phi)\) are products of single-site operators and double-site operators, respectively. The transfer matrix can therefore be expressed as the generator of a discrete quantum circuit dynamics,
\[\mathbf{T}(0,\phi,\phi)=\mathbf{G}^{-1}\mathbf{U}_{1}(\phi)\mathbf{U}_{2}( \phi), \tag{37}\]
Figure 5: The two-parameter transfer matrix \(T(\lambda,\mu,\phi)\), transfering the heights of the top row \((a_{1},a_{2},\ldots)\) to the bottom row \((c_{1},c_{2},\ldots)\).
with
\[\left[\mathbf{G}^{-1},\mathbf{U}_{1}(\phi)\right]=\left[\mathbf{G}^{-1},\mathbf{U} _{2}(\phi)\right]=0, \tag{38}\]
as shown in Fig. 6.
Defining the discrete time evolution operator
\[\mathbf{U}_{\mathrm{F}}(\phi)=\mathbf{U}_{1}(\phi)\mathbf{U}_{2}(\phi)= \mathbf{GT}(0,\phi,\phi)\,, \tag{39}\]
and using the fact that \([\mathbf{G},\mathbf{T}(\lambda,\mu,\phi)]=0\) for all \(\lambda,\mu\), we therefore see that \(\mathbf{U}_{\mathrm{F}}(\phi)\) commutes with the two-parameter family of transfer matrices,
\[[\mathbf{U}_{\mathrm{F}}(\phi),\mathbf{T}(\lambda,\mu,\phi)]=0,\ \lambda,\mu\in\mathbb{C}, \tag{40}\]
and therefore with the charges \(\mathbf{I}_{m,0}\) and \(\mathbf{I}_{0,n}\) constructed in the previous section. In this sense, it defines an integrable discrete dynamics. In the following two sections we will demonstrate this construction using known families of solutions of the star-triangle relation, associated respectively with the \(Q\)-state Potts model and the Fateev-Zamolodchikov \(\mathbb{Z}_{Q}\) model.
**Remark.** Alternatively, if we set the inhomogeneities \(\{\zeta_{j}\}\) to be staggered
\[\zeta_{2m-1}=\zeta_{1},\quad\zeta_{2m}=\zeta_{2},\quad\forall m\in\mathbb{Z}_ {+}, \tag{41}\]
we can construct a different integrable quantum circuit with a brick-wall structure, cf. Fig. 3 of [28], via "Floquet Baxterisation" [28]. The procedure is described in detail in Sec. 4 of [28].
## IV Example: \(Q\)-state Potts circuits
We now move on to \(Q\)-state models, with \(Q\) some positive integer. Namely, we now specialize the generic exposition of Section III to statistical models where the set of allowed heights at each site is \(\mathcal{S}=\{1,\ldots Q\}\), and will derive from there quantum circuits of the form discussed in Section II. In this Section we focus on one of the most renowned examples, that of the \(Q\)-state Potts model [12]. To begin with, we define the parameter \(\eta\) as
\[\sqrt{Q}=2\cosh\eta. \tag{42}\]
For instance, for \(Q=2\), \(\eta=\frac{\mathrm{i}\pi}{4}\), and for \(Q=3\), \(\eta=\frac{\mathrm{i}\pi}{6}\). For \(Q=4\), \(\eta=0\), in which case the star-triangle relation becomes rational instead of trigonometric (cf. (46)), while for \(Q\geq 5\), \(\eta\in\mathbb{R}\).
Figure 6: Relation between the two-parameter transfer matrix and the integrable quantum circuit.
### Star-Triangle relation and two-parameter R matrix
The specificity of the Potts model is that it is invariant under the permutation group \(S_{Q}\) of internal indices, and a corresponding solution of the star-triangle equation has been found in the form [12; 43]
\[K_{\text{Potts}}(\theta;a,b)=\frac{1}{\sqrt{Q}\sin(\eta/\text{i})}\sin\left( \frac{\eta\theta}{\text{i}\pi}\right)+\frac{1}{\sin(\eta/\text{i})}\sin\left( \frac{\eta(\pi-\theta)}{\text{i}\pi}\right)\delta_{a,b}. \tag{43}\]
To be specific, we write down the explicit expressions of the solution (43) with \(Q=2,3,4\),
\[K_{\text{Potts}}(\theta;a,b)=\sin\left(\frac{\theta}{4}\right)+\sqrt{2}\sin \left(\frac{\pi-\theta}{4}\right)\delta_{a,b},\quad Q=2; \tag{44}\]
\[K_{\text{Potts}}(\theta;a,b)=\frac{2}{\sqrt{3}}\sin\left(\frac{\theta}{6} \right)+2\sin\left(\frac{\pi-\theta}{6}\right)\delta_{a,b},\quad Q=3; \tag{45}\]
\[K_{\text{Potts}}(\theta;a,b)=\frac{\theta}{2\pi}+\left(1-\frac{\theta}{\pi} \right)\delta_{a,b},\quad Q=4. \tag{46}\]
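As a quick numerical sanity check (ours, not part of the paper), one can verify the star-triangle relation (7) for the \(Q=3\) weights (45): for fixed spectral parameters summing to \(\pi\), the ratio of the left- and right-hand sides of (7) is independent of the heights \((i,j,k)\), and this constant ratio is the normalization \(f(\theta_{1},\theta_{2},\theta_{3})\).

```python
import itertools
import numpy as np

Q = 3   # heights labelled 0, 1, 2 (only equality of heights matters)

def K(theta, a, b):
    """Q = 3 Potts weight, eq. (45)."""
    return (2 / np.sqrt(3)) * np.sin(theta / 6) \
        + (2 * np.sin((np.pi - theta) / 6) if a == b else 0.0)

theta1, theta2 = 0.7, 1.1
theta3 = np.pi - theta1 - theta2             # the constraint in eq. (7)

ratios = []
for i, j, k in itertools.product(range(Q), repeat=3):
    lhs = sum(K(theta1, i, m) * K(theta2, j, m) * K(theta3, k, m)
              for m in range(Q))
    rhs = (K(np.pi - theta1, j, k) * K(np.pi - theta2, k, i)
           * K(np.pi - theta3, i, j))
    ratios.append(lhs / rhs)

# the ratio is independent of (i, j, k): it equals f(theta1, theta2, theta3)
assert np.allclose(ratios, ratios[0])
```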
Using the star-triangle relation, we construct the two-parameter R matrix in the manner of (10), satisfying the Yang-Baxter relation (11). The two-parameter transfer matrix can be constructed using (13). The solution (43) satisfies properties of the form (22), where
\[f(\phi)=\begin{cases}\frac{4}{Q(4-Q)}\sin\left(\frac{\eta(\pi-\phi)}{\text{i }\pi}\right)\sin\left(\frac{\eta(\pi+\phi)}{\text{i}\pi}\right)\qquad\text{ for }Q\neq 4\\ \frac{(\pi-\phi)(\pi+\phi)}{4\pi^{2}}\qquad\text{ for }Q=4\end{cases} \tag{47}\]
Therefore, the R matrix (10) satisfies
\[\mathbf{R}_{a,b}(0,0,\phi)=f(\phi)\mathbf{P}_{a,b}. \tag{48}\]
When \(Q=3\), the normalisation factor becomes
\[\mathbf{R}_{a,b}(0,0,\phi)=\frac{2\cos(\phi/3)-1}{3}\mathbf{P}_{a,b}, \tag{49}\]
which we will focus on later.
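The following short sketch (ours; \(Q=3\), a small chain of \(L=3\) sites, homogeneous \(\zeta_{j}=0\), and with the convention that the first space in (10) is the auxiliary one) builds the plaquette R matrix (10) from the weights (45), assembles the transfer matrix (13), and checks the two-parameter commutativity (14) numerically.

```python
import numpy as np
from itertools import product

Q = 3

def K(theta, a, b):
    """Q = 3 Potts weight, eq. (45)."""
    return (2 / np.sqrt(3)) * np.sin(theta / 6) \
        + (2 * np.sin((np.pi - theta) / 6) if a == b else 0.0)

def R4(lam, mu, phi):
    """Plaquette R matrix of eq. (10), indices (aux_in, phys_in, aux_out, phys_out)."""
    R = np.empty((Q, Q, Q, Q))
    for i, k, j, l in product(range(Q), repeat=4):
        R[i, k, j, l] = (K(lam, i, k) * K(np.pi - lam - phi, k, j)
                         * K(mu, j, l) * K(np.pi - mu + phi, l, i))
    return R

def transfer_matrix(lam, mu, phi):
    """Transfer matrix (13) for L = 3: trace over the periodic auxiliary space."""
    R = R4(lam, mu, phi)
    t = np.einsum('apbq,brcs,ctau->prtqsu', R, R, R)   # Tr_a R_{a1} R_{a2} R_{a3}
    return t.reshape(Q**3, Q**3)

phi = 0.4
T1 = transfer_matrix(0.3, 0.9, phi)
T2 = transfer_matrix(1.2, 0.5, phi)
assert np.allclose(T1 @ T2, T2 @ T1)   # eq. (14): same phi, different (lambda, mu)
```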
**Remark.** When parameter \(\phi=0\), the two-parameter transfer matrix becomes the transfer matrix of the 3-state Potts model [12; 58; 59].
### Quantum circuit
As anticipated in Section II, a convenient way to express the circuit operators obtained from the two-parameter transfer matrices is to introduce the Potts operators acting on the physical Hilbert space \(\left(\mathbb{C}^{Q}\right)^{\otimes L}\),
\[\begin{split}\mathbf{X}_{m}=\mathbb{1}^{\,\otimes(m-1)}\otimes \left(\mathbf{E}_{m}^{Q,1}+\sum_{j=1}^{Q-1}\mathbf{E}_{m}^{j,j+1}\right) \otimes\mathbb{1}^{\,\otimes(L-m)},\\ \mathbf{Z}_{m}=\mathbb{1}^{\,\otimes(m-1)}\otimes\left(\sum_{j= 1}^{Q}\omega^{j-1}\mathbf{E}_{m}^{j,j}\right)\otimes\mathbb{1}^{\,\otimes(L-m )},\end{split} \tag{50}\]
where the \(Q\)-th root of unity \(\omega=\exp\left(\frac{2\mathrm{i}\pi}{Q}\right)\). Those can be easily checked to satisfy the algebra (5).
Another set of useful operators is given by the Potts representation of the affine Temperley-Lieb algebra [12, 49, 60],
\[\mathbf{e}_{2m-1}=\frac{1}{\sqrt{Q}}\sum_{a=0}^{Q-1}\mathbf{X}_{m}^{a},\quad \mathbf{e}_{2m}=\frac{1}{\sqrt{Q}}\sum_{a=0}^{Q-1}\left(\mathbf{Z}_{m}^{\dagger }\mathbf{Z}_{m+1}\right)^{a}, \tag{51}\]
which satisfy the following relations,
\[\mathbf{e}_{m}^{2}=\sqrt{Q}\mathbf{e}_{m},\quad\mathbf{e}_{m}\mathbf{e}_{m\pm 1 }\mathbf{e}_{m}=\mathbf{e}_{m},\quad\mathbf{e}_{m}\mathbf{e}_{n}=\mathbf{e}_{n }\mathbf{e}_{m},\ |m-n|\geq 2, \tag{52}\]
with periodic boundary condition \(\mathbf{e}_{2L+1}=\mathbf{e}_{1}\). Furthermore, these are manifestly hermitian, \(\mathbf{e}_{m}^{\dagger}=\mathbf{e}_{m}\).
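For concreteness, the short sketch below (ours, not part of the paper) constructs the operators (50) and the Temperley-Lieb generators (51) for \(Q=3\) on a small periodic chain, and verifies the relations (52) together with hermiticity.

```python
import numpy as np
from functools import reduce

Q, L = 3, 3                                  # 2L = 6 Temperley-Lieb generators
omega = np.exp(2j * np.pi / Q)
X1 = np.roll(np.eye(Q), -1, axis=0)          # eq. (50): X|k> = |k-1 mod Q>
Z1 = np.diag(omega ** np.arange(Q))

def site_op(op, m):
    ops = [np.eye(Q, dtype=complex)] * L
    ops[m] = op
    return reduce(np.kron, ops)

X = [site_op(X1, m) for m in range(L)]
Z = [site_op(Z1, m) for m in range(L)]

# affine Temperley-Lieb generators in the Potts representation, eq. (51):
# e[0], e[2], e[4] are the single-site (odd) ones, e[1], e[3], e[5] the bond (even) ones
e = []
for m in range(L):
    e.append(sum(np.linalg.matrix_power(X[m], a) for a in range(Q)) / np.sqrt(Q))
    e.append(sum(np.linalg.matrix_power(Z[m].conj().T @ Z[(m + 1) % L], a)
                 for a in range(Q)) / np.sqrt(Q))

n = 2 * L
for m in range(n):                            # check eq. (52) and hermiticity
    assert np.allclose(e[m] @ e[m], np.sqrt(Q) * e[m])
    assert np.allclose(e[m], e[m].conj().T)
    assert np.allclose(e[m] @ e[(m + 1) % n] @ e[m], e[m])
    assert np.allclose(e[m] @ e[(m - 1) % n] @ e[m], e[m])
```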
Following the circuit construction of Section III, it can be checked that in the present case the operators \(\mathbf{U}_{1}(\phi)\), \(\mathbf{U}_{2}(\phi)\) take the form
\[\begin{split}\mathbf{U}_{\mathrm{F}}(\phi)&= \mathbf{U}_{1}(\phi)\mathbf{U}_{2}(\phi),\\ \mathbf{U}_{1}(\phi)&=\prod_{m=1}^{L}\exp\left(- \mathrm{i}\tau\mathbf{e}_{2m-1}\right)=\prod_{m=1}^{L}\left[\mathbb{1}+\frac{\exp(-\mathrm{i}\sqrt{Q}\tau)-1}{\sqrt{Q}}\mathbf{e}_{2m-1}\right]\\ &=\prod_{m=1}^{L}\left[\frac{\exp(-\mathrm{i}\sqrt{Q}\tau)+Q-1}{Q}+\frac{\exp(- \mathrm{i}\sqrt{Q}\tau)-1}{Q}\sum_{a=1}^{Q-1}\mathbf{X}_{m}^{a}\right],\\ \mathbf{U}_{2}(\phi)&=\prod_{m=1}^{L}\exp\left(- \mathrm{i}\tau\mathbf{e}_{2m}\right)=\prod_{m=1}^{L}\left[\mathbb{1}+\frac{\exp(-\mathrm{i}\sqrt{Q}\tau)-1}{\sqrt{Q}}\mathbf{e}_{2m}\right]\\ &=\prod_{m=1}^{L}\left[\frac{\exp(-\mathrm{i}\sqrt{Q}\tau)+Q-1}{Q}+\frac{\exp(- \mathrm{i}\sqrt{Q}\tau)-1}{Q}\sum_{a=1}^{Q-1}\left(\mathbf{Z}_{m}^{\dagger} \mathbf{Z}_{m+1}\right)^{a}\right],\end{split} \tag{53}\]
where the spectral parameter \(\phi\) is related to the "period" \(\tau\) by
\[\exp(-\mathrm{i}\sqrt{Q}\tau)=1+\sqrt{Q}\frac{\sinh(\eta\phi/\pi)}{\sinh\left( \eta(\pi-\phi)/\pi\right)}. \tag{54}\]
Note in particular that the Floquet evolution operator \(\mathbf{U}_{\mathrm{F}}(\phi)\) is of the same form as given in eq. (6). It is uniquely defined by the value of \(\phi\) modulo arbitrary shifts by \(\frac{2i\pi}{\eta}\), or by the value of \(\tau\) modulo arbitrary shifts by \(\frac{2\pi}{\sqrt{Q}}\). Furthermore, because of the hermiticity of the generators \(\mathbf{e}_{m}\), the dynamics is unitary whenever \(\tau\in\mathbb{R}\),
\[\mathbf{U}_{\mathrm{F}}(\phi)\mathbf{U}_{\mathrm{F}}^{\dagger}(\phi)=\mathbb{1 },\quad\tau\in\mathbb{R}, \tag{55}\]
or equivalently the parameter \(\phi\) must satisfy the following identity
\[\left|1+\sqrt{Q}\frac{\sinh(\eta\phi/\pi)}{\sinh\left(\eta(\pi-\phi)/\pi\right) }\right|=1. \tag{56}\]
The values of \(\phi\) solving (56) are generally complex. However, for \(Q=2\) or \(Q=3\), some real solutions are of particular interest as they connect to known models.
For \(Q=2\), nontrivial real solutions to (56) are found as \(\phi=\pm 2\pi\), corresponding to \(\tau=\frac{\pi}{\sqrt{2}}\). In this case, the evolution operator \(\mathbf{U}_{\mathrm{F}}(\phi)\) commutes with the Hamiltonian
\[\mathbf{H}=\sum_{j=1}^{2L}\mathbf{e}_{j}\,, \tag{57}\]
which coincides with the Hamiltonian of the spin-1/2 XX model up to unitary transformation.
Similarly, for \(Q=3\) nontrivial real solutions of (56) are found as \(\phi=\pm 3\pi\), corresponding to \(\tau=\frac{\pi}{\sqrt{3}}\). At this value of \(\tau\), the circuit dynamics can be related to the Zamolodchikov-Fateev 19-vertex model [50], as will be discussed in Sec. IV.4.
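As a small consistency check (ours, not from the paper), one can build \(\mathbf{U}_{1}\) and \(\mathbf{U}_{2}\) directly from the explicit coefficient form in (53) for \(Q=3\) and a real value of \(\tau\), and verify the unitarity (55) of \(\mathbf{U}_{\mathrm{F}}\) numerically.

```python
import numpy as np
from functools import reduce

Q, L = 3, 3
tau = 0.37                                            # any real tau gives a unitary U_F
c0 = (np.exp(-1j * np.sqrt(Q) * tau) + Q - 1) / Q     # identity coefficient in (53)
c1 = (np.exp(-1j * np.sqrt(Q) * tau) - 1) / Q         # coefficient of the a >= 1 terms

omega = np.exp(2j * np.pi / Q)
X1 = np.roll(np.eye(Q), -1, axis=0)                   # eq. (50)
Z1 = np.diag(omega ** np.arange(Q))

def site_op(op, m):
    ops = [np.eye(Q, dtype=complex)] * L
    ops[m] = op
    return reduce(np.kron, ops)

def gate(op):
    """c0 * 1 + c1 * sum_{a=1}^{Q-1} op^a, i.e. one factor of U_1 or U_2 in (53)."""
    return c0 * np.eye(Q**L) + c1 * sum(np.linalg.matrix_power(op, a)
                                        for a in range(1, Q))

X = [site_op(X1, m) for m in range(L)]
Z = [site_op(Z1, m) for m in range(L)]

U1 = reduce(np.matmul, [gate(X[m]) for m in range(L)])
U2 = reduce(np.matmul, [gate(Z[m].conj().T @ Z[(m + 1) % L]) for m in range(L)])
UF = U1 @ U2

assert np.allclose(UF @ UF.conj().T, np.eye(Q**L))    # eq. (55)
```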
### Local conserved charges
We follow the way of Section III to construct two sets of local charges commuting with the circuit dynamics, \(\mathbf{I}_{m,0}\) and \(\mathbf{I}_{0,n}\), \(m,n\in\mathbb{Z}_{>0}\). Using the normalisation of the R matrix (48), we can express the first two charges as
\[\mathbf{I}_{1,0}=\frac{1}{f(\phi)}\sum_{j=1}^{L}\partial_{\lambda}\mathbf{R}_ {j,j+1}(\lambda,0,\phi)\mathbf{P}_{j,j+1},\quad\mathbf{I}_{0,1}=\frac{1}{f( \phi)}\sum_{j=1}^{L}\partial_{\mu}\mathbf{R}_{j,j+1}(0,\mu,\phi)\mathbf{P}_{j,j+1}. \tag{58}\]
We find (see Appendix C for details)
\[\mathbf{I}_{1,0}+\mathbf{I}_{0,1}+c_{1}=\frac{2}{Q}\big{(}\mathbf{Q}_{1}+c_{1 }\big{)}, \tag{59}\]
and
\[\mathbf{I}_{1,0}-\mathbf{I}_{0,1}=\frac{2}{Q}\big{(}\mathbf{Q}_{1}^{\prime}+c _{2}\big{)}, \tag{60}\]
where \(c_{1}\), \(c_{2}\) are constant and
\[\mathbf{Q}_{1}=\sum_{j=1}^{2L}\mathbf{e}_{j}+\frac{\mathrm{i}}{2\sqrt{Q}}\sin( \sqrt{Q}\tau)(-1)^{j}\left[\mathbf{e}_{j},\mathbf{e}_{j+1}\right]-\frac{1}{ \sqrt{Q}}\sin^{2}\big{(}\frac{\sqrt{Q}\tau}{2}\big{)}\left\{\mathbf{e}_{j}, \mathbf{e}_{j+1}\right\}, \tag{61}\]
and
\[\mathbf{Q}_{1}^{\prime}=\sum_{j=1}^{2L}-\frac{\mathrm{i}\sin(\sqrt{Q}\tau)}{2 \sqrt{Q}}\left[\mathbf{e}_{j},\mathbf{e}_{j+1}\right]+\frac{(-1)^{j}\sin^{2}( \sqrt{Q}\tau/2)}{\sqrt{Q}}\left\{\mathbf{e}_{j},\mathbf{e}_{j+1}\right\}\,. \tag{62}\]
In [44], a set of conserved charges \(\mathbf{Q}_{1}\), \(\mathbf{Q}_{2}\), \(\mathbf{Q}_{3}\) commuting with the dynamics (53) was constructed in terms of the generators \(\mathbf{e}_{j}\), by explicitly computing the commutation with the evolution operator \(\mathbf{U}_{F}\). Explicit expressions were given for \(\mathbf{Q}_{1}\) and \(\mathbf{Q}_{2}\), while the expression of \(\mathbf{Q}_{3}\) is more involved. It is easy to check that our charge \(\mathbf{Q}_{1}\) given by (61) coincides with the one given in [44]. Furthermore, we check that the charge \((\mathbf{I}_{2,0}-\mathbf{I}_{0,2})\) coincides with the charge \(\mathbf{Q}_{2}\) of [44], up to a proportionality factor and constant. We believe that, similarly, we could recover the charge \(\mathbf{Q}_{3}\) of [44]. Therefore, our construction recovers and extends the family of charges \(\mathbf{Q}_{m}\) proposed in [44], together with an additional family \(\mathbf{Q}_{m}^{\prime}\), given by the linear combination of the charges \(\mathbf{I}_{m,0}\) and \(\mathbf{I}_{0,m}\).
### 3-state Potts case and 19-vertex model
We now come back to the connection mentioned at the end of Section IV.2, between the 3-state Potts circuit with \(\phi=3\pi\) and the Zamolodchikov-Fateev 19-vertex model at root of unity \(q=\exp\left(\frac{\mathrm{i}\pi}{3}\right)\)[50; 61].
The Zamolodchikov-Fateev 19-vertex model [62; 50] can be obtained via transfer matrix fusion of the 6-vertex model [63]. One of the conserved quantities (obtained via the logarithmic derivative of the transfer matrix) is a spin-1 Hamiltonian, which can be considered as the integrable spin-1 generalisation of the spin-1/2 XXZ model. As in the spin-1/2 case the model is defined in terms of a complex parameter \(q\) relating to the underlying quantum group \(U_{q}(sl_{2})\). At the "root of unity" points \(q^{N}=\pm 1\) it is conjectured to have a hidden Onsager algebra symmetry [64; 57], which can be shown explicitly for \(q=\exp\left(\frac{\mathrm{i}\pi}{3}\right)\)[61; 64].
Interestingly, the conserved quantities obtained from the two-parameter transfer matrix (20) consist of a subset of the generators of the Onsager algebra (up to a unitary transformation), which is not obvious at first sight.
To begin with, let us consider the following unitary transformation carried out by the operator
\[\mathcal{U}_{m}^{(3)}=\frac{1}{\sqrt{3}}\begin{pmatrix}1&1&1\\ 1&\omega&\omega^{2}\\ 1&\omega^{2}&\omega\end{pmatrix}_{m}, \tag{63}\]
with the third root of unity \(\omega=\exp(2\pi\mathrm{i}/3)\). The operator \(\mathcal{U}_{m}^{(3)}\) transforms the 3-state Potts spin operators as follows,
\[\mathcal{U}_{m}^{(3)}\mathbf{X}_{m}\mathcal{U}_{m}^{(3)}{}^{ \dagger}=\mathbf{Z}_{m}^{\dagger},\quad\mathcal{U}_{m}^{(3)}\mathbf{Z}_{m} \mathcal{U}_{m}^{(3)}{}^{\dagger}=\mathbf{X}_{m}. \tag{64}\]
In addition, we need another unitary operator
\[\mathcal{V}_{m}=\begin{pmatrix}0&1&0\\ 1&0&0\\ 0&0&1\end{pmatrix}=\mathcal{V}_{m}^{\dagger},\quad\mathcal{V}_{m}^{2}= \mathbb{1}_{m}. \tag{65}\]
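As a quick numerical sanity check of (64) and (65) (a sketch; the explicit conventions of eq. (50) are not reproduced here, so the shift direction \(\mathbf{X}|a\rangle=|a-1\ \mathrm{mod}\ 3\rangle\) used below is an assumption chosen to be consistent with (64)):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
Z = np.diag([1, w, w**2])                       # clock operator
X = np.roll(np.eye(3), -1, axis=0)              # shift, X|a> = |a-1 mod 3> (assumed convention)
U3 = np.array([[1, 1, 1],
               [1, w, w**2],
               [1, w**2, w]]) / np.sqrt(3)      # eq. (63)
V = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 1]])                       # eq. (65)

assert np.allclose(U3 @ X @ U3.conj().T, Z.conj().T)   # U X U^dag = Z^dag
assert np.allclose(U3 @ Z @ U3.conj().T, X)            # U Z U^dag = X
assert np.allclose(V @ V, np.eye(3)) and np.allclose(V, V.conj().T)
```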
The 19-vertex model R matrix with \(q=\exp\left(\frac{\mathrm{i}\pi}{3}\right)\) is obtained as a special case of the two-parameter R matrix with \(\phi=3\pi\) depicted in Fig. 3 after the unitary transformation,
\[\tilde{\mathbf{R}}_{a,b}(\lambda,\mu)=-\mathcal{V}_{a}\mathcal{V}_{b} \mathcal{U}_{a}^{(3)}\mathcal{U}_{b}^{(3)}\mathbf{R}_{a,b}(\lambda,\mu,\phi =3\pi)\mathcal{U}_{a}^{(3)}{}^{\dagger}\mathcal{U}_{b}^{(3)}{}^{\dagger} \mathcal{V}_{a}\mathcal{V}_{b}. \tag{66}\]
When \(\mu=\lambda\), we recover the renowned 19-vertex R matrix at root of unity \(q=\exp(\mathrm{i}\pi/3)\)[50],
\[\tilde{\mathbf{R}}_{a,b}(-\lambda,-\lambda)=\begin{pmatrix}a(\lambda)&0&0&0&0&0&0&0&0\\ 0&b(\lambda)&0&c(\lambda)&0&0&0&0&0\\ 0&0&d(\lambda)&0&e(\lambda)&0&g&0&0\\ 0&c(\lambda)&0&b(\lambda)&0&0&0&0&0\\ 0&0&e(\lambda)&0&f(\lambda)&0&e(\lambda)&0&0\\ 0&0&0&0&0&b(\lambda)&0&c(\lambda)&0\\ 0&0&g&0&e(\lambda)&0&d(\lambda)&0&0\\ 0&0&0&0&0&c(\lambda)&0&b(\lambda)&0\\ 0&0&0&0&0&0&0&0&a(\lambda)\end{pmatrix}=\mathcal{R}(\lambda), \tag{67}\]
where the coefficients are defined
\[\begin{split}& a(\lambda)=[u+1][u+2]=\frac{1}{3}\left(1+2\cos\frac{ 2\lambda}{3}\right),\\ & b(\lambda)=[u][u+1]=\frac{1}{3}\left(1-\cos\frac{2\lambda}{3}+ \sqrt{3}\sin\frac{2\lambda}{3}\right),\\ & c(\lambda)=\cos\frac{\lambda}{3}+\frac{1}{\sqrt{3}}\sin\frac{ \lambda}{3},\quad d(\lambda)=[u-1][u]=\frac{2}{3}\sin\frac{\lambda}{3}\left( \sin\frac{\lambda}{3}-\sqrt{3}\cos\frac{\lambda}{3}\right),\\ & e(\lambda)=[2][u]=\frac{2}{\sqrt{3}}\sin\frac{\lambda}{3},\quad f (\lambda)=b(\lambda)+[2]=\frac{1}{3}\left(4-\cos\frac{2\lambda}{3}+\sqrt{3} \sin\frac{2\lambda}{3}\right),\\ & g=[2]=1,\quad u=\frac{\lambda}{\pi},\end{split} \tag{68}\]
with \(q\)-number defined as
\[[u]=\frac{q^{u}-q^{-u}}{q-q^{-1}}. \tag{69}\]
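The equivalence between the \(q\)-number expressions and the trigonometric forms in (68) at \(q=\exp(\mathrm{i}\pi/3)\) can be cross-checked numerically; a minimal sketch (the test value of \(\lambda\) is arbitrary):

```python
import numpy as np

q = np.exp(1j * np.pi / 3)
qn = lambda u: (q**u - q**(-u)) / (q - q**(-1))     # q-number [u], eq. (69)

lam = 0.7331                     # arbitrary test value of the spectral parameter
u = lam / np.pi

checks = {
    "a": (qn(u + 1) * qn(u + 2), (1 + 2 * np.cos(2 * lam / 3)) / 3),
    "b": (qn(u) * qn(u + 1),
          (1 - np.cos(2 * lam / 3) + np.sqrt(3) * np.sin(2 * lam / 3)) / 3),
    "d": (qn(u - 1) * qn(u),
          2 / 3 * np.sin(lam / 3) * (np.sin(lam / 3) - np.sqrt(3) * np.cos(lam / 3))),
    "e": (qn(2) * qn(u), 2 / np.sqrt(3) * np.sin(lam / 3)),
    "g": (qn(2), 1.0),
}
for name, (lhs, rhs) in checks.items():
    assert np.isclose(lhs, rhs), name
print("q-number identities in (68) verified at lambda =", lam)
```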
Another intriguing fact is that the conserved quantities of the 19-vertex model at root of unity \(q=\exp(\mathrm{i}\pi/3)\) can be expressed in terms of the Temperley-Lieb algebra generators [65]. To see this, we define the 19-vertex transfer matrix
\[\mathcal{T}(\lambda)=\mathrm{Tr}_{a}\left(\prod_{j=1}^{L}\mathcal{R}_{a,j}( \lambda)\right), \tag{70}\]
and the first local conserved quantity ("the spin-1 ZF Hamiltonian") becomes
\[\mathbf{H}^{\mathrm{ZF}}=\partial_{\lambda}\log\mathcal{T}(\lambda)=-\big{(} \mathbf{I}_{1,0}+\mathbf{I}_{0,1}\big{)}, \tag{71}\]
due to the factorisation property of the two-parameter transfer matrix (17). Explicitly, in terms of the Temperley-Lieb generators,
\[\mathbf{H}^{\mathrm{ZF}}=-\frac{2}{3}\mathcal{V}\mathcal{U}^{(3)}\left[\sum_{m= 1}^{2L}\Big{(}\mathbf{e}_{m}-\frac{1}{\sqrt{3}}\{\mathbf{e}_{m},\mathbf{e}_{m +1}\}-\frac{1}{2\sqrt{3}}\Big{)}\right]\mathcal{U}^{(3)}{}^{\dagger}\mathcal{V}, \tag{72}\]
where the unitary transformations are
\[\mathcal{U}^{(3)}=\prod_{m=1}^{L}\mathcal{U}^{(3)}_{m},\quad\mathcal{V}=\prod_ {m=1}^{L}\mathcal{V}_{m}. \tag{73}\]
The ZF Hamiltonian at root of unity \(q=\exp(\mathrm{i}\pi/3)\) can therefore be transformed into a special case of (61) with \(Q=3\) and \(\tau=\pi/\sqrt{3}\). The Hamiltonian can be expressed in terms of spin-1 operators as well in a compact way, as shown in App. B. More generally, the local charges \(\mathbf{I}_{0,m}+\mathbf{I}_{m,0}\) generated by \(\mathcal{T}(\lambda)\) recover the local conserved charges of the ZF spin 1 Hamiltonian derived from the usual spin-1 transfer matrix, while the charges \(\mathbf{I}_{0,m}-\mathbf{I}_{m,0}\) form a mutually commuting subset of the Onsager symmetry generators. This connection is in fact part of a more general connection between solutions of the star-triangle equation and higher-spin descendants of the six-vertex model, which is currently under investigation.
## V Example: \(\mathbb{Z}_{Q}\) circuits
Besides the \(Q\)-state Potts model, which possesses the \(S_{Q}\) symmetry, there exist solutions to the star-triangle relation (7) with \(\mathbb{Z}_{Q}\) symmetry [42]. The most renowned one was originally derived by Fateev and Zamolodchikov [66; 67; 52], and takes the form
\[\begin{split} K_{\text{FZ}}(\theta;a,b)&=1,\quad a- b=0,\\ K_{\text{FZ}}(\theta;a,b)&=\prod_{m=0}^{|a-b|-1} \frac{\sin(\frac{\pi m}{Q}+\frac{\theta}{2Q})}{\sin(\frac{\pi(m+1)}{Q}-\frac{ \theta}{2Q})},\quad a-b\neq 0.\end{split} \tag{74}\]
For \(Q=3\), (74) coincides with (43) up to a normalisation factor. As pointed out earlier, this is due to the fact that the \(\mathbb{Z}_{3}\) symmetry together with the charge conjugation symmetry \(K_{\text{FZ}}(\theta;a,b)=K_{\text{FZ}}(\theta;b,a)\) generate the symmetric group \(S_{3}\), which is the symmetry of the 3-state Potts model. In contrast, when \(Q\geq 4\), (74) and (43) become different. More specifically, (74) with \(Q=4\) is related to a critical Ashkin-Teller model [68; 69; 53; 55; 68]. When \(Q=4\),
\[K_{\text{FZ4}}(\theta;a-b)=\begin{cases}1,&a-b=0,\\ \frac{\sin(\theta/8)}{\sin(\pi/4-\theta/8)},&|a-b|=1\,\text{or}\,3,\\ \frac{\tan(\theta/8)}{\tan(\pi/4-\theta/8)},&|a-b|=2.\end{cases} \tag{75}\]
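A short numerical sketch confirming that the general product formula (74) reduces to the closed \(Q=4\) expressions (75) (assuming \(|a-b|\) is the ordinary absolute value of the \(\mathbb{Z}_{4}\) labels, as in (75)):

```python
import numpy as np

def K_FZ(theta, a, b, Q):
    """Fateev-Zamolodchikov weight, eq. (74)."""
    n = abs(a - b)
    if n == 0:
        return 1.0
    return np.prod([np.sin(np.pi * m / Q + theta / (2 * Q))
                    / np.sin(np.pi * (m + 1) / Q - theta / (2 * Q))
                    for m in range(n)])

def K_FZ4(theta, d):
    """Closed form for Q = 4, eq. (75); d = |a - b|."""
    if d == 0:
        return 1.0
    if d in (1, 3):
        return np.sin(theta / 8) / np.sin(np.pi / 4 - theta / 8)
    return np.tan(theta / 8) / np.tan(np.pi / 4 - theta / 8)

theta = 0.4217                                    # arbitrary test angle
for a in range(4):
    for b in range(4):
        assert np.isclose(K_FZ(theta, a, b, 4), K_FZ4(theta, abs(a - b)))
print("eq. (74) reduces to eq. (75) for Q = 4")
```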
The critical Ashkin-Teller Hamiltonian is obtained by considering the first local conserved charge in the limit \(\phi\to 0\), which is shown in Appendix D.
We focus on the \(\mathbb{Z}_{4}\) circuit now. Similar to the Potts case, the \(\mathbb{Z}_{4}\) circuit is built on the Floquet evolution operator such that
\[\mathbf{U}_{\text{F}}(\phi)=\mathbf{U}_{1}(\phi)\mathbf{U}_{2}(\phi), \tag{76}\]
which is closely related to the two-parameter transfer matrix such that
\[\begin{split}&\mathbf{T}(0,\phi,\phi)=\mathbf{V}(\phi,\phi) \mathbf{W}(0,\phi),\\ &\mathbf{V}(\phi,\phi)=\mathbf{G}^{-1}\mathbf{U}_{1}(\phi)= \mathbf{U}_{1}(\phi)\mathbf{G}^{-1}=\mathbf{G}^{-1}\prod_{m=1}^{L}\mathbf{v} _{m}\\ &\mathbf{W}(0,\phi)=\mathbf{U}_{2}(\phi)=\prod_{m=1}^{L}\mathbf{w} _{m,m+1},\end{split} \tag{77}\]
where the local quantum gates are
\[\mathbf{v}_{m}=\mathbbm{1}+K_{\text{FZ4}}(\phi;1)\big{(}\mathbf{X}_{m}+ \mathbf{X}_{m}^{\dagger}\big{)}+K_{\text{FZ4}}(\phi;2)\mathbf{X}_{m}^{2}, \tag{78}\]
\[\begin{split}\mathbf{w}_{m,m+1}=&\frac{1}{4}\big{(} 1+2K_{\text{FZ4}}(\pi-\phi;1)+K_{\text{FZ4}}(\pi-\phi;2)\big{)}+\\ &\frac{1}{4}\big{(}1-K_{\text{FZ4}}(\pi-\phi;2)\big{)}\big{(} \mathbf{Z}_{m}^{\dagger}\mathbf{Z}_{m+1}+\mathbf{Z}_{m}\mathbf{Z}_{m+1}^{ \dagger}\big{)}\\ &+\frac{1}{4}\big{(}1-2K_{\text{FZ4}}(\pi-\phi;1)+K_{\text{FZ4}}( \pi-\phi;2)\big{)}\mathbf{Z}_{m}^{2}\mathbf{Z}_{m+1}^{2}.\end{split} \tag{79}\]
The evolution operators \(\mathbf{U}_{1}(\phi)\) and \(\mathbf{U}_{2}(\phi)\) are of the generic form (6). However, unlike the Potts case (53), where there exist sets of \(\phi\) as solutions to (54) that guarantee the quantum circuits
to be unitary, there is no \(\phi\) that makes the quantum circuits (77) unitary, except for the trivial cases when \(\phi=8n\pi\) or \(\phi=4\pi+8n\pi\) after rescaling.
Even though the integrable quantum circuits obtained using the Fateev-Zamolodchikov star-triangle relation are not unitary in general, their integrability has not, to our knowledge, been shown in the previous literature, and their physical properties could be intriguing to study. Similar non-unitary integrable quantum circuits have been studied in [28; 70], closely related to non-unitary conformal field theories. It would be interesting to see if the \(\mathbb{Z}_{Q}\) circuits can be understood analogously, which we will not discuss in detail here.
## VI Conclusion
In this article we studied the integrable structure of quantum circuits in the form of Fig. 1, which can be considered as the Floquet dynamics of a time-dependent Potts-like quantum Hamiltonian. We used the renowned star-triangle relation to construct families of two-parameter transfer matrices that commute with the Floquet evolution operator, underlying the integrable structure. The quantum circuits are obtained by taking the spectral parameters of the two-parameter transfer matrix to special values.
Compared to the known example of integrable quantum circuits of brick-wall type, whose construction is based on Yang-Baxter integrable vertex models [22; 23; 26; 28], the quantum circuits studied in this article indeed share a certain resemblance. However, even though we have shown that the two-parameter transfer matrices can be formulated as the row-to-row transfer matrices of certain vertex models in Sec. III, the staggering of spectral parameters leading to a circuit geometry takes place in our construction between the internal parameters entering the definition of each R matrix, rather than between odd and even sites of the vertex model as is the case in the brick-wall approach [22; 23; 26; 28]. This difference is what makes our construction new, and allows for a systematic construction of new families of integrable quantum circuits based on solutions to the star-triangle relation.
In this work we focused on two families of \(Q\)-state quantum circuits. The first is associated with the \(Q\)-state Potts model, for which we proved the conjectured integrability using the star-triangle relation of the Potts model [43], and found an additional set of conserved charges expressed in terms of Temperley-Lieb generators. In the case of 3-state Potts, we presented a connection between the integrable quantum circuit and the integrable 19-vertex model [50], which is part of a larger connection currently under investigation. The second family of circuits, dubbed \(\mathbb{Z}_{Q}\) circuits, results from the Fateev-Zamolodchikov \(\mathbb{Z}_{Q}\) solution of the star-triangle relation [52], and yields a different integrable quantum circuit that for \(Q=4\) is closely related to the critical Ashkin-Teller spin chain. Beyond these two examples, our construction should work for more general solutions of the star-triangle equation [42], and we leave the study of the corresponding circuits as an interesting perspective for future investigation.
There are still many aspects of the integrable quantum circuits in the form of Fig. 1 that need to be investigated. One example would be studying the physical properties of quantum quenches in the circuits. The time evolution from certain initial product states could potentially be realised in recent experiments [71; 72], and the quantum integrability that we used can be a useful tool [73; 74]. Moreover, the field theory limit of the quantum circuits is also interesting, since the brick-wall quantum circuits were initially studied as lattice regularisations of field theories [23; 25]. A generalisation of the brick-wall quantum circuits has been proposed in [28], but it is not clear how it can be extended to the quantum circuits considered in this article, cf. Fig. 1. All these questions remain to be studied and answered, which we intend to do in future works.
## Acknowledgment
Y.M. acknowledges the support from the GGI BOOST fellowship. Y.M. is grateful to Vladimir Gritsev and Denis Kurlov for collaborations on related topics. We would like to thank Filippo Colomo, Paul Fendley, Jesper Jacobsen, Jules Lamers, Hosho Katsura, Vincent Pasquier for useful discussions, and especially Balazs Pozsgay for early collaboration on closely related topics.
## Appendix A Diagrammatic derivations of some formulae
### Diagrammatic derivation of the Yang-Baxter relation
The Yang-Baxter relation of the R matrix (10) is proven directly from the star-triangle relation (7). Applying the star-triangle relation first to the white triangle in between the coloured rectangles, the detailed derivation is summarised in Fig. 7.
### Diagrammatic derivation of the self-dual relation
Figure 8: The proof of the “self-dual” property of the two-parameter transfer matrix (15).
Figure 7: The proof of the Yang–Baxter relation (11) by recursively applying the star-triangle relation (7).
### Diagrammatic derivation of Eq. (16)
## Appendix B Spin-1 form of the ZF Hamiltonian

We follow the example of [61] and use the spin-1 \(\mathfrak{sl}_{2}\) operators
\[\mathcal{S}_{m}^{+}=\begin{pmatrix}0&1&0\\ 0&0&1\\ 0&0&0\end{pmatrix}_{m},\quad\mathcal{S}_{m}^{-}=\begin{pmatrix}0&0&0\\ 1&0&0\\ 0&1&0\end{pmatrix}_{m} \tag{17}\]
to rewrite the spin-1 ZF Hamiltonian at root of unity \(q=\exp(\mathrm{i}\pi/3)\), i.e.
\[\mathbf{H}^{\mathrm{ZF}}=\frac{2}{3\sqrt{3}}\sum_{m=1}^{L}\sum_{a=1}^{2}\left[ (-1)^{a}(\mathcal{S}_{m}^{+}\mathcal{S}_{m+1}^{-})^{a}+(-1)^{a}(\mathcal{S}_{ m}^{-}\mathcal{S}_{m+1}^{+})^{a}+\frac{1}{3(1+\omega^{-a})}\mathbf{Z}_{m}^{a}+ \frac{1}{3}\right], \tag{18}\]
where \(\mathbf{Z}_{m}\) are the 3-state Potts operator in (50) and \(\omega=\exp(2\mathrm{i}\pi/3)\) is the third root of unity.
## Appendix C Local density of charges in \(Q\)-state Potts circuits
By directly calculating the local charge densities and expressing them in terms of the affine TL generators, we find that the densities in (58) become
\[\begin{split}\frac{1}{f(\phi)}\partial_{\lambda}\mathbf{R}_{j,j+ 1}(\lambda,0,\phi)\mathbf{P}_{j,j+1}&=\frac{1}{Q}\left[\mathbf{e}_{2 j-1}+\mathbf{e}_{2j}+\frac{2\mathrm{i}\sin(\sqrt{Q}\tau)}{2\sqrt{Q}}\left[ \mathbf{e}_{2j-1},\mathbf{e}_{2j}\right]\right.\\ &\left.-\frac{2\sin^{2}(\sqrt{Q}\tau/2)}{\sqrt{Q}}\left\{\mathbf{e }_{2j-1},\mathbf{e}_{2j}\right\}-\frac{2+e^{-\mathrm{i}\sqrt{Q}\tau}}{\sqrt{Q }}\right],\end{split} \tag{19}\]
and
\[\begin{split}\frac{1}{f(\phi)}\partial_{\mu}\mathbf{R}_{j,j+1}(0,\mu,\phi)\mathbf{P}_{j,j+1}&=\frac{1}{Q}\left[\mathbf{e}_{2j}+ \mathbf{e}_{2j+1}+\frac{2\mathrm{i}\sin(\sqrt{Q}\tau)}{2\sqrt{Q}}\left[ \mathbf{e}_{2j},\mathbf{e}_{2j+1}\right]\right.\\ &\left.-\frac{2\sin^{2}(\sqrt{Q}\tau/2)}{\sqrt{Q}}\left\{\mathbf{ e}_{2j},\mathbf{e}_{2j+1}\right\}-\frac{2+e^{\mathrm{i}\sqrt{Q}\tau}}{\sqrt{Q}} \right],\end{split} \tag{20}\]
where we have used the relation between \(\phi\) and \(\tau\) (54).
Figure 9: The proof of (16) in terms of diagrams.
By summing up the local density, and telescoping the sum, we arrive at
\[\begin{split}\mathbf{I}_{1,0}+\mathbf{I}_{0,1}&=\frac{2} {Q}\Bigg{[}\Big{(}\sum_{j=1}^{2L}\mathbf{e}_{j}+(-1)^{j}\frac{\mathrm{i}\sin( \sqrt{Q}\tau)}{2\sqrt{Q}}\left[\mathbf{e}_{j},\mathbf{e}_{j+1}\right]\\ &\qquad\qquad-\frac{\sin^{2}(\sqrt{Q}\tau/2)}{\sqrt{Q}}\left\{ \mathbf{e}_{j},\mathbf{e}_{j+1}\right\}\Big{)}-\frac{2-\cos(\sqrt{Q}\tau)}{ \sqrt{Q}}L\Bigg{]},\end{split} \tag{103}\]
and
\[\begin{split}\mathbf{I}_{1,0}-\mathbf{I}_{0,1}&=\frac {2}{Q}\Bigg{[}\Big{(}\sum_{j=1}^{2L}-\frac{\mathrm{i}\sin(\sqrt{Q}\tau)}{2 \sqrt{Q}}\left[\mathbf{e}_{j},\mathbf{e}_{j+1}\right]\\ &\qquad\qquad+\frac{(-1)^{j}\sin^{2}(\sqrt{Q}\tau/2)}{\sqrt{Q}} \left\{\mathbf{e}_{j},\mathbf{e}_{j+1}\right\}\Big{)}+\frac{\mathrm{i}\sin( \sqrt{Q}\tau)}{\sqrt{Q}}L\Bigg{]}.\end{split} \tag{104}\]
The two constants in (59) and (60) thus are
\[c_{1}=-\frac{2-\cos(\sqrt{Q}\tau)}{\sqrt{Q}}L,\quad c_{2}=\frac{\mathrm{i} \sin(\sqrt{Q}\tau)}{\sqrt{Q}}L. \tag{105}\]
## Appendix D Explicit form of critical Ashkin-Teller model
In the limit \(\phi\to 0\), the two sets of local charges from the two-parameter transfer matrix coincide due to the self-duality (15),
\[\phi=0\quad\Rightarrow\quad\mathbf{I}_{m,0}=\mathbf{I}_{0,m}. \tag{106}\]
In order to compare with the Ashkin-Teller Hamiltonian in the literature [69, 54], we introduce the unitary transformation
\[\begin{split}\mathcal{U}^{(4)}_{m}&=\frac{1}{2}\begin{pmatrix}1&1&1&1\\ 1&\mathrm{i}&-1&-\mathrm{i}\\ 1&-1&1&-1\\ 1&-\mathrm{i}&-1&\mathrm{i}\end{pmatrix}_{m},\\ \mathcal{U}^{(4)}&=\prod_{m=1}^{L}\mathcal{U}^{(4)}_{m}.\end{split} \tag{107}\]
Therefore, the critical Ashkin-Teller Hamiltonian becomes
\[\begin{split}\mathbf{H}^{\mathrm{AT}}=& 4\sqrt{2}\mathcal{U}^{(4)}\mathbf{I}_{1,0}\mathcal{U}^{( 4)}{}^{\dagger}=\sum_{m=1}^{L}\left[\mathbf{Z}_{m}+\mathbf{Z}_{m}^{\dagger}+ \frac{1}{\sqrt{2}}\mathbf{Z}_{m}^{2}\right.\\ &\left.+\mathbf{X}_{m}^{\dagger}\mathbf{X}_{m+1}+\mathbf{X}_{m} \mathbf{X}_{m+1}^{\dagger}+\frac{1}{\sqrt{2}}\mathbf{X}_{m}^{2}\mathbf{X}_{m+ 1}^{2}-\left(2+\frac{1}{\sqrt{2}}\right)\right],\end{split} \tag{108}\]
where \(\mathbf{Z}_{m}\) and \(\mathbf{X}_{m}\) are 4-state Potts operators in (50).
The Ashkin-Teller Hamiltonian obtained here (108) corresponds to a single point on the self-dual critical line of the phase diagram [69]. In addition, the Hamiltonian may appear in different guises in the literature. For instance, it is also possible to express the Hamiltonian (108) as a
spin-1/2 ladder [75; 76]. A non-Hermitian version of the Ashkin-Teller model has been shown to be equivalent to the dissipative quantum Ising chain [77].
|
2308.07545 | Vision-Language Dataset Distillation | Dataset distillation methods reduce large-scale datasets to smaller sets of
synthetic data, preserving sufficient information to quickly train a new model
from scratch. However, prior work on dataset distillation has focused
exclusively on image classification datasets, whereas modern large-scale
datasets are primarily vision-language datasets. In this work, we design the
first vision-language dataset distillation method, building on the idea of
trajectory matching. A key challenge is that vision-language datasets do not
have a set of discrete classes. To overcome this, our proposed method jointly
distills image-text pairs in a contrastive formulation. Further, we leverage
Low-Rank Adaptation (LoRA) matching to enable more efficient and effective
trajectory matching in complex modern vision-language models. Since there are
no existing baselines, we compare our distillation approach with three adapted
vision-language coreset selection methods. We demonstrate significant
improvements on the challenging Flickr30K and COCO retrieval benchmarks: for
example, on Flickr30K, the best coreset selection method selecting 1000
image-text pairs for training achieves only 5.6% image-to-text retrieval
accuracy (i.e., recall@1); in contrast, our dataset distillation almost doubles
that to 9.9% with just 100 training pairs, an order of magnitude fewer. | Xindi Wu, Byron Zhang, Zhiwei Deng, Olga Russakovsky | 2023-08-15T03:22:40Z | http://arxiv.org/abs/2308.07545v4 | # Vision-Language Dataset Distillation
###### Abstract
Dataset distillation methods promise to reduce large-scale datasets down to significantly smaller sets of (potentially synthetic) training examples, which preserve sufficient information for training a new model from scratch. So far, dataset distillation methods have been developed for image classification. However, with the rise in capabilities of vision-language models (VLMs), and especially given the scale of datasets necessary to train these models, the time is ripe to expand dataset distillation methods beyond image classification. In this work, we take the first steps towards this goal by expanding the idea of trajectory matching to create a distillation method for vision-language datasets. A key challenge is that vision-language datasets do not have a set of discrete classes. To overcome this, our proposed vision-language dataset distillation method jointly distills the image-text pairs in a contrastive formulation. Since there are no existing baselines, we compare our approach to three coreset selection methods (strategic subsampling of the training dataset), which we adapt to the vision-language setting. We demonstrate significant improvements on the challenging Flickr30K and COCO retrieval benchmarks: for example, on Flickr30K, the best coreset selection method selecting 1000 image-text pairs for training achieves only 5.6% image-to-text retrieval accuracy (i.e., recall@1); in contrast, our dataset distillation approach almost doubles that to 9.9% with just 100 (an order of magnitude fewer) training pairs. 1
Footnote 1: Website: princetonvisualai.github.io/multimodal_dataset_distillation.
## 1 Introduction
_Data_ = _Information + Irrelevant Data_ (Wright & Ma, 2022)
Dataset distillation aims to create concise summaries of data that preserve most of the critical information of the entire dataset. It holds paramount importance in the era of big data as it addresses the challenge posed by _"Data_ = _Information + Irrelevant Data_" (Wright & Ma, 2022), where we often need to learn the useful information in an ocean of non-critical data. The recent growth of dataset distillation methods, e.g., (Wang et al., 2018; Cazenavette et al., 2022; Nguyen et al., 2020) is primarily focused on image classification datasets, capturing class-specific information for building discriminative boundaries. Considering the recent progress in multimodal machine learning, where we witness the explosion of vision-language datasets in which the majority of image pixels may belong to irrelevant contextual elements and further may lack corresponding textual descriptions, a significant necessity arises to efficiently distill this vast amount of data. A well-distilled multimodal dataset simplifies complex vision-language interactions and emphasizes the most salient connections, making it more effective for models to learn the cross-modal representations.
**Why is it hard?** The first key challenge and the main difference from prior dataset distillation methods (Wang et al., 2018; Cazenavette et al., 2022; Deng & Russakovsky, 2022) is that vision-language datasets do not contain a discrete set of classes to ground the distillation process. Instead, these datasets contain intricate cross-modal relationships as well as redundancies, requiring a co-distillation approach to capture their interdependencies effectively. Second, the complexity of cross-modal representations and high-resolution images leads to computation challenges. Prior dataset distillation methods operate on low-resolution images (typically 28x28 or 32x32, as in MNIST (LeCun et al., 1998) or CIFAR (Krizhevsky et al., 2009)), and nevertheless suffer from significant computational costs even with simple ConvNet when creating distilled datasets. Vision-language datasets often contain higher-resolution images, and models designed for these datasets are substantially more complex. Lastly, unlike continuous data, text is inherently non-differentiable, making direct gradient-based optimization impossible on discrete text tokens.
**Our work.** We propose the first Vision-language Dataset Distillation method. Concretely, given a dataset of images with corresponding text descriptions, our method creates a much smaller synthetic set of (image, text embedding) pairs which can then be used to efficiently train a model that aims to learn the image-text alignment. Given the infeasibility of direct information extraction, our co-distillation is achieved by implicitly matching the by-products of the target vision-language data and the synthetic ones. In our case, the by-product is the _long-range training bi-trajectory_.
**Contributions.** To the best of our knowledge, this is the first work to tackle vision-language dataset distillation. In doing so, we make the following key contributions.
1. We highlight the challenges of vision-language dataset distillation and establish the first set of baselines for this task by adapting three coreset selection methods (Welling, 2009; Toneva et al., 2018; Farahani and Hekmatfar, 2009; Sener and Savarese, 2017).
2. We propose a method that jointly performs vision-language co-distillation. Our method is not restricted to discrete classes, in contrast to prior image classification dataset distillation methods, and is computationally tractable to operate on the high-resolution images.
3. Our method significantly improves image-text retrieval with training set constraints on the challenging Flickr30K (Plummer et al., 2015) and COCO (Lin et al., 2014) datasets. For example, the best coreset selection method (adapted K-center) achieves 5.6% image-to-text retrieval performance (R@1) after selecting 1000 image-text pairs for training. In contrast, our method almost doubles that performance on the same task to 9.9% with **an order of magnitude** fewer (just 100) distilled image-text pairs.
The growing interest in multimodal datasets makes it even more crucial to develop mechanisms that efficiently and effectively distill insights from different modalities. We hope this work jump-starts further research into the important and challenging space of vision-language dataset distillation.
## 2 Related Works
**Dataset Distillation.** The concept of dataset distillation demonstrated that a handful of synthetic images, although not drawn from the training distribution, can achieve comparable performance to that of the original dataset (Wang et al., 2018). Meta-learning based data distillation approaches (Nguyen et al., 2021; Zhou et al., 2022; Deng and Russakovsky, 2022; Nguyen et al., 2020; Vicol et al., 2022; Zhou et al., 2022) typically use bilevel optimization, where the inner loop trains on the distilled data samples and the outer loop optimizes the meta dataset. Several works (Zhao et al., 2020; Zhao and Bilen, 2021; Cazenavette et al., 2022; Lee et al., 2022; Jiang et al., 2022; Du et al., 2023) explored gradient or trajectory matching methods for dataset distillation, focusing on matching the gradient or trajectory of the gradient with respect to the model trained on the real and distilled data.
Figure 1: **Dataset Distillation Comparison.** (_Left_) Prior dataset distillation methods (Wang et al., 2018; Cazenavette et al., 2022; Nguyen et al., 2020) are class-specific: they distill the key information for each individual discrete class. (_Center_) Even the recently-developed method of (Deng and Russakovsky, 2022) which enables information sharing between classes through learned bases still assumes a discrete set of classes. (_Right_) In contrast, we set out to distill vision-language datasets with no discrete classes; we do so via a novel method which jointly distills the vision and text.
Our work is mostly inspired by the trajectory matching method (Cazenavette et al., 2022; Cui et al., 2022), which is more efficient for optimization since it mostly does not involve long unrolling of computation graphs. Rather than aligning model gradients, another thread of work (Zhao et al., 2020; Wang et al., 2022; Lee et al., 2022) has been developed to align feature distributions between real and distilled data using distribution divergence metrics in the latent space. Our work is the first to scale up dataset distillation methods to vision-language tasks, which involves creating distilled data that capture critical features and complex relationships within and between the two modalities.
**Cross-modal Retrieval.** Most cross-modal retrieval methods function at the representation level and encourage a joint embedding space by measuring the similarities between learned representations across different modalities (Liang et al., 2022; Zhu et al., 2022; Pokle et al., 2022; Chun et al., 2021; Wu et al., 2023). Image-text retrieval focuses on the retrieval of images given captions, or of captions for a query image (Wang et al., 2020; Wu et al., 2019; Wehrmann et al., 2019). Many techniques have been developed to produce representations that are semantically similar for image-text pairs (Huang et al., 2018; Gu et al., 2018). More advanced image-text alignment methods (Li et al., 2022; Lin et al., 2023; Pandey et al., 2022) that incorporate pretraining have shown promising results on image-text retrieval tasks. We evaluate our vision-language dataset distillation method on image-text retrieval.
**Vision-language Knowledge Distillation.** Prior efforts on vision-language distillation are primarily centered around knowledge distillation, which transfers knowledge from a larger teacher model to a smaller student model to improve the latter's performance (Xue et al., 2023; Radenovic et al., 2023; Valverde et al., 2021). Our dataset distillation study focuses on an orthogonal question and is fundamentally a data-centric pragmatic compression problem. The goal is to find equivalent bits that can represent the entire multimodal dataset.
## 3 Method
We propose a vision-language dataset distillation method for distilling a large-scale dataset consisting of (image, text) pairs into a smaller dataset, while maintaining much of the original dataset's information relevant to training vision-language models (VLMs). The detailed method is in Fig. 2.
### Problem Formulation
Consider a large-scale dataset \(\mathbf{D}=\{(x_{i},y_{i})\}_{i=1}^{N}\), where each \(x_{i}\) denotes an image and each \(y_{i}\) denotes its corresponding text descriptions; note that in practice, \(y_{i}\) may be a set \(\{y_{i1},y_{i2},...,y_{iK}\}\) where \(K\) is the number of descriptions associated with each image. Our goal is to learn a smaller dataset \(\mathbf{\hat{D}}=\{(\hat{x}_{j},\hat{y}_{j})\}_{j=1}^{M}\), with significantly fewer data pairs \(M\ll N\) that still captures most of the essential information needed to train a VLM effectively. Concretely, consider a VLM with vision encoder \(f(\cdot;\theta_{img})\) and language encoder \(g(\cdot;\theta_{txt})\). This model can be trained by optimizing the similarity loss which encourages alignment between the image and text embeddings:
\[\theta^{*}\approx\operatorname*{arg\,min}_{\theta}\frac{1}{N}\sum_{i=1}^{N} \ell\left(f(x_{i};\theta_{img}),g(y_{i};\theta_{txt})\right). \tag{1}\]
Our goal is to distill a dataset \(\mathbf{\hat{D}}\) such that the model trained with \(\mathbf{\hat{D}}\) obtains vision-language matching performance comparable to the one trained on \(\mathbf{D}\). More specifically, consider a metric \(\mathbf{m}\) defined to quantify the correlation between the model's representation \(f(x;\theta_{img})\) of a given image \(x\) and the representation \(g(y;\theta_{txt})\) of a given text \(y\); this correlation should match the actual similarity between the image and text pairs. The correlation calculation is based on whether the image-text pair is a positive (matching) or a negative (non-matching) pair. Given the test dataset \(\mathbf{D}_{test}\), our objective can be defined as follows:
\[\mathbb{E}_{(x,y)\sim\mathbf{D}_{test}}\big{[}\mathbf{m}(f(x;\theta_{img}^{*} ),g(y;\theta_{txt}^{*}))\big{]}\simeq\mathbb{E}_{(x,y)\sim\mathbf{D}_{test}} \big{[}\mathbf{m}(f(x;\hat{\theta}_{img}),g(y;\hat{\theta}_{txt}))\big{]}. \tag{2}\]
Importantly, even when the model is trained on the distilled dataset \(\mathbf{\hat{D}}\), we still evaluate its performance on the original \(\mathbf{D}_{test}\) for a fair measurement. When creating the dataset \(\mathbf{\hat{D}}\), the pairs \((\hat{x},\hat{y})\) can be subsampled from the original set \(\mathbf{D}\), as described in the coreset selection methods below (Sec. 3.2). We propose a much more effective strategy in Sec. 3.3 to learn _synthetic_ image-text pairs \((\hat{x},\hat{y})\), which can be more information-rich.
**Connection with Image-only Dataset Distillation.** Traditionally, dataset distillation is tailored for classification tasks with discrete labels, each of which possesses a distinctive set of distilled data that enables efficient learning while preserving important information. We take this concept a step further to the multimodal scenario, where we distill information from both vision and language data. This involves creating synthetic data that capture critical relationships within and between these two modalities. As opposed to merely classifying discrete labels, we are examining a more complex, interconnected dataset where the relation between modalities is crucial. Our method considers the image-text correlation and how they influence each other. It is worth noting that this would be impossible if we solely optimize a single modality, which is supported by our unimodal distillation results in Tab. 4 which will be discussed later in Sec. 4.4.
### Baselines: Coreset Selection
Since, to the best of our knowledge, there is no pre-existing work in the domain of vision-language dataset distillation, we begin by formulating a set of baselines to construct the smaller dataset \(\hat{\mathbf{D}}\). These baselines are based on coreset selection methods, where a subset of the training pairs \((x_{i},y_{i})\) is chosen, up to a given budget of \(M\) pairs, as to maximize the "informativeness" of the selected subset. We consider three such methods, adapted from prior work.
**Herding**(Welling, 2009) Herding aims to greedily select pairs that are most similar to existing pairs. We use pre-trained encoders to extract features from the image-text pairs, concatenate the features, and calculate the "center" of the dataset in this feature space by averaging all feature vectors. We start with an empty coreset and for each iteration, add the image-text pair that is closest to the current "center" of the coreset in Euclidean distance. This is to minimize the distance between the coreset center and the dataset center. We recalculate the coreset center after adding each data point.
**K-center**(Farahani and Hekmatfar, 2009; Sener and Savarese, 2017) Different from computing a single center in Herding, K-center selects the training examples that are maximally separated. We concatenate the features of image-text pairs and randomly select a single data point. We add a new image-text pair that is _furthest_ in Euclidean distance from the nearest example already selected until selecting K points. The drawback is its high computational cost, especially with large datasets, as it involves heavy distance calculations between data points in each iteration.
**Forgetting**(Toneva et al., 2018) The core idea is to directly identify reliable training examples that the original model consistently learns well. During each training epoch, we check how accurately the models predict every image-text pair for a specific task (i.e., image-text retrieval). A forgetting event is registered for an image-text pair when the model correctly predicts the example in one epoch but fails in the next. Throughout training, we continually track these forgetting events for each pair, to identify the ones with the _fewest_ forgetting events.
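To make these selection baselines concrete, below is a minimal sketch of the greedy K-center criterion operating on concatenated image-text features. The feature arrays, their dimensions, and the function name are illustrative placeholders; the actual baselines use features from the pretrained encoders described in Sec. 4.2.

```python
import numpy as np

def k_center_greedy(features, k, seed=0):
    """Greedy K-center selection: repeatedly add the point farthest (in Euclidean
    distance) from its nearest already-selected point. `features` has one row
    per image-text pair (e.g. concatenated image and text embeddings)."""
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    selected = [int(rng.integers(n))]                 # random first center
    # distance of every point to its nearest selected center
    dists = np.linalg.norm(features - features[selected[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(dists.argmax())                     # farthest point so far
        selected.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(features - features[nxt], axis=1))
    return selected

# toy usage: 10,000 pairs with 512-d image and 768-d text features (made-up sizes)
img_feat = np.random.randn(10_000, 512)
txt_feat = np.random.randn(10_000, 768)
coreset_idx = k_center_greedy(np.concatenate([img_feat, txt_feat], axis=1), k=1000)
```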
Figure 2: **Vision-Language Dataset Distillation. Both the image and text encoders are pretrained and followed by a trainable projection layer, and the text encoder is frozen. We use contrastive loss to measure the distance between the paired image-text embeddings, which influences the trajectory updates during distillation. The right panel shows how the distilled data aligns its training trajectory with the expert’s, from a random starting point on the expert trajectory. The distilled dataset is updated based on bi-trajectory matching loss between the student and expert parameter trajectories.**
### Bi-trajectory Guided Vision-Language Co-Distillation
The coreset selection methods described above, while effective to some extent, demonstrate certain limitations as they only rely on selecting a subset of the training dataset \(\mathbf{D}\). This restriction leads to less effective results compared to our method, as ours provides the flexibility to generate an optimized distilled dataset \(\hat{\mathbf{D}}\), and the learning process efficiently helps extract the most essential information embedded in \(\hat{\mathbf{D}}\). Not only does this lead to decreased storage and computational requirements, but it also optimizes the performance of the model trained on this distilled dataset.
Here we describe our vision-language dataset distillation framework, building off the idea of matching training trajectories (MTT) (Cazenavette et al., 2022) developed for distilling image classification datasets. The core idea of trajectory matching is that, since direct information extraction is not feasible, dataset distillation can be achieved by implicitly matching a by-product of training, namely the parameter trajectories induced by the distilled dataset and by the original full dataset. We can compute a loss function on the cumulative discrepancy between the expert parameter trajectory \(\theta^{*}\) obtained from the model trained on the full dataset \(\mathbf{D}\) and the parameters \(\hat{\theta}\) obtained from the model trained on the distilled dataset \(\hat{\mathbf{D}}\), and use that loss to guide the creation of a better \(\hat{\mathbf{D}}\), one that can match the parameters \(\theta^{*}\) more closely. The approach consists of two stages:
1. Obtaining the expert training trajectories \(\{\tau^{*}\}\), with each trajectory \(\tau^{*}=\{\theta_{t}^{*}\}_{t=0}^{T}\), by training multiple models for \(T\) epochs on the full dataset \(\mathbf{D}\). For our multimodal setting, the models are trained using **bidirectional contrastive loss**, described below.
2. Training a set of student models on the current distilled dataset \(\hat{\mathbf{D}}\) using the same bidirectional contrastive loss, and then updating \(\hat{\mathbf{D}}\) based on the **bi-trajectory matching loss** of the student models' parameter trajectories and the expert trajectories \(\tau^{*}\).
**Bidirectional Contrastive Loss.** We train both expert and student VLMs using the bidirectional contrastive loss, following the formalism of (Radford et al., 2021) as it is effective for learning shared image-text representation. Concretely, given a batch of \(n\) image-text pairs \(\{(x,y)\}\), either from the real dataset \(\mathbf{D}\) or from the synthetic distilled dataset \(\hat{\mathbf{D}}\), we jointly learn the encoders \(f(x;\theta_{img})\) and \(g(y;\theta_{txt})\) such that the cosine similarity of all \(n\) correct image-text pairs is high and that of the \((n^{2}-n)\) incorrect pairs is low. Cosine similarity is defined as: \(\alpha(x,y)=\frac{\langle f(x;\theta_{img}),g(y;\theta_{txt})\rangle}{\|f(x; \theta_{img})\|g(y;\theta_{txt})\|}\). We then compute bidirectional contrastive losses composed of an image-to-text matching loss and a text-to-image matching loss, following the form of the InfoNCE loss (Oord et al., 2018):
\[\ell_{contrastive}=-\frac{1}{2n}\sum_{(x,y)\text{ in batch}}\left(\log\frac{\exp(\alpha(x,y))}{\sum_{y^{\prime}}\exp(\alpha(x,y^{\prime}))}+\log \frac{\exp(\alpha(x,y))}{\sum_{x^{\prime}}\exp(\alpha(x^{\prime},y))}\right). \tag{3}\]
To imitate the effect of training data on parameter trajectories, we use the same objective function to guide the update of parameters \(\theta_{img},\theta_{txt}\) during both expert training (stage 1) and distillation (stage 2). Notably, while hard negative mining is typically used in conjunction with contrastive loss, here we rely fully on the dataset distillation process itself without additional intervention. This process inherently considers hard negatives; it distills samples that are hard negative samples for others, which are eventually effective samples for learning. Dataset distillation can potentially bypass the traditional hard negative mining complexities through the learning process.
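A minimal PyTorch sketch of Eqn. 3 is shown below. It is an illustration rather than the authors' exact implementation: the temperature-free form follows the equation as written, with the cosine similarity \(\alpha\) computed from \(\ell_2\)-normalized embeddings, and the per-row softmax implemented via cross-entropy against the diagonal targets.

```python
import torch
import torch.nn.functional as F

def bidirectional_contrastive_loss(img_emb, txt_emb):
    """img_emb, txt_emb: (n, d) projected embeddings of n matching image-text pairs.
    Row i of each tensor corresponds to the same pair, so the targets are 0..n-1."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    sim = img_emb @ txt_emb.t()                     # (n, n) cosine similarities
    targets = torch.arange(sim.size(0), device=sim.device)
    loss_i2t = F.cross_entropy(sim, targets)        # image -> text direction
    loss_t2i = F.cross_entropy(sim.t(), targets)    # text -> image direction
    return 0.5 * (loss_i2t + loss_t2i)
```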
**Bi-Trajectory Matching Loss.** Following the MTT (Cazenavette et al., 2022) formulation, we randomly sample \(M\) image-text pairs from \(\mathbf{D}\) to initialize the distilled dataset \(\hat{\mathbf{D}}\) (more details can be found in Sec. 4.2). We sample an expert trajectory \(\tau^{*}=\{\theta_{t}^{*}\}_{t=0}^{T}\) and a random starting epoch \(s\) to initialize \(\hat{\theta}_{s}=\theta_{s}^{*}\). We train the student model on the distilled dataset for \(\hat{R}\) steps to obtain \(\hat{\theta}_{s+\hat{R}}\). We then update the distilled dataset based on the bi-trajectory matching loss \(\ell_{trajectory}\), computed on the accumulated difference between the student trajectory and the expert trajectory:
\[\ell_{trajectory}=\frac{\|\hat{\theta}_{img,s+\hat{R}}-\theta_{img,s+\hat{R}}^{* }\|^{2}_{2}}{\|\theta_{img,s}^{*}-\theta_{img,s+\hat{R}}^{*}\|^{2}_{2}}+\frac {\|\hat{\theta}_{txt,s+\hat{R}}-\theta_{txt,s+\hat{R}}^{*}\|^{2}_{2}}{\| \theta_{txt,s}^{*}-\theta_{txt,s+\hat{R}}^{*}\|^{2}_{2}}. \tag{4}\]
We update the distilled dataset by back-propagating the loss in Eqn. 4 through the \(\hat{R}\) gradient descent updates, optimizing \(\hat{\mathbf{D}}\) directly in image pixel space and text embedding space. We initialize the continuous sentence embeddings using a pretrained BERT model and update the distilled text in the continuous embedding space. For the distilled image optimization, we directly update the pixel values of the distilled images. The full details are described in Algorithm 1.
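A sketch of the loss in Eqn. 4 is given below. Variable names and the parameter flattening are illustrative, not the exact implementation of Algorithm 1; the `*_s` and `*_e` arguments denote the expert parameters at epochs \(s\) and \(s+\hat{R}\) of the sampled trajectory.

```python
import torch

def flat(params):
    """Concatenate a list of parameter tensors into a single vector."""
    return torch.cat([p.reshape(-1) for p in params])

def bi_trajectory_loss(student_img, student_txt,
                       expert_img_s, expert_img_e,
                       expert_txt_s, expert_txt_e):
    """Eqn. 4: squared distance between student and expert endpoint parameters,
    normalised by how far the expert moved, summed over the image and text towers."""
    num_i = (flat(student_img) - flat(expert_img_e)).pow(2).sum()
    den_i = (flat(expert_img_s) - flat(expert_img_e)).pow(2).sum()
    num_t = (flat(student_txt) - flat(expert_txt_e)).pow(2).sum()
    den_t = (flat(expert_txt_s) - flat(expert_txt_e)).pow(2).sum()
    return num_i / den_i + num_t / den_t
```

Since the distilled images and text embeddings are the leaf variables of the \(\hat{R}\) unrolled student updates, back-propagating this loss yields the gradients used to update \(\hat{\mathbf{D}}\).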
## 4 Experiments
In this section, we first show the potential and challenges of vision-language distillation in Sec. 4.1 by transitioning the trajectory-matching pipeline from image-only to image-text retrieval. We then describe the cross-modal retrieval test-bed in Sec. 4.2. We use it to evaluate our vision-language dataset co-distillation performance. We then compare our method to baseline coreset selection approaches and provide the key quantitative and qualitative results in Sec. 4.3. We further conduct a set of ablation studies to understand the impact of unimodal distillation vs. co-distillation in Sec. 4.4.
### CIFAR10 Classification vs Retrieval Distillation
Prior work has shown remarkable distillation results on CIFAR10 (Krizhevsky et al., 2009) classification. To move from distilling image-only datasets to vision-language datasets, we first check if our method has potential in simple settings. Concretely, we convert CIFAR10 labels to captions that pair with their corresponding images. Under this formulation, the objective of classification is equivalent to that of image-to-text retrieval (TR): finding the best text given an image.
In Table 1, we compare CIFAR10 distillation performance for dataset sizes of 1, 10, and 50 images per class (IPC), under three different settings: classification, single-caption retrieval, and multi-caption retrieval. For classification, we demonstrate results from MTT (Cazenavette et al., 2022), where they distill an image-only dataset using expert trajectories trained on image-label pairs. In single-caption TR, we distill image-caption pairs using expert trajectories trained when each image is paired with a single caption "This is a (label)". In multi-caption TR, we distill image-caption pairs but the expert trajectories are trained when each image is paired with five captions that are generated with varied prompts from (Radford et al., 2021). For consistency, all image trajectories are obtained with the 3-layer ConvNet backbone as specified in (Cazenavette et al., 2022), and text trajectories are from linear projection layers over pretrained BERT (Devlin et al., 2018) embeddings. Although the performance of vision-language distillation trails behind that of image-only distillation, the gap closes at larger IPCs. However, this gap highlights the challenge of the continuous label space in vision-language datasets. Moreover, the performance gap between single and multi-caption retrieval demonstrates the challenge of capturing the variability within human language descriptions.
\begin{table}
\begin{tabular}{l|c c c} \hline \hline \multirow{2}{*}{IPC} & \multirow{2}{*}{**Classification**} & \multicolumn{2}{c}{Image-to-Text Retrieval} \\ \cline{3-4} & & Single Caption & Multi Caption \\ \hline
1 & \(46.3\pm 0.8\) & \(27.4\pm 1.0\) & \(22.3\pm 1.0\) \\
10 & \(63.3\pm 0.7\) & \(35.9\pm 0.7\) & \(33.2\pm 0.5\) \\
50 & \(71.6\pm 0.2\) & \(66.8\pm 1.1\) & \(62.0\pm 0.8\) \\ Full & \(84.8\pm 0.1\) & \(79.6\pm 0.6\) & \(80.3\pm 0.4\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: **CIFAR10 Classification vs Retrieval.** We provide IPC=1/10/50 classification accuracy vs. image-to-text retrieval R@1; both measure whether an image is matched with the correct class.
### Vision-language Distillation Setup
**Datasets and Tasks.** We evaluate our method on standard vision-language datasets: Flickr30K (Plummer et al., 2015) and COCO (Lin et al., 2014), which are widely used for image-text retrieval tasks. We use them for expert training (stage 1) and distillation (stage 2). We adopt the Karpathy split (Karpathy and Fei-Fei, 2015) for Flickr30K (29k/1k/1k) and COCO (113k/5k/5k) for train/validation/test respectively. Each image is paired with five captions. We retrieve the closest matches using cosine distance from one modality based on a query from the other. We use R@K (for K\(\in\{1,5,10\}\)) to compute the fraction of times the correct result appears among the top K items.
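For reference, image-to-text R@K can be computed as in the following sketch (names are illustrative; the toy ground-truth mapping assumes captions are stored in consecutive blocks of five per image):

```python
import numpy as np

def recall_at_k(sim, gt, ks=(1, 5, 10)):
    """sim: (n_images, n_captions) cosine similarities; gt[i] is the set of
    caption indices belonging to image i.  Returns image-to-text R@K."""
    ranks = np.argsort(-sim, axis=1)                  # captions sorted by similarity
    out = {}
    for k in ks:
        hits = [len(set(ranks[i, :k]) & gt[i]) > 0 for i in range(sim.shape[0])]
        out[f"R@{k}"] = 100.0 * np.mean(hits)
    return out

# toy usage: 1000 test images, 5 captions each
sim = np.random.randn(1000, 5000)
gt = [set(range(5 * i, 5 * i + 5)) for i in range(1000)]
print(recall_at_k(sim, gt))
```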
**Network Architectures.** We primarily use pretrained and trainable NormalizerFree ResNet (NFNet) (Brock et al., 2021; Wightman, 2019) as the image backbone following Flamingo (Alayrac et al., 2022) as well as pretrained and frozen BERT (Devlin et al., 2018) as the text backbone. While both the encoders are pretrained, they are only pretrained on unimodal data with _no_ exposure to the other modality. Each encoder is followed by a trainable linear projection layer using Kaiming uniform initialization (He et al., 2015). Using a trainable BERT adds additional complexity which is orthogonal to vision-language dataset distillation and is out of the scope of this work. Pretrained models serve as a common foundation and a good starting point; see Appendix Sec. B.3 for further discussion. Ablation studies on different backbones are in Appendix Sec. B.2.
**Implementation.** For expert training, we train on a single RTX 3090 GPU for 10 epochs, where a single epoch takes 40 minutes of wall-clock time. Sampling from a set of trajectories encourages the distilled dataset to include diverse information and avoid overfitting to a particular step, thus we save 20 image-text bi-trajectories. For distillation, it takes 6 - 15 GPU hours depending on the settings (e.g. number of distilled pairs) with an 8-GPU A6000 node. We initialize a trainable learning rate \(\alpha\) at 0.1 for student network training. We follow the data augmentation techniques in (Li et al., 2022), including resizing, cropping, flipping, and RandomAugment from the transform.randaugment package. We use SGD with momentum 0.5; the learning rates for updating \(\alpha\), the distilled image pixels, and the distilled text embeddings are 1e-02, 1000, and 1000, respectively.
**Initialization.** Following prior studies (Nguyen et al., 2020; Zhou et al., 2022), we initialize the distilled set with randomly selected real samples. We randomly select \(n\in\{100,200,500,1000\}\) image-text pairs from the original dataset, with images at 224 \(\times\) 224 resolution, and 768-dimensional sentence embeddings obtained via pretrained BERT. Our findings in Appendix Sec. B.1 show that initializing images from a Gaussian distribution results in significantly lower performance. The complexity of images makes learning from random initializations challenging. In contrast, there is little difference in performance between using real and randomly initialized text embeddings. Surprisingly, despite the initial lack of semantic correspondence between 'noise' texts and real images, we found notable semantic similarity between distilled text and real images, suggesting potential applications of our method in Visual Question Answering (VQA).
### Key Results
**Quantitative Results.** As shown in Tab. 2 and Tab. 5 in Appendix Sec. A, we observe that although there is relatively little variation in performance across each of the coreset selection baselines which we compare to, dataset distillation outperforms the best alternative by anywhere between 138% (improving R@1 from 5.6 for **K** to 13.3 for our model) and 661% (improving R@1 from 1.3 for **R** to 9.9 for our model). The relative improvement increases when fewer pairs are used for training and for smaller values of K in R@K as shown in Tab. 5. We report the practical upper/lower performance limits in Tab. 3 and we keep the BERT backbone frozen for a fair comparison. Note that the upper bound results do not reflect SOTA performance, but full dataset training under the same setting. Moreover, as shown in Tab. 5, we note that with 1000 pairs, almost 30 times fewer examples than in the original dataset, our data distillation approach reaches 43.7 R@10 for TR, relative to a practical upper bound of 75.2, and 34.4 for IR R@10, relative to an upper bound of 69.7. We also observe that the performance among the baseline coreset selection methods varies only slightly, with no single method consistently outperforming the others across all pair sizes and retrieval metrics, often matching or underperforming random selection. This suggests limitations to these coreset selection methods in multimodal settings. In comparison, our bi-trajectory co-distillation method is optimized for vision-language alignment settings and thus performs significantly better. Our results show the effectiveness of distilled data, achieving unparalleled efficiency with significantly fewer examples.
**Qualitative Results.** Here we provide visualizations of distilled image-text pairs, drawn from the 100 distilled pairs from Flickr30K after 2000 distillation steps, in Fig. 3. We visualize the distilled text embeddings via their nearest neighbor sentences (cosine similarity) in the training set embedding space for more intuitive understanding. Additional visualizations are in Appendix Sec. D. The distilled images, compared to the original ones, add high-frequency components that help improve the generalization performance (Wang et al., 2020). While the distilled texts maintain semantic components associated with the distilled images and capture the key attributes e.g. "couple", "kiss", "man", "surf", "huge wave", they also deviate from the original sentence embeddings, as they are not in the original five captions paired with the images. The improved performance indicates that both high-frequency components and semantic ones are perceived by models and these significantly help in aligning vision-language modalities.
### Ablation Studies
We conduct a set of ablation studies to understand unimodal distillation vs. co-distillation, distilled dataset initialization (Sec. B.1), different encoder backbones (Sec. B.2), pretraining (Sec. B.3), synthetic steps (Sec. B.4), and their influence on the distilled dataset performance (see Appendix for full details). Here we first compare co-distillation to unimodal distillation where we keep one of the modalities fixed. Tab. 4 shows the retrieval performance of text-only distillation (**T**), image-only distillation (**I**), and co-distillation (**Ours**). For all tasks and metrics the joint co-distillation method clearly outperforms the text-only and image-only distillation. For example, with 1000 training pairs in the distilled dataset, when performing TR, the text-only distillation achieves 7.7 R@1, image-only distillation achieves slightly lower at 5.0, while co-distillation beats both handily at 13.2.
We observed that the improvement of text-only distillation was generally less than that of image-only distillation. This may not be surprising: a good description of an image typically contains only a salient but small portion of the visual information. On the other hand, descriptions in our evaluated
\begin{table}
\begin{tabular}{l c c|c c c c|c c c|c c c|c} \hline \hline \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{4}{c|}{TR} & \multicolumn{4}{c}{IR} \\ \cline{3-13} \multicolumn{1}{c}{} & \multicolumn{1}{c|}{} & \multicolumn{4}{c|}{Coreset Selection} & \multicolumn{4}{c|}{Coreset Selection} & \multicolumn{4}{c|}{} \\ \cline{3-13} \cline{6-13} \multicolumn{1}{c}{Dataset} & \#pairs & ratio \% & R & H & K & F & Dist (ours) & R & H & K & F & Dist (ours) \\ \hline \multirow{4}{*}{Flickr30K} & 100 & 0.34 & 1.3 & 1.1 & 0.6 & 1.2 & **9.9**\(\pm\) 0.3 & 0.8 & 0.8 & 1.4 & 0.7 & **2.5**\(\pm\) 0.3 \\ & 200 & 0.68 & 2.1 & 2.3 & 2.2 & 1.5 & **10.2**\(\pm\) 0.8 & 1.0 & 1.0 & 1.2 & 1.1 & **3.3**\(\pm\) 0.2 \\ & 500 & 1.67 & 5.2 & 5.1 & 4.9 & 3.6 & **13.3**\(\pm\) 0.6 & 1.9 & 1.9 & 2.5 & 2.1 & **5.0**\(\pm\) 0.4 \\ & 1000 & 3.45 & 5.2 & 5.0 & 5.6 & 3.1 & **13.3**\(\pm\) 1.0 & 1.9 & 2.4 & 2.4 & 1.9 & **6.8**\(\pm\) 0.4 \\ \hline \multirow{4}{*}{COCO} & 100 & 0.08 & 1.0 & 0.7 & 0.7 & 0.7 & **4.7**\(\pm\) 0.2 & 0.3 & 0.5 & 0.4 & 0.3 & **1.3**\(\pm\) 0.1 \\ & 200 & 0.17 & 1.1 & 1.5 & 1.5 & 1.2 & **6.6**\(\pm\) 0.9 & 0.6 & 0.9 & 0.7 & 0.6 & **1.7**\(\pm\) 0.1 \\ & 500 & 0.44 & 2.4 & 3.0 & 3.5 & 1.8 & **6.6**\(\pm\) 0.3 & 1.1 & 1.7 & 1.1 & 0.8 & **2.5**\(\pm\) 0.5 \\ & 1000 & 0.88 & 3.3 & 3.2 & 3.2 & 2.9 & **9.1**\(\pm\) 0.5 & 1.5 & 1.3 & 1.5 & 0.7 & **3.3**\(\pm\) 0.1 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Baseline comparisons on Flickr30K (top) and COCO (bottom).** We compare our distillation method to four coreset selection methods: random selection of training examples (**R**), Herding (**H**) (Welling, 2009), K-center (**K**) (Farahani and Hekmatfar, 2009; Sener and Savarese, 2017) and Forgetting (**F**) (Toneva et al., 2018). We consider different selected sizes (100, 200, 500, and 1000) and report the image-to-text (TR) and text-to-image (IR) R@1 retrieval performance on Flickr30K and COCO datasets. See Appendix Sec. A for complete results with R@5/10. Ratio (%): the ratio (in percent) of the distilled set to the entire training set. We report our distillation results along with standard deviation, they are calculated from the performance of five differently initialized models after training on the same distilled dataset.
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c|c c c} \hline \hline
 & \multicolumn{6}{c|}{**Lower Bound: Random Ranking**} & \multicolumn{6}{c}{**Upper Bound: Full Dataset**} \\
 & \multicolumn{3}{c|}{TR} & \multicolumn{3}{c|}{IR} & \multicolumn{3}{c|}{TR} & \multicolumn{3}{c}{IR} \\
Dataset & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 \\ \hline
Flickr30K & 0.1 & 0.6 & 1.1 & 0.1 & 0.5 & 1.0 & 33.9 & 65.1 & 75.2 & 27.3 & 57.1 & 69.7 \\
COCO & 0.02 & 0.1 & 0.2 & 0.02 & 0.1 & 0.2 & 19.6 & 45.6 & 59.5 & 16.9 & 41.9 & 55.9 \\ \hline \hline
\end{tabular}
\end{table}
Table 3: **Practical Limits Comparison.** A side-by-side comparison of the image-to-text (TR) and text-to-image (IR) retrieval results obtained from (_lower bound, left_) random ranking (Kiros et al., 2014) and (_upper bound, right_) full dataset training on Flickr30K and COCO.
datasets typically contain no information that cannot be inferred from the images. By modifying the images toward text-relevant aspects, the optimization can highlight essential image features, whereas modifying text features can only introduce more information. Thus, if we interpret each original image as having substantially more information than its original sentence, we would expect image-only distillation to perform better in a smaller-scale regime (removing spurious information) and text-only distillation to perform better in a larger-scale regime (adding useful details). For example, with 1000 training pairs, IR performance is similar for both image-only and text-only distillation, while TR performance is better with text-only distillation than with image-only distillation.
In contrast, co-distillation allows the synthetic dataset to optimize further for both compact representation and efficient storage, removing redundant information between examples in smaller-scale contexts and adding information not present in the selected original images in larger-scale contexts. Our co-distillation method, which combines text and image modalities during training, consistently outperforms the single-modality distillation approaches across different numbers of training pairs and metrics. While the improvement from co-distillation is consistent, it is particularly substantial when the number of pairs is smaller: in the 100- and 200-pair rows, co-distillation outperforms its unimodal alternatives by over 2\(\times\). In fact, co-distillation with 100 pairs consistently outperforms unimodal distillation with 1000 pairs. These results demonstrate the effectiveness of jointly distilling across modalities and highlight the complementary nature of multimodal data.
## 5 Conclusion
In this work, we propose the first vision-language dataset distillation method. By co-distilling both vision and language modalities, we can progressively optimize and distill the most critical information from a training dataset. Our experiments show that co-distilling different modalities via bi-trajectory matching holds promise. We hope that the insights we gathered can serve as a roadmap for future studies exploring more complex settings, and that our work lays the groundwork for future research aimed at understanding the minimum information required for a vision-language model to achieve comparable performance quickly, thereby building a better understanding of the compositionality of compact visual-linguistic knowledge.
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c|c c c|c c c|c c c} \hline \hline
 & \multicolumn{9}{c|}{TR} & \multicolumn{9}{c}{IR} \\
 & \multicolumn{3}{c|}{R@1} & \multicolumn{3}{c|}{R@5} & \multicolumn{3}{c|}{R@10} & \multicolumn{3}{c|}{R@1} & \multicolumn{3}{c|}{R@5} & \multicolumn{3}{c}{R@10} \\
\# pairs & T & I & Ours & T & I & Ours & T & I & Ours & T & I & Ours & T & I & Ours & T & I & Ours \\ \hline
100 & 1.3 & 3.5 & **9.9** & 3.5 & 11.5 & **28.3** & 5.9 & 17.4 & **39.1** & 0.5 & 1.6 & **4.7** & 2.1 & 5.6 & **15.7** & 3.4 & 9.7 & **24.6** \\
200 & 1.4 & 4.5 & **10.2** & 4.8 & 12.8 & **28.7** & 8.2 & 21.7 & **41.9** & 0.7 & 2.0 & **4.6** & 2.7 & 8.1 & **16.0** & 4.7 & 13.0 & **25.5** \\
500 & 6.6 & 6.5 & **13.3** & 19.5 & 19.4 & **32.8** & 30.4 & 28.9 & **46.8** & 3.8 & 3.8 & **6.6** & 13.5 & 12.4 & **20.2** & 20.8 & 19.9 & **30.0** \\
1000 & 7.7 & 5.0 & **13.3** & 20.7 & 17.4 & **34.8** & 31.2 & 24.9 & **45.7** & 4.0 & 3.9 & **9.1** & 13.3 & 13.1 & **24.1** & 20.1 & 20.1 & **33.8** \\ \hline \hline
\end{tabular}
\end{table}
Table 4: **Ablation with single modality distillation**. We report the image-text retrieval performance of text-only distillation **(T)**, image-only distillation **(I)**, and co-distillation **(Ours)** on Flickr30K. The results demonstrate the effectiveness of jointly distilling both image and text.
Figure 3: **Before and After Distillation.** (_Left_) The image-text pairs before distillation. (_Right_) The image-text pairs after 2000 distillation steps. Note that the texts visualized here are the nearest sentence decodings in the training set corresponding to the distilled text embeddings.
## Ethics Statement
Our exploration is centered on the scientific understanding and practical applications of vision-language dataset distillation. While our work does not directly imply negative impacts, it may indirectly propagate the existing biases in the original datasets. Therefore, it is important to incorporate rigorous bias-mitigation measures and comprehensive ethical guidelines for dataset distillation. Discussion of these critical aspects should remain a priority as we further explore the potential of vision-language dataset distillation.
## Reproducibility Statement
To enhance the reproducibility of this paper, we provide the method setup in Sec. 3.3 and the implementation details in Sec. 4.2. We also provide open-source implementations at [https://github.com/princetonvisualai/multimodal_dataset_distillation](https://github.com/princetonvisualai/multimodal_dataset_distillation) to help reproduce the results and evaluate the performance.
## 6 Acknowledgement
This material is based upon work supported by the National Science Foundation under Grant No. 2107048. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. We thank many people from Princeton Visual AI lab (Allison Chen, Jihoon Chung, Tyler Zhu, Ye Zhu, William Yang and Kaiqu Liang) and Princeton NLP group (Carlos E. Jimenez, John Yang), Tiffany Ling and George Cazenavette for their helpful feedback on this work.
|
2306.11853 | Generalization Across Experimental Parameters in Machine Learning
Analysis of High Resolution Transmission Electron Microscopy Datasets | Neural networks are promising tools for high-throughput and accurate
transmission electron microscopy (TEM) analysis of nanomaterials, but are known
to generalize poorly on data that is "out-of-distribution" from their training
data. Given the limited set of image features typically seen in high-resolution
TEM imaging, it is unclear which images are considered out-of-distribution from
others. Here, we investigate how the choice of metadata features in the
training dataset influences neural network performance, focusing on the example
task of nanoparticle segmentation. We train and validate neural networks across
curated, experimentally-collected high-resolution TEM image datasets of
nanoparticles under controlled imaging and material parameters, including
magnification, dosage, nanoparticle diameter, and nanoparticle material.
Overall, we find that our neural networks are not robust across microscope
parameters, but do generalize across certain sample parameters. Additionally,
data preprocessing heavily influences the generalizability of neural networks
trained on nominally similar datasets. Our results highlight the need to
understand how dataset features affect deployment of data-driven algorithms. | Katherine Sytwu, Luis Rangel DaCosta, Mary C. Scott | 2023-06-20T19:13:49Z | http://arxiv.org/abs/2306.11853v1 | Generalization Across Experimental Parameters in Machine Learning Analysis of High Resolution Transmission Electron Microscopy Datasets
###### Abstract
Neural networks are promising tools for high-throughput and accurate transmission electron microscopy (TEM) analysis of nanomaterials, but are known to generalize poorly on data that is "out-of-distribution" from their training data. Given the limited set of image features typically seen in high-resolution TEM imaging, it is unclear which images are considered out-of-distribution from others. Here, we investigate how the choice of metadata features in the training dataset influences neural network performance, focusing on the example task of nanoparticle segmentation. We train and validate neural networks across curated, experimentally-collected high-resolution TEM image datasets of nanoparticles under controlled imaging and material parameters, including magnification, dosage, nanoparticle diameter, and nanoparticle material. Overall, we find that our neural networks are not robust across microscope parameters, but do generalize across certain sample parameters. Additionally, data preprocessing heavily influences the generalizability of neural networks trained on nominally similar datasets. Our results highlight the need to understand how dataset features affect deployment of data-driven algorithms.
## 1 Introduction
With increasing amounts of data from faster detector speeds and new automated microscope setups, there is a pressing need for high-throughput analysis of high-resolution transmission electron microscope (HRTEM) images of nanomaterials. HRTEM enables atomic-scale visualization of material structure with high temporal resolution, making it a useful imaging modality for high-throughput and in situ TEM experiments of nanoparticle synthesis and behavior. The most promising HRTEM image analysis methods to date have been based on convolutional neural networks (CNNs), a class of machine learning models that naturally take advantage of spatial correlations in image data (Madsen et al.,
2018; Vincent et al., 2021; Groschner et al., 2021). These algorithms utilize a framework in which patterns and trends are extracted from a large corpus of data, called the training set, and then evaluated on data the algorithm has not seen during training. The subsequent performance then depends on both the construction of a suitable optimization problem (which depends on the network architecture, training data, and loss function) and the procedure used to solve for the optimal parameters.
While CNNs have consistently outperformed traditional image analysis methods, CNNs and other machine learning models have also been empirically shown to not perform as well on data that is separate from their training set (Recht et al., 2019; Torralba and Efros, 2011). This inability to generalize has consequences for deploying CNNs for large-scale microscopy analysis, for instance in determining which networks are reusable across multiple experiments or reliable for data streams with changing conditions, like in situ data. Generalization issues are typically categorized in two ways: 1) in-distribution generalization, or the ability to generalize on data that has been nominally sampled from a similar distribution as the training data and whose drop in performance is commonly referred to as the "generalization gap", and 2) out-of-distribution generalization, or the ability to extrapolate to new data that is known to be different from the training set. While there has been a growing amount of research focused on algorithmic solutions to minimize generalization issues (Shen et al., 2021), we first need to understand under what conditions generalization problems occur. Such an analysis requires domain-specific knowledge which associates model performance gaps with domain-knowledge of the modified image or data features (Kaufmann and Vecchio, 2021; Liu et al., 2020; Li et al., 2023).
With HRTEM data, it is unclear what types of images are considered out-of-distribution from others. While metadata information like sample and/or imaging parameters may designate images as different from one another, it is unknown whether a trained neural network would be sensitive to such changes given the limited number of image features typically seen in HRTEM images. There is also limited knowledge on how the training dataset affects neural network performance, despite our (often) relatively complete understanding of both the sample and imaging process. While there have been some attempts to understand the effect of the training dataset with simulated data (Vincent et al., 2021), we lack experimental benchmarks to fully validate these generalization effects. With more data-driven models being proposed and developed by the microscopy community, there is a need to understand the reusability of these models on new datasets, and under what conditions they succeed or fail (Wei et al., 2023; Larsen et al., 2023).
In this paper, we systematically examine the robustness of neural networks trained to identify nanoparticles in HRTEM images (Figure 1a), focusing on the effect of microscope and sample parameters in the training set, including magnification, electron dosage, nanoparticle diameter, and nanoparticle material. As an example task, we focus on segmentation, or pixel-wise classification, of atomically-resolved crystalline nanoparticles against an amorphous background, a typical initial image processing step for further analysis of atomic defects or crystal structure (Groschner et al., 2021), or nanoparticle dynamic behavior (Yao et al., 2020). By curating experimental HRTEM datasets with controlled imaging and sample parameters, we not only qualitatively identify conditions under which we expect networks to generalize (or not), but also provide new datasets with extensive metadata that enable benchmarking HRTEM image analysis methods under specified microscopy conditions. In addition to our observations on training set effects, we demonstrate how data preprocessing influences generalization, providing a case study in preparing and utilizing experimental TEM data.
## 2 Methods
### Sample Preparation
2.2nm Au nanoparticles with citrate ligands were purchased from Nanopartz. 5nm, 10nm, and 20nm Au nanoparticles capped with tannic acid were purchased from Ted Pella. 5nm Ag nanoparticles with citrate ligands were purchased from nanoComposix. 5nm CdSe nanoparticles with oleylamine ligands were purchased from Strem Chemicals. To create a TEM sample from aqueous nanoparticle solutions (Au, Ag), an ultrathin carbon grid (Ted Pella) was plasma cleaned with a shield for 3 seconds to promote hydrophilicity, then 5 \(\mu\)L of the purchased nanoparticle solution was dropcast onto the grid, let sit for 5 minutes, and excess liquid was wiped off with a Kimwipe. For the CdSe nanoparticles, the nanoparticle solution was diluted to 0.625% of the original concentration with hexane, and 5\(\mu\)L of the diluted nanoparticle solution was dropcast onto an ultrathin carbon grid (Ted Pella) and let evaporate.
### TEM Imaging
HRTEM images were taken with a TEAM 0.5 aberration-corrected microscope operated at 300kV and a OneView camera (Gatan) at full resolution (4096 x 4096 pixels).
### Preprocessing and Dataset Creation
All HRTEM images were labeled by hand into segmented images using Labelbox. To create a dataset, raw images (and their corresponding labels) were selected from the larger data repository using metadata (i.e. microscope conditions, nanoparticle parameters, etc.), and then preprocessed into a dataset (Sytwu et al., 2023). Preprocessing consisted of four steps: 1) Removal of x-rays. 2) Flat-field correction. 3) Image value rescaling. 4) Division into smaller patches. We apply all preprocessing steps by image to ensure that our methods scale with new data (i.e. adding more images to a dataset) and are reflective of model deployment, which is likely to be done by image. X-rays were removed by replacing outlier pixels more than a threshold (1500 counts) above the mean counts with the average of their surrounding pixels. For flat field correction, we estimate the uneven illumination using iterative weighted linear regression to a 2D Bezier basis (n=2, m=2) (Sadre et al., 2021), and divide out the estimated illumination profile. The iterative reweighting lessens the contribution from nanoparticle regions such that the substrate regions are primarily used to determine the uneven illumination. The pixel values of each image are then rescaled using either normalization (set minimum to 0 and maximum to 1), standardization (set mean to 0 and standard deviation to 1), or a histogram-based scaling procedure similar to Digital Micrograph (normalize, but ignore the pixels outside the 1st and 99th percentiles). Finally, images are divided into 512x512 pixel patches to reduce GPU memory requirements during network training, and patches that are mostly substrate are removed to obtain better class balance. The datasets used in this paper are described in more detail in Table S1.
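To make these steps concrete, the sketch below gives a simplified per-image version of this pipeline in NumPy. The iteratively reweighted Bezier flat-field correction is omitted here for brevity, the 1500-count x-ray cutoff follows the text, and the function names are illustrative rather than taken from the released code.

```python
# Simplified per-image preprocessing sketch: x-ray removal, rescaling, and patching.
import numpy as np

def remove_xrays(img, threshold=1500):
    """Replace outlier pixels (x-ray hits) with the local 3x3 mean of their neighbors."""
    out = img.astype(float).copy()
    hot = np.argwhere(out > out.mean() + threshold)
    for r, c in hot:
        r0, r1 = max(r - 1, 0), min(r + 2, out.shape[0])
        c0, c1 = max(c - 1, 0), min(c + 2, out.shape[1])
        patch = out[r0:r1, c0:c1]
        out[r, c] = (patch.sum() - out[r, c]) / (patch.size - 1)
    return out

def rescale(img, method="standardize"):
    if method == "normalize":      # min-max scaling to [0, 1]
        return (img - img.min()) / (img.max() - img.min())
    if method == "standardize":    # zero mean, unit standard deviation
        return (img - img.mean()) / img.std()
    if method == "histogram":      # clip to 1st/99th percentiles, then min-max scale
        lo, hi = np.percentile(img, [1, 99])
        return (np.clip(img, lo, hi) - lo) / (hi - lo)
    raise ValueError(method)

def to_patches(img, size=512):
    """Tile a large image into non-overlapping size x size patches."""
    h, w = img.shape[0] // size, img.shape[1] // size
    return [img[i*size:(i+1)*size, j*size:(j+1)*size] for i in range(h) for j in range(w)]

raw = np.random.poisson(400, size=(4096, 4096)).astype(float)   # stand-in for camera counts
patches = to_patches(rescale(remove_xrays(raw), "standardize"))
print(len(patches), patches[0].shape)                            # 64 patches of 512 x 512
```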
### Neural Network Training and Testing
For every dataset, five networks are trained using five-fold cross-validation to account for the variations from initial conditions and choice of test set. Networks are trained using a 70-10-20 percentage train-validation-test set split. After removing the test set, 1/8th of the remaining images are held as a validation set. Patches are assigned sequentially such that it is less likely for patches from similar image regions to end up in both the training and test sets. Each training/validation/test dataset is then augmented using the eight dihedral augmentations and shuffled.
Our neural network model architecture is a residual variant of the UNet architecture (Ronneberger et al., 2015) with four residual blocks, described in more detail in Sytwu et al. (2022) (Figure 1b). Models are trained under a supervised learning framework, using cross-entropy loss, a learning rate of 1e-4, and an Adam optimizer (default parameters). During training, we additionally augment 50% of the images with random rotations between 0 and 360 degrees, which empirically produces smoother prediction edges, but do not apply these random rotations to the validation or test set. We train for 250 epochs and save the model weights with the lowest validation loss within those 250 epochs. All training is done locally on an NVIDIA RTX3090 GPU.
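A condensed sketch of this training configuration is given below (PyTorch). It assumes a `ResUNet` class standing in for the residual UNet of Figure 1b and standard data loaders; it illustrates the eight dihedral augmentations, the Adam optimizer at a learning rate of 1e-4 with cross-entropy loss, and checkpointing on the lowest validation loss, and is not the exact released training script.

```python
# Sketch of the segmentation training setup, assuming a ResUNet model and data loaders exist.
import torch
import torch.nn as nn

def dihedral_variants(x):
    """All 8 rotations/reflections of a (C, H, W) tensor (the dihedral group D4).
    Applied to pre-expand the train/validation/test sets before building loaders."""
    views = []
    for k in range(4):
        r = torch.rot90(x, k, dims=(-2, -1))
        views += [r, torch.flip(r, dims=(-1,))]
    return views

def train(model, train_loader, val_loader, epochs=250, lr=1e-4, device="cuda"):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    best = float("inf")
    for epoch in range(epochs):
        model.train()
        for imgs, labels in train_loader:     # e.g. imgs (B, 1, 512, 512); labels (B, 512, 512) long
            opt.zero_grad()
            loss = loss_fn(model(imgs.to(device)), labels.to(device))
            loss.backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val = sum(loss_fn(model(i.to(device)), l.to(device)).item()
                      for i, l in val_loader) / len(val_loader)
        if val < best:                         # keep weights with the lowest validation loss
            best = val
            torch.save(model.state_dict(), "best_model.pt")

# Usage (ResUNet is assumed): train(ResUNet().to("cuda"), train_loader, val_loader)
```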
Figure 1: Overview of the network training and testing protocols. (a) Datasets with specified metadata parameters are labeled and created from large HRTEM images and used to train and test neural networks. (b) The residual UNet neural network architecture used for all models in this paper.
We evaluate our models using the hard dice score, also known as the F1 score, which quantifies the similarity between the prediction and the expert-provided label. The hard dice score can be calculated as \(\frac{2TP}{2TP+FP+FN}\) for a binary classification, and ranges from 0 (complete disagreement) to 1 (exact agreement). The results reported in this paper are the mean and standard deviation of the 5 trained models, evaluated either on the test set (if drawn from the same dataset) or on the entirety of the other datasets.
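For concreteness, the hard dice score can be computed for binary masks as follows (a direct translation of the formula above, assuming NumPy arrays of 0s and 1s).

```python
# Hard dice (F1) score between a binary prediction and a binary label mask.
import numpy as np

def hard_dice(pred, label):
    tp = np.sum((pred == 1) & (label == 1))   # true positives
    fp = np.sum((pred == 1) & (label == 0))   # false positives
    fn = np.sum((pred == 0) & (label == 1))   # false negatives
    return 2 * tp / (2 * tp + fp + fn)

print(hard_dice(np.array([1, 1, 0, 0]), np.array([1, 0, 0, 1])))  # 0.5
```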
## 3 Results
### Preprocessing
In order to identify the effect of the training dataset on network performance, models need to first generalize well on images with nominally similar microscope conditions and sample parameters, whether the images are taken from the same dataset or during a different microscope session. We find that choices in data preprocessing strongly affect whether this statement holds true. Preprocessing encompasses the conversion process from raw camera data in the form of CCD counts into a data format that is conducive to neural network training. There are two types of preprocessing steps: one is related to how datasets are created from the acquired data such that we can train neural networks in a memory-efficient and class-balanced manner (i.e. dividing large images into smaller patches that we feed into the neural network during training); the other is related to how image data values are converted into a standard format that enables generalization across quantitatively different camera outputs (i.e. if recorded counts are slightly different from changes in the camera gain). In this section, we will focus on the latter type of data preprocessing, and its implications for network generalization, with the key steps highlighted in Figure 2a.
Pixel value rescaling is necessary to convert output camera data into a standard format that is robust against exact electron counts. TEM image data is outputted as an array of counts, often from a high dynamic range sensor, whose exact pixel values correspond to detector and microscope parameters like gain and exact electron dosage. We test three different rescaling methods: normalization, standardization, and a histogram-based scaling method similar to
Figure 2: Effect of data preprocessing on network generalizability. (a) Overview of the data preprocessing workflow, from camera output to dataset creation. (b) The effect of pixel value rescaling procedures (normalization, standardization, and histogram-based) on the average performance of the training set and the test set for networks trained on the same 5nm Au nanoparticle dataset. Error bars refer to standard deviation over 5 networks. (c,d) Confusion matrices of network performance when trained and tested on images of 5nm Au nanoparticles taken at 0.02nm/pixel scale and 423 e/A\({}^{2}\) dosage from four different sessions (c) without any flat field correction and (d) with flat field correction. Error refers to standard deviation over 5 networks.
Digital Micrograph, a commonly used micrograph viewing software. Given a single set of images taken during the same session (8 images of 5nm Au nanoparticles, which results in 211 patches), we create three datasets that have the same content, but only differ by how the data is rescaled.
We find that the choice of pixel value rescaling method affects the generalization gap, or how well networks generalize to new images from the same microscope session. The rescaling method does not seem to noticeably affect the network's ability to converge to a solution, as evidenced by the high dice scores and low standard deviation of the training set performance for all 3 rescaling methods, but does affect generalization performance to the test set (Figure 2b). Normalization is the least robust rescaling method, having both the largest drop and variation in average test set performance relative to training set performance. Both of these trends suggest that by normalizing images, network performance is influenced by the sampling of the test set.
We attribute the performance differences across pixel rescaling methods to the larger variations in image values when normalizing, compared to more consistent nanoparticle contrast and background values when standardizing or applying histogram-based rescaling. TEM images often do not use the full dynamic range of the scientific sensor, and so normalization is sensitive to fluctuations in the long tail of pixel value counts. For small datasets, these variations in image contrast across images can lead to large differences between the training and validation/test sets, while standardization and histogram-based rescaling result in more consistent pixel value distributions between images (Figure S1). Due to standardization's higher performance and lowest variance, we standardize HRTEM image data for the subsequent datasets used in this paper. We do note that standardization partially relies on the assumption that the image values are normally distributed. This assumption holds mostly true for a wide-view image where the majority of the image area is amorphous substrate, but can potentially fail for an image whose majority area is crystalline material with a bimodal pixel value distribution from strongly diffracting lattice fringes.
In addition to consistent performance across a single microscope session or dataset, a robust algorithm should also be consistent across datasets that are nominally similar. We test our neural networks' ability to generalize to datasets taken during four different microscope sessions but with nominally similar sample and imaging conditions (5nm Au nanoparticles taken at 0.02nm pixel scale with 423 e/A\({}^{2}\) dosage). Figure 2c shows a confusion matrix of the networks' performance, with the diagonal elements highlighting the performance on test data taken from the same dataset, and the off-diagonal elements showcasing the performance on data from sessions different from the training dataset. These networks primarily perform well on test sets drawn from the same dataset (i.e. same microscope session) they were trained on, but fail to generalize to nominally similar data, suggesting that there is session-dependent information that the models are capturing.
By applying flat field correction to our images, we are able to obtain better generalization performance across microscope sessions. This preprocessing step corrects for uneven illumination across the image caused by either shifts in the monochromator or incorrect gain references. As seen in Figure 2d, once the images are flat field corrected, networks generalize much better to other sessions of nominally similar data. Flat field correction is particularly influential in our datasets because preprocessing is done per-image; the correction ensures that there is less variation across patches such that patch statistics better match larger scale image statistics. Note that flat field correction does not seem to impact how well the networks analyze the data--the diagonal elements of the confusion matrix retain similar performance regardless of flat field correction. Therefore, flat field correction primarily removes session-dependent experimental artifacts that affect generalization.
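As an illustration of what this step does, the sketch below divides out a smooth illumination estimate from an image. It is a simplified stand-in that fits a low-order 2D polynomial by ordinary least squares, rather than the iteratively reweighted fit to a 2D Bezier basis used in the actual pipeline.

```python
# Simplified flat-field correction: fit and divide out a smooth illumination profile.
import numpy as np

def flat_field_correct(img, degree=2, stride=16):
    small = img[::stride, ::stride]                     # downsample for a cheap fit
    yy, xx = np.mgrid[0:small.shape[0], 0:small.shape[1]]
    yy, xx = yy / yy.max(), xx / xx.max()
    # Low-order 2D polynomial basis (includes the constant term).
    terms = [xx**i * yy**j for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.stack([t.ravel() for t in terms], axis=1)
    coef, *_ = np.linalg.lstsq(A, small.ravel(), rcond=None)
    fit = (A @ coef).reshape(small.shape)
    # Upsample the smooth illumination estimate back to full resolution and divide it out.
    full = np.kron(fit, np.ones((stride, stride)))[: img.shape[0], : img.shape[1]]
    return img / np.clip(full, 1e-6, None)
```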
### Generalizability across microscope parameters
Microscope parameters heavily affect how an HRTEM image is formed and the subsequent observed image features. As a sanity check, we first investigate the generalizability of networks across microscope magnifications. Our networks are not expected to generalize well since lattice fringes, a key nanoparticle image feature, have a characteristic length scale and our CNNs are, by construction, not scale-invariant. We create four datasets, each of 5nm Au nanoparticles taken at similar dosages but different magnifications (Figure 3a), and then train and test networks across the four datasets. As expected, neural network performance is worse on images taken at a different magnification than the training dataset, with a larger drop in performance on test sets with a greater difference in pixel scale (Figure 3b).
From the confusion matrix, we see that generalization behavior is not necessarily symmetrical. For instance, networks trained on images taken at 0.042nm pixel scale perform extremely poorly on images taken at 0.02nm pixel scale, but this difference in performance is smaller vice versa. We hypothesize that this asymmetry is from the neural network using additional information beyond just the spatial frequency of lattice fringes to make its decisions (Sytwu et al., 2022). In HRTEM imaging, changing the magnification alters the relative contributions of amplitude and diffraction (phase) effects in image formation, alongside rescaling the image. Nanoparticles have greater image contrast in images taken at lower magnifications (i.e. 0.042 nm/pixel) than in images taken at higher magnification (0.02 nm/pixel), and
are thus qualitatively easier to detect in low magnification images than higher ones. The ease of distinguishing between nanoparticle and background in the lower magnification images is also noted by the overall higher performance on the 0.042 nm/pixel dataset.
In addition to magnification, we find that networks do not generalize well to datasets taken at different electron dosages, which affect the signal-to-noise ratio in the image. We again create three datasets, each of 5nm Au nanoparticles taken at 0.02nm pixel scale, but at three different dosages to represent a low dose dataset (80 e/Å\({}^{2}\)), a medium dose dataset (423 e/Å\({}^{2}\)), and a high dose dataset (884 e/Å\({}^{2}\)) (Figure 3c). Here, we see a slightly more symmetrical confusion matrix, with all networks dropping in performance when tested on data taken at a dosage different from the training dataset (Figure 3d). Upon further analysis, we observe that networks tested on an image taken at a higher dosage (relative to the training set) tend to oversegment, as evidenced by a higher false positive rate, while networks tested on images taken at a lower dosage tend to undersegment (Figure S4). This suggests that evaluating on a dataset with a dosage different from the training dataset could incorrectly bias subsequent nanoparticle size analysis, but more detailed studies with a wider range of dosage values are needed to quantify these potential errors.
### Generalization across sample parameters
Understanding the reliability of a trained neural network across sample parameters is especially crucial for in situ studies and automated microscopy where microscope parameters are usually fixed but sample parameters may change or be unknown in the future. For nanoparticle datasets, one commonly varying sample parameter is nanoparticle size. We again create three datasets, each of Au nanoparticles taken at similar microscope conditions but varying in average nanoparticle diameter from 2.2nm to 10nm (Figure 4a). While the observed lattice fringes have the same characteristic length scale in all three datasets, larger nanoparticles are thicker and therefore have greater nanoparticle contrast (both amplitude and phase-contrast) against the substrate background. When evaluating network generalization (Figure 4b), we find that some models and datasets generalize well. All models perform equally well on the 5nm and 10nm datasets, but there is some variation in performance on the 2.2nm dataset depending on training dataset. All models perform worse on the 2.2nm dataset, likely due to the low nanoparticle contrast and difficulty of interpreting the images. Qualitative analysis of the predicted labels of the 2.2nm dataset suggests that the lower dice score may also be from
Figure 3: Network generalizability over microscope conditions. (a) Sample images from the four datasets of 5nm Au nanoparticles taken at different microscope magnifications. (b) Confusion matrix of network performance when trained and tested with datasets taken at different magnifications. (c) Sample images from the three datasets of 5nm Au nanoparticles taken at various dosage conditions. (d) Confusion matrix of network performance when trained and tested with datasets taken at different electron dosages. All scale bars are 5nm.
the network identifying particles that were missed by the human labeler (Figure S5). When normalizing for dataset difficulty, it is clearer that models trained on 2.2nm data generalize better than models trained on larger nanoparticles (Figure S2). These results suggest that networks could be trained to perform well on image data streams without needing to know the exact nanoparticle size beforehand.
In addition to nanoparticle size, nanoparticles can also vary in their material, which leads to differences in contrast (from atomic number, Z), and nanoparticle lattice features (from lattice spacing and crystal structure). We create three datasets of approximately 5nm nanoparticles taken at similar microscope conditions, but varying in material: Au, Ag and CdSe. Au and Ag are both fcc metals with similar lattice spacings, but differ in contrast (Z\({}_{\text{Au}}\) = 79, Z\({}_{\text{Ag}}\) = 47). CdSe nanoparticles, on the other hand, can take on either a wurtzite (hexagonal) or zinc blende (fcc) structure (both appear in our sample) with average lattice spacings greater than Au and Ag, but with contrast similar to Ag (Z\({}_{\text{Cd}}\) = 48). Again, we see an imbalance in network performance depending on the dataset, with both Au- and Ag-trained networks performing well on the Au dataset, and a strong dependence on training data for the CdSe and Ag datasets (Figure 4d). For the CdSe and Ag datasets, training on similar data does not even provide very high performance. Most interestingly, the CdSe-trained model performs decently well on the Au dataset, despite the CdSe nanoparticle regions having both different contrast and frequency information from the Au nanoparticle regions.
## Discussion
Overall, we find that there is potential for networks to generalize under certain sample parameters (nanoparticle size and material) but not over different microscope parameters (magnification and dosage). This suggests that pre-trained neural networks could be used for data streams with controlled imaging parameters, for instance with in situ datasets and automated microscopy. We also find that networks trained on more difficult-to-interpret data tend to generalize to new data better than networks trained on easier-to-interpret data, which can be observed in most of our confusion matrices. The datasets have been qualitatively ordered from lowest to highest in terms of how easily the nanoparticles are distinguishable, with higher nanoparticle contrast and observable lattice fringes making an image easier to interpret. Consistently, the generalization performance is worse in the lower left corner of our confusion matrices (train on easy images, test on harder images) compared to the upper right corner (train on hard images, test on easier images).
Figure 4: Network generalizability over nanoparticle sample parameters. (a) Sample images from the three datasets of Au nanoparticles of various diameters taken with similar microscope conditions. (b) Confusion matrix of network performance when trained and tested with datasets of Au nanoparticles with different diameters. (c) Sample images from the three datasets of approximately 5nm nanoparticles of either CdSe, Ag, or Au taken with similar microscope conditions. (d) Confusion matrix of network performance when trained and tested with datasets of nanoparticles of different materials. All scale bars are 5nm.
Since labeling difficult-to-interpret data is prone to larger human bias and error, these results highlight the need for simulation-based or multimodal datasets with accurate ground truth information to create useful training data (Madsen et al., 2018; Vincent et al., 2021).
In the absence of collecting more data to improve the generalizability of our networks, we can alternatively mimic lower contrast and more difficult to interpret datasets by adding noise and corrupting information in the higher contrast datasets for which we have higher confidence in the labeling. Upon adding Gaussian noise to the images during training, we lower the nanoparticle contrast, but retain the lattice fringe features that denote nanoparticle regions (Figure 5a). Note that additive Gaussian noise augmentation is a known regularization protocol to prevent overfitting (Bishop, 1995) and synthetically promote robustness (Gilmer et al., 2019).
As an example, we explore how additive noise augmentation affects generalizability across electron dosage. We train a series of models such that their training dataset of high dosage images (884 e/Å\({}^{2}\)) is augmented with additive Gaussian noise with a standard deviation of \(\rho\). We then evaluate the performance of these noise-augmented models on the original 884 e/Å\({}^{2}\) test set (high dose), 423 e/Å\({}^{2}\) dataset (medium dose), and 80 e/Å\({}^{2}\) dataset (low dose). As seen in Figure 5b, performance on all three datasets improves upon additive Gaussian noise augmentation, though the ideal amount of additive noise \(\rho\) depends on the dataset. As expected, more additive noise is needed to improve performance on lower dosage datasets. Additionally, for all datasets, additive Gaussian noise augmentation helps networks meet or exceed the average performance of neural networks trained on experimentally-collected similar data. This is surprising given that the measured noise from the OneView camera follows a scaled Poissonian distribution and not a Gaussian. It is unclear whether the high performance from this augmentation is from matching dataset characteristics or from regularizing decision boundaries. The optimal augmented noise level does not match the experimentally-collected dataset in either nanoparticle contrast (by matching histogram medians) or noise statistics (by matching image roughness) (Figure S7). However, when repeating this noise augmentation procedure on the medium-dose dataset, the noise-augmented models generalize poorly to higher dose data and require less additive noise to generalize well to lower dose data, suggesting that there is some dependence on dataset characteristics (Figure S8). All networks degrade in performance when \(\rho>1\) standard deviation, likely because this large noise augmentation destroys information in the image itself.
Figure 5: The effect of additive Gaussian noise on experimental data. (a) Sample image from the 884 e/Å\({}^{2}\) dataset (same as in Figure 3c) with various amounts of additive Gaussian noise of scale \(\rho\). Scale bar is 5nm. (b,c) Performance of neural networks trained on the 884 e/Å\({}^{2}\) dataset augmented with (b) additive Gaussian noise of scale \(\rho\) or (c) additive Gaussian noise with scale sampled between \([0,\rho_{\text{max}}]\) when tested on the 884 e/Å\({}^{2}\) test set (dark blue), 423 e/Å\({}^{2}\) dataset (blue), and 80 e/Å\({}^{2}\) dataset (turquoise). Dotted lines indicate the average performance of the respective dataset when trained on images from the same dataset.
As the necessary additive Gaussian noise scale may not be known a priori, we alternatively set the noise augmentation such that \(\rho\) is uniformly sampled between \([0,\rho_{\text{max}}]\) during training. Under this augmentation protocol, all noise-augmented-trained models perform well on the high dose and medium dose datasets, but none of them perform well enough on the low dose dataset to compare with low-dose trained models (Figure 5c). These results suggest that synthetic noise augmentation could be a viable strategy for developing robust networks on HRTEM images with decent signal-to-noise, but does not work effectively to generalize to low dosage images with low signal-to-noise. Recent work has highlighted the need for more accurate noise modeling, especially for low dosage images (Larsen et al., 2023), and our results similarly highlight the difficulty of generalizing to low-dosage images.
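A sketch of both noise-augmentation variants is shown below. It assumes patches have already been standardized (zero mean, unit standard deviation), so that the noise scale \(\rho\) is expressed in units of the image standard deviation; this interpretation is our assumption based on the \(\rho>1\) standard deviation remark above, not a statement of the exact implementation.

```python
# Additive Gaussian noise augmentation, applied per training patch.
import numpy as np

def add_gaussian_noise(patch, rho):
    """Fixed-scale variant: add N(0, rho^2) noise to a standardized patch."""
    return patch + np.random.normal(0.0, rho, size=patch.shape)

def add_sampled_noise(patch, rho_max):
    """Uniformly sampled variant: rho ~ U(0, rho_max), drawn independently per patch."""
    return add_gaussian_noise(patch, np.random.uniform(0.0, rho_max))

patch = np.random.normal(size=(512, 512))      # stand-in for a standardized HRTEM patch
noisy = add_sampled_noise(patch, rho_max=0.6)  # illustrative rho_max value
```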
We emphasize that the focus of our results is on the data-driven generalization trends rather than absolute neural network performance, which can be affected by label error and choice of model hyperparameters. As the models in this paper are all trained from hand-labeled experimentally-collected data, there is inherent human bias and error in the labels, primarily at the edges of nanoparticles, which affects the absolute value of the dice scores. Similarly, while our training curves suggest that our networks have converged to a local minimum that enables decent performance, there is still room for improvement by fine-tuning both model and optimization hyperparameters. We argue, however, that the generalization trends that we observe are data-dependent and seem to be robust even after hyperparameter tuning; in Figure S6, we show the generalization performance over nanoparticle size after tuning model hyperparameters, and while the overall dice scores are slightly different, the generalization trends are the same as in Figure 4b.
Finally, the observed sensitivity to data preprocessing suggests that we need a closer examination of how we convert raw scientific data into datasets for machine learning and other data-driven methods. While compressed digital images are easier to share, there needs to be greater transparency on how color mapping was performed, which affects image contrast values and the visibility of outliers, and can potentially lead to dataset biases (Zhong et al., 2021). Given the generalizability differences due to pixel rescaling method that we see in Figure 2b, we recommend that researchers are open about the TEM image creation process, namely how camera data is converted to image data. To this end, we have not only made our processed datasets for all of our models publicly available, but also the raw camera data such that preprocessing steps can be explored (Sytwu et al., 2023). By sharing the raw camera data rather than digital images, we hope to invigorate research into the necessary data preprocessing steps for robust algorithms that work on data from any experiment.
## 4 Conclusions
We investigated how training dataset creation affects neural network segmentation performance on HRTEM images of nanoparticles. We find that choices in data preprocessing, or the conversion from raw camera data to a machine-learning-ready dataset, heavily impacts the ability for networks to generalize to new data. Overall, we find that our trained neural networks are not generalizable across microscope parameters like magnification and electron dosage, which correspond with changing image features like feature size and signal-to-noise ratio. However, networks are more generalizable across sample parameters like nanoparticle diameter and certain nanoparticle materials, which corresponds with image features like nanoparticle contrast and lattice fringe frequency. These results give insight into the experimental conditions under which we can expect trained neural networks to be reliable, and suggest the varieties of data needed for generalizable neural networks.
## 5 Data Availability
All processed datasets, raw image data, and corresponding labels used in this paper are available in the Dryad Digital Repository, at [https://doi.org/10.7941/D1SP93](https://doi.org/10.7941/D1SP93) (Sytwu et al., 2023). The raw image data is also available at [https://portal.nersc.gov/project/m3795/hrtem-generalization/](https://portal.nersc.gov/project/m3795/hrtem-generalization/); code to download specific files based on metadata attributes is available on our Github. Code and Jupyter notebooks on dataset creation and model training/testing, trained model weights, and more visualizations of our results are available at [https://github.com/ScottLabUCB/HRTEM-Generalization](https://github.com/ScottLabUCB/HRTEM-Generalization).
## 6 Competing Interests
The authors declare no competing interests.
## 7 Acknowledgements
K.S. was supported by an appointment to the Intelligence Community Postdoctoral Research Fellowship Program at Lawrence Berkeley National Laboratory administered by Oak Ridge Institute for Science and Education (ORISE) through an interagency agreement between the U.S. Department of Energy and the Office of the Director of National Intelligence (ODNI). L.R.D. was supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Department of Energy Computational Science Graduate Fellowship under Award Number DE-SC0021110. This work was also funded by the US Department of Energy in the program "4D Camera Distillery: From Massive Electron Microscopy Scattering Data to Useful Information with AI/ML". Imaging was done at the Molecular Foundry, which is supported by the Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.
|
2304.07848 | A Study of Update Request Comments in Stack Overflow Answer Posts | Comments play an important role in updating Stack Overflow (SO) posts. They
are used to point out a problem (e.g., obsolete answer and buggy code) in a SO
answer or ask for more details about a proposed answer. We refer to this type
of comment as update request comments (URCs), which may trigger an update to
the answer post and thus improve its quality.
In this study, we manually analyze a set of 384 sampled SO answer posts and
their associated 1,221 comments to investigate the prevalence of URCs and how
URCs are addressed. We find that around half of the analyzed comments are URCs.
While 55.3% of URCs are addressed within 24 hours, 36.5% of URCs remain
unaddressed after a year. Moreover, we find that the current community-vote
mechanism could not differentiate URCs from non-URCs. Thus many URCs might not
be aware by users who can address the issue or improve the answer quality. As a
first step to enhance the awareness of URCs and support future research on
URCs, we investigate the feasibility of URC detection by proposing a set of
features extracted from different aspects of SO comments and using them to
build supervised classifiers that can automatically identify URCs. Our
experiments on 377 and 289 comments posted on answers to JavaScript and Python
questions show that the proposed URC classifier can achieve an accuracy of 90%
and an AUC of 0.96, on average. | Mohammad Sadegh Sheikhaei, Yuan Tian, Shaowei Wang | 2023-04-16T18:04:29Z | http://arxiv.org/abs/2304.07848v1 | # A Study of Update Request Comments in Stack Overflow Answer Posts
###### Abstract
Comments play an important role in updating Stack Overflow (SO) posts. They are used to point out a problem (e.g., obsolete answer and buggy code) in a SO answer or ask for more details about a proposed answer. We refer to this type of comment as update request comments (URCs), which may trigger an update to the answer post and thus improve its quality.
In this study, we manually analyze a set of 384 sampled SO answer posts and their associated 1,221 comments to investigate the prevalence of URCs and how URCs are addressed. We find that around half of the analyzed comments are URCs. While 55.3% of URCs are addressed within 24 hours, 36.5% of URCs remain unaddressed after a year. Moreover, we find that the current community-vote mechanism could not differentiate URCs from non-URCs. Thus many URCs might not be aware by users who can address the issue or improve the answer quality. As a first step to enhance the awareness of URCs and support future research on URCs, we investigate the feasibility of URC detection by proposing a set of features extracted from different aspects of SO comments and using them to build supervised classifiers that can automatically identify URCs. Our experiments on 377 and 289 comments posted on answers to JavaScript and Python questions show that the proposed URC classifier can achieve an accuracy of 90% and an AUC of 0.96, on average.
keywords: Stack Overflow, Answer quality, Crowd-sourced knowledge sharing, Commenting, Knowledge maintenance and update, Classification
## 1 Introduction
Many developers utilize Stack Overflow (SO) to find solutions for their programming issues. With about 22 million questions and 33 million answers1, SO is the largest Q&A site for computer programming (May et al., 2019). According to Jeff Atwood, one of the founders of SO, the goal of SO is not "answer my question" but "let's collaboratively build an artifact that will benefit future coders" (Atwood, 2018). As a result, most of the answers (sometimes the questions) on SO are required to be continuously edited to maintain/improve their quality via resolving the textual/code issues (e.g., handle deprecated APIs and fix flawed code snippet) in the previous version (Zhang et al., 2019; Wang et al., 2018).
Footnote 1: [https://stackexchange.com/sites](https://stackexchange.com/sites)
Comments on the SO posts are the main channel for users to communicate and discuss the potential problems in the posts. SO encourages users to post comments in an answer post when they find an issue in the answer and ask for an update on the answer explicitly (e.g., "Please replace method A with method B as A is deprecated.") or implicitly (e.g., "So when using ArrayList:new the given key is inserted into the list?"). We refer to such comments as _update request comments (URCs)_ because they can potentially trigger an update in the corresponding answer and thus improve its quality.
The questions and issues mentioned in URCs may be addressed in the next comment(s) or the body of the corresponding answer post, or even both of them. However, there is no guarantee for such URCs to be addressed. In SO, when a user writes a comment under an answer post, the system notifies the owner of the post, i.e., _answer owner_, about the new comment. Then for each URC, the answer owner can address it either by updating the answer body or by writing a new comment to reply. However, if the answer owner does not handle the problem, the URC remains unaddressed until other users address it in a new comment or in the body of the answer (i.e., becoming an _answer editor_). Prior studies find that only a small portion of their collected comments resulted in an update in the corresponding answer post, and that answer owners ignore many update requests raised in comments (Soni and Nadi, 2019; Zhang et al., 2019). In other words, handling such URCs is still problematic, and relying solely on answer owners to maintain their answers may not be realistic. SO is a collaborative community and all users on SO are encouraged to maintain its answers via collaborative editing (Wang
et al., 2018a). Therefore, SO probably needs to attract eyeballs from the entire community to handle URCs. To alleviate the above problem, a first step is to have a deep understanding of URCs and investigate the possibility of developing an automated approach to identify such URCs so that they are visible to the community.
In this work, we first conduct an empirical study on URCs to gain a deep understanding of them in terms of their prevalence, the percentage of URCs that remain unaddressed, how URCs are addressed by community members (i.e., addressed in the answer post, addressed in the following comment(s), or addressed in both), how fast URCs are addressed, the contribution of different user roles in addressing URCs, which part of the post they choose to address URCs in, and whether comment votes can be used to distinguish URCs from non-URCs.
To answer the above questions, we manually examined 1,221 comments from a statistically random sample of 384 answer posts of Java questions (questions with the \(<\)java\(>\) tag). We observed that half of the analyzed comments are URCs. More interestingly, while 55.3% of URCs are addressed within 24 hours, 36.5% of URCs remain unaddressed after a year. We also found that the majority (80.1%) of addressed URCs are addressed by the answer owner. Among addressed URCs, 88.7% were addressed in the next comments, 33.3% were addressed in the post body, and 22.1% were addressed in both. By investigating the comment scores, we found that the scores are not a good means of detecting URCs.
Our findings show that although URCs are prevalent and more than half are addressed within 24 hours, many are ignored (remain unaddressed). Moreover, these URCs might not be visible to the community as they may not be highly voted. Therefore, we continue to explore the feasibility of automated URC detection, as the first step towards improving the awareness of URCs. We propose multi-dimensional features, such as comment author role and comment relative time, that could potentially differentiate URCs from other comments based on our manual analysis of 1,221 comments. Then we employ these features in common supervised learning models to identify URCs. Specifically, we apply random forest (Breiman, 2001), logistic regression (Hosmer Jr et al., 2013), naive bayes (Zhang et al., 2011), and also two deep learning models (a CNN model by Qu et al. (2019) and a Transformer based model by Gu and Budhkar (2021)), and train them on different inputs, i.e., the extracted features, TF-IDF/text, or extracted features + TF-IDF/text. We use the 1,221 annotated comments related to the Java topic
(_Java comments_) to train these models, and then test them on comments extracted from JavaScript questions and Python questions to evaluate their performance and generalizability.
Our experiments on 377 and 289 comments posted on answers to JavaScript and Python questions, respectively, show that the models based on the extracted features outperform TF-IDF and text based models by a large margin in terms of accuracy and AUC. Also, among the investigated models, the Transformer based model (using BERT) that takes text and the extracted features as its input achieves the highest performance, i.e., around 90% accuracy and 0.96 AUC, indicating that URCs are highly predictable.
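As an illustration of this setup, the sketch below shows one of the simpler configurations: a random forest trained on TF-IDF text features concatenated with hand-crafted comment features (scikit-learn). The feature columns and variable names are placeholders for illustration and are not the exact feature set or code used in this study.

```python
# Illustrative URC classifier: TF-IDF text features + extracted comment features.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score, roc_auc_score

def train_urc_classifier(train_texts, train_feats, train_labels):
    """train_texts: list of comment strings; train_feats: 2D array of extracted features;
    train_labels: 1 for URC, 0 for non-URC."""
    vec = TfidfVectorizer(max_features=5000)
    X_text = vec.fit_transform(train_texts)
    X = hstack([X_text, csr_matrix(np.asarray(train_feats, dtype=float))]).tocsr()
    clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, train_labels)
    return vec, clf

def evaluate(vec, clf, texts, feats, labels):
    X = hstack([vec.transform(texts), csr_matrix(np.asarray(feats, dtype=float))]).tocsr()
    probs = clf.predict_proba(X)[:, 1]
    return accuracy_score(labels, (probs >= 0.5).astype(int)), roc_auc_score(labels, probs)
```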
Our contributions include:
* Perform an empirical study on the update request comments in Stack Overflow.
* Propose a supervised-learning approach that leverages our extracted features to detect URCs with an average accuracy of 90%. This approach can be used in Stack Overflow to decrease the rate of unaddressed URCs.
* Provide an annotated dataset of 1,221 comments2 posted on 384 randomly selected answers to Java questions. We provide three levels of annotation: 1) whether the comment is a URC; 2) for URCs, where it is addressed, i.e., in the next comments, in the post body, or both; 3) for URCs that are addressed in the following comments, which comment addresses it.
Footnote 2: available at [https://doi.org/10.6084/m9.figshare.19382156](https://doi.org/10.6084/m9.figshare.19382156)
## 2 Background
As mentioned earlier, SO posts (either questions or answers) are continuously updated to address different issues such as resolving bugs, meeting time-related concerns (such as deprecated APIs), and simplification. According to Stack Overflow3 "_Any user can propose edits, but not all edits are publicly visible immediately. If a user has less than 2,000 reputation, the suggested edit is placed in a review queue. Two accept or reject votes are required to remove the suggested edit from the queue and either apply the edit
to the post or discard it. Users with more than 2,000 reputation are considered trusted community members and can edit posts without going through the review process._" As a result, if there is a problem with a post, users usually write a comment on the post and ask for an update. Also, users may ask for clarification or question other related cases, resulting in the generalization of a solution.
Fig. 1 shows a sample answer post from a question having the \(<\)java\(>\) tag along with its comments. There are four comments on this sample4. The first comment is written by the questioner (the user who started this page by posting the question) to ask for more information about the answer. The second comment is written by the answerer (the user who wrote the answering post) to answer the first comment. The third comment is written by a third person to ask for more clarification, and the last comment is again from the answerer to address the third comment. Therefore, in this example, there are two URCs and two non-URCs. This sample also shows that there is an update on the answering post on Mar 29, 2016. Clicking on this date opens a new page in SO that shows the history of edits on this post, highlighting the newly added parts in green and deleted parts in red. In this example, the last line of the answering post, i.e., "_For a list of implementations, including validators in various languages, see JSON-Schema Implementations._", is appended by the answerer to address the first comment. Therefore, as shown in the figure, the first comment is a URC addressed in the next comment and the post body. The third comment, however, is a URC that is only addressed by the next comment. Both of these URCs are addressed in less than 24 hours.
Footnote 4: [https://stackoverflow.com/questions/36152972](https://stackoverflow.com/questions/36152972)
Footnote 5: [https://stackoverflow.com/questions/27304654](https://stackoverflow.com/questions/27304654)
The effect of comments on answer updates was studied before by Soni and Nadi (2019). They employed three rule-based heuristics to detect different types of comments regarding their effect on posts, although that approach discards around 35% of comments that do not match some primary conditions. In contrast, we created a dataset of 1,221 comments by providing three levels of annotation for each comment. We employed this dataset to (A) answer different empirical questions about URCs, and (B) train various ML and DL models to predict the type of comments. Due to the differences between our approach and the work of Soni and Nadi, we come to a significantly different conclusion on the ratio of addressed and unaddressed URCs. The details are presented in Section 7.
## 3 Empirical Analysis on Update Request Comments in SO Answers
To understand the prevalence of update request comments (URCs) and how they are handled in practice, we first conduct an empirical study by answering the following four research questions. Answers to them would also guide future studies leveraging SO comments. As far as we know, this is the first empirical analysis of URCs.
_RQ1: How prevalent are URCs in technical Q&A and how do they get addressed by the community members?_ Knowing the percentage of URCs and the number of unaddressed URCs among all comments gives us insight into how critical it is to analyze URCs. Other information, such as the ratio of URCs that caused a post update, gives us a higher-level perception of the user interactions via comments to improve the quality of answers.
Figure 1: A sample answer post from a question tagged with “Java”. There are two addressed update request comments and two non-update request comments on this answer.
_RQ2: How fast are URCs addressed?_ Knowing how quickly URCs are addressed gives us insight into the delay in addressing URCs, i.e., the potential waiting time for SO users to get their URCs addressed. This metric could also help the SO community keep track of how effective the community is in addressing URCs.
_RQ3: Which user role (questioner, answerer, other commenters) is more likely to address URCs? And in which part of the answer post do they choose to address URCs?_ Users who address URCs may have different roles on that SO page: questioner, answer owner, answer editor, and others. Also, they may address URCs in the post body, in the following comments, or both. Knowing which user role addresses URCs in which part of the answer post gives us insight into who contributes most to addressing URCs and where they prefer to address them.
Figure 2: A sample answer post from a question tagged with “Java”. There are four unaddressed update request comments on this answer.
_RQ4: Can comment votes be used to distinguish URCs from non-URCs?_ SO uses a community-vote mechanism to decide which comments should be shown at top positions. Knowing if votes can help identify URCs gives us insight into whether URCs can be naturally selected by SO users or not.
To answer the above questions, we collect a set of comments and determine whether each of them is a URC or not. For each URC, we also denote whether it is addressed. For addressed URCs, we record whether they are addressed in the answer post or in the following comments, and identify the role of the SO user who addressed the URC. Sections 3.1 and 3.2 describe the coding guide for comment labeling and how we collect data for answering the four RQs. Finally, we present the methodology and results of our empirical analysis in Section 3.3.
### Coding Guide to Annotate the Comments
Before labeling the comments, we need a coding guide to annotate the comments as URC/NO_URC, and then to label the URCs as URC_ADDRESSED/URC_UNADDRESSED. A URC is a comment, posted by any user other than the answerer, that explicitly or implicitly asks to update the answer and improve its quality. As the answerer has the ability to change his/her answering post, comments by this person are NO_URC comments. For the other comments, which come from the questioner or third users, we check the content of the comment. If it either points out problems in the answer post, asks for more information that would help to understand the answer better, or provides important information (e.g., regulations) associated with the answer, we label it as URC (because it has the potential of initiating a post update); otherwise we label it as NO_URC. Fig. 3 shows this process in a decision tree (a small code sketch of this decision procedure is given after Fig. 3). According to this figure, the first and third comments in Fig. 1 are URC because they ask for more information about a special case that would help the questioner to better understand the answer. However, the second and fourth comments are NO_URC because they are written by the answerer. To provide more examples, these are some comments by a questioner or a third person on an answer post that we consider as URC:
* How can I use this solution in my code?
* I ran it and got this Exception:...
* Can I use class B rather than class A in this solution?
* It doesn't work for me.
* Function A is deprecated.
* Works but runs slowly.
And here are examples of comments by a questioner or a third person on an answer post that we consider as NO_URC:
* Thank you!
* Great! This is the thing I was looking for.
* Oh, that's a nice point about having the possibility to put a null value there!
Figure 3: The decision tree applied in manually labeling comments by URC/NO_URC
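A minimal sketch of this labeling procedure is shown below, assuming the commenter's role and the three content checks have already been determined for each comment; the function and field names are hypothetical and only serve to illustrate the decision tree in Fig. 3, not to replace the manual annotation.

```python
# Hypothetical sketch of the URC/NO_URC decision tree in Fig. 3.
# The `comment` dictionary and its keys are illustrative; in practice the
# three content checks below were judged manually by the annotators.

def label_comment(comment) -> str:
    # Comments written by the answerer cannot request an update of their
    # own post, so they are always NO_URC.
    if comment["author_role"] == "answerer":
        return "NO_URC"

    # For questioners and third users, the content decides the label.
    if (comment["points_out_problem"]
            or comment["asks_for_more_information"]
            or comment["provides_important_information"]):
        return "URC"
    return "NO_URC"


# Example with a comment similar to the first comment in Fig. 1:
example = {
    "author_role": "questioner",
    "points_out_problem": False,
    "asks_for_more_information": True,
    "provides_important_information": False,
}
print(label_comment(example))  # -> "URC"
```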
To tag the URCs with the URC_ADDRESSED/URC_UNADDRESSED label, we add another level of annotation that determines whether the URC is addressed in the following comments, addressed in the post body, addressed in both of them, or remains unaddressed. Any answering comment related to the URC (even if it is not the correct answer) is sufficient to tag that comment as URC_ADDRESSED. We assume that if the answering comment is not what the user asked for, they will write another URC. The only answering comments that we do not treat as answers are those that explicitly say "I don't know" and so forth.
To answer RQ2 and RQ3, we need to know which one of the next comments addresses the current URC (if any). Thus, the final step of our annotation process is to add another level of labeling to determine the addressing commentID (when the URC is addressed by the next comments). If there are multiple addressing comments, we take the first (the oldest) of them. As a result, our dataset has three levels (columns) of labeling, i.e., needs_update, addressed_in, and addressed_by_commentID. Table 1 shows our labeling for the comments on the two answer posts mentioned in Fig. 1 and 2. Due to lack of space, only the first few words of each comment text are shown. In the "user role" column, we list the role of the commenter. Refer to Table 6 for more information about these roles. The three levels of our annotations are presented in the "needs update", "addressed in", and "addressed by commentID" columns.
### Data Collection
We collect data from the 2020_03_15 version of SOTorrent dataset (Baltes et al., 2019) by performing queries on Posts, PostHistory, Users, and Comments tables.
\begin{table}
\begin{tabular}{|c|c|c|c|l|c|c|c|} \hline
**question ID** & **answer ID** & **comment ID** & **user role** & **comment text** & **needs update** & **addressed in** & **addressed by comment ID** \\ \hline
36152972 & 36155326 & 60185364 & Questioner & Thank you. Is it easy to... & yes & both & 60190443 \\ \hline
36152972 & 36155326 & 60190443 & Answerer & Generally, sure. See... & no & - & - \\ \hline
36152972 & 36155326 & 99591573 & Not seen commenter & Can you expand on why... & yes & comment & 96594548 \\ \hline
36152972 & 36155326 & 99594548 & Answerer & @ubiquaocn: Of course any... & no & - & - \\ \hline
27304556 & 27304654 & 43072230 & Not seen commenter & What about if the smallest... & yes & no & - \\ \hline
27304556 & 27304654 & 43072237 & Not seen commenter & should probably stop at... & yes & no & - \\ \hline
27304556 & 27304654 & 4307307 & Questioner & I accepted your answer not... & yes & no & - \\ \hline
27304556 & 27304654 & 76939452 & Not seen commenter & @StevenAkaTax Its better... & yes & no & - \\ \hline
\end{tabular}
\end{table}
Table 1: Labels for the comments of two answer posts from the SO Java community
To achieve more reliable conclusions (less diverse results), in this study we focus on SO posts that are related to the Java language (have the <java> tag). We expect to see similar results for other popular programming languages such as Python and JavaScript. We also consider only the questions whose score is greater than or equal to zero. We assume that negatively scored questions/answers have not attracted enough interest from the community to evaluate and classify their quality. As we aim to investigate the comments on answer posts, only the answers that have at least one comment are considered. Given the huge number of candidate answer posts, we focus on those that were recently touched, i.e., the last activity date (last post edit date, or post creation date if it is not edited) is 1/1/2017 or later. Moreover, the accepted answers or the answers that have the highest vote among answers posted on the same question are more important for the community. So, we only consider answers that are either accepted or have the highest vote. After filtering the answers based on the mentioned features, we ended up with 124,472 answers. Then, we randomly choose a sample of 384 answers to fulfill a 5% margin of error with a confidence level of 95% for our statistical analysis. To calculate the sample size, we apply the sample size calculator6. Finally, we take all the comments of those 384 posts for our annotation process. Algorithm 1 shows the steps of data acquisition from SOTorrent.
Footnote 6: [http://www.raosoft.com/samplesize.html](http://www.raosoft.com/samplesize.html)
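As a quick sanity check of the reported sample size, the standard large-population sample-size formula with maximum variability reproduces the figure of 384 answers; whether the Raosoft calculator uses exactly this formula is an assumption on our side.

```python
# Back-of-the-envelope sample-size check (assumed Cochran formula, p = 0.5).
z = 1.96   # z-score for a 95% confidence level
p = 0.5    # assumed proportion (maximum variability)
e = 0.05   # 5% margin of error

n0 = (z ** 2) * p * (1 - p) / e ** 2
print(round(n0))  # -> 384, matching the number of sampled answers
```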
```
1- Select all questions with Java tag and \(Score\geq 0\)
2- Select answers of questions selected in Step 1 that A) are accepted or have the highest votes among answers posted on that question, B) were posted or edited on 1/1/2017 or later, and C) have one or more comments
3- Randomly select 384 answers from the set of answers made in step 2
4- Select all comments of the 384 answers selected in the step 3
```
**Algorithm 1** Data Acquisition from SOTorrent
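The steps of Algorithm 1 can be approximated with a single query over SOTorrent; the sketch below uses BigQuery-style SQL, and the dataset path and column names follow the public Stack Exchange data-dump schema that SOTorrent mirrors, so they should be treated as assumptions rather than the authors' actual query.

```python
# Illustrative BigQuery-style query approximating Algorithm 1 over SOTorrent.
# Dataset path and column names are assumptions based on the public
# Stack Exchange data-dump schema; adapt them to the actual SOTorrent release.
SAMPLE_SIZE = 384

QUERY = f"""
WITH java_questions AS (
  SELECT Id, AcceptedAnswerId
  FROM `sotorrent-org.2020_03_15.Posts`
  WHERE PostTypeId = 1              -- questions
    AND Score >= 0
    AND Tags LIKE '%<java>%'
),
candidate_answers AS (
  SELECT a.Id, a.Score, a.LastActivityDate, a.CommentCount,
         (a.Id = q.AcceptedAnswerId) AS is_accepted,
         ROW_NUMBER() OVER (PARTITION BY a.ParentId
                            ORDER BY a.Score DESC) AS score_rank
  FROM `sotorrent-org.2020_03_15.Posts` AS a
  JOIN java_questions AS q ON a.ParentId = q.Id
  WHERE a.PostTypeId = 2            -- answers
)
SELECT Id
FROM candidate_answers
WHERE (is_accepted OR score_rank = 1)     -- accepted or highest-voted (step 2A)
  AND LastActivityDate >= '2017-01-01'    -- recently touched (step 2B)
  AND CommentCount >= 1                   -- has at least one comment (step 2C)
ORDER BY RAND()
LIMIT {SAMPLE_SIZE}                       -- random sample of 384 answers (step 3)
"""

print(QUERY)  # submit to BigQuery, or run an equivalent query over the SOTorrent dumps
```

The comments of the sampled answers (step 4) can then be fetched from the Comments table by joining on PostId.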
To manually label the data, we follow the closed card sorting methodology. According to the coding guide for labeling the comments introduced in Section 3.1, the first two authors manually examined 100 randomly sampled comments obtained from step 4, and found only six inconsistent annotations. They discussed and agreed on the final labels of the six comments. The first author then continued to label the remaining comments. When annotating the comments, we found two answers with ambiguous comments. We ignored those two answers, and ended up with 1,221 annotated comments. Using the 100 randomly sampled comments that were annotated independently by the first two authors, the Cohen's kappa inter-rater agreement is 87.8%, which indicates an excellent agreement. We will publish our dataset online for future research on this subject.
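For reference, the inter-rater agreement on the doubly-annotated subset can be computed directly with scikit-learn; the two label vectors below are placeholders for the 100 comments labeled by each author.

```python
# Cohen's kappa between the two annotators (placeholder label vectors).
from sklearn.metrics import cohen_kappa_score

labels_author_1 = ["URC", "NO_URC", "URC", "URC", "NO_URC"]
labels_author_2 = ["URC", "NO_URC", "URC", "NO_URC", "NO_URC"]

print(cohen_kappa_score(labels_author_1, labels_author_2))
```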
### Methodology and Results
**RQ1: How prevalent are URCs in technical Q&A and how do they get addressed by the community members?**
We use our labeled dataset to answer this question. Table 2 shows the main statistics about update request comments. About half of the comments (631 of 1,221) are URCs. Among the 631 URCs, 417 comments (about 66%) are addressed either by a post update or by another comment, and about 34% of URCs remain unaddressed in our dataset. A strong majority (88.7%) of the 417 addressed URCs are addressed in the following comments. Also, 139 (33.3%) of these addressed URCs are addressed in the post body, while 92 of them are addressed in the following comments as well.
**RQ2: How fast are URCs addressed?**
As for RQ1, we use the annotated dataset to answer this question. In our labeling, we did not record which post update addresses a URC; we only labeled whether each URC is addressed by a post update or not. Thus, when a URC is addressed in the post body, we assume the first post update after that URC is the one that addresses it.
Table 3 shows the percentage of URCs that are addressed within 5 minutes, 1 hour, 1 day, 7 days, and a year. The table reports the proportions based on both the 417 addressed URCs and all 631 URCs. Interestingly, 20% of the 631 URCs are addressed within 5 minutes, and about 55.3% of them are addressed within a day. Among the 417 addressed URCs, 400 comments (95.9%)
\begin{table}
\begin{tabular}{|l|c|c|} \hline
**Statistical Questions** & **Count** & **Percent** \\ \hline How many comments are URC? & 631 of 1221 & 51.7\% \\ \hline How many of URCs are addressed (either in post or next comments)? & 417 of 631 & 66.1\% \\ \hline How many of the addressed URCs are addressed in the next comments? & 370 of 417 & 88.7\% \\ \hline How many of the addressed URCs are addressed in the post body? & 139 of 417 & 33.3\% \\ \hline How many of the addressed URCs are addressed both in the next comments and the post body? & 92 of 417 & 22.1\% \\ \hline \end{tabular}
\end{table}
Table 2: The main statistics about URCs in our dataset
were addressed within a year, meaning that 17 comments waited for more than a year to get addressed. On the other hand, among the 47 unaddressed URCs that were posted after 3/15/2019, one was addressed within a year but after 3/15/2020. So, among the 631 URCs, 417-17+1=401 comments (63.5%) were addressed within a year, and 230 comments (36.5%) remained unaddressed within this period.
**RQ3: Which user role (questioner, answerer, other commenters) is more likely to address URCs? And in which part of the answer post do they choose to address URCs?**
As for the previous RQs, we investigate our dataset to answer this question. Table 4 shows the results. The rows show who (which user role) addressed the URCs, and the columns show where (in which part of the post) URCs are addressed. There are 370 URCs that are addressed in the following comments, of which the majority (296) are addressed by the answer owner (i.e., the person who posted the answer for the first time). The remaining ones are addressed by an answer editor, i.e., a user who updates the existing answer (3 URCs), by the questioner (15 URCs), and by other users (56 URCs).
There are 139 post edits (to address URCs), of which 114 are by the post owners and the remaining 25 are by answer editors.
\begin{table}
\begin{tabular}{|l|c c c c|} \hline
**URCs addressed** & in comment & in post & in either & in both \\ \hline by answer owner & 296 & 114 & 334 & 76 \\ by answer editor & 3 & 25 & 27 & 1 \\ by questioner & 15 & 0 & 15 & 0 \\ by others & 56 & 0 & 56 & 0 \\ \hline by anyone & 370 & 139 & 417 & 92 \\ \hline \end{tabular}
\end{table}
Table 4: Role of users who address the URCs and where they address the URCs
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Addressed within** & **Of 417 addressed URCs** & **Of 631 URCs** \\ \hline
5 min & 30.2\% & 20.0\% \\
1 hour & 58.8\% & 38.8\% \\
1 day & 83.7\% & 55.3\% \\
7 days & 87.5\% & 57.8\% \\
1 year & 95.9\% & 63.4\% \\ \hline \end{tabular}
\end{table}
Table 3: The percentage of addressed URCs within the specified times
The answer owners addressed 334 URCs either in the post or in the next comments. It means that 334/417=80.1% of addressed URCs are addressed by the answer owners. The table also shows that 76 URCs are addressed by the post owner both in the next comments and in the post body.
Note that the values in the "by anyone" row for the "in either" and "in both" columns are not the sums of those columns. For example, among the 92 URCs that are addressed both in the following comments and in the post body, some are addressed by the post owner in the post body (without the owner writing a comment) while they are also addressed by others in the comments.
**RQ4: Can comment votes be used to distinguish URCs from non-URCs?**
Table 5 shows different quantiles of the comment scores (votes) for each group. The comment score for more than 75% of the comments in each group is 0. Also, the quantiles for 80% and 85% are equal, indicating that comment scores are not a good means to detect URCs.
**Summary:** About half of the comments are URCs. While 55.3% of URCs are addressed within 24 hours, 36.5% of URCs remain unaddressed after a year. Also, the majority (80.1%) of addressed URCs are addressed by the answer owner. The majority of URCs have a score of 0, which may make them invisible to the community.
## 4 Automatically Detect Update Request Comments
### URC Detection
We know that SO notifies the owner of a post when someone writes a new comment on it. This SO feature explains why most of the URCs (80.1%) are addressed by the owner of the answering post, and why 55.3% of the URCs are addressed within 24 hours, whereas 36.5% remain unaddressed after a year. In addition, we observe that the majority of URCs have a score of 0, which may make them invisible to community members
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline
**Category** & **50\%** & **75\%** & **80\%** & **85\%** & **90\%** & **95\%** \\ \hline NO\_URC & 0 & 0 & 1 & 1 & 1 & 2 \\ \hline URC & 0 & 0 & 1 & 1 & 2 & 4 \\ \hline \end{tabular}
\end{table}
Table 5: The quantiles of comment scores for each group of comments
who are potentially interested in addressing them. Therefore, if there is a reliable predictive model to automatically detect URCs, Stack Overflow can apply it to improve post quality by decreasing the number of unaddressed URCs. For example, when a new URC is posted on an answer and accurately identified by a URC detector, if the answer remains unchanged and no other comments appear after the identified URC, SO could predict this URC to be an unaddressed URC and push it to the community members who are interested in addressing the issue pointed out in the URC, including the original answerer. Thus, in this section we aim to investigate how predictable URCs are.
To predict URCs, we apply three classic ML models and two deep learning models (a CNN and a BERT based model). Each ML model utilizes either the features extracted from comments, or the TF-IDF of the comment text or both of them (i.e., the extracted features and the TF-IDF table that are horizontally concatenated). Each DL model utilizes either the text only or the text plus the features extracted from comments. We describe the details of the feature extraction process, TF-IDF extraction, and training classifiers as below.
**Feature Extraction:** Table 6 describes the features we extracted for each comment by considering multiple dimensions, inspired by our manual comment annotation process. These dimensions are as follows:
* **Comment features:** the features that are related to global aspects of a comment. It includes comment_score and comment_order.
* **Post features:** the features that are related to global aspects of a post. It includes post_score and post_comment_count.
* **User features:** we believe the role of the user who posts a comment is an important clue to detect URCs. Most NO_URCs are posted by the answerer to address previous URCs. We also consider user_reputation that is provided by Stack Overflow. As users with high reputation have the permission of updating answers, they may update answers by themselves in case of new URCs, and write new comments when they want to address a URC.
* **Time features:** These features are the relative times between a comment and its previous/next comment and its previous/next post updates. The rationale behind these features is that, in most cases, URCs are addressed within a short period of time.
* **Text similarity features:** Each comment is more or less related to other comments and post updates. In most cases, it is related to the immediately previous/next comment or the immediately following post update. We use two different text similarity measures, i.e., the Jaccard similarity and the cosine similarity between BERT vectors (obtained by running SBERT7 on each comment), to extract the similarities between a comment and its previous/next comments. We also use the Jaccard similarity to measure the similarity between a comment and the next post change occurring after that comment. By extracting these similarities, we can determine how related the nearby events are to the current comment. Footnote 7: [https://www.sbert.net/](https://www.sbert.net/)
* **Text semantic features:** We use TextBlob (Loria, 2018), a Python text processing library, to extract the polarity and subjectivity of comments. We expect critical comments (which are a kind of URC) to have a negative polarity.
* **Text extracted features:** There are some characters such as the question mark, keywords such as "exception" or "but", URLs, and emoticons (like ;)) whose presence in a text provides useful information about the content of a comment. We also consider the text length, because it indicates how much information is provided by that comment. A small sketch of some of these feature extractors is given after this list.
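The following sketch illustrates a few of the text-based extractors above; the implementation details (tokenization, thresholds) are simplifications, and the feature names only mirror Table 6 for readability.

```python
# Simplified sketch of a few comment-level feature extractors.
import re
from textblob import TextBlob  # used for polarity and subjectivity


def jaccard_similarity(text_a: str, text_b: str) -> float:
    """Word-level Jaccard similarity between two pieces of text."""
    a, b = set(text_a.lower().split()), set(text_b.lower().split())
    return len(a & b) / len(a | b) if (a or b) else 0.0


def text_features(comment: str) -> dict:
    """Text semantic and surface-level features for a single comment."""
    blob = TextBlob(comment)
    return {
        "polarity": blob.sentiment.polarity,          # in [-1, 1]
        "subjectivity": blob.sentiment.subjectivity,  # in [0, 1]
        "text_len": len(comment),
        "starts_with_@": comment.strip().startswith("@"),
        "contains_question_mark": "?" in comment,
        "contains_exclamation_mark": "!" in comment,
        "contains_but": bool(re.search(r"\bbut\b", comment, re.IGNORECASE)),
        "contains_exception": "exception" in comment.lower(),
        "contains_url": bool(re.search(r"https?://\S+", comment)),
    }


comment = "It doesn't work for me. I ran it and got this Exception: ..."
previous = "Have you tried the updated snippet in the answer?"
features = text_features(comment)
features["prev_comment_jaccard_sim"] = jaccard_similarity(comment, previous)
print(features)
```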
**TF-IDF Extraction:** Term frequency-inverse document frequency (TF-IDF) is one of the most popular methods to vectorize the documents of a corpus. A word in the TF-IDF representation of a document receives a high value if it has a high frequency in that document but a low frequency in the whole collection of documents. To achieve the best results, we remove stop words and also the words with very low frequency, i.e., those appearing only once or twice.
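A minimal version of this vectorization step with scikit-learn is sketched below; the toy corpus keeps `min_df=1` so that the example runs, while in the real setting a higher threshold would implement the removal of words that appear only once or twice.

```python
# TF-IDF vectorization sketch (stop-word removal + low-frequency cut-off).
from sklearn.feature_extraction.text import TfidfVectorizer

comments = [
    "Function A is deprecated, use function B instead.",
    "Thank you! This is exactly what I was looking for.",
    "I ran it and got this Exception when calling function A.",
]

# In the paper's setting a higher min_df would drop words that occur only
# once or twice in the 1,221-comment corpus; min_df=1 keeps this toy
# example runnable.
vectorizer = TfidfVectorizer(stop_words="english", min_df=1)
tfidf_matrix = vectorizer.fit_transform(comments)
print(tfidf_matrix.shape)  # (n_comments, n_terms); can be concatenated with features
```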
**Training Classifiers:** We consider three basic classification models, i.e., random forest, logistic regression, and naive Bayes. These models are picked as they are among the most commonly used in prediction tasks in the Software Engineering
domain (Yang et al., 2020), and have shown acceptable performance in those tasks. We also consider two deep learning models:
1) a convolutional neural network (CNN) proposed by Qu et al. (2019) that has the capability of incorporating the external features into the neural network (after the convolutional layers). To get the most from the CNN model, we modified its layers and tuned its hyperparameters according to a 10% validation set that is randomly selected from the 1,221 Java comments. In the original implementation of the CNN model, the authors used three convolutional layers. Also, they directly concatenated the external features to the convoluted text. However, we found that we get better results if we decrease the number of
\begin{table}
\begin{tabular}{|p{142.3pt} p{142.3pt}|} \hline
**Feature** & **Description** \\ \hline \hline
**Comment Features:** & \\ comment\_score & The score of the comment which is zero or a positive number. \\ comment\_order & The order of the comment on that post. The first comment is 1, the second is 2, and so on. \\ \hline
**Post Features:** & \\ post\_score & The score of the post that can be a positive or negative number \\ post\_comment\_count & The number of comments on that post. \\ \hline
**User Features:** & \\ by\_asker & True: if the comment is written by the user who posted the question. False: otherwise. \\ by\_answer & True: if the comment is written by the user who posted the answer. False: otherwise. \\ by\_not\_seen\_commenter & True: if the comment is written by a user that is neither the questioner nor the answerer, and has not written any comment on this post before. False: otherwise. \\ by\_seen\_commenter & True: if the comment is written by a user that is neither the questioner nor the answerer, but has written at least one comment on this post before. False: otherwise. \\ user\_reputation & The reputation of the user who wrote the comment \\ \hline
**Time Features:** & \\ prey\_post\_edit\_time & log([The time of the comment] - [The time of the previous post edit] reported in minutes) \\ next\_post\_edit\_time & log([The time of the next post edit] - [The time of the comment] reported in minutes) \\ prev\_comment\_time & log([The time of this comment] - [The time of the previous comment] reported in minutes) \\ next\_comment\_time & log([The time of the next comment] - [The time of this comment] reported in minutes) \\ \hline
**Text Similarity Features:** & \\ prey\_comment\_jaccard\_sim & The Jaccard similarity between this comment and the previous comment \\ next\_comment\_jaccard\_sim & The Jaccard similarity between this comment and the next comment \\ prey\_comment\_bert\_sim & The Cosine similarity between the BERT vector of this comment and the previous comment \\ next\_comment\_bert\_sim & The cosine similarity between the BERT vector of this comment and the next comment \\ comment\_post\_change\_sim & The Jaccard similarity between this comment and the post change after this comment \\ \hline
**Text Semantic Features:** & \\ polarity & The polarity of the comment text (between -1 and +1) \\ subjectivity & The subjectivity of the comment text (between 0 and 1) \\ \hline
**Text Extracted Features:** & \\ text\_len & The length (number of characters) of the comment text. \\ starts\_with\_@ & If the comment starts with \(@\). For example: “\(@\)John please explain your code.” \\ contains\_question\_mark & If the comment contains a question (?) mark. \\ contains\_exclamation\_mark & If the comment contains an exclamation (!) mark. \\ contains\_but & If the comment contains the word “but”. \\ contains\_exception & If the comment contains the word “exception”. \\ contains\_url & If the comment contains a URL. \\ contains\_emotions & If the comment contains emotions like :) \\ talks\_to\_role & If this comment talks to a specific person by \(@\). 0: doesn’t include \(@\)user, \\ & 1: talks to the questioner, 2: talks to the answerer, 3: talks to a commenter \\ \hline \end{tabular}
\end{table}
Table 6: The extracted features from each comment
convolutional layers from three to one, and also if we use a dense layer before concatenating the external features to the convoluted text. For the hyperparameters, we used the same optimizer (Adam) and the same parameter values, except for the learning rate, which we changed from 0.001 to 0.0005.
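A hedged Keras sketch of this modified architecture is given below: a single convolutional block over the comment tokens plus a dense projection of the external features before the concatenation. Vocabulary size, sequence length, embedding and filter dimensions are our own illustrative assumptions, not values reported by the authors or by Qu et al. (2019).

```python
# Illustrative Keras sketch of the modified CNN (one conv layer, dense layer
# on the external features before concatenation, Adam with lr = 0.0005).
import tensorflow as tf

VOCAB_SIZE, SEQ_LEN, N_FEATURES = 20_000, 100, 24  # assumed sizes

text_in = tf.keras.Input(shape=(SEQ_LEN,), name="comment_tokens")
x = tf.keras.layers.Embedding(VOCAB_SIZE, 128)(text_in)
x = tf.keras.layers.Conv1D(filters=128, kernel_size=5, activation="relu")(x)
x = tf.keras.layers.GlobalMaxPooling1D()(x)                # convoluted text

feat_in = tf.keras.Input(shape=(N_FEATURES,), name="extracted_features")
f = tf.keras.layers.Dense(64, activation="relu")(feat_in)  # dense layer before concat

merged = tf.keras.layers.Concatenate()([x, f])
merged = tf.keras.layers.Dense(64, activation="relu")(merged)
out = tf.keras.layers.Dense(1, activation="sigmoid", name="is_urc")(merged)

model = tf.keras.Model(inputs=[text_in, feat_in], outputs=out)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),
              loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```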
2) Multimodal-Toolkit, the deep learning model proposed by Gu and Budhkar (2021) that takes text and tabular data and provides eight different architectures to combine the tabular data with a Transformer (such as BERT). We use bert-base-uncased as the Transformer in this toolkit. Our preliminary investigation of the eight architectures showed that we achieve the highest performance with the architecture that embeds the text into a vector of size 768 and then concatenates it with the output of an MLP that takes the tabular (non-text) data and converts it to a vector of size 500. To be more specific, the MLP has one hidden layer with 10,000 nodes and 500 nodes in its output layer, so the concatenated vector has a size of 768+500=1268. For the hyperparameters, we use the default parameters and suggested values; specifically, the batch size is 16, the Adam learning rate is 5e-5, and the number of training epochs is 3.
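As an illustration of this combining architecture (not the Multimodal-Toolkit's actual API), the sketch below re-implements the described configuration in plain PyTorch: a 768-dimensional BERT text vector concatenated with a 500-dimensional MLP encoding of the tabular features; the number of tabular features is an assumption.

```python
# Illustrative PyTorch re-implementation of the BERT + tabular-MLP architecture.
import torch
import torch.nn as nn
from transformers import AutoModel


class BertWithTabular(nn.Module):
    def __init__(self, n_tabular_features: int, n_classes: int = 2):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        # MLP with one hidden layer of 10,000 nodes and a 500-node output,
        # matching the configuration described above.
        self.tabular_mlp = nn.Sequential(
            nn.Linear(n_tabular_features, 10_000), nn.ReLU(),
            nn.Linear(10_000, 500), nn.ReLU(),
        )
        self.classifier = nn.Linear(768 + 500, n_classes)  # 1268-d input

    def forward(self, input_ids, attention_mask, tabular):
        bert_out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        text_vec = bert_out.last_hidden_state[:, 0, :]  # [CLS] vector, 768-d
        tab_vec = self.tabular_mlp(tabular)             # 500-d
        return self.classifier(torch.cat([text_vec, tab_vec], dim=-1))
```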
### Experiments
**Evaluation Metrics:** The automated identification of URCs can be treated as a standard binary classification task. Thus, we use standard evaluation metrics, i.e., precision, recall, F1-score, accuracy, and area under the curve (AUC) to evaluate our models. Precision \(P\) measures the correctness of our models in predicting the type of a comment, i.e., whether the comment is a URC or not. A prediction is considered correct if the predicted type is the same as the actual type of the comment. Precision is calculated as the proportion of correctly predicted URCs. Recall \(R\) measures the completeness of a model. A model is considered complete if all of update request comments are predicted to be URC. Recall is calculated as the proportion of actual URCs that were correctly predicted as such. F1-score is the harmonic mean of precision and recall, i.e., \((\frac{2*P*R}{P+R})\). Accuracy is the most intuitive performance measure and it is the ratio of correct predictions to the total predictions. The area under a receiver operating characteristic (ROC) curve, abbreviated as AUC, measures the overall performance of a binary classifier (Hanley and McNeil, 1982). The AUC value is within the range [0.5-1.0], where 0.5 represents the performance of a random classifier and 1.0 corresponds to a perfect classifier.
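These metrics can be computed directly with scikit-learn; the vectors below are placeholders for the gold labels, the predicted labels, and the predicted URC probabilities on a test set.

```python
# Computing the reported metrics with scikit-learn (placeholder vectors).
from sklearn.metrics import (accuracy_score, precision_recall_fscore_support,
                             roc_auc_score)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                    # 1 = URC, 0 = NO_URC
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]                    # predicted labels
y_score = [0.9, 0.2, 0.8, 0.4, 0.1, 0.6, 0.7, 0.3]   # predicted URC probability

p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="binary")
print(f"P={p:.3f} R={r:.3f} F1={f1:.3f} "
      f"Acc={accuracy_score(y_true, y_pred):.3f} "
      f"AUC={roc_auc_score(y_true, y_score):.3f}")
```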
**Experiment Setup:** We use the 1,221 labeled comments from the SO Java community to train the models. To test these models, we create two test datasets, one from the SO JavaScript community and another from the SO Python community. There are three main reasons behind the cross-programming-language evaluation setup. First, due to time constraints, similar to previous studies on SO posts, e.g., Soni and Nadi (2019) and Tang and Nadi (2021), we do not consider all questions on Stack Overflow but only those tagged with specific programming languages. We pick Java, Python, and JavaScript, as they are reported to be among the most popular programming languages 8. Second, it is easier for us to label URCs in posts tagged with these programming languages, as we have enough domain knowledge to understand the background and content of the questions. Last but not least, we choose to perform cross-programming-language prediction because URCs in different programming language communities might have different characteristics, and we would like to investigate whether the predictive models trained on one programming language perform stably on other programming languages. As all the extracted features shown in Table 6 are language independent, all the applied ML and DL models in our experiments are consequently language independent. Therefore, we can train the models on the Java community comments, test them on datasets from different domains (e.g., JavaScript or Python), and expect similar performance on each domain.
Footnote 8: [https://survey.stackoverflow.co/2022/#technology](https://survey.stackoverflow.co/2022/#technology)
To create the test datasets, we tag 377 comments (posted on 100 randomly selected answers) from the SO JavaScript community and 289 comments (posted on 100 randomly selected answers) from the Python community. To randomly select 100 answering posts from the JavaScript or Python community, we follow the steps described in Algorithm 1, but using the <javascript> or <python> tag.
As random forest is a stochastic algorithm, it provides different results in each run. So, we run this algorithm for 100 iterations and report the result with the median accuracy. In each iteration, we train the algorithm on the 1,221 comments from the Java community and test it on the 377 comments from the JavaScript community and the 289 comments from the Python community. As the logistic regression and naive Bayes models provide stable results, we only run them once. As the CNN model also provides stable results across runs, we run it once as well. For the Multimodal-Toolkit, we run it 11 times and take the result with the median accuracy.
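A sketch of this evaluation protocol for the stochastic random forest is given below; the train/test matrices are placeholders for the Java training data and the JavaScript or Python test data.

```python
# Train the random forest 100 times and keep the median test accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score


def median_run_accuracy(X_train, y_train, X_test, y_test, n_runs=100):
    accuracies = []
    for seed in range(n_runs):
        clf = RandomForestClassifier(random_state=seed)
        clf.fit(X_train, y_train)
        accuracies.append(accuracy_score(y_test, clf.predict(X_test)))
    return float(np.median(accuracies))
```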
Among the features mentioned in Table 6, six features may not be available in real scenarios: next_comment_jaccard_sim, next_comment_bert_sim, comment_post_change_jacc_sim, next_post_edit_time, comment_score, and next_comment_time. These features need some time to be available when a new comment is posted. So, we drop them before running the experiments.
**Baseline:** We also compare our models with the heuristic approach provided by Soni and Nadi (2019) (details provided in Section 7). They use three heuristics that are based on regular expressions and the code parts that are common between comments and post updates.
The heuristic algorithm proposed by Soni and Nadi generates four labels, three of which are equivalent to our labels, but their UNKNOWN label is undefined in our labeling. Moreover, some comments are discarded by their algorithm. We decided to treat UNKNOWN and discarded comments in two different ways: One way is to treat all of them as NO_URC. The second way is to drop all UNKNOWN or discarded comments and report the performance on the remaining comments. Among 377 JavaScript comments, 146 comments were discarded or labeled as UNKNOWN by their heuristic algorithm. For the Python community, among 289 comments, 101 comments were discarded or labeled as UNKNOWN by this heuristic.
### Results
Table 7 and Table 8 show the performance of the three ML models (each trained with three different inputs), the CNN model (trained with two different inputs), the Multimodal-Toolkit by Gu and Budhkar (2021) that uses BERT (trained with two different inputs), and two baselines adjusted from Soni and Nadi's heuristic approach, on JavaScript and Python respectively. The results show that for both test sets, the Multimodal-Toolkit trained with the extracted features + text achieves the best performance, i.e., about 90% accuracy and 0.96 AUC. The random forest model trained with features + TF-IDF achieves the second-best performance on both test sets.
The results in Table 7 and Table 8 also reveal that when we only use the text data (TF-IDF or pure text), none of the models achieves an accuracy higher than 74%. However, when we only employ the extracted features, all three ML models achieve much higher performance compared to their TF-IDF-based counterparts. Also, for the deep learning based models (CNN and Multimodal-Toolkit), incorporating the extracted features results in much higher performance compared to their text-only versions. As expected, among the models that only use the text data, either pure text or TF-IDF, the BERT-based models provide the highest performance.
The two considered baselines did not perform well on our datasets. One potential reason is that regular expressions may not be as accurate as ML approaches. Moreover, we included additional important features such as the role of commenters. Finally, their definition of URC is not exactly the same as ours. The details are presented in Section 7.1.
Fig. 4 shows the feature importance obtained by the feature based RF model with the median accuracy on JavaScript comments. As expected,
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Classifier** & **Acc** & & & & \\ (**Input**) & **AUC** & **Category** & **P** & **R** & **F1** \\ \hline \hline
**RandomForest** & 88.3\% & NO\_URC & 0.886 & 0.867 & 0.876 \\ (**features**) & 0.946 & URC & 0.881 & 0.898 & 0.889 \\ \hline
**RandomForest** & 60.7\% & NO\_URC & 0.611 & 0.489 & 0.543 \\ (**TF-IDF**) & 0.655 & URC & 0.605 & 0.716 & 0.656 \\ \hline
**RandomForest** & 89.4\% & NO\_URC & 0.902 & 0.872 & 0.887 \\ (**features + TF-IDF**) & 0.949 & URC & 0.887 & 0.914 & 0.900 \\ \hline \hline
**LogisticRegression** & 70.8\% & NO\_URC & 0.765 & 0.561 & 0.647 \\ (**features**) & 0.769 & URC & 0.678 & 0.843 & 0.751 \\ \hline
**LogisticRegression** & 65.0\% & NO\_URC & 0.638 & 0.617 & 0.627 \\ (**TF-IDF**) & 0.686 & URC & 0.660 & 0.680 & 0.670 \\ \hline
**LogisticRegression** & 84.1\% & NO\_URC & 0.895 & 0.756 & 0.819 \\ (**features + TF-IDF**) & 0.928 & URC & 0.804 & 0.919 & 0.858 \\ \hline \hline
**GaussianNB** & 66.3\% & NO\_URC & 0.627 & 0.728 & 0.674 \\ (**features**) & 0.730 & URC & 0.708 & 0.604 & 0.652 \\ \hline
**GaussianNB** & 55.4\% & NO\_URC & 0.524 & 0.733 & 0.611 \\ (**TF-IDF**) & 0.600 & URC & 0.616 & 0.391 & 0.478 \\ \hline
**GaussianNB** & 66.3\% & NO\_URC & 0.627 & 0.728 & 0.674 \\ (**features + TF-IDF**) & 0.730 & URC & 0.708 & 0.604 & 0.652 \\ \hline \hline
**CNN** & 66.8\% & NO\_URC & 0.655 & 0.644 & 0.650 \\ (**text**) & 0.706 & URC & 0.680 & 0.690 & 0.685 \\ \hline
**CNN** & 88.3\% & NO\_URC & 0.915 & 0.833 & 0.872 \\ (**text + features**) & 0.947 & URC & 0.859 & 0.929 & 0.893 \\ \hline \hline
**Multimodal-Toolkit by Gu and Budhkar (2021) using BERT** & 74.0\% & NO\_URC & 0.720 & 0.744 & 0.732 \\ (**text**) & 0.820 & URC & 0.759 & 0.736 & 0.747 \\ \hline
**Multimodal-Toolkit by Gu and Budhkar (2021) using BERT** & **89.9\%** & NO\_URC & 0.908 & 0.878 & 0.893 \\ (**text** + features**) & **0.953** & URC & 0.892 & 0.919 & 0.905 \\ \hline \hline
**Heuristic by Soni and Nadi (2019) (treat UNKNOWN and discarded comments as NO\_URC)** & 49.3\% & NO\_URC & 0.480 & 0.744 & 0.584 \\ \hline
**Heuristic by Soni and Nadi (2019) (ignore UNKNOWN and discarded comments)** & & URC & 0.531 & 0.264 & 0.353 \\ \hline \end{tabular}
\end{table}
Table 7: The performance of different models with different input features on JavaScript comments
by_answer has the highest weight because most of the addressing comments (which are NO_URC) are written by the post answerer. The feature importances for the other iterations of RF are similar to this figure.
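The importances in Fig. 4 can be read directly from a fitted random forest; `fitted_rf` and `feature_columns` below are placeholders for the model and feature columns trained in this section.

```python
# Ranking the feature importances of a fitted random forest (cf. Fig. 4).
import pandas as pd


def ranked_importances(fitted_rf, feature_names):
    return (pd.Series(fitted_rf.feature_importances_, index=feature_names)
              .sort_values(ascending=False))

# Example usage: ranked_importances(fitted_rf, feature_columns).head(10)
```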
**Summary:** The update request comments are highly detectable. Utilizing the features we proposed to extract from comments, the supervised models achieved 89.9% and 90.7% accuracy on the JavaScript and Python communities, respectively.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Classifier** & **Acc** & & & & \\
**(Input)** & **AUC** & **Category** & **P** & **R** & **F1** \\ \hline \hline
**RandomForest** & 87.2\% & NO\_URC & 0.930 & 0.811 & 0.866 \\
**(features)** & 0.943 & URC & 0.825 & 0.936 & 0.877 \\ \hline
**RandomForest** & 63.0\% & NO\_URC & 0.672 & 0.541 & 0.599 \\
**(TF-IDF)** & 0.679 & URC & 0.600 & 0.723 & 0.656 \\ \hline
**RandomForest** & 87.9\% & NO\_URC & 0.931 & 0.824 & 0.875 \\
**(features + TF-IDF)** & 0.947 & URC & 0.835 & 0.936 & 0.883 \\ \hline \hline
**LogisticRegression** & 72.3\% & NO\_URC & 0.788 & 0.628 & 0.699 \\
**(features)** & 0.786 & URC & 0.678 & 0.823 & 0.744 \\ \hline
**LogisticRegression** & 63.7\% & NO\_URC & 0.684 & 0.541 & 0.604 \\
**(TF-IDF)** & 0.694 & URC & 0.605 & 0.738 & 0.665 \\ \hline
**LogisticRegression** & 83.4\% & NO\_URC & 0.910 & 0.750 & 0.822 \\
**(features + TF-IDF)** & 0.933 & URC & 0.778 & 0.922 & 0.844 \\ \hline \hline
**GaussianNB** & 64.4\% & NO\_URC & 0.615 & 0.811 & 0.700 \\
**(features)** & 0.741 & URC & 0.702 & 0.468 & 0.562 \\ \hline
**GaussianNB** & 59.5\% & NO\_URC & 0.578 & 0.777 & 0.663 \\
**(TF-IDF)** & 0.635 & URC & 0.633 & 0.404 & 0.494 \\ \hline
**GaussianNB** & 64.4\% & NO\_URC & 0.615 & 0.811 & 0.700 \\ (features + TF-IDF) & 0.741 & URC & 0.702 & 0.468 & 0.562 \\ \hline \hline CNN & 64.7\% & NO\_URC & 0.669 & 0.615 & 0.641 \\
**(text)** & 0.682 & URC & 0.627 & 0.681 & 0.653 \\ \hline
**CNN** & 87.5\% & NO\_URC & 0.889 & 0.865 & 0.877 \\ (text + features)** & 0.953 & URC & 0.862 & 0.887 & 0.874 \\ \hline \hline
**Multimodal-Toolkit by Gu and Budhkar (2021) using BERT** & 74.0\% & NO\_URC & 0.735 & 0.770 & 0.752 \\ (text) & 0.826 & URC & 0.746 & 0.709 & 0.727 \\ \hline
**Multimodal-Toolkit by Gu and Budhkar (2021) using BERT** & **90.7\%** & NO\_URC & 0.923 & 0.892 & 0.907 \\ (text + features)** & **0.969** & URC & 0.890 & 0.922 & 0.906 \\ \hline \hline
**Heuristic by Soni and Nadi (2019) (treat UNKNOWN and** & 50.5\% & NO\_URC & 0.513 & 0.669 & 0.581 \\ \cline{2-6} & discarded comments as NO\_URC)** & & URC & 0.490 & 0.333 & 0.397 \\ \hline
**Heuristic by Soni and Nadi (2019) (ignore UNKNOWN** & 50.5\% & NO\_URC & 0.522 & 0.495 & 0.508 \\ \cline{2-6} & and discarded comments)** & & URC & 0.490 & 0.516 & 0.503 \\ \hline \end{tabular}
\end{table}
Table 8: The performance of different models with different input features on Python comments
## 5 Discussion: Can we automatically identify unaddressed update request comments?
Identifying unaddressed URCs from existing SO comments might also help the SO community to improve the awareness of potential unresolved issues in existing answer posts. Thus in this discussion, we explore whether comment features proposed in Section 4 can also be applied to identify unaddressed URCs in existing SO comments.
**Features and Model:** We reuse the features proposed in Table 6 (including the six not available to new comments) and apply the Multimodal-Toolkit with BERT (as the best model to identify URCs) to detect all three classes: NO_URC, URC_ADDRESSED, and URC_UNADDRESSED. We train the model on the 1,221 Java comments and test it on 377 and 289 comments from accepted answers to JavaScript and Python questions, respectively.
**Baseline:** We compare our results with the heuristic rule-based model provided by Soni and Nadi (2019) (ref. Section 7).
**Experiments and Evaluation:** Following the same performance evaluation method described in Section 4.2, we run the Multimodal-Toolkit 11 times and report the median accuracy.
Figure 4: Feature importance by the random forest model
**Results:** Table 9 shows the performance measures for each category for both the JavaScript and Python communities. The accuracy on JavaScript and Python comments is 84.1% and 84.8%, respectively. However, the model cannot provide a high F1-score for the URC_UNADDRESSED class, indicating the difficulty of detecting this type of comment.
Table 10 reports the performance of the heuristic model on JavaScript and Python comments when we treat the discarded and UNKNOWN comments as NO_URC. It provides 39.8% and 40.1% accuracy on JavaScript and Python comments, respectively, which is far lower than the performance of our proposed model. As treating the discarded and UNKNOWN comments in the second way (i.e., ignoring them) led to even worse performance (i.e., 34.6% and 34.6%), we do not present the detailed performance obtained by this treatment. The above results show that our model outperforms the baseline by a large margin in terms of accuracy and F1-score.
Thus, we conclude that, **unlike URCs, URC_UNADDRESSED are difficult to identify based on comment features.** In the future, more advanced models are needed to better capture the answer edit and comment post history.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & **Category** & **P** & **R** & **F1** & **Supp.** \\ \hline \hline \multirow{3}{*}{**JavaScript**} & NO\_URC & 0.480 & 0.744 & 0.584 & 180 \\ \cline{2-6} & URC\_ADDRESSED & 0.250 & 0.047 & 0.080 & 148 \\ \cline{2-6} & URC\_UNADDRESSED & 0.129 & 0.184 & 0.151 & 49 \\ \hline \hline \multirow{3}{*}{**Python**} & NO\_URC & 0.513 & 0.669 & 0.581 & 148 \\ \cline{2-6} & URC\_ADDRESSED & 0.412 & 0.070 & 0.120 & 100 \\ \cline{1-1} \cline{2-6}
**(Acc: 40.1\%)** & URC\_UNADDRESSED & 0.127 & 0.244 & 0.167 & 41 \\ \hline \end{tabular}
\end{table}
Table 10: The performance of the heuristic model (Soni and Nadi, 2019) on JavaScript and Python comments
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Community** & **Category** & **P** & **R** & **F1** & **Supp.** \\ \hline \hline \multirow{3}{*}{**JavaScript**} & NO\_URC & 0.869 & 0.922 & 0.895 & 180 \\ \cline{2-6} & URC\_ADDRESSED & 0.868 & 0.797 & 0.831 & 148 \\ \cline{2-6} & URC\_UNADDRESSED & 0.660 & 0.673 & 0.667 & 49 \\ \hline \hline \multirow{3}{*}{**Python**} & NO\_URC & 0.905 & 0.905 & 0.905 & 148 \\ \cline{2-6} & URC\_ADDRESSED & 0.827 & 0.910 & 0.867 & 100 \\ \cline{1-1} \cline{2-6}
**(Acc: 84.8\%)** & URC\_UNADDRESSED & 0.645 & 0.488 & 0.556 & 41 \\ \hline \end{tabular}
\end{table}
Table 9: The performance of Multimodal-Toolkit using BERT to detect 3-class comments in JavaScript and Python communities
## 6 Threats to Validity
In this section, we start with threats to external validity. Our empirical study is based on a dataset that contains 1,221 labeled comments extracted from a set of statistically sampled answer posts from Java questions. Although Java is one of the biggest communities in SO, and we can expect similar results for other popular languages, our findings may not be the same for questions on other topics.
Similarly, we only evaluate the performance of our models on sampled comments in accepted answer posts to questions relevant to Python, and JavaScript. Thus, the reported performance may not generalize to other communities. However, as the reported results across different categories are close in terms of accuracy, AUC and F1-score, we believe that our model should work for other communities.
Another threat to external validity is the limited size of the empirical study and evaluation datasets due to the time-consuming annotation process. In the empirical study, we manually annotated a statistically significant sample of Stack Overflow answer posts and their comments. Specifically, we randomly sampled 384 answer posts, achieving a 5% margin of error with a confidence level of 95%. To evaluate the proposed automated approaches, we created two test datasets by manually annotating 289 comments that were posted on 100 Python posts and 377 comments that were posted on 100 JavaScript posts. The performance of our models on the two datasets is close. Still, the above results may not apply to all Stack Overflow posts and comments.
As for threats to internal validity, the empirical study and the train/test of automated URC detection models rely on manually labeled comments and might be biased or error-prone. To avoid such a problem, two authors independently checked 100 comments and reported a high agreement ratio. Therefore we believe our coding guide is clear and could support future replication on other data.
## 7 Related Work
### Studies on Stack Overflow Comments
The most relevant work to our study is by Soni and Nadi (2019) that analyzed how comments affect answer updates on Stack Overflow, although they did not perform any empirical study. They employed a heuristic rule-based
approach to classify comments into four categories. Since these categories also determine whether a comment is asking for an update, we clarify the relationship between their comment categories and our categories below to reduce ambiguity.
1. **WARRANT UPDATE**: A comment that warranted an update but an edit was not made to the answer. The equivalent of this class in our classification is URC_UNADDRESSED.
2. **UPDATE**: A comment that warranted an update and an edit was made to the answer. This class is equivalent to our URC_ADDRESSED.
3. **NO UPDATE**: A comment that did not warrant an update. It is equivalent to our NO_URC class.
4. **UNKNOWN**: All other comments that are just text, URLs, or discussion (e.g., "Thank you so much for this answer.").
Soni and Nadi claimed that "_only a few (\(\sim\)4-5%) comments resulted in answer updates_" and "_More than a quarter (\(\sim\)26-29%) of the comments we studied across the five tags_ [java, python, javascript, android, php] _require the answer to be updated, but are ignored by answer posters_". From these statements, one can conclude that \(\sim\)84-88%9 of update request comments are unaddressed. In contrast, by manually analyzing 1,221 comments, we found that only 36.5% of URCs remain unaddressed after a year. Such inconsistent results may arise for several reasons. First, their study scope is different from ours. They focused on updates to the code block of an answer post and ignored updates to the text. But in reality, answer owners sometimes only need to update the text to address URCs rather than touching the code. We argue that text updates are also important because both text and code are important for a high-quality post (Calefato et al., 2015), and developers rely on both when utilizing SO (Wu et al., 2019; Chatterjee et al., 2020). Secondly, Soni and Nadi ignore the answers given in the following comments. However, we found that 58.6% of our annotated URCs are addressed in the following comments. Thirdly, their findings rely on their rule-based approach to identify comments that warrant an update, and their reported accuracy is 85% on labeled comments from 30 answer posts. Instead, we manually examined the studied dataset, which is more accurate.
Footnote 9: 26/(26+5)=83.9% and 29/(29+4)=87.9%
Though their categorization is different from ours, we can still compare with their heuristic approach in identifying URCs and unaddressed URCs. Our experiment results (ref. Tables 7, 8, 9, and 10) show that our proposed machine learning-based approach outperforms their approach by a large margin. Moreover, while the algorithm by Soni and Nadi (2019) discards many comments for either not including code updates or being labeled as UNKNOWN, our model labels all comments and does not discard any of them.
Another closely related study on SO comments is by Zhang et al. (2021). They found that 4.4 million comments (possibly including informative comments) are hidden by default from developers. To help identify informative comments, they propose a taxonomy that groups comments into seven types: Praise (praise an answer), Advantage (discuss the advantage of an answer), Improvement (make an improvement to an answer), Weakness (point out the weakness of an answer), Inquiry (make an inquiry based on an answer), Addition (provide additional information to an answer), and Irrelevant (discuss topics irrelevant to an answer). Their categorization is different from ours because our categorization focuses on whether a comment asks for an update of the answer post.
### Other Studies on Stack Overflow
Studies on Stack Overflow can be categorized into two types, i.e., mining questions and answers on Stack Overflow to extract the challenges faced by developers (Treude et al., 2011; Barua et al., 2014; Rosen and Shihab, 2016; Ahmed and Bagherzadeh, 2018; Tahir et al., 2018; Bagherzadeh and Khatchadourian, 2019; Openja et al., 2020; Tan et al., 2020; Wen et al., 2021) and investigating the mechanisms used by Stack Overflow and proposing new feature/model to improve user experience on Stack Overflow (Xia et al., 2013; Saha et al., 2013; Nasehi et al., 2012; Asaduzzaman et al., 2013; Ponzanelli et al., 2014; Beyer and Pinzger, 2015; Zhang et al., 2015; Ahasanuzzaman et al., 2016; Srba and Bielikova, 2016; Yang et al., 2016; Mizobuchi and Takayama, 2017; Chen et al., 2018; Wang et al., 2018, 2018, 2019; Chatterjee et al., 2020).
Treude et al. (2011) manually categorized the types of questions on Stack Overflow, and observed that Stack Overflow could be useful for code review and learning the concepts of programming. Barua et al. (2014) first applied LDA, a popular statistical topic model, to discover topics from the contents
on Stack Overflow and track the changes of the topics over time. Following their methodology, researchers have analyzed content on Stack Overflow related to fine-grained domains. For instance, Rosen and Shihab (2016) analyzed 13,232,821 posts to examine what mobile developers ask about. They discovered hot topics and determined which popular mobile-related issues are the most difficult. Ahmed and Bagherzadeh (2018) applied a similar methodology to analyze what concurrency developers ask on Stack Overflow. Tahir et al. (2018) found that developers widely use Stack Overflow to ask for general assessments of code smells or anti-patterns instead of asking for particular refactoring solutions. More recently, Stack Overflow content related to big data analysis (Bagherzadeh and Khatchadourian, 2019), release engineering (Openja et al., 2020), bug severity (Tan et al., 2020), and serverless computing (Wen et al., 2021) has been analyzed to help relevant stakeholders better understand the trends, challenges, and potential future development/research directions.
Many prior studies investigate the quality of the crowd-sourced knowledge presented on Stack Overflow. Asaduzzaman et al. (2013) analyzed unanswered questions on Stack Overflow and found that the quality of questions is strongly related to whether a question receives an answer. Srba and Bielikova (2016) observed that an increasing amount of content with relatively lower quality is hurting the Stack Overflow community. Nasehi et al. (2012) examined code examples on Stack Overflow and identified characteristics of high-quality code examples. Yang et al. (2016) focused on the quality of code snippets on Stack Overflow. They examined the usability of code snippets by compiling or running them. Zhang et al. (2019) analyzed the obsolescence of answers on Stack Overflow. They found that more than half of the obsolete answers were probably already obsolete when they were first posted. Moreover, when an obsolete answer is observed, only a small proportion (20.5%) of such answers are ever updated. Thus they suggest that Stack Overflow should develop mechanisms to encourage the whole community to maintain answers. Chatterjee et al. (2020) conducted an exploratory study of novice software engineers' focus in Stack Overflow posts. They found that novice programmers focus on 15-21% of the text and 27% of the code in a Stack Overflow post.
Prior studies also examined Stack Overflow's mechanisms to understand its operation better and proposed tools to improve the efficiency of the knowledge-sharing process. For instance, to enhance the quality of knowledge on Stack Overflow, Ponzanelli et al. (2014) proposed an automated approach
to identify the quality of posts and filter low-quality content. Wang et al. (2018) studied how Stack Overflow users revise answers and the impact of those revisions. They found that although the current badge system on Stack Overflow is designed to ensure the quantity of revisions, such a badge system fails to consider the quality of revisions and should be improved in the future. Chen et al. (2018) proposed a convolutional neural network (CNN) based approach to predict the need for post revisions to improve the overall quality of Stack Overflow posts. Several approaches have been proposed to automatically predict tags on Stack Overflow questions (Xia et al., 2013; Saha et al., 2013; Beyer and Pinzger, 2015; Wang et al., 2018; Chen et al., 2019) and to identify duplicate posts (Zhang et al., 2015; Ahasanuzzaman et al., 2016; Mizobuchi and Takayama, 2017).
## 8 Conclusion and Future Work
Comments on Stack Overflow answer posts act as a potential way to improve the quality of the answers, which is one of the main concerns of the Stack Overflow community. In this paper, we conduct a study on URCs (update request comments)--comments on answer posts that explicitly or implicitly ask for an answer update, for reasons such as pointing out issues in the answer. Specifically, we investigate what happens when a user posts a URC and how/when/by whom it gets addressed. For this purpose, we manually examine a sample set of 1,221 comments on answer posts of questions tagged with "java". We find that 51.7% of the analyzed comments are URCs. Most addressed URCs (80.1%) are addressed by the answer owners, and more interestingly, most URCs (55.3%) are addressed within 24 hours. Nevertheless, 36.5% of URCs remain unaddressed after a year.
Upon checking the votes received by URCs, we find that the majority of URCs have a score of 0, which may cause them to be invisible to community members who are potentially interested in addressing them. Thus, as the first step towards improving the awareness of URCs, we explore the feasibility of building a tool that can automatically identify URCs as they are posted. Such a tool can also be leveraged to mine URCs for research purposes. Specifically, we proposed a set of comment features for URC detection and trained several supervised models, including random forest and BERT-based models, on 1,221 annotated Java comments. We evaluated the performance of our models on Python and JavaScript comments. Experimental results show that our automated URC detector can identify URCs with around 90% accuracy.
In the future, we would like to increase the number of annotated comments for training and evaluation and investigate whether specific kinds of URCs are more likely to be addressed. We also plan to analyze URCs addressed by users other than the answer owner to explore what types of SO users are more likely to help the community address URCs.
## Acknowledgement
We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), [funding reference number: RGPIN-2019-05071].
|
2310.02677 | Coronal Diagnostics of Solar Type-III Radio Bursts Using LOFAR and PSP
Observations | This study aims to investigate the ambiguous source and the underlying
physical processes of the solar type III radio bursts that occurred on April 3,
2019, through the utilization of multiwavelength observations from the LOFAR
radio telescope and the PSP space mission, as well as incorporating results
from a PFSS and MHD models. The primary goal is to identify the spatial and
temporal characteristics of the radio sources, as well as the plasma conditions
along their trajectory. Data preprocessing techniques are applied to combine
high- and low-frequency observations from LOFAR and PSP between 2.6 kHz and 80
MHz. We then extract information on the frequency drift and speed of the
accelerated electron beams from the dynamic spectra. Additionally, we use LOFAR
interferometric observations to image the sources of the radio emission at
multiple frequencies and determine their locations and kinematics in the
corona. Lastly, we analyze the plasma parameters and magnetic field along the
trajectories of the radio sources using PFSS and MHD model results. We present
several notable findings related to type III radio bursts. Firstly, through our
automated implementation, we were able to effectively identify and characterize
9 type III radio bursts in the LOFAR-PSP combined dynamic spectrum and 16 type
III bursts in the LOFAR dynamic spectrum. Secondly, our imaging observations
show that the electrons responsible for these bursts originate from the same
source and within a short time frame of fewer than 30 minutes. Finally, our
analysis provides informative insights into the physical conditions along the
path of the electron beams. For instance, we found that the plasma density
obtained from the MAS model is significantly lower than the expected
theoretical density. | Mohamed Nedal, Kamen Kozarev, Peijin Zhang, Pietro Zucca | 2023-10-04T09:35:23Z | http://arxiv.org/abs/2310.02677v1 | # Coronal Diagnostics of Solar Type-Ill Radio Bursts Using LOFAR and PSP Observations
###### Abstract
Context:Solar type III radio bursts are quite common phenomena. They are the result of accelerated electron beams propagating through the solar corona. These bursts are of particular interest as they provide valuable information about the magnetic field and plasma conditions in the corona, which are difficult to measure directly.
Aims:This study aims to investigate the ambiguous source and the underlying physical processes of the type III radio bursts that occurred on April 3, 2019, through the utilization of multi-wavelength observations from the Low-Frequency Array (LOFAR) radio telescope and the Parker Solar Probe (PSP) space mission, as well as incorporating results from a Potential Field Source Surface (PFSS) and magnetohydrodynamic (MHD) models. The primary goal is to identify the spatial and temporal characteristics of the radio sources, as well as the plasma conditions along their trajectory.
Methods:Data preprocessing techniques are applied to combine high- and low-frequency observations from LOFAR and PSP between 2.6 kHz and 80 MHz. We then extract information on the frequency drift and speed of the accelerated electron beams from the dynamic spectra. Additionally, we use LOFAR interferometric observations to image the sources of the radio emission at multiple frequencies and determine their locations and kinematics in the corona. Lastly, we analyze the plasma parameters and magnetic field along the trajectories of the radio sources using PFSS and MHD model results.
Results: We present several notable findings related to type III radio bursts. Firstly, through our automated implementation, we were able to effectively identify and characterize 9 type III radio bursts in the LOFAR-PSP combined dynamic spectrum and 16 type III bursts in the LOFAR dynamic spectrum. We found that the frequency drift for the detected type III bursts in the combined spectrum ranges between 0.24 and 4 MHz s\({}^{-1}\), and the speeds of the electron beams range between 0.013 and 0.12 c. Secondly, our imaging observations show that the electrons responsible for these bursts originate from the same source and within a short time frame of fewer than 30 minutes. Finally, our analysis provides informative insights into the physical conditions along the path of the electron beams. For instance, we found that the plasma density obtained from the Magnetohydrodynamic Algorithm outside a Sphere (MAS) model is significantly lower than the expected theoretical density.
Sun: solar radio bursts - Sun: plasma emissions - Sun: remote observations - Sun: ground-based observations - LOFAR - Parker Solar Probe
## 1 Introduction
Type III radio bursts are manifestations of transient energetic electron beams injected into the solar corona, propagating along the interplanetary magnetic field (IMF) lines (Ergun et al. 1998; Pick 2006; Reid 2020). As these beams traverse the corona, they trigger plasma waves, also known as Langmuir waves, which are then transformed into radio emission at the local plasma frequency or its harmonic components (Melrose 2017). In the radio spectrograms, type III bursts are usually observed as intense emissions that drift in frequency over timescales of seconds-minutes and over a wide range of frequencies, from metric to decametric wavelengths (Wild & McCready 1950; Lecacheux et al. 1989; Bonnin et al. 2008), making them detectable by ground-based instruments on Earth and various spacecraft within the heliosphere. The frequency of the radio emission is directly related to the plasma density, making type III bursts a valuable diagnostic tool for examining the inner heliosphere and the processes that drive solar active phenomena, such as solar flares and coronal mass ejections (Reid & Ratcliffe 2014; Kontar et al. 2017).
The electron beams follow open magnetic field lines and can persist well beyond 1 astronomical unit (AU) (e.g., Dulk et al. (1985); Boudjada et al. (2020)), offering in-situ insights into the burst and ambient conditions of the heliosphere, including electron density, radio frequency drift, speed of the electron beams, and even potential direct detection of Langmuir waves (see Gurnett & Anderson (1976, 1977) and Reid & Ratcliffe (2014) and references within). In addition, tracing the path of type III bursts provides a map of the density structure of the heliosphere, serving as a foundation for developing and testing density models. Since radio observations below \(\sim\)10 MHz cannot be accomplished from the ground, it is important to combine high- and low-frequency observations from ground-based and space-borne instruments. In this work, we perform a study of several type III radio bursts that occurred in close succession on April 3, 2019. We use remote observations of type III radio bursts detected by the Low-Frequency Array (van Haarlem et al. 2013, LOFAR) ground-based radio telescope and the Parker Solar Probe (Fox et al. 2016, PSP) spacecraft during
Encounter 2 to study the sources of these radio emissions and to investigate the physical conditions responsible for their generation. Additionally, we incorporate results of two steady-state models of the solar corona: the Potential Field Source Surface (PFSS) model (Altschuler & Newkirk 1969; Schatten et al. 1969) and the Magnetohydrodynamic Algorithm outside a Sphere (MAS) model (Mikic et al. 1999), to gain a better understanding of the coronal magnetic environment and its role in the acceleration of electrons. The ground-based LOFAR imaging observations provide valuable insight into the actual location of the burst sources. This research aims to expand upon current knowledge of the electron beams responsible for triggering type III radio bursts and the coronal conditions they experience. Gaining a deeper insight into this aspect is vital in comprehending other solar phenomena such as solar energetic particles and solar wind, and how they influence the near-earth space environment.
A number of recent studies investigate the physical mechanisms responsible for the generation of solar type III radio bursts. For example, Chen et al. (2013) investigated the association of type III bursts with flaring activities in February 2011, via combined multi-wavelength observation from the Solar Dynamic Observatory (SDO) instruments, as well as Wind/WAVE and ground-based instruments. They found that the SDO measurements indicated that type III emission was correlated with a hot plasma (7 MK) at the extreme ultraviolet (EUV) jet's footpoint. By using a triangulation method with the Wind and the twin STEREO spacecraft, Bonnin et al. (2008) reported the first measurements of the beaming characteristics for two type III bursts between 2007 - 2008, assuming the source was located near the ecliptic plane (see also Reiner et al. (2009)). They concluded that the individual type III bursts have a broad beaming pattern that is roughly parallel to the Parker spiral magnetic field line at the source. Saint-Hilaire et al. (2012) conducted a study on almost 10,000 type III bursts observed by the Nancay Radioheliograph between 1998 and 2008. Their analysis revealed discrepancies in the location of type III sources that may have been caused by a tilted magnetic field. Additionally, they found that the average energy released during type III bursts throughout a solar cycle could be comparable to the energy produced by non-thermal bremsstrahlung mechanisms in nano-flares. Morosan & Gallagher (2017) utilized LOFAR data to investigate the statistical characteristics of over 800 type III radio bursts within an 8-hour period on July 9, 2013. They discovered that the drift rates of type III bursts were twice that of type S bursts, and plasma emission was the primary emission mechanism for both types.
Pulupa et al. (2020) introduced a statistical overview of type III radio bursts during the first two PSP solar encounters. While the first encounter in November 2018 revealed a small number of bursts, the second encounter in April 2019 exhibited frequent type III bursts, including continuous occurrences during noise storms. They reported the characteristics of type III bursts with spectral and polarization analysis.
Krupar et al. (2020) performed a statistical survey of 30 type III radio bursts detected by PSP during the second encounter in April 2019 and estimated their decay times, which were used to estimate the relative electron density fluctuations in the solar wind. They localized radio sources
using a polarization-based-radio triangulation technique, which placed the sources near the modeled Parker spiral rooted in the active region AR12738 behind the plane of the sky as seen from Earth.
Cattell et al. (2021) explored correlations between type III radio bursts and EUV emission in the solar corona. Using coordinated observations from PSP, SDO, and Nuclear Spectroscopic Telescope Array (NuSTAR) on April 12, 2019, they identified periodicities in EUV emission correlated with type III burst rates. The findings suggested impulsive events causing heating and cooling in the corona, possibly nano-flares, despite the absence of observable flares in X-ray and EUV data, which implies periodic non-thermal electron acceleration processes associated with small-scale impulsive events.
Harra et al. (2021) explored the origin of the type III radio bursts we are tackling in this paper and found that electron beams that triggered radio bursts may have emanated from the periphery of an active region that showed significant blue-shifted plasma. More recently, Badman et al. (2022) observed a distinct type III radio burst using the PSP and LOFAR between 0.1 and 80 MHz on April 9, 2019, around 12:40 UT, six days after the occurrence of the event analyzed in our study. While no detectable flare activity was linked with the event, a type III noise storm was ongoing during the PSP encounter 2. The authors determined the type III trajectory and reconstructed its source using observations from Wind and STEREO spacecraft, as well as measuring related electron enhancement in situ.
In the last few years, we witnessed the emergence of modern instruments, such as LOFAR and PSP, that have made it possible to observe solar radio emissions with higher sensitivity from a better vantage point. Although type III bursts have been extensively studied (Dabrowski et al. 2021), there are still some unresolved issues regarding the exact mechanism of type III emissions. For example, it is not yet clear how the electrons are accelerated to the high energies required to generate type III radio bursts, or what role the coronal magnetic field plays in this process. Furthermore, there are inconsistencies between the observations and the models, which need to be resolved in order to gain a more complete understanding of the dynamics of the solar corona. Examples of these inconsistencies are the origin of the type III radio bursts and the discrepancy between the estimated plasma densities from the models and the observations. This paper aims to address these unresolved challenges by using new observations from LOFAR and PSP and models of the solar corona to study the physical mechanisms responsible for the generation of type III bursts. The data analysis includes a combination of radio spectroscopy and imaging techniques to study the frequency, temporal, and spatial variations of the radio bursts.
The paper is organized as follows: In Section 2, we describe the observations of type III radio bursts made with LOFAR and PSP. In Section 3 we explain the data analysis and modeling techniques used to study these events. In Section 4, we present the results of our analysis, including an investigation of the potential physical mechanisms responsible for the generation of type III radio
bursts, and a comparison of the observations with models of the solar corona. Finally, in Section 5, we summarize our findings and discuss their implications.
## 2 Observations
A number of studies focused on observing the solar radio emissions during the second encounter of the PSP in April 2019 (Krupar et al. 2020; Pulupa et al. 2020; Cattell et al. 2021; Harra et al. 2021; Badman et al. 2022). In this study, our primary emphasis is directed towards investigating a set of type III radio bursts that took place on April 3, 2019, during the time interval spanning from \(\sim\)12:10 to 12:50 UT. This period coincided with the presence of two distinct active regions (ARs) on the Sun, denoted as AR12737 and AR12738. AR12737 was situated on the solar near side at coordinates E12\({}^{o}\)N06\({}^{o}\). Notably, this region had 8 sunspots and exhibited a \(\beta\) magnetic configuration according to the Hale magnetic classification (Hale et al. 1919). On the other hand, AR12738 was positioned on the solar far side at coordinates E140\({}^{o}\)N02\({}^{o}\). Due to its remote location, detailed observations of the magnetic configuration and activity within AR12738 were unattainable during this time frame.
We observed a group of intense type III radio bursts with four instruments (Wind/WAVES, PSP/FIELDS, STEREO-A/SWAVES, and LOFAR/LBA) while doing a regular survey. In Figure 1, we show the first type III burst within the time of this study as observed by the four instruments. By taking the 2\({}^{nd}\) derivative of the light curve at specific frequency channels, we determine the start time of the burst, which is denoted by the vertical red dashed line. The frequency bands used for obtaining the start time at each instrument are as follows: 6.97 MHz (Wind), 7.03 MHz (STEREO), 5.03 MHz (PSP), and 40.16 MHz (LOFAR).
We checked the relative orientations of the instruments with respect to Earth (Fig. 2). Since the PSP and STEREO spacecraft were almost aligned (close in an angular sense) with the Sun, the STEREO/EUVI image could be taken as what PSP would see (Fig. 3). Figure 3 shows what the solar disk looks like from the Earth perspective (using the SDO/AIA instrument) and from the eastern side where PSP and STEREO were located at that time (using the STEREO/EUVI instrument). The right panel shows a closer view of AR12737 with the contours of the photospheric magnetic field obtained from the Helioseismic and Magnetic Imager (HMI) onboard SDO. The GOES-15/XRS and SDO/EVE data in the panels below also confirm that there was no flaring activity at that time.
The solar disk was quiet, with only one AR visible and no X-ray or EUV transient emissions over this period. Nevertheless, the very sensitive LOFAR telescope detected a number of bursts close to noon. We checked PSP data, and we found bursts there as well. Meanwhile, from the EUVI and AIA images, we see that there are numerous small localized regions of relatively higher intensity, probably small-scale coronal brightening spots or campfires (see Young et al. (2018); Madjarska (2019); Berghmans et al. (2021)). In the next subsections, we introduce the PSP and LOFAR instruments and their observations of the radio bursts.
Figure 1: Radio dynamic spectra for a single burst obtained from multiple instruments. The top-left panel is from the LOFAR/LBA instrument, the top-right is from the PSP/FIELDS instrument, the bottom-left is from the STEREO/SWAVES instrument, and the bottom-right is from the Wind/WAVES. The vertical red dashed line denotes the start time of the burst.
Figure 2: Top view of the spacecraft positions in the ecliptic plane at 12:15 UT on April 3, 2019, with the Sun-Earth line as the reference point for longitude. The earth’s location is representative of the positions of LOFAR, Wind/WAVES, and GOES-15/XRS instruments. The spacecraft were connected back to the Sun by a 400 km/s reference Parker Spiral. The black arrow represents the longitude of AR12737, and the blue arrow represents the longitude of the AR12738. The gray dotted lines are the background Parker spiral field lines. The black dashed spiral shows the field line connected to the AR12737, and the blue dashed spiral is connected to the AR12738. The figure is generated using the Solar MAgnetic Connection Haus (Solar-MACH) tool (Gieseler et al. 2023).
### PSP Observations
Parker Solar Probe (PSP) is a pioneering spacecraft with cutting-edge technologies, launched on August 12, 2018, to help resolve key questions about the solar corona and solar wind (Fox et al. 2016). To study the radio bursts, we use the level-2 data of the radio dynamic spectrum obtained from the FIELDS instrument suite (Bale et al. 2016; Pulupa et al. 2017), which can be downloaded from this website1. The data file is in CDF format, and the unit of the data values is converted from \(V^{2}/Hz\) to \(dB\) using the formula
Footnote 1: PSP FIELDS data products: [http://research.ssl.berkeley.edu/data/psp/data/sci/fields/](http://research.ssl.berkeley.edu/data/psp/data/sci/fields/)
\[I_{dB}=10\times\log_{10}\left(I/10^{-16}\right) \tag{1}\]
Figure 3: Exploring the X-ray and extreme ultraviolet (EUV) emissions from the Sun. The top panel showcases a cutout region of the SDO/AIA 193Å image of the solar disk along with the STEREO-A EUVI 195Å point of view. The white curve is the limb of the solar disk as seen by AIA from the right side. The red-blue colors are the contours of the line-of-sight magnetogram from the SDO/HMI instrument. The levels are (50, 100, 150, 300, 500, 1000) Gauss. The middle panel shows the X-ray flux from the GOES-14 spacecraft, indicating minimal activity. The bottom panel shows the time series of the ESP Quad band from the SDO/EVE instrument, which shows the solar irradiance in the extreme ultraviolet (EUV) band.
The minimum power spectral density (PSD) of \(10^{-16}~{}V^{2}/Hz\) is used as a threshold for radio bursts, following Pulupa et al. (2020), when converting to decibels. Then, both the High-Frequency Receiver (HFR: 1.3 - 19.2 MHz) and the Low-Frequency Receiver (LFR: 10.5 kHz - 1.7 MHz) data are combined into a single dynamic spectrum, as shown in Figure 4, with a full frequency range between 10.5 kHz and 19.2 MHz. The mean intensity value at each timestep over the full frequency range is subtracted from each frequency channel to clean the spectrum and minimize the noise level.
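These conversion and background-removal steps can be sketched in a few lines. The following is a minimal illustration, assuming the HFR and LFR spectra have already been read from the CDF files into arrays of shape (frequency, time) on a common time axis; the array and function names are ours, not those of the FIELDS data products.

```python
import numpy as np

def psd_to_db(psd, floor=1e-16):
    """Convert power spectral density (V^2/Hz) to dB relative to the
    10^-16 V^2/Hz burst threshold, as in Eq. (1)."""
    return 10.0 * np.log10(np.clip(psd, floor, None) / floor)

def combine_and_clean(hfr_psd, hfr_freq, lfr_psd, lfr_freq):
    """Stack the LFR and HFR spectra along the frequency axis and subtract the
    mean intensity at each time step to suppress the background."""
    spec = np.vstack([psd_to_db(lfr_psd), psd_to_db(hfr_psd)])
    freq = np.concatenate([lfr_freq, hfr_freq])
    order = np.argsort(freq)                      # sort channels by frequency
    spec, freq = spec[order], freq[order]
    spec -= spec.mean(axis=0, keepdims=True)      # per-time-step mean subtraction
    return freq, spec
```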
### LOFAR Observations
The LOw Frequency ARray (LOFAR) radio telescope (van Haarlem et al. 2013) is a powerful tool for studying the Sun at low radio frequencies ranging between 10 and 240 MHz. Its high sensitivity and high time resolution have enabled the detection of various solar phenomena, including radio bursts and CMEs, and the study of dynamic processes in the solar atmosphere on timescales of milliseconds. The LOFAR dynamic spectrum from the beamformed radio observations is obtained by the Low-Band Antenna (LBA: 10 - 90 MHz) and can be downloaded from the LOFAR long-term archive (LTA)2. The High-Band Antenna (HBA: 110 - 190 MHz) data is not available for that time. For this day under study, the LOFAR data is available between 11:42 - 13:27 UT. To clean the spectrum, background subtraction is performed, which flattens the sensitivity (response) of the LBA antennas with frequency. Basically, the mean spectrum along each frequency band is calculated and subtracted from the whole frequency band, as was also done for the PSP spectrum. This operation effectively removes the constant background from the spectrum. Then a Gaussian smoothing filter is applied to the spectrum using the **scipy.ndimage.gaussian_filter** function with a sigma value of 1.5, which helps to reduce noise and variations in the data. After that, the PSP and LOFAR spectra are combined together in a single plot within the same time interval. The bursts' signals observed by PSP occur earlier than those at LOFAR. This is because the PSP spacecraft is much closer to the Sun, so the travel time of the radio signals from the Sun to PSP is shorter. Therefore, the PSP dynamic spectrum must be shifted with respect to the LOFAR observations based on a calculation of the relative travel time of the radio emission from the Sun to PSP and to LOFAR. In addition, the time cadence of the PSP observations changes according to its distance from the Sun. On that day, the PSP data cadence was 7 seconds, while LOFAR's is 1 second. Therefore, the LOFAR dynamic spectrum was downsampled to 7 seconds to match the time resolution of the PSP. Figure 4 shows the resulting combined LOFAR-PSP spectrum on a logarithmic y-axis. The LOFAR LBA frequency ranges between 19.82 - 80.16 MHz and for the PSP is 10.55 kHz - 19.17 MHz.
Footnote 2: LOFAR LTA: [https://lta.lofar.eu/](https://lta.lofar.eu/)
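A minimal sketch of these cleaning and alignment steps is given below, assuming the dynamic spectrum is stored as a (frequency, time) array; the function names, the block-averaging scheme used for downsampling, and the use of straight-line Sun-spacecraft distances for the time shift are our own illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

C_KM_S = 299_792.458  # speed of light, km/s

def clean_spectrum(spec):
    """spec: (n_freq, n_time) dynamic spectrum. Subtract the time-averaged
    background of each frequency channel, then apply Gaussian smoothing."""
    spec = spec - spec.mean(axis=1, keepdims=True)
    return gaussian_filter(spec, sigma=1.5)

def downsample_time(spec, times, factor=7):
    """Rebin the time axis (e.g. 1 s LOFAR data to 7 s) by block averaging."""
    n_keep = (spec.shape[1] // factor) * factor
    spec = spec[:, :n_keep].reshape(spec.shape[0], -1, factor).mean(axis=2)
    return spec, times[:n_keep:factor]

def psp_shift_seconds(d_sun_psp_km, d_sun_earth_km):
    """Light-travel-time difference used to delay the PSP spectrum so that it
    lines up with the ground-based LOFAR time axis."""
    return (d_sun_earth_km - d_sun_psp_km) / C_KM_S
```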
In order to detect the type III radio bursts automatically from the combined dynamic spectrum, we applied Zhang et al. (2018)'s algorithm, which is based on the probabilistic Hough transformation that detects vertical bright edges in images, within a certain degree of deviation from the vertical direction.
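The core of such a detector can be sketched with off-the-shelf tools. Below is a minimal example, assuming the spectrum is a (frequency, time) array and using scikit-image's probabilistic Hough transform; the threshold values and the slope-based filter are illustrative choices, not the settings of Zhang et al. (2018).

```python
import numpy as np
from skimage.transform import probabilistic_hough_line

def detect_burst_segments(spec, intensity_thresh):
    """Binarize a (frequency, time) dynamic spectrum and extract bright,
    nearly vertical segments as candidate type III bursts."""
    binary = spec > intensity_thresh
    segments = probabilistic_hough_line(binary, threshold=10,
                                        line_length=20, line_gap=3)
    # probabilistic_hough_line returns ((col0, row0), (col1, row1)) pairs,
    # i.e. ((t0, f0), (t1, f1)) here; keep segments spanning many frequency
    # channels within few time steps (near-vertical in the time-frequency plane).
    bursts = []
    for (t0, f0), (t1, f1) in segments:
        if abs(f1 - f0) > 3 * abs(t1 - t0):
            bursts.append(((t0, f0), (t1, f1)))
    return bursts
```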
## 3 Methods
### Imaging of Radio Sources
As part of our task, we developed an automated pipeline consisting of several modules that not only preprocessed and calibrated the LOFAR interferometric data to produce cleaned images of the Sun in the radio band (Zhang et al. 2022), but also utilized the resulting data to find the trajectory of the radio sources and sample the magnetic field and plasma parameters at their respective locations through modeling and simulation in subsequent modules.
First, we ran the burst detection algorithm (Zhang et al. 2018) 3 on the combined dynamic radio spectrum of LOFAR and PSP (Fig. 4) in order to find the characteristics of each type III burst. We converted the spectrum into a binary map to isolate the bursts from the background. Then we applied the Hough transformation to get line segments of the features. For each type III burst, the line segments are grouped together into one group. To account for the interplanetary component within radio dynamic spectra, we employed the Parker electron-density model (Parker 1960) assuming a fundamental emission. This model enabled mapping between the time and frequency indices for each type III burst and subsequently converted electron densities into radial distances. Finally, a least-squares fitting method was applied to derive both the frequency drifts and the speed of the electron beams.
Footnote 3: Detection algorithm repository: [https://github.com/peijin94/type3detect](https://github.com/peijin94/type3detect)
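The mapping from the (time, frequency) track of a burst to a drift rate and a beam speed can be illustrated as follows. Note that the paper uses the Parker (1960) electron-density model; the sketch below substitutes the simpler Newkirk (1961) model purely for illustration, assumes fundamental emission, and uses our own function names.

```python
import numpy as np

R_SUN_KM = 6.957e5     # solar radius in km
C_KM_S = 2.998e5       # speed of light in km/s

def density_from_freq(f_mhz):
    """Electron density (cm^-3) from the plasma frequency (MHz), Eq. (6)."""
    return (f_mhz / 8.98e-3) ** 2

def newkirk_radius(n_e, fold=1.0):
    """Invert the Newkirk (1961) model n_e = fold * 4.2e4 * 10^(4.32/R)
    for the heliocentric distance R in solar radii."""
    return 4.32 / np.log10(n_e / (fold * 4.2e4))

def drift_and_speed(t_sec, f_mhz, fold=1.0):
    """Least-squares frequency drift (MHz/s) and beam speed (units of c)
    from the (time, frequency) track of a single burst."""
    drift = abs(np.polyfit(t_sec, f_mhz, 1)[0])
    r_km = newkirk_radius(density_from_freq(f_mhz), fold) * R_SUN_KM
    speed = abs(np.polyfit(t_sec, r_km, 1)[0]) / C_KM_S
    return drift, speed
```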
After this step, we did the same for the LOFAR dynamic spectrum only (Fig. 5) to find the \((f,t)\) pairs for every type III burst. Then we took snapshot frequencies for each burst defined by a list of 60 central frequencies between \(\sim\)20 - 80 MHz from LOFAR LTA for the interferometric imaging. We obtained the interferometric data from LOFAR core and remote stations at the snapshot frequencies for all type III bursts. We used the concurrent observations of the radio source Tau-A in order to calibrate the interferometric observations. For that, we used the default preprocessing pipeline (DP3) (van Diepen et al. 2018) for preliminary processing and calibrating the measurement sets (MS). Finally we obtained the cleaned images of the radio sources by using \(w\)-stacking clean (WSClean) algorithm (Offringa et al. 2014) at only the time indices in the MS files that are equivalent to the snapshot frequencies.
After processing and cleaning the interferometric measurements of LOFAR, we explored the observations of each burst individually. Out of the 60 frequency bands in the LOFAR LTA, we chose 54 frequency bands with unique integer numeric values, between 19.92 and 80.08 MHz. For each burst, at each timestamp, the frequency of the fit model nearest to the list of chosen frequencies is picked as the snapshot frequency at that particular timestamp. This process was repeated for all 16 type III bursts detected in the LOFAR dynamic spectrum in order to obtain snapshot images
for each type III burst (Fig. 6). For each type III burst, we applied persistence imaging in order to create a continuous display of the radio emissions (Thompson & Young 2016).
Persistence imaging enables the creation of a clearer and more informative image. In the context of a time-ordered series of images, a method of persisting pixel values can be employed as
Table 1: Characteristics of the type III bursts detected via the automatic algorithm from the combined spectrum.

| Burst ID | Start Time (UT) | End Time (UT) | Start Frequency (MHz) | End Frequency (MHz) | Frequency Drift (MHz s\({}^{-1}\)) | Beam Speed (c) |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 12:18:45 | 12:22:42 | 76.44 | 1.57 | 0.892 | 0.044 |
| 2 | 12:34:05 | 12:26:31 | 41.24 | 0.86 | 0.241 | 0.119 |
| 3 | 12:34:40 | 12:34:56 | 54.44 | 26.54 | 3.992 | 0.046 |
| 4 | 12:37:14 | 12:38:09 | 66.03 | 10.02 | 4.006 | 0.046 |
| 5 | 12:38:17 | 12:40:54 | 76.92 | 1.57 | 0.77 | 0.066 |
| 6 | 12:39:34 | 12:40:11 | 78.86 | 11.93 | 3.192 | 0.062 |
| 7 | 12:40:28 | 12:40:40 | 45.34 | 22.9 | 3.21 | 0.067 |
| 8 | 12:41:39 | 12:43:06 | 78.21 | 2.13 | 1.555 | 0.093 |
| 9 | 12:43:53 | 12:44:15 | 59.07 | 42.13 | 2.424 | 0.013 |
Figure 4: Automatic detection of type III radio bursts from the combined radio dynamic spectrum of the LOFAR and PSP instruments. The dashed horizontal line separates the LOFAR frequency range (top) and the PSP frequency range (bottom).
Figure 5: Automatic detection of type III bursts observed by LOFAR. The red symbols along the fit lines are the \((f,t)\) coordinates of the image snapshots shown in Figure 6.
follows: for each image, compare the value of each pixel to its corresponding value in the previous persistence image in the series. If the pixel value in the current image is brighter than its corresponding pixel in the previous image, replace the previous value with the current one; otherwise, retain the previous value. This process generates a new image, referred to as the current persistence image, which serves as the basis for the subsequent evaluation of the next image in the series. This evaluation involves a pixel-by-pixel comparison between the current image and its associated persistence image, allowing for the identification of any changes or patterns that may have occurred over time. The mathematical background is explained in Appendix A.
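In code, the persistence rule described above reduces to a running pixel-wise maximum over the time-ordered snapshots; a minimal numpy sketch:

```python
import numpy as np

def persistence_stack(frames):
    """Running pixel-wise maximum over a time-ordered list of snapshot images:
    each pixel keeps the brightest value encountered so far."""
    persist = np.array(frames[0], dtype=float)
    for frame in frames[1:]:
        np.maximum(persist, frame, out=persist)
    return persist
```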
In order to estimate the locations of the type III sources in 3D space, we combined observations with modeling. We used magnetogram data from the Global Oscillation Network Group project (GONG) (Harvey et al. 1996). We constructed a grid of footpoints on the GONG map over two longitudinal belts around the two active regions AR12737 and AR12738, which are the two potential candidate source regions for the group of type III bursts under study. These points are used as the seed points for tracing the coronal magnetic field lines using the pfsspy python package4, which is a robust implementation in python of the PFSS model developed by Stansby et al. (2020).
Footnote 4: Pfsspy tool: [https://pfsspy.readthedocs.io/](https://pfsspy.readthedocs.io/)
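A sketch of such a PFSS extrapolation and field-line tracing with pfsspy is shown below. It follows pfsspy's documented Input/pfss/tracing interface; the GONG file name, the seed-point belt, the grid resolution, and the 2.5 \(R_{\odot}\) source-surface height are illustrative assumptions, and depending on the pfsspy version the GONG map may first need to be reprojected to the grid expected by the solver.

```python
import numpy as np
import astropy.units as u
import astropy.constants as const
from astropy.coordinates import SkyCoord
import sunpy.map
import pfsspy
from pfsspy import tracing

# Load a GONG synoptic magnetogram and build the PFSS solution
# (file name is hypothetical).
gong_map = sunpy.map.Map("gong_synoptic_20190403.fits")
pfss_in = pfsspy.Input(gong_map, nrho=35, rss=2.5)   # source surface at 2.5 R_sun
pfss_out = pfsspy.pfss(pfss_in)

# Seed points: a longitude-latitude belt around one candidate active region.
lon2d, lat2d = np.meshgrid(np.linspace(130, 150, 10), np.linspace(-10, 10, 10))
seeds = SkyCoord(lon2d.ravel() * u.deg, lat2d.ravel() * u.deg,
                 1.05 * const.R_sun, frame=pfss_out.coordinate_frame)

# Trace field lines from the seeds through the PFSS solution.
tracer = tracing.FortranTracer()        # tracing.PythonTracer() is the pure-python alternative
field_lines = tracer.trace(seeds, pfss_out)
n_open = sum(fl.is_open for fl in field_lines)
print("open field lines traced:", n_open)
```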
Figure 6: Persistence imaging for the 16 type III bursts detected in the LOFAR dynamic spectrum. The label shows the observation frequencies in MHz and times in (minutes:seconds from 12:00:00 UT). Here, the color coding is not absolute, but rather each panel has its own color code.
Using the major and minor axes of the beam size, we estimated the radius of the radio source using Equation 2, which was used to approximate the source size. Since we already obtained the \((x,y)\) positions of the type III sources in the plane of the sky (POS) through the LOFAR observations, it is now necessary to determine their corresponding \(z\) position to have an overall understanding of their spatial distribution. Therefore, we employed Badman et al. (2022)'s approach here, assuming that the type III bursts were from harmonic emission. First, we found the radial distance of the radio source from the Sun in the POS (\(r_{pos}\)) (Eq. 3). Second, we calculated the sources' radial distance (\(r_{model}\)) using the 2.5\(\times\)Newkirk electron-density model (Newkirk 1961, 1967). The 2.5-fold factor is taken to incorporate the effects of scattering and overdensity (streamers) beyond the nominal Newkirk quiet-Sun model. The MAS model results (Fig. 8) show streamers above the eastern limb, supporting the inclusion of such a factor. Lastly, we estimated the \(z\) location of the type III sources (Eq. 4). We proceeded with the \(+z\) solution because the theory precludes emission from behind the POS in this region of high density gradients (i.e., the emission would be absorbed by passing through the high-density regions of the corona). More details are explained in Appendix B.
\[r_{source}=\sqrt{(b_{major}^{2}+b_{minor}^{2})} \tag{2}\]
\[r_{pos}=\sqrt{(x^{2}+y^{2})} \tag{3}\]
\[z=\sqrt{(r_{model}^{2}-r_{pos}^{2})} \tag{4}\]
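Equations 3-4 can be combined with the scaled Newkirk model into a short deprojection routine. The following is a minimal sketch, assuming harmonic emission (observed frequency equal to twice the plasma frequency) and plane-of-sky coordinates expressed in solar radii; the function names are ours.

```python
import numpy as np

def newkirk_density(r_rsun, fold=2.5):
    """Scaled Newkirk (1961) coronal electron density (cm^-3)."""
    return fold * 4.2e4 * 10.0 ** (4.32 / r_rsun)

def deproject_source(x_rsun, y_rsun, f_mhz, harmonic=True, fold=2.5):
    """Estimate the line-of-sight coordinate z (Eqs. 3-4) of a source with
    plane-of-sky coordinates (x, y) in solar radii, imaged at f_mhz."""
    f_plasma = f_mhz / 2.0 if harmonic else f_mhz        # harmonic-emission assumption
    n_e = (f_plasma / 8.98e-3) ** 2                      # Eq. (6)
    r_model = 4.32 / np.log10(n_e / (fold * 4.2e4))      # inverted scaled Newkirk model
    r_pos = np.hypot(x_rsun, y_rsun)                     # Eq. (3)
    z = np.sqrt(np.maximum(r_model ** 2 - r_pos ** 2, 0.0))  # Eq. (4), +z branch
    return r_model, r_pos, z
```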
The results of the deprojection of the type III sources for the 6\({}^{th}\) burst are shown in Figure 7, with 70%-contours made for 10 frequencies on the extrapolated magnetic field lines. The red dashed line is a spline fitting curve that represents the trajectory of the centroids of the radio sources. The black arrow points towards the Earth's line of sight (LOS). It is worth mentioning that the axes directions in the POS of the LOFAR images are different in 3D space. The \((x,y)\) coordinates in the POS are translated into \((y,z)\) in the 3D space, and \(z\) in the POS is translated into \(x\) in the 3D space.
### Modeling
To explore the characteristics of the coronal plasma environment during the studied events, we used Predictive Science Inc. (PSI)'s standard coronal solutions from magnetohydrodynamic (MHD) simulations originating from the Magnetohydrodynamic Algorithm outside a Sphere (MAS) code (Mikic et al. 1999). The data is available on the PSI's data archive5. We obtained the PSI MAS coronal solution (a thermodynamic-with-heating MHD model) on April 3, 2019, at 12:00 UT with a simulation result ID of hmi_med-cor-thermo2-std@1_med-hel-poly-std@1. Initially, we
calculated the angle between the burst's source radial vector and the line of sight (LOS). Moreover, we calculated the complement angle, which is the separation angle between the burst's radial vector and the plane of the sky (POS) from the Earth's perspective. Subsequently, we utilized the complement angle to derive the Carrington longitude (Thompson, W. T. 2006), facilitating the extraction of a longitudinal segment from the MAS datacube, as if it were in the POS. Following this, the selected data slice was fed into the FORWARD model--a toolset responsible for generating synthetic coronal maps of observable quantities describing the plasma state. For extracting the longitudinal slices from the MAS data, we utilized the psipy python package6. The MAS datacube is specifically defined on a spherical grid and represents a steady-state MHD model. Owing to the inherent attributes of this datacube, the utilization of the FORWARD toolset proves more practical and advantageous for our objective. In Figure 8 we show the first radio contour of the 6\({}^{th}\) type III burst on top of the equivalent 2D maps for 6 plasma parameters, as an example. The plasma parameters are, from left to right and from top to bottom: plasma density, plasma temperature, magnetic field strength, plasma beta parameter, the total plasma pressure, and the Alfven speed. By taking the value of these physical plasma quantities at the centroids' coordinates of the type III sources at each frequency band, we obtained estimates of the local plasma conditions, shown in Figure 9 for the 6\({}^{th}\) type III burst, as an example.
Footnote 6: Psipy repository: [https://github.com/predsci/PsiPy](https://github.com/predsci/PsiPy)
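The last step, reading off the plasma quantities at the source centroids, amounts to interpolating each 2D synthetic map at the centroid positions. Below is a minimal, model-agnostic sketch, assuming each FORWARD-style map has been exported to a numpy array with known plane-of-sky axes; the interpolation scheme and names are our own.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def sample_map_at_centroids(param_map, xs, ys, centroids):
    """Sample a 2D plane-of-sky map (rows along ys, columns along xs) at the
    (x, y) centroid positions of the radio sources."""
    interp = RegularGridInterpolator((ys, xs), param_map,
                                     bounds_error=False, fill_value=np.nan)
    pts = np.array([(cy, cx) for cx, cy in centroids])   # interpolator expects (y, x)
    return interp(pts)
```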
## 4 Results and Discussion
### Detection and characterization of type III radio bursts
We found that the radio waves arrived at STEREO one minute before they arrived at Wind (Fig. 1). However, the difference between the \(+z\) and \(-z\) positions of the burst this close to the Sun in terms of light travel time is \(\sim\)10 seconds (\(\sim\)4 \(R_{\odot}\)), which is within the time resolution of the observations (1-min time resolution). Thus, we cannot confidently conclude whether the emission arrived at one spacecraft first and the other second.
Figure 4 shows the combined dynamic spectrum from both LOFAR and PSP. The free parameters of the auto-detection algorithm do not have the same values as for detecting the type III bursts in the LOFAR spectrum alone. Upon visual examination, we observed that the detection algorithm effectively identified type III bursts in the LOFAR dynamic spectrum (Fig. 5), but it had limitations in detecting type III bursts in the combined LOFAR and PSP spectrum, missing segments of the detected bursts and a few bursts entirely. This could be due to the increased frequency drift and dispersion of the radio bursts at lower frequencies, which made it a challenging task for the detection algorithm. We captured 9 type III bursts from the combined dynamic spectrum, and their characteristics are reported in Table 1. However, the detection algorithm performed better on the LOFAR dynamic spectrum alone, and we traced 16 type III bursts.
### Imaging of radio emission sources
Figure 6 shows the persistence imaging for the 16 type III bursts in the LOFAR dynamic spectrum (Fig. 5). The observation frequencies and timestamps of the snapshot images used to produce the persistence image are shown at the top-right corner of each image. From visual inspection of Figure 6, it seems that all the type III emissions originated from the same quadrant in the images (south-east direction on the solar disk), although there was no active region present at that location except for a single active region near the central meridian (Fig. 3). Based on the imaging data presented in Figure 6, we chose one representative type III burst (No. 6) for a single-burst analysis in this paper, as it shares similarities in extent and location with other bursts. To determine the spatial connection between the sources of radio emissions and the coronal magnetic field, a three-dimensional projection of the radio source contours onto the extrapolated coronal magnetic field via the PFSS model was employed (Fig. 7). The result indicates a discernible south-eastward propagation of the radio sources relative to the Earth's perspective, with no open field line crossing the radio sources. In Figure 7, we performed an extrapolation only over the two active regions present on the solar surface at that time. However, when we extrapolated the magnetic field over the entire solar surface, we noticed that the radio sources are aligned with the lower part of large-scale closed field lines, and are placed onto the open field lines emanating from the southern coronal hole. No open field lines crossing the radio sources are observed.
We note that the PFSS modeling is limited by the fact that AR12738 is behind the limb on April 3 as observed from Earth. Consequently, the magnetic data available to us could be around two weeks old or more. This might limit the reliability of PFSS extrapolation for that region during that specific timeframe.
From Figure 7, the results suggest several potential origins of these type III radio emissions:
* they could be triggered in a closed-field lines structure such as large-scale coronal loops, given that the radio sources are aligned to closed-field lines geometry in the southern hemisphere;
* they could be triggered by electron beams that are accelerated from an open-field active region (Kong et al. 2018). However, from the PFSS model, we found no evidence for magnetic connectivity from either AR on the Sun at that time;
* they may result from electron beams that are accelerated in the corona due to expanding magnetic fields from plasma upflows in the active region (Del Zanna et al. 2011; Harra et al. 2021).
Our findings indicate a notable inverse relationship between imaging quality and the level of solar radio emission brightness (e.g., type III bursts No. 10 and 13). This observation is due to the leakage of solar radio emission into the side lobes of the calibrator beam, which disrupts the accuracy of the calibration solutions.
### Plasma diagnostics and magnetic field analysis
Considering the observed alignment of radio sources in Figure 7 and the case depicted in Figure 8, it becomes evident that radio sources at higher frequencies (indicating proximity to the Sun) align
with a streamer-like structure near the equator within the coronal model. This structure is characterized by elevated plasma beta, reduced coronal temperature, and diminished Alfven speed. The coronal plasma density was relatively homogeneous with no prominent structures, probably due to the model resolution.
The locations of the radio sources of all the bursts were in the same quadrant as seen from Earth. Therefore, we assumed that the former description applies to all bursts. We also found that the radio sources were confined between the equatorial sheet and the southern coronal hole and moved along that boundary. Figure 9 shows the variability of the coronal plasma quantities at the radio sources' centroids, taken from the FORWARD maps in Figure 8, at different frequencies for the 6\({}^{th}\) burst. To estimate the error bars, we initialized random centroids within the limits of the 70%-contours of the radio emissions to sample the plasma quantities at those locations. Then the standard error (SE) is calculated using Equation 5, where \(\sigma\) is the standard deviation and \(n\) is the number of points.
\[SE=\frac{\sigma}{\sqrt{n}} \tag{5}\]
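A short sketch of this error estimate, assuming the random centroids are drawn uniformly from the bounding box of the 70% contour and that `sample_fn` interpolates the relevant parameter map; both are simplifying assumptions on our part.

```python
import numpy as np

rng = np.random.default_rng(0)

def centroid_standard_error(sample_fn, x_range, y_range, n=50):
    """Draw random centroid positions inside the bounding box of the 70% contour,
    sample the plasma quantity with sample_fn(x, y), and return the mean value
    together with SE = sigma / sqrt(n) as in Eq. (5)."""
    xs = rng.uniform(*x_range, n)
    ys = rng.uniform(*y_range, n)
    vals = np.array([sample_fn(x, y) for x, y in zip(xs, ys)])
    return vals.mean(), vals.std(ddof=1) / np.sqrt(n)
```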
The coronal temperature was increasing with radial distance, which implies there may have been some local heating. The coronal magnetic field, the total plasma dynamic pressure, and the Alfven speed were decreasing with distance, as expected. Finally, the value of the plasma beta parameter started increasing sharply around 40 MHz, which implies that the plasma pressure became more dominant than the magnetic pressure around that distance from the Sun (for a 2.5\(\times\)Newkirk model, it is 1.89 \(R_{\odot}\) assuming fundamental emission, or 2.57 \(R_{\odot}\) assuming harmonic emission).
The top-left panel of Figure 9 shows a comparison between the density profiles of the MAS model, the 2.5\(\times\)Newkirk model, and the theoretical expected density profiles under the fundamental and harmonic assumptions. Although the Newkirk density model provided a useful approximation for determining the height of radio sources in the corona, it is not entirely accurate due to a number of its underlying assumptions, for instance, the assumption of a steady state and the spherical symmetry of the corona, which do not always apply. Therefore, we tried to use the MAS density values to estimate the depth along the LOS of the radio source since it is supposed to give a more realistic result.
We found that the plasma density obtained from the MAS and FORWARD modeling results were significantly lower compared with the 2.5\(\times\)Newkirk density model and the theoretical expected density obtained from the classical relation in Equation 6, where \(f_{p}\) is the plasma frequency (in MHz) and \(n_{e}\) is the electron density (in cm\({}^{-3}\)).
\[n_{e}=\left(\frac{f_{p}}{8.98\times 10^{-3}}\right)^{2} \tag{6}\]
The required density from the fitted Newkirk model is much higher (\(\sim\)10 times) than what is obtained from the MAS model, even after accounting for the 2.5\(\times\) enhancement already applied to the standard Newkirk model. This implies the discrepancy cannot be fully explained by the density enhancement factor alone. Furthermore, the imaging places the radio sources near a streamer, which is an overdense region in the MAS model, so it seems unlikely the source's apparent location in the model is wrongly attached to a less dense feature, as there are no denser options available. The apparent source positions from the imaging are likely too high, possibly due to scattering effects (Kontar et al. 2019, 2023; Chen et al. 2023), which could lead to fitting an overly dense Newkirk model. Another potential explanation is that there could be a stealth CME that pushed the coronal magnetic field outward, allowing the plasma to follow it and be perceived as having a higher density than expected, and there was not enough time for the magnetic field relaxation to occur (private communication with J. Magdalenic). However, scattering alone does not seem to fully explain the large density discrepancy. While further investigation is certainly needed regarding scattering and propagation effects on the radio waves, it is interesting to report this significant discrepancy between the model and observations, as it highlights limitations in the current modeling and suggests the need for additional physics to properly characterize the density distribution. Resolving this discrepancy could lead to important insights into the true nature of the corona.
## 5 Summary and Conclusions
In this work we analysed the characteristics of a series of type III bursts that occurred on April 3\({}^{rd}\), 2019, during the second near-Sun encounter period of PSP. The bursts were observed in dynamic spectra taken with the PSP/FIELDS (2.6 kHz - 19 MHz) instrument, as well as in interferometric imaging with the LOFAR (20 - 80 MHz) ground-based telescope, as part of a coordinated observing campaign. The series of 16 separate weak bursts were observed over the span of \(\sim\)20 minutes, during an otherwise relatively quiet period. The solar disk as observed from Earth was dominated by a single active region near its centre. We combined the dynamic spectra for the LOFAR frequency range and the PSP frequency range to study the solar radio emissions within the frequency range of 2.6 kHz - 80 MHz.
For the study, we developed a semi-automated pipeline, which allowed us to obtain the exact times and frequencies of the bursts. These we used to align the PSP to the LOFAR observations, and to generate interferometric images between 20 and 80 MHz. We performed data pre-processing of the PSP and LOFAR dynamic spectra to resample and shift the data based on the relative location of the spacecraft with respect to the Sun and Earth, and found an excellent temporal match between the two sets of observations. Thus we automatically traced the type III bursts in the dynamic spectra algorithmically and estimated frequency drift and the electron beam speeds. We found that frequency drifts remained relatively uniform between the high-frequency (LOFAR) and low-frequency (PSP) observations, as well as among the bursts, suggesting that they are related.
In addition, we imaged the type III emission at multiple frequency bands using the interferometric observations from LOFAR to determine the locations of the sources in the solar corona. The type III emissions observed were all found to occur in the same general region off the southeast limb of the Sun, leading us to conclude that they shared a single source of electron beams low in the corona. The potential origins of these emissions are varied and include:
* small-scale impulsive events such as nano-flares (Ish 2017; Che 2018; Chhabra et al. 2021);
* plasma upflows from the active region (Harra et al. 2021);
* coronal closed-loop structures (Wu et al. 2002);
* electron beams accelerated from interchange reconnection (Gopalswamy et al. 2022);
* high-frequency Alfven waves and/or magnetic reconnection in the outer corona (Morton et al. 2015; Alielden & Taroyan 2022).
Our magnetic extrapolation shows that there is no open potential field to either AR12737 or AR12738, which is consistent with Cattell et al. (2021).
Figure 7: Different viewing angles for the de-projection of the radio sources of the \(6^{th}\) burst using the \(2.5\times\)Newkirk electron-density model on the PFSS solution. The black arrow points toward the Earth LOS. The \(yz\) plane is the plane of sky as seen from the Earth. The red dashed line is a spline curve fit for the sources’ centroids. The red, black, and blue curves are open northern, closed, and open southern field lines, respectively. The opacity of the closed field lines is decreased for a better visualization.
Our findings are in line with the conclusions of Harra et al. (2021), who proposed that the likely origin of these type III bursts is the AR12737 region. The type III radio bursts in Harra et al. (2021), which occurred between April 1\({}^{st}\) and 4\({}^{th}\), align in time with the emergence of AR12737 near the eastern limb of the solar disk.
While potential field source surface models provide valuable insight into the large-scale magnetic topology, their reliability decreases near active regions where the field can deviate significantly from a potential configuration. Therefore, the lack of open field connectivity directly to AR12737 suggested by the PFSS model should be viewed with some caution.
Figure 8: Synthesized maps of plasma parameters obtained using the FORWARD toolset, with the 70%-contour of radio emission of the 6\({}^{th}\) burst at the first timestamp (12:34:06.8 UT) at the frequency of 72.26 MHz depicted on top of the 2D plane-of-sky cuts. The left column represents, from top to bottom, plasma density, magnetic field, and the total plasma dynamic pressure. The right column represents, from top to bottom, the temperature, plasma beta, and the Alfven speed.
This work complements those results by locating precisely the burst sources in the middle corona. We used the Newkirk density model to estimate the height of the radio sources from the Sun of one of the type III bursts, as representative of all. Combining this with PFSS magnetic modeling, we found good agreement between the centroids of the radio sources and the location of the southern open field lines in the corona, which would be required to produce radio emissions at interplanetary wavelengths in general. On the other hand, this location does not seem to be well connected to the AR itself, according to the PFSS model.
We attempted to correct the radial distance of the radio sources from the Sun by replacing the Newkirk model with more realistic MHD results from the MAS model, but we found that there is a significant discrepancy between the Newkirk model profile fitted to the observations and the MAS density. This could result from scattering lensing the apparent burst location to a higher altitude, thus, overestimating the height of radio sources in the corona. The presence of type III radio sources at relatively high distances in the corona, with plasma density higher than expected from the MAS model, suggests that there may be missing information in the modeling. One possibility is the existence of a stealth CME that pushed the coronal magnetic field outward, causing the plasma to appear denser than expected (see Dumbovic et al. (2021)) -- or other non-obvious changes in large-scale coronal magnetic topology. These findings demonstrate that scattering and propagation effects play a significant role in determining the location and directionality of solar radio bursts (Kontar et al. 2019, 2023; Chen et al. 2023). Therefore, the discrepancy between the observed and modeled density profiles could potentially be attributed to scattering and lensing effects that make the radio sources appear higher in the corona than their true location. Further investigation
Figure 9: Coronal plasma parameters sampled from the 2D maps by the source centroids. The top panel shows, from left to right, plasma density profiles from the MAS model, 2.5\(\times\)Newkirk model, and the theoretical densities under the fundamental and harmonic assumptions, plasma temperature, and magnetic field. The bottom panel shows, from left to right, the total plasma dynamic pressure, Alfvén speed, and plasma beta. The x-axis is inverted to demonstrate a progression of increasing radial distance from the Sun as one moves towards the right.
is required to disentangle these effects from limitations in the density models themselves. Overall, accounting for scattering and refraction will likely lead to improved modeling of the corona and solar radio bursts. In future work, we will also employ the Time Delay of Arrival (TDoA) technique (Zhang et al. 2019) to estimate the radio burst source positions from multi-instrument observations and compare that with the current methodology in this paper. Solar Orbiter observations shall also be included.
High-fidelity interferometric radio imaging in metric-decametric wavelengths provides a powerful method to characterise solar eruptive events. It is also becoming increasingly important for studying relatively quiet periods, during which there may be elevated levels of in situ particle fluxes. The ability to observe and image faint radio bursts such as those presented in this work, which may be related to episodes of reconnection on the solar surface, and potentially to episodes of solar wind release, is a testament to LOFAR's power as a space weather instrument. In future work, we will automate and use our method for studying hundreds of faint bursts observed with LOFAR, and will investigate their relation to small-scale activity on the solar surface.
Through a novel combination between the LOFAR imaging and MAS model results, we observed that the type III radio bursts experienced a weakening background magnetic field, decreasing solar wind dynamic pressure and Alfven speed, increasing plasma beta and coronal temperature, and plasma rarefaction. The radio sources appeared at larger radial distances than the models predicted, which suggests scattering and density fluctuations are important to interpreting the true burst trajectory. The discrepancies between the observed and modeled radial distances of the radio sources suggest refinements are needed in the models to fully explain the radio imaging and modeling results. Overall, comparing the LOFAR imaging and MAS modeling for these type III bursts motivates further analysis on additional radio bursts to better understand the physical conditions that influence the propagation of radio emissions in the corona.
###### Acknowledgements.
We thank the anonymous referee for the constructive feedback. Thanks go to Bing Ma, Marc Pulupa, Dejin Wu, and Jon Vandegriff for helping with the PSP data. We thank Pete Riley for helping with the psipy tool, and Jan Gieseler for helping with the Solar-MACH tool. Many thanks to N. Gopalswamy, L. Harra, Nour E. Raouti, Nariaki V. Nitta, J. Magdalenic, and K. Aialleton for the valuable discussions during private communications. We thank the Sunpy and Helionauts communities for the technical support. This work was supported by the Bulgarian National Science Fund, VHHEN program, under contract KP-06-DV-8/18.12.019 (MOS/AICS project). It was also supported by the STELLAR project, funded by the European Union's Horizon 2020 research and innovation program under grant agreement No. 952439. The authors acknowledge data usage from LOFAR core and remote stations, from the PSP/FIELDS, STEREO/SWAVES, Wind/WAVES instruments, as well as from the SDO/AIA, HMI, and EVE instruments. This research used version 4.1.5 (The SunPy Community et al. 2020) of the SunPy open source software package.
|
2301.05881 | Nonlinear approximation of functions based on non-negative least squares
solver | In computational practice, most attention is paid to rational approximations
of functions and approximations by the sum of exponents. We consider a wide
enough class of nonlinear approximations characterized by a set of two required
parameters. The approximating function is linear in the first parameter; these
parameters are assumed to be positive. The individual terms of the
approximating function represent a fixed function that depends nonlinearly on
the second parameter. A numerical approximation minimizes the residual
functional by approximating function values at individual points. The second
parameter's value is set on a more extensive set of points of the interval of
permissible values. The proposed approach's key feature consists in determining
the first parameter on each separate iteration of the classical non-negative
least squares method. The computational algorithm is used to rational
approximate the function $x^{-\alpha}, \ 0 < \alpha < 1, \ x \geq 1$. The
second example concerns the approximation of the stretching exponential
function $\exp(- x^{\alpha} ), \ \ \quad 0 < \alpha < 1$ at $ x \geq 0$ by the
sum of exponents. | Petr N. Vabishchevich | 2023-01-14T10:25:56Z | http://arxiv.org/abs/2301.05881v1 | # Nonlinear approximation of functions based on non-negative least squares solver
###### Abstract
In computational practice, most attention is paid to rational approximations of functions and approximations by the sum of exponents. We consider a wide enough class of nonlinear approximations characterized by a set of two required parameters. The approximating function is linear in the first parameter; these parameters are assumed to be positive. The individual terms of the approximating function represent a fixed function that depends nonlinearly on the second parameter. A numerical approximation minimizes the residual functional by approximating function values at individual points. The second parameter's value is set on a more extensive set of points of the interval of permissible values. The proposed approach's key feature consists in determining the first parameter on each separate iteration of the classical non-negative least squares method. The computational algorithm is used to construct a rational approximation of the function \(x^{-\alpha},\ 0<\alpha<1,\ x\geq 1\). The second example concerns the approximation of the stretched exponential function \(\exp(-x^{\alpha}),\ \ \ \ 0<\alpha<1\) at \(x\geq 0\) by the sum of exponents.
keywords: self-adjoint positive operator, fractional powers of the operator, rational approximation, approximation by exponential sums, first-order differential-operator equation Msc: [2010] 26A33, 35R11, 65F60, 65M06 +
Footnote †: journal: arXiv
[https://sites.google.com/view/vabishchevich/](https://sites.google.com/view/vabishchevich/)
## 1 Introduction
Nonlinear approximation of functions is in high demand in computational practice for many applied problems. Rational approximations are widely used in various forms [1]. The Remez algorithm [2] and recently developed algorithms for rational approximation [3; 4] are used to find the parameters of the approximating function. In function approximation, much attention is also paid to approximation by sums of exponents. In particular, achievements in this area are reflected in the works [5; 6]. In many cases, the approximation of the function \(f(x)\) is
\[f(x)\approx\sum_{i=1}^{m}u_{i}\varphi(x,v_{i})\]
with the known functional dependence \(\varphi(x,v)\). Some restrictions may be imposed on the approximation parameters to be sought. A typical example with non-negative coefficients \(u_{i},\ i=1,2,\ldots,m\) is interesting for many applications.
The theory [7] and computational practice [8; 9] of function approximation are well developed in the linear case. In this case, the function \(f(x)\) is approximated by a given set of trial functions \(\varphi_{i}(x),\ i=1,2,\ldots,m\):
\[f(x)\approx\sum_{i=1}^{m}u_{i}\varphi_{i}(x).\]
Optimal approximations are constructed in Hilbert spaces using the least squares method [10; 11]. We note, in particular, that computational algorithms that take into account the constraints \(u_{i}>0,\ i=1,2,\ldots,m\), have long been well developed. We propose a heuristic algorithm for solving the nonlinear function approximation problem. It is based on expanding the set of trial functions and their subsequent selection during the iterative solution of the nonlinear least squares problem.
The paper is organized as follows. In Section 2, the problem of nonlinear approximation of functions is posed. The proposed computational algorithm is described in Section 3. Section 4 presents results on rational approximation and approximation by the sum of exponents of functions \(x^{-\alpha},\ x\geq 1\) and \(\exp(-x^{\alpha}),\ x\geq 0\) at \(0<\alpha<1\). The work results are summarized in Section 5.
## 2 Problem formulation
We consider the problem of nonlinear approximation of a one-dimensional function \(f(x)\) on the interval \([a,b]\). In the Hilbert space \(L_{2}([a,b],\varrho(x))\) with weight \(\varrho(x)>0\) the scalar product and norm are defined as follows
\[(g,q)=\int_{a}^{b}\varrho(x)g(x)q(x)\,dx,\quad\|g\|=(g,g)^{1/2}.\]
The function \(f(x)\) is approximated by the function \(r(x,\mathbf{u},\mathbf{v})\) with two numerical parameter sets \(\mathbf{u}=\{u_{1},u_{2},\ldots,u_{m}\}\), \(\mathbf{v}=\{v_{1},v_{2},\ldots,v_{m}\}\). Let's assume that the approximating function has the form
\[r(x,\mathbf{u},\mathbf{v})=\sum_{i=1}^{m}u_{i}\varphi(x,v_{i}). \tag{2.1}\]
In the representation (2.1) we have isolated the linear coefficients \(u_{i},\ i=1,2,\ldots,m\), and the dependence on \(v_{i},\ i=1,2,\ldots,m\), is determined by the given function \(\varphi(x,v)\).
Among the methods of nonlinear approximation of functions, rational approximation and approximation by the sum of exponents are the most widespread. In particular, in the case of rational approximation at \(a\geq 0\) we use the parametric function
\[\varphi(x,v)=\frac{1}{1+vx}. \tag{2.2}\]
When approximated by the sum of the exponents, we have
\[\varphi(x,v)=\exp(-vx). \tag{2.3}\]
In the nonlinear approximation problem we consider, \(\mathbf{u},\mathbf{v}\) are subject to the following restrictions: the parameters \(u_{i},\ i=1,2,\ldots,m\), are non-negative, and \(v_{i},\ i=1,2,\ldots,m\), are chosen from the interval \([c,d]\). We come to the problem
\[J(\mathbf{u},\mathbf{v})\rightarrow\min,\quad(\mathbf{u},\mathbf{v})\in K, \tag{2.4}\]
where
\[J(\mathbf{u},\mathbf{v})=\|f(x)-r(x,\mathbf{u},\mathbf{v})\|^{2},\quad K=\{(\mathbf{u},\mathbf{v})\ |\ u_{i}>0,\ v_{i}\in[c,d],\ i=1,2,\ldots,m\}. \tag{2.5}\]
An approximate solution to the functional minimization problem (2.1), (2.4), (2.5) is constructed by setting the functions \(f(x),\ r(x,\mathbf{u},\mathbf{v})\) on a sufficiently detailed set of points on the interval \([a,b]\).
## 3 Nonlinear approximation algorithm
We begin by dividing the interval \([a,b]\) into \(n\) partial intervals of length \(h_{j},\ j=1,2,\ldots,n\), so that
\[b-a=\sum_{j=1}^{n}h_{j}.\]
We denote the centers of the intervals \(h_{j}\) by \(x_{j},\ j=1,2,\ldots,n\). Using the quadrature formula of rectangles, we compare the original problem (2.4), (2.5) with the minimization problem
\[J^{h}(\mathbf{u},\mathbf{v})\rightarrow\min,\quad(\mathbf{u},\mathbf{v})\in K, \tag{3.1}\]
\[J^{h}(\mathbf{u},\mathbf{v})=\sum_{j=1}^{n}\varrho(x_{j})\big{(}f(x_{j})-r(x_{j},\mathbf{ u},\mathbf{v})\big{)}^{2}h_{j}. \tag{3.2}\]
To simplify the approximation problem, the coefficients \(v_{i},\ i=1,2,\ldots,m,\) will not be evaluated over the whole interval \([c,d]\), but only at given points
\[\widetilde{v}_{k}\in V_{l},\quad V_{l}=\{c\leq\widetilde{v_{1}}\leq\widetilde {v_{2}}\leq\ldots\leq\widetilde{v_{l}}\leq d\}\]
for a sufficiently fine partition \((l\gg m)\). Thus we take instead of \(K\) the set of constraints in the form
\[\widetilde{K}=\{(\mathbf{u},\mathbf{v})\ |\ u_{i}>0,\ v_{i}\in V_{l},\ i=1,2,\ldots,m\}.\]
Given (2.1), we will come from the problem (3.1), (3.2) to the problem
\[J^{h}(\mathbf{u},\mathbf{v})\rightarrow\min,\quad(\mathbf{u},\mathbf{v})\in\widetilde{K}, \tag{3.3}\]
\[J^{h}(\mathbf{u},\mathbf{v})=\sum_{j=1}^{n}\varrho(x_{j})h_{j}\Big{(}\sum_{i=1}^{m}u_ {i}\varphi(x_{j},v_{i})-f(x_{j})\Big{)}^{2}. \tag{3.4}\]
Instead of the vector \(\mathbf{u}\) with components \(u_{i},\ i=1,2,\ldots,m\), we introduce a vector \(\widetilde{\mathbf{u}}\) of larger dimension with components \(\widetilde{u}_{k},\ k=1,2,\ldots,l\). We define the components of the vector \(\widetilde{\mathbf{u}}\) from the approximation condition
\[\sum_{i=1}^{m}u_{i}\varphi(x_{j},v_{i})=\sum_{k=1}^{l}\widetilde{u}_{k} \varphi(x_{j},\widetilde{v}_{k}). \tag{3.5}\]
By doing so, we put \(\widetilde{u}_{k}=u_{i}\) if \(\widetilde{v}_{k}=v_{i}\) and \(\widetilde{u}_{k}=0\) if \(\widetilde{v}_{k}\neq v_{i}\). Considering (3.5), we proceed from the minimization problem (3.3), (3.4) to the problem
\[\widetilde{J}(\widetilde{\mathbf{u}})\rightarrow\min,\quad\widetilde{u}_{k}\geq 0,\quad k=1,2,\ldots,l, \tag{3.6}\]
\[\widetilde{J}(\widetilde{\mathbf{u}})=\sum_{j=1}^{n}\varrho(x_{j})h_{j}\Big{(} \sum_{k=1}^{l}\widetilde{u}_{k}\varphi(x_{j},\widetilde{v}_{k})-f(x_{j}) \Big{)}^{2}. \tag{3.7}\]
As a result, we obtained a constrained (non-negative) least squares problem of a larger dimension to determine the parameters \(\widetilde{u}_{k},\ k=1,2,\ldots,l\), in the linear representation of the approximating function.
Computational algorithms for the minimization problem (3.6), (3.7) are well studied [10; 11]. In computational practice, the most widely used is the NNLS (Non-Negative Least Squares) algorithm described in detail in [10]. Given the specifics of the problem (3.6), (3.7), we mention separately variants of NNLS algorithms for large-scale problems (see, for example, [12; 13]).
The standard NNLS algorithm is a two-level iterative method with main and inner loop iterations. The number of positive coefficients is initially set to zero. As the number of iterations increases, the residual decreases, though not monotonically. The decrease of the residual is driven primarily by the increase in the number of positive coefficients.
Taking into account the above features of the iterative process of the NNLS algorithm, we can propose the following strategy for selecting the coefficients in the nonlinear approximation of functions based on the minimization problem (3.6), (3.7). We perform a sufficiently large number of iterations of the NNLS algorithm. At each iteration, we monitor the residual and the number \(\widetilde{m}\) of positive coefficients \(\widetilde{u}_{k},\ k=1,2,\ldots,l\). The number of iterations of the NNLS algorithm is chosen such that \(\widetilde{m}=m\) and the residual is minimal. No additional modifications of the standard algorithm are made. The computational implementation is based on the non-negative least squares solver from the SciPy library [14] (module optimize, function nnls).
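A minimal sketch of this selection strategy, applied to the rational approximation of \(x^{-\alpha}\) considered in Section 4.1, might look as follows. This is not the authors' implementation: for brevity it uses the basic parametric family (2.2) without the interpolation constraint (4.1), it emulates the monitoring of individual NNLS iterations by re-running the SciPy solver with an increasing iteration cap, and all grid sizes and the candidate interval \([c,d]\) are illustrative choices.

```python
import numpy as np
from scipy.optimize import nnls

alpha = 0.5
n, l, m = 1500, 400, 10      # quadrature points, candidate nodes, target number of terms

# Change of variable x = exp(theta) on [1, b]; with rho(x) = 1/x the factor
# rho(exp(theta)) * exp(theta) equals 1, so only the theta-step enters the weights.
beta = np.log(1.0e15)
theta = np.linspace(0.0, beta, n + 1)
x = np.exp(0.5 * (theta[:-1] + theta[1:]))       # centres of the partial intervals
w = np.sqrt(np.diff(theta))                      # square roots of the quadrature weights

v_grid = np.geomspace(1e-14, 1e1, l)             # candidate values v~_k (illustrative [c, d])
A = w[:, None] / (1.0 + np.outer(x, v_grid))     # weighted design matrix, columns phi(x, v~_k)
b = w * x**(-alpha)                              # weighted right-hand side

# Re-run NNLS with an increasing iteration cap and keep the iterate that activates
# exactly m coefficients with the smallest residual, as described in the text.
best = None
for it in range(1, 120):
    try:
        u, rnorm = nnls(A, b, maxiter=it)
    except RuntimeError:        # some SciPy versions raise when the cap is reached
        continue
    if np.count_nonzero(u) == m and (best is None or rnorm < best[1]):
        best = (u.copy(), rnorm, it)

if best is None:
    print("no iterate activated exactly", m, "coefficients; adjust l, m or the grids")
else:
    u_sel, rnorm_sel, it_sel = best
    print(f"iterations: {it_sel}, residual: {rnorm_sel:.3e}")
    for k in np.flatnonzero(u_sel):
        print(f"  u = {u_sel[k]:.6e},  v = {v_grid[k]:.6e}")
```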
## 4 Numerical experiments
Let us illustrate the possibility of constructing nonlinear approximations of functions with two examples. First, we construct rational approximations of the function \(x^{-\alpha},0<\alpha<1\) at \(x\geq 1\). In the last decade, such a problem has been actively discussed in the literature in connection with solving boundary value problems with fractional power elliptic operators (see, for example, [15; 16]), and also when considering more general problems with operator functions [17]. The second example concerns the approximation of the function \(\exp(-x^{\alpha}),0<\alpha<1\) at \(x\geq 0\). In this case, we use approximations by the sum of exponents. Such problems are typical in approximate solutions of nonstationary problems with memory when the difference kernel of the integral term [18] is approximated.
### Approximation of \(x^{-\alpha}\)
We will approximate the function \(f(x)=x^{-\alpha}\) when \(a=1\) and \(b\) is large enough. We used \(b=10^{15}\) in the following calculations. In applied problems, it is often important to impose an additional restriction on the approximating function
\[r(a,\mathbf{u},\mathbf{v})=f(a). \tag{4.1}\]
In the class of rational approximations of the type (2.2), given (4.1), for the approximation \(x^{-\alpha}\), we obtain the representation
\[x^{-\alpha}\approx 1+\sum_{i=1}^{m}u_{i}\varphi(x,v_{i}),\]
when
\[\varphi(x,v_{i})=\frac{1}{1+v_{i}x}-\frac{1}{1+v_{i}},\quad i=1,2,\ldots,m.\]
The choice of the weight function \(\varrho(x)\), which allows controlling the approximation accuracy in different parts of the interval \([a,b]\), requires special attention. In our case, the approximated function decreases to zero as \(x\) increases. A greater influence of the points at small \(x\) is achieved by choosing a decreasing weight \(\varrho(x)\). To partition the interval \([a,b]\) we use partial intervals of increasing length (\(h_{i+1}>h_{i},\ i=1,2,\ldots,n-1\)). This approach is formalized by introducing a new variable \(\theta\) instead of \(x\). Put \(x=\exp(\theta)\), so that \(\theta\in[0,\beta]\) (\(\exp(\beta)=b\)). For the residual functional, we get
\[J(\mathbf{u},\mathbf{v})=\int_{0}^{\beta}\varrho\big{(}\exp(\theta)\big{)}\exp(\theta )\Big{(}\exp(-\alpha\theta)-1-\sum_{i=1}^{m}u_{i}\varphi\big{(}\exp(\theta),v_ {i}\big{)}\Big{)}^{2}d\theta.\]
The computational data below correspond to the choice \(\varrho\big{(}\exp(\theta)\big{)}=\exp(-\theta)\) (that is, \(\varrho(x)=x^{-1}\)).
For sufficiently large numbers of points \(n\) on the interval \([a,b]\), the approximation accuracy changes insignificantly. In the results presented below we limited ourselves to the case \(n=5000\). Of greater interest are the computed data for different partitionings of the interval \([c,d]\) of permissible values of the parameter \(v\). Figure 1 illustrates the iterative process in (3.6), (3.7) when approximating the function \(x^{-\alpha}\) with \(\alpha=0.5\). The residual \(\widetilde{J}^{1/2}(\widetilde{\mathbf{u}})\) for different numbers of iterations of the non-negative least squares method is shown on the left-hand side of the figure. We observe a reasonably fast, though generally non-monotonic, decrease of the residual as the number of iterations increases. This residual is achieved with varying numbers of non-zero coefficients \(m\), with a general tendency of \(m\) to increase as the number of iterations of the non-negative least squares method grows. From these data we determine the number of iterations that achieves the minimum residual norm for a given number \(m\) of terms of the rational approximation (3.5).
The effect of the partitioning of the interval \([c,d]\) on the approximation accuracy is shown in Fig. 2. The data are given for \(m=10\) at \(l=500,1000,2000\). The approximation accuracy is estimated by the value
\[\varepsilon(x_{j})=\Big{|}\sum_{i=1}^{m}u_{i}\varphi(x_{j},v_{i})-f(x_{j}) \Big{|},\quad j=1,2,\ldots,n.\]
We observe that both the computed approximation parameters \(u_{i},v_{i},\ i=1,2,\ldots,m\), (right-hand side of the figure) and the accuracy of approximating the function \(x^{-\alpha}\) with \(\alpha=0.5\) on \(x\in[1,10^{15}]\) (left-hand side of the figure) are remarkably close for the different partitionings. With this in mind, we fixed the number of partition points at \(l=1000\) in our calculations.
Figure 3 shows the approximation parameters \(u_{i},v_{i},\ i=1,2,\ldots,m\), when given \(m=5,10,20\). There is a significant increase in the approximation accuracy with increasing \(m\). For this variant, the number of iterations is \(13,38,\) and \(110\), respectively.
Figure 1: Residual (left) and the number of non-zero elements \(m\) (right) for \(\widetilde{u}_{k},\ k=1,2,\ldots,l\), in individual iterations of the non-negative least squares method.
Figure 2: Approximation accuracy \(\varepsilon\) (left) and approximation parameters \(u_{i},v_{i},\ i=1,2,\ldots,m\), (right) at different partitioning of the interval \([c,d]\).
When approximating the function \(x^{-\alpha}\), special attention is paid to the influence of the parameter \(\alpha\). The approximation accuracy for \(m=5,10,20\) at \(\alpha=0.25\) is shown in Figure 4. Similar results at \(\alpha=0.75\) are presented in Figure 5. As with other computational algorithms for rational approximation [15; 16], the accuracy increases with increasing \(\alpha\). Note also that the calculation of the coefficients \(u_{i},v_{i},\ i=1,2,\ldots,m\), at larger \(\alpha\) requires a larger number of iterations of the non-negative least squares method. Table 1 shows the calculated values of the approximation parameters \(u_{i},v_{i},\ i=1,2,\ldots,10\) (\(m=10\)) when approximating the function \(x^{-\alpha}\) for \(\alpha=0.25,0.5,0.75\).
\begin{table}
\begin{tabular}{l l l l l l l} \hline & \multicolumn{2}{c}{\(\alpha=\) 0.25} & \multicolumn{2}{c}{\(\alpha=\) 0.5} & \multicolumn{2}{c}{\(\alpha=\) 0.75} \\ \cline{2-7} \(i\) & \(u_{i}\) & \(v_{i}\) & \(u_{i}\) & \(v_{i}\) & \(u_{i}\) & \(v_{i}\) \\ \hline
1 & 1.060084e-03 & 2.115485e-13 & 1.263660e-04 & 5.816049e-09 & 1.653295e-06 & 1.135126e-08 \\
2 & 2.778250e-03 & 6.526663e-11 & 1.318851e-04 & 6.336196e-08 & 1.664949e-05 & 6.273950e-07 \\
3 & 7.184790e-03 & 3.607348e-09 & 2.177478e-03 & 2.389865e-06 & 2.008706e-04 & 1.954833e-05 \\
4 & 1.608844e-02 & 1.812161e-07 & 1.423375e-02 & 1.453310e-04 & 9.792299e-04 & 2.343140e-04 \\
5 & 2.879614e-02 & 3.853128e-06 & 3.605113e-02 & 3.090116e-03 & 7.011612e-03 & 2.808580e-03 \\
6 & 6.751752e-02 & 8.192757e-05 & 4.002657e-02 & 9.723689e-03 & 3.444878e-02 & 3.059759e-02 \\
7 & 1.117978e-01 & 1.439033e-03 & 7.561481e-02 & 4.933185e-02 & 9.142663e-03 & 6.570394e-02 \\
8 & 2.518764e-01 & 2.088023e-02 & 2.411886e-01 & 1.552328e-01 & 2.280614e-01 & 3.333403e-01 \\
9 & 3.723954e-01 & 3.667548e-01 & 3.672604e-01 & 1.397038e+00 & 6.238136e+00 & 8.579865e+00 \\
10 & 6.229275e-01 & 1.537079e+00 & 2.193928e+00 & 3.631519e+00 & 2.478274e+00 & 1.842403e+01 \\ \hline \end{tabular}
\end{table}
Table 1: Approximation parameters with \(m=10\) for \(x^{-\alpha}\)
Figure 5: Approximation accuracy \(\varepsilon\) (left) and approximation parameters \(u_{i},v_{i},\ i=1,2,\ldots,m,\) (right) for the function \(x^{-\alpha}\) with \(\alpha=0.75\) at various \(m\).
### Approximation of \(\exp(-x^{\alpha})\)
The second example concerns the approximation of the stretched exponential function:
\[f(x)=\exp(-x^{\alpha}),\quad 0<\alpha<1.\]
We will consider the case where the approximation is made with \(a=0,\;b=10^{3}\), and an additional constraint (4.1). When approximating by the sum of exponents, we have
\[\exp(-x^{\alpha})\approx 1+\sum_{i=1}^{m}u_{i}\varphi(x,v_{i}),\]
where
\[\varphi(x,v_{i})=\exp(-v_{i}x)-1,\quad i=1,2,\ldots,m.\]
The weight function \(\varrho(x)\) is chosen similarly to the case of approximating the function \(x^{-\alpha}\). A uniform grid in the new variable \(\theta\in[0,\beta]\) is introduced, with \(x=\exp(\theta)-1\) (\(\exp(\beta)-1=b\)). The calculations are performed using \(\varrho(x)=(1+x)^{-1}\).
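Purely for illustration, the sketch given at the end of Section 3 can be adapted to this setting as follows (again an assumption-laden sketch rather than the authors' implementation); with the constraint (4.1) built in, the right-hand side becomes \(f(x)-1\) and the columns of the design matrix are \(\exp(-vx)-1\).

```python
import numpy as np
from scipy.optimize import nnls

alpha, b = 0.5, 1.0e3
n, l = 1500, 400
beta = np.log(1.0 + b)
theta = np.linspace(0.0, beta, n + 1)
x = np.exp(0.5 * (theta[:-1] + theta[1:])) - 1.0   # x = exp(theta) - 1
w = np.sqrt(np.diff(theta))                         # rho(x) = 1/(1+x) cancels the Jacobian

v_grid = np.geomspace(1e-4, 1e4, l)                 # candidate nodes on [c, d]
A = w[:, None] * (np.exp(-np.outer(x, v_grid)) - 1.0)
rhs = w * (np.exp(-x**alpha) - 1.0)

u, rnorm = nnls(A, rhs)                             # a single full NNLS run, for brevity
print("active terms:", np.count_nonzero(u), " residual:", rnorm)
```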
For the base variant the following parameters are chosen: \(\alpha=0.5\), \(c=10^{-4}\), \(d=10^{4}\), \(n=5000\), \(l=1000\), \(m=10\). Figure 6 shows the residual and the number of non-zero elements \(m\) when using different numbers of iterations of the NNLS method. Figure 7 illustrates that the level of detail of the partitioning of the interval \([c,d]\) has no significant influence on the approximation accuracy and on the values of the approximation parameters.
The increase in approximation accuracy with increasing \(m\) is illustrated by Fig. 8; the figure also shows the approximation parameters \(u_{i},v_{i},\ i=1,2,\ldots,m\). Similar data for \(m=5,10,20\) for the approximation of the function \(\exp(-x^{\alpha})\) at \(\alpha=0.25\) and \(\alpha=0.75\) are shown in Figures 9 and 10, respectively. When the parameter \(\alpha\) is reduced, the approximation accuracy decreases. The obtained approximation parameters \(u_{i},v_{i},\ i=1,2,\ldots,10\) (\(m=10\)) for the approximation of the function \(\exp(-x^{\alpha})\) for \(\alpha=0.25,0.5,0.75\) are presented in Table 2.
Figure 6: Residual (left) and number of nonzero elements \(m\) (right) for \(\widetilde{u}_{k},\;k=1,2,\ldots,l\), in separate iterations of the NNLS method when approximating the function \(\exp(-x^{\alpha})\).
Figure 7: Approximation accuracy \(\varepsilon\) (left) and approximation parameters \(u_{i},v_{i},\ i=1,2,\ldots,m\), (right) at different partitioning of the interval \([c,d]\) when approximating \(\exp(-x^{\alpha})\).
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline & \multicolumn{2}{c}{\(\alpha\) = 0.25} & \multicolumn{2}{c}{\(\alpha\) = 0.5} & \multicolumn{2}{c}{\(\alpha\) = 0.75} \\ \cline{2-7} \(i\) & \(u_{i}\) & \(v_{i}\) & \(u_{i}\) & \(v_{i}\) & \(u_{i}\) & \(v_{i}\) \\ \hline
1 & 3.684368e-02 & 4.361538e-03 & 8.918599e-03 & 4.939622e-02 & 1.204240e-01 & 3.772042e-01 \\
2 & 4.849511e-02 & 1.282650e-02 & 6.127055e-02 & 1.064209e-01 & 3.399225e-01 & 6.078323e-01 \\
3 & 1.017103e-01 & 4.546295e-02 & 2.567263e-01 & 2.821308e-01 & 2.922859e-01 & 1.180517e+00 \\
4 & 1.241971e-01 & 1.714882e-01 & 2.731277e-01 & 9.794697e-01 & 5.133306e-02 & 2.024447e+00 \\
5 & 1.319054e-01 & 6.742622e-01 & 1.406392e-01 & 2.821308e+00 & 1.093081e-01 & 3.400412e+00 \\
6 & 6.933268e-02 & 1.942175e+00 & 1.200802e-01 & 8.296959e+00 & 4.550720e-02 & 8.648423e+00 \\
7 & 1.337784e-01 & 5.831305e+00 & 4.969377e-02 & 2.491130e+01 & 2.014393e-02 & 2.066880e+01 \\
8 & 1.251127e-01 & 4.098384e+01 & 4.906249e-02 & 7.959777e+01 & 1.329286e-02 & 5.479472e+01 \\
9 & 8.343032e-02 & 2.543346e+02 & 2.565896e-02 & 4.452959e+02 & 6.492292e-03 & 2.940820e+02 \\
10 & 1.409798e-01 & 4.184289e+04 & 1.487512e-02 & 3.471687e+04 & 1.283043e-03 & 3.400412e+04 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Approximation parameters with \(m=10\) for \(\exp(-x^{\alpha})\)
Figure 10: Approximation accuracy \(\varepsilon\) (left) and approximation parameters \(u_{i},v_{i},\ i=1,2,\ldots,m\), (right) for the function \(\exp(-x^{\alpha})\) with \(\alpha=0.75\) at various \(m\).
## 5 Conclusions
1. The problem of nonlinear approximation of functions based on two sets of parameters is considered. A particular term of the approximating function is the product of the first (non-negative) unknown parameter and a given nonlinear function that depends on the second unknown parameter.
2. We present a heuristic computational algorithm for the nonlinear approximation of functions based on minimizing the residual functional formed from the values of the approximated function at separate points. The unknown nonlinear approximation parameters are sought on a large set of points of the interval of permissible values. The linear non-negative parameters are determined using the non-negative least squares method. The solution of the nonlinear approximation problem is selected by monitoring the residual and the number of positive coefficients at each iteration of the non-negative least squares method.
3. The practical use of the computational algorithm is illustrated by two typical problems of nonlinear approximation of functions. In the first example, we construct a rational approximation of the function \(x^{-\alpha},\ 0<\alpha<1\), at \(x\geq 1\). The second example is the approximation of the function \(\exp(-x^{\alpha}),\ 0<\alpha<1\), at \(x\geq 0\) by a sum of exponents.
|
2306.10595 | Semi-classical pseudo-differential operators on $\hbar\mathbb{Z}^n$ and
applications | In this paper we consider the semiclassical version of pseudo-differential
operators on the lattice space $\hbar \mathbb{Z}^n$. The current work is an
extension of a previous work and agrees with it in the limit of the parameter
$\hbar \rightarrow 1$. The various representations of the operators will be
studied as well as the composition, transpose, adjoint and the link between
ellipticity and parametrix of operators. We also give the conditions for the
$\ell^p(\hbar \mathbb{Z}^n)$, weighted $\ell^2(\hbar \mathbb{Z}^n)$ boundedness
and $\ell^p(\hbar \mathbb{Z}^n)$ compactness of operators. We investigate the
relation between the classical and semi-classical quantization and employ its
applications to Schatten-Von Neumann classes on $\ell^2( \hbar \mathbb{Z}^n)$.
We establish Gårding and sharp Gårding inequalities, with an
application to the well-posedness of parabolic equations on the lattice $\hbar
\mathbb{Z}^n$. Finally we verify that in the limiting case where $\hbar
\rightarrow 0$ the semi-classical calculus of pseudo-differential operators
recovers the classical Euclidean calculus, but with a twist. | Linda N. A. Botchway, Marianna Chatzakou, Michael Ruzhansky | 2023-06-18T16:41:39Z | http://arxiv.org/abs/2306.10595v1 | # Semi-classical pseudo-differential operators on \(\hbar\mathbb{Z}^{n}\) and applications
###### Abstract.
In this paper we consider the semiclassical version of pseudo-differential operators on the lattice space \(\hbar\mathbb{Z}^{n}\). The current work is an extension of the previous work [1] and agrees with it in the limit of the parameter \(\hbar\to 1\). The various representations of the operators will be studied as well as the composition, transpose, adjoint and the link between ellipticity and parametrix of operators. We also give the conditions for the \(\ell^{p}\), weighted \(\ell^{2}\) boundedness and \(\ell^{p}\) compactness of operators. We investigate the relation between the classical and semi-classical quantization in the spirit of [13] and [13] and employ its applications to Schatten-Von Neumann classes on \(\ell^{2}(\hbar\mathbb{Z}^{n})\). We establish Garding and sharp Garding inequalities, with an application to the well-posedness of parabolic equations on the lattice \(\hbar\mathbb{Z}^{n}\). Finally we verify that in the limiting case where \(\hbar\to 0\) the semi-classical calculus of pseudo-differential operators recovers the classical Euclidean calculus, but with a twist.
Key words and phrases: Semi-classical pseudo-differential operators, lattice, calculus, kernel, ellipticity, difference equations, Fourier integral operators, Gårding inequality. 2020 Mathematics Subject Classification: 58J40, 35S05, 35S30, 42B05, 47G30. L. N. A. Botchway is supported by the Carnegie Corporation of New York (Banga-Africa project at the University of Ghana). M. Ruzhansky is partially supported by the FWO Odysseus 1 grant G.0H94.18N: Analysis and Partial Differential Equations, the Methusalem programme of the Ghent University Special Research Fund (BOF) (Grant number 01M01021) and by EPSRC grant EP/R003025/2. M. Chatzakou is a postdoctoral fellow of the Research Foundation - Flanders (FWO) under the postdoctoral grant No 12B1223N. L. N. A. Botchway is grateful to Prof. Benoit F. Sehba for the supervision of her PhD thesis and for mathematical discussions.
## 1. Introduction
The main aim of this work is to develop a calculus of pseudo-differential operators on the lattices
\[\hbar\mathbb{Z}^{n}=\{x\in\mathbb{R}^{n}:x=\hbar k\,,\quad k\in\mathbb{Z}^{n} \}\,,\]
where \(\hbar\in(0,1]\) is a small parameter. The particular case when \(\hbar=1\) has been considered in [1]. Our analysis will allow to solve difference equations on the lattice \(\hbar\mathbb{Z}^{n}\) that can appear either as the discretisation of the continuous counterpart, or naturally in modelling problems such as the visualization of physical phenomena etc. We also investigate the behaviour of the calculus in the limit \(\hbar\to 0\).
The recent work [1] on a semi-classical version of the nonhomogeneous heat equation on \(\hbar\mathbb{Z}^{n}\) is one of the main motivations for our analysis. More generally, the analysis of Cauchy problems with the space variable on the lattice, see e.g. the parabolic Anderson model [13] with many applications in real-world problems, stimulates the current work and in particular the development of the semi-classical calculus. Importantly, and still referring to the example of the parabolic Anderson model, the investigation of the limiting case \(\hbar\to 0\) allows for the study of the continuous analogue of the parabolic Anderson model [15] starting from its discretized analysis.
Let us consider a rigorous and rather simple example of a discretised difference equation: for \(g\) a function on the lattice \(\hbar\mathbb{Z}^{n}\) and \(a\in\mathbb{C}\), we consider the equation:
\[\sum_{j=1}^{n}\Big{(}f(k+\hbar v_{j})+f(k-\hbar v_{j})\Big{)}-2af(k)=g(k),\quad k \in\hbar\mathbb{Z}^{n}, \tag{1.1}\]
where \(v_{j}=(0,\cdots,0,1,0,\cdots)\in\mathbb{Z}^{n}\), and the only non-zero element of the vector \(v_{j}\) is the \(j^{\text{th}}\) element and is equal to \(1\). In the case where \(\text{Re}(a)\neq 0\), and \(g\in\ell^{2}(\hbar\mathbb{Z}^{n})\) the solution \(f\) to the equation (1.1) is given by the following expression
\[f(k)=\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}k\cdot\theta}\frac{1}{2\sum_{ j=1}^{n}\cos(2\pi\theta_{j})+a}\widehat{g}(\theta)\text{d}\theta, \tag{1.2}\]
where \(\widehat{g}\) denotes the Fourier transform of \(g\) on the lattice \(\hbar\mathbb{Z}^{n}\); that is the expression
\[\widehat{g}(\theta)=\sum_{k\in\hbar\mathbb{Z}^{n}}e^{-2\pi\frac{i}{\hbar}k \cdot\theta}g(k),\quad\theta\in\mathbb{T}^{n}\,. \tag{1.3}\]
Additionally, it follows that whenever \(g\in\ell^{2}(\hbar\mathbb{Z}^{n})\) then we also have \(f\in\ell^{2}(\hbar\mathbb{Z}^{n})\). Importantly, we know that the formula (1.2) for the solution to the equation (1.1) can be extended to give solutions also in the case where \(g\) is any tempered growth
distribution, i.e., when \(g\in\mathcal{S}^{\prime}(\hbar\mathbb{Z}^{n})\). For instance, whenever \(g\) satisfies the estimate
\[\sum_{k\in\hbar\mathbb{Z}^{n}}(1+|k|)^{s}|g(k)|^{2}<\infty\]
then the solution \(f\) satisfies the same estimate; that is we have
\[\sum_{k\in\hbar\mathbb{Z}^{n}}(1+|k|)^{s}|f(k)|^{2}<\infty\,,\]
see Example (3) in Section 8.
We point out that operators of the form (1.1) extend the classical notion of difference operators in a discrete setting, in particular on the lattice \(\hbar\mathbb{Z}^{n}\). Indeed, as shown in Section 7, the calculus of pseudo-differential operators in our semi-classical setting agrees with the classical pseudo-differential calculus in the Euclidean setting. Thus, in this work we adopt the terminology _pseudo-differential operators_, as it already exists in the literature, see e.g. [10], to describe the operators that we consider, emphasising in this way that they extend the usual class of difference operators into a \(*\)-algebra.
Hence, the analysis here aims to develop a global calculus of pseudo-differential operators on the lattice \(\hbar\mathbb{Z}^{n}\) that will be employed to deal with problems around
* the type of the difference equations that can be solved within the developed framework;
* the properties of the function \(g\) as in (1.1) that can be transferred to the solution \(f\);
* the solvability of equation of the form (1.1) in the case where the coefficient of the operators depend also on the variable \(k\in\hbar\mathbb{Z}^{n}\).
## 2. Preliminary notions and tools
In this section we aim to recall the necessary toolkit and notions that shall be used for our analysis developed in later sections.
Let us start with the formal definition of the Fourier transform of a function \(f\in\ell^{1}(\hbar\mathbb{Z}^{n})\) (the semi-classical Fourier transform), which is given by
\[\mathcal{F}_{\hbar\mathbb{Z}^{n}}f(\theta):=\hat{f}(\theta):=\sum_{k\in\hbar \mathbb{Z}^{n}}e^{-2\pi\frac{i}{\hbar}k\cdot\theta}f(k),\quad\theta\in \mathbb{T}^{n}=\mathbb{R}^{n}/\mathbb{Z}^{n}. \tag{2.1}\]
In the formula (2.1), as well as in the sequel, the product \(k\cdot\theta\) for \(k=\hbar(k_{1},\cdots,k_{n})\in\hbar\mathbb{Z}^{n}\), \(\theta=(\theta_{1},\cdots,\theta_{n})\in\mathbb{T}^{n}\), is calculated by the following expression
\[k\cdot\theta=\hbar\sum_{j=1}^{n}k_{j}\theta_{j}\,.\]
The Plancherel formula for the lattice \(\hbar\mathbb{Z}^{n}\) reads
\[\sum_{k\in\hbar\mathbb{Z}^{n}}|f(k)|^{2}=\int_{\mathbb{T}^{n}}|\widehat{f}( \theta)|^{2}\mathrm{d}\theta\,. \tag{2.2}\]
To prove the inverse Fourier transform we perform the following computations: Let \(f\in\ell^{1}(\hbar\mathbb{Z}^{n})\). If we set \(k=\hbar l\), \(l\in\mathbb{Z}^{n}\), and \(f_{\hbar}(l):=f(\hbar l)\), then \(f_{\hbar}\) is a function from \(\mathbb{Z}^{n}\) to \(\mathbb{C}\) and using (2.1) we have:
\[\mathcal{F}_{\hbar\mathbb{Z}^{n}}f(\theta)=\sum_{k\in\hbar\mathbb{Z}^{n}}e^{-2\pi\frac{i}{\hbar}k\cdot\theta}f(k)=\sum_{l\in\mathbb{Z}^{n}}e^{-2\pi il\cdot\theta}f(\hbar l)=\mathcal{F}_{\mathbb{Z}^{n}}f_{\hbar}(\theta)\,.\]
Now, using the inverse Fourier transform on the lattice \(\mathbb{Z}^{n}\) we can write
\[f_{\hbar}(l)=\int_{\mathbb{T}^{n}}e^{2\pi il\cdot\theta}\mathcal{F}_{\mathbb{ Z}^{n}}f_{\hbar}(\theta)\mathrm{d}\theta=\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{ \hbar}k\cdot\theta}\widehat{f}(\theta)\mathrm{d}\theta\,,\]
and thus we have shown that the inverse Fourier transform on \(\hbar\mathbb{Z}^{n}\) is given by
\[f(k)=\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}k\cdot\theta}\widehat{f}(\theta)\mathrm{d}\theta,\quad k\in\hbar\mathbb{Z}^{n}\,. \tag{2.3}\]
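Since the subsequent constructions rest on (2.1)–(2.3), a short one-dimensional numerical sanity check may be helpful; the lattice truncation, the grid on \(\mathbb{T}\), and the test function below are illustrative assumptions rather than anything prescribed by the text.

```python
import numpy as np

hbar = 0.1
M, N = 60, 512                                   # lattice truncation and theta-grid size
k = hbar * np.arange(-M, M + 1)                  # the points of hbar*Z that we keep
f = np.exp(-k**2)                                # a rapidly decaying test function

theta = np.arange(N) / N                         # grid on T = [0, 1)
# (2.1): F_{hbar Z} f(theta) = sum_k exp(-2*pi*i*k*theta/hbar) f(k)
fhat = np.exp(-2j * np.pi * np.outer(theta, k / hbar)) @ f

# (2.3): inversion approximated by the rectangle rule on the theta-grid
f_rec = (np.exp(2j * np.pi * np.outer(k / hbar, theta)) @ fhat) / N
print("max inversion error :", np.max(np.abs(f_rec - f)))

# (2.2): Plancherel identity
print("Plancherel mismatch :", abs(np.sum(np.abs(f)**2) - np.mean(np.abs(fhat)**2)))
```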
A measurable function \(\sigma_{\hbar}:\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}\to\mathbb{C}\) defines an operator \(\mathrm{Op}_{\hbar}(\sigma_{\hbar})\), acting on suitable functions \(f\) on \(\hbar\mathbb{Z}^{n}\), by
\[\mathrm{Op}_{\hbar}(\sigma_{\hbar})f(k):=\int_{\mathbb{T}^{n}}e^{2\pi\frac{i} {\hbar}k\cdot\theta}\sigma_{\hbar}(k,\theta)\mathcal{F}_{\hbar\mathbb{Z}^{n}} f(\theta)\,\mathrm{d}\theta\,, \tag{2.4}\]
provided that some reasonable restrictions hold for \(\sigma_{\hbar}\). The operator (2.4) shall be called a semi-classical pseudo-differential operator on \(\hbar\mathbb{Z}^{n}\) corresponding to the _symbol_ \(\sigma_{\hbar}(k,\theta)\) on \(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}\), or in short a \(\Psi_{\hbar}DO\). The process of associating a symbol \(\sigma_{\hbar}\) to a pseudo-differential operator \(\mathrm{Op}_{\hbar}(\sigma_{\hbar})\), i.e., the mapping \(\sigma_{\hbar}\mapsto\mathrm{Op}_{\hbar}(\sigma_{\hbar})\), is called the \(\hbar\mathbb{Z}^{n}\)-quantization, or simply the _quantization_.
The space of rapidly decreasing functions \(\mathcal{S}(\hbar\mathbb{Z}^{n})\) on the lattice shall be called the _Schwartz space_. This consists of the functions \(\varphi:\hbar\mathbb{Z}^{n}\to\mathbb{C}\) such that for every \(N<\infty\) there exists a constant \(c_{\varphi,N}\) (depending on the function \(\varphi\) and on the choice of \(N\)) so that
\[|\varphi(k)|\leq c_{\varphi,N}(1+|k|)^{-N}\,,\quad\text{for all}\quad k\in \hbar\mathbb{Z}^{n}\,,\]
where we have denoted by \(|k|\) the \(\ell^{2}\)-norm of \(k\), i.e., we have \(|k|=\hbar\left(\sum_{j=1}^{n}k_{j}^{2}\right)^{\frac{1}{2}}\). The topology of \(\mathcal{S}(\hbar\mathbb{Z}^{n})\) is given by the seminorms \(p_{j}(\varphi)\), \(j\in\mathbb{N}_{0}\)1, where \(p_{j}(\varphi):=\sup_{k\in\hbar\mathbb{Z}^{n}}(1+|k|)^{j}|\varphi(k)|\). The space of _tempered distributions_ \(\mathcal{S}^{\prime}(\hbar\mathbb{Z}^{n})\) is the dual of \(\mathcal{S}(\hbar\mathbb{Z}^{n})\); that is, the continuous linear functionals on \(\mathcal{S}(\hbar\mathbb{Z}^{n})\).
Footnote 1: Throughout the paper we will use the notation \(\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\).
**Informal discussion.** The main underlying idea behind the definition of the pseudo-differential operator as (2.4) is that given a linear continuous operator \(A:\ell^{\infty}(\hbar\mathbb{Z}^{n})\to\mathcal{S}^{\prime}(\hbar\mathbb{Z}^ {n})\), the image of the functions \(e_{\theta}=(k\mapsto e^{2\pi\frac{i}{\hbar}k\cdot\theta})\) for \(\theta\in\mathbb{T}^{n}\) via the operator \(A\) completely determines the operator \(A\). To this end we define the symbol \(\sigma_{\hbar}\) of operator \(A=\mathrm{Op}(\sigma_{\hbar})\) by testing the operator \(A\) on the functions \(e_{\theta}\) yielding \(Ae_{\theta}(k)=e^{2\pi\frac{i}{\hbar}k\cdot\theta}\sigma_{\hbar}(k,\theta)\), i.e., we define
\[\sigma_{\hbar}(k,\theta):=e^{-2\pi\frac{i}{\hbar}k\cdot\theta}Ae_{\theta}(k)\,, \tag{2.5}\]
see Proposition 3.9 for the proof of (2.5).
We claim that for a symbol \(\sigma_{\hbar}\) as in (2.5) the operator \(A\) is indeed the operator arising as the quantization of \(\sigma_{\hbar}\). Indeed, with the use of the inverse Fourier transform (2.3) we have
\[Af(k) = A\left(\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}k\cdot\theta} \widehat{f}(\theta)\,\mathrm{d}\theta\right)\] \[= \int_{\mathbb{T}^{n}}A\left(e^{2\pi\frac{i}{\hbar}k\cdot\theta} \right)\widehat{f}(\theta)\,\mathrm{d}\theta\] \[= \int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}k\cdot\theta}\sigma_{ \hbar}(k,\theta)\widehat{f}(\theta)\,\mathrm{d}\theta=\mathrm{Op}(\sigma_{ \hbar})f(k)\,,\]
and we have proved our claim.
## 3. Representation of \(\Psi_{\hbar}\)Do's
### Symbol classes
Let us begin this section by defining the notion of difference operators (or semi-classical difference operators) in our setting; these are exactly the operators that serve as the analogues of the derivatives with respect to the Fourier variable in the Euclidean setting.
**Definition 3.1** (Semi-Classical Difference Operator).: For \(\alpha=(\alpha_{1},\ldots,\alpha_{n})\), we define the difference operator \(\Delta_{\hbar}^{\alpha}\) in our setting, as the operator acting on functions \(g:\hbar\mathbb{Z}^{n}\to\mathbb{C}\) via
\[\Delta_{\hbar}^{\alpha}g(k)=\frac{1}{\hbar^{|\alpha|}}\int_{\mathbb{T}^{n}}e^ {2\pi\frac{i}{\hbar}k\cdot\theta}\Big{(}e^{2\pi\frac{i}{\hbar}\theta}-1\Big{)} ^{\alpha}\widehat{g}(\theta)d\theta\,, \tag{3.1}\]
where we have used the notation
\[\left(e^{2\pi\frac{i}{\hbar}\theta}-1\right)^{\alpha}=\left(e^{2\pi\frac{i}{ \hbar}\theta_{1}}-1\right)^{\alpha_{1}}\cdots\left(e^{2\pi\frac{i}{\hbar} \theta_{n}}-1\right)^{\alpha_{n}}. \tag{3.2}\]
The usual (semi-classical) difference operators \(\Delta_{\hbar,j}\), \(j=1,\cdots,n\), on \(\hbar\mathbb{Z}^{n}\) are as follows: Let \(v_{j}=(0,\ldots,0,1,0,\ldots,0)\) be the vector with \(1\) is at the \(j^{th}\) position. Then the formula for the operator \(\Delta_{\hbar,j}\) when acting on \(g\) is given by
\[\Delta_{\hbar,j}g(k) = \frac{1}{\hbar}\left[\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}( k+\hbar v_{j})\cdot\theta}\widehat{g}(\theta)d\theta-\int_{\mathbb{T}^{n}}e^{2 \pi\frac{i}{\hbar}k\cdot\theta}\widehat{g}(\theta)d\theta\right] \tag{3.3}\] \[= \frac{g(k+\hbar v_{j})-g(k)}{\hbar}\,. \tag{3.4}\]
It is then easy to check the following decomposition
\[\Delta_{\hbar}^{\alpha}=\Delta_{\hbar,1}^{\alpha_{1}}\cdot\cdots\cdot\Delta_{ \hbar,n}^{\alpha_{n}}\,. \tag{3.5}\]
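For a quick illustration (with an assumed test function and lattice parameter), the following one-dimensional sketch implements the forward difference (3.4) and its iterates (3.5); applied to the restriction of a smooth function it approximates the corresponding derivatives as \(\hbar\) decreases.

```python
import numpy as np

def delta(g, k, hbar):
    """Forward difference (3.4): (g(k + hbar) - g(k)) / hbar."""
    return (g(k + hbar) - g(k)) / hbar

def delta_pow(g, k, hbar, order):
    """Iterated difference Delta_{hbar}^{order} g(k), cf. the decomposition (3.5)."""
    if order == 0:
        return g(k)
    return (delta_pow(g, k + hbar, hbar, order - 1)
            - delta_pow(g, k, hbar, order - 1)) / hbar

g = np.sin                                  # restriction of a smooth function to hbar*Z
for hbar in (0.5, 0.1, 0.02):
    k = 3 * hbar                            # a lattice point
    print(hbar,
          delta(g, k, hbar) - np.cos(k),          # first difference vs first derivative
          delta_pow(g, k, hbar, 2) + np.sin(k))   # second difference vs second derivative
```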
**Remark 3.2**.:
* We note that formulae (3.5) and (3.3) give an alternative characterisation to representation (3.1). Hence, their combination can be considered instead as the definition of the (semi-classical) difference operators \(\Delta_{\hbar\mathbb{Z}^{n}}\).
* It is easy to verify that the difference operators satisfy many useful properties, including the Leibniz formula, summation by parts formula, and Taylor expansion formula; see [16] and [16, Section 3.3].
We point out that the representation formula (3.1) is applicable also to \(g\in\mathcal{S}^{\prime}(\hbar\mathbb{Z}^{n})\). Indeed, in this case we have \(\widehat{g}\in\mathcal{D}^{\prime}(\mathbb{T}^{n})\) and the formula (3.1) can be viewed in terms of the distributional duality on \(\mathbb{T}^{n}\); i.e., it reads as follows
\[\Delta_{h}^{\alpha}g(k)=\frac{1}{\hbar^{|\alpha|}}\langle\widehat{g},e^{2\pi\frac{i}{\hbar}k\cdot\theta}(e^{2\pi i\theta}-1)^{\alpha}\rangle\,. \tag{3.6}\]
The following operators shall be used in the definition of the symbol classes. Additionally, they are useful in toroidal analysis, and their precise form, see (3.7), is related to the Stirling numbers; see [14, Section 3.4] for a detailed discussion.
**Definition 3.3** (Partial derivatives on \(\mathbb{T}^{n}\)).: For our purposes it is useful to introduce the partial derivatives type operators on \(\mathbb{T}^{n}\) as follows. For \(\beta\in\mathbb{N}_{0}^{n}\) we define:
\[\begin{split} D^{(\beta)}_{h,\theta}&:=D^{(\beta _{1})}_{h,\theta_{1}}\cdots D^{(\beta_{n})}_{h,\theta_{n}}\,,\\ D^{\beta}_{h,\theta}&:=D^{\beta_{1}}_{h,\theta_{1 }}\cdots D^{\beta_{n}}_{h,\theta_{n}}\,,\end{split}\]
where for \(\beta_{j}\in\mathbb{N}_{0}\),
\[\begin{split} D^{(\beta_{j})}_{h,\theta_{j}}&:=\hbar^{\beta_{j}}\prod_{\ell=0}^{\beta_{j}-1}\left(\frac{1}{2\pi i}\frac{\partial}{\partial\theta_{j}}-\ell\right)\,,\\ D^{\beta_{j}}_{h,\theta_{j}}&:=\hbar^{\beta_{j}}\left(\frac{1}{2\pi i}\frac{\partial}{\partial\theta_{j}}\right)^{\beta_{j}}\,.\end{split} \tag{3.7}\]
By the above it follows that
\[D^{(\beta)}_{h,\theta}=\hbar^{|\beta|}\prod_{\ell=0}^{\beta_{1}-1}\left(\frac{1}{2\pi i}\frac{\partial}{\partial\theta_{1}}-\ell\right)\cdots\prod_{\ell=0}^{\beta_{n}-1}\left(\frac{1}{2\pi i}\frac{\partial}{\partial\theta_{n}}-\ell\right)\,.\]
As usual, we denote \(D^{0}_{h,\theta}=D^{(0)}_{h,\theta}=I\).
We can then proceed to the definition of the classes of symbols that correspond to the \(\hbar\mathbb{Z}^{n}\) quantization of operators.
**Definition 3.4** (Symbol Classes \(S^{\mu}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\)).: Let \(\rho,\delta\in\mathbb{R}\). We say that a function \(\sigma_{h}:\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}\to\mathbb{C}\) is a _symbol_ that belongs to the (semi-classical) symbol class \(S^{\mu}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\) if \(\sigma_{h}(k,\cdot)\in C^{\infty}(\mathbb{T}^{n})\) for all \(k\in\hbar\mathbb{Z}^{n}\), and for all multi-indices \(\alpha,\beta\in\mathbb{N}_{0}^{n}\), there exists a positive constant \(C_{\alpha,\beta}\) so that
\[|D^{(\beta)}_{h,\theta}\Delta^{\alpha}_{h,k}\sigma_{h}(k,\theta)|\leq C_{ \alpha,\beta}(1+|k|)^{\mu-\rho|\alpha|+\delta|\beta|}, \tag{3.8}\]
for all \(k\in\hbar\mathbb{Z}^{n}\), \(\theta\in\mathbb{T}^{n}\).
If \(\rho=1\) and \(\delta=0\), we will denote simply \(S^{\mu}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}):=S^{\mu}_{1,0}(\hbar\mathbb{ Z}^{n}\times\mathbb{T}^{n})\). As noted already in Section 2 we shall denote by \(\operatorname{Op}_{h}(\sigma_{h})\) the operator with symbol \(\sigma_{h}\) given by
\[\operatorname{Op}_{h}(\sigma_{h})f(k):=\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{ \hbar}k\cdot\theta}\sigma_{h}(k,\theta)\mathcal{F}_{\hbar\mathbb{Z}^{n}}f( \theta)\,\mathrm{d}\theta\,.\]
The family of (semi-classical) pseudo-differential operators with symbols in the class \(S^{\mu}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\) will be denoted by \(\operatorname{Op}_{h}(S^{\mu}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{ T}^{n}))\).
We sometimes write \(\Delta^{\alpha}_{h}=\Delta^{\alpha}_{h,k}\) to underline the fact that these difference operators act with respect to the lattice variable \(k\in\hbar\mathbb{Z}^{n}\).
**Remark 3.5**.: The symbol classes \(S^{\mu}_{\rho,\delta}(\mathbb{T}^{n}\times\mathbb{Z}^{n})\) (that is, modulo interchanging the order of the lattice and toroidal variables \(k\) and \(\theta\)) have been extensively studied in [14] in the toroidal setting \(\mathbb{T}^{n}\). We also refer the interested reader to the monograph [14, Chapter 4] for a more thorough analysis of their properties. Additionally, we note that the equivalence of the \(S^{\mu}_{\rho,\delta}(\mathbb{T}^{n}\times\mathbb{Z}^{n})\) classes, in the toroidal and compact Lie group case, to the usual Hormander classes is proven in [14].
The _smoothing_ class of operators introduced below is related to the notion of invertibility of pseudo-differential operators that is discussed in Section 4.
**Definition 3.6** (Smoothing semi-classical pseudo-differential operators).: We say that a symbol \(\sigma_{\hbar}\) is of order \(-\infty\), and we write \(\sigma_{\hbar}\in S^{-\infty}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\), if for all \((k,\theta)\in\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}\) we have
\[|D^{(\beta)}_{\theta}\Delta^{\alpha}_{h,k}\sigma(k,\theta)|\leq C_{\alpha, \beta,N}(1+|k|)^{-N}\,,\]
for all \(N\in\mathbb{N}\). The latter condition is equivalent to writing that \(\sigma_{\hbar}\in S^{\mu}_{1,0}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\) for all \(\mu\in\mathbb{R}\). Formally we have
\[S^{-\infty}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}):=\bigcap_{\mu\in\mathbb{ R}}S^{\mu}_{1,0}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\,.\]
The corresponding (semi-classical) pseudo-differential operators \(\operatorname{Op}(\sigma_{\hbar})\) may be called _smoothing pseudo-differential operators_.2
Footnote 2: In the lattice setting the terminology “smoothing” is used abusively; in general, in the discrete setting, smoothing operators are those whose symbols have rapid decay.
### Kernel of \(\Psi_{h}\)DO's
Using the Fourier transform (1.3) we deduce an alternative representation of a semi-classical pseudo-differential operator, the so-called _kernel representation_.
For suitable functions \(f\) we can write:
\[\begin{split}\mathrm{Op}_{h}(\sigma_{h})f(k)&=\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}k\cdot\theta}\sigma_{h}(k,\theta)\mathcal{F}_{\hbar\mathbb{Z}^{n}}f(\theta)\,\mathrm{d}\theta\\ &=\int_{\mathbb{T}^{n}}\sum_{m\in\hbar\mathbb{Z}^{n}}e^{2\pi\frac{i}{\hbar}(k-m)\cdot\theta}\sigma_{h}(k,\theta)f(m)\,\mathrm{d}\theta\\ &=\sum_{m\in\hbar\mathbb{Z}^{n}}\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}(k-m)\cdot\theta}\sigma_{h}(k,\theta)f(m)\,\mathrm{d}\theta\\ &=\sum_{l\in\hbar\mathbb{Z}^{n}}\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}l\cdot\theta}\sigma_{h}(k,\theta)f(k-l)\,\mathrm{d}\theta\\ &=\sum_{l\in\hbar\mathbb{Z}^{n}}\kappa(k,l)f(k-l)\\ &=\sum_{m\in\hbar\mathbb{Z}^{n}}K(k,m)f(m)\,.\end{split}\]
Thus the kernel of \(\mathrm{Op}_{h}(\sigma_{h})\) is given by
\[K(k,m)=\kappa(k,k-m)\quad\text{where}\quad\kappa(k,l)=\int_{\mathbb{T}^{n}}e^{ 2\pi\frac{i}{h}l\cdot\theta}\sigma_{h}(k,\theta)\,\mathrm{d}\theta. \tag{3.9}\]
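As a concrete, purely illustrative check of (3.9), one may take the forward difference \(\Delta_{\hbar,1}\) in one dimension, whose symbol obtained from (3.3)–(3.4) is \(\sigma(\theta)=(e^{2\pi i\theta}-1)/\hbar\) independently of \(k\), compute \(\kappa(k,l)\) numerically, and verify that the resulting kernel reproduces the forward difference; the numerical parameters below are assumptions made only for this example.

```python
import numpy as np

hbar, N = 0.25, 256
theta = np.arange(N) / N
sigma = (np.exp(2j * np.pi * theta) - 1.0) / hbar        # symbol of the forward difference

# kappa(k, l) = int_T exp(2*pi*i*l*theta/hbar) sigma(theta) dtheta, with l = hbar*m
m = np.arange(-4, 5)
kappa = np.array([np.mean(np.exp(2j * np.pi * mm * theta) * sigma) for mm in m])
print(np.round(kappa.real, 10))   # 1/hbar at m = -1, -1/hbar at m = 0, zero otherwise

# Consistency with (3.9): sum_l kappa(k,l) f(k-l) equals (f(k + hbar) - f(k)) / hbar
f = lambda t: np.exp(-t**2)
k0 = 2 * hbar
lhs = np.sum(kappa * f(k0 - hbar * m)).real
print(lhs, (f(k0 + hbar) - f(k0)) / hbar)
```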
The next theorem establishes an important property of the kernel \(K(k,m)\) of a semi-classical pseudo-differential operator in the class of operators \(\mathrm{Op}(S^{\mu}_{\rho,\delta})\).
**Theorem 3.7**.: _For \(\delta\geq 0\), let \(\sigma_{h}\in S^{\mu}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\). Then, the kernel \(K(k,m)\) of the pseudo-differential operator \(\mathrm{Op}(\sigma_{h})\) satisfies the following property_
\[\Big{|}K\Big{(}k,m\Big{)}\Big{|}\leq C_{Q}\Big{(}1+\big{|}k\big{|}\Big{)}^{\mu +2Q\delta}\Big{(}1+\frac{1}{\hbar}\big{|}k-m\Big{|}\Big{)}^{-2Q}\,,\quad \forall k,m\in\hbar\mathbb{Z}^{n}\,, \tag{3.10}\]
_for all \(Q\in\mathbb{N}_{0}\), and for some positive constant \(C_{Q}>0\)._
**Remark 3.8**.: Before turning over to prove Theorem 3.7 let us note that, in contrast to the case of pseudo-differential operators on \(\mathbb{R}^{n}\) or \(\mathbb{T}^{n}\), the kernel \(K(k,m)\) is well defined on the diagonal \(k=m\) due to the discrete nature of the lattice \(\hbar\mathbb{Z}^{n}\times\hbar\mathbb{Z}^{n}\).
Proof of Theorem 3.7.: Let us first assume that \(k=m\). Then, by definition of the kernel we have
\[K\Big{(}k,k\Big{)}=\kappa(k,0)=\int_{\mathbb{T}^{n}}\sigma_{h}(k,\theta) \mathrm{d}\theta, \tag{3.11}\]
which immediately satisfies (3.10) by the definition of the symbol class \(S^{\mu}_{\rho,\delta}\). In the case where \(k\neq m\), then also \(l=k-m\neq 0\). Let the Laplacian on the torus \(\mathbb{T}^{n}\) be denoted by \(\mathcal{L}_{\theta}\). Then straightforward computations give
\[(1-\mathcal{L}_{\theta})e^{2\pi\frac{i}{h}l\cdot\theta}=\left(1-\sum_{j=1}^{n} \frac{\partial^{2}}{\partial\theta_{j}^{2}}\right)e^{2\pi\frac{i}{h}l\cdot \theta}=\big{(}1+\frac{4\pi^{2}}{\hbar^{2}}\big{|}l\big{|}^{2}\big{)}e^{2\pi \frac{i}{h}l\cdot\theta}\]
which implies that
\[e^{2\pi\frac{i}{\hbar}l\cdot\theta}=\frac{(1-\mathcal{L}_{\theta})}{1+\frac{4\pi^ {2}}{\hbar^{2}}\big{|}l\big{|}^{2}}e^{2\pi\frac{i}{\hbar}l\cdot\theta}\,. \tag{3.12}\]
Substituting the above in the expression for \(\kappa\) as in (3.9) we have
\[\kappa\Big{(}k,l\Big{)} = \int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}l\cdot\theta}\sigma_{ \hbar}(k,\theta)\mathrm{d}\theta\] \[= \int_{\mathbb{T}^{n}}\Bigg{(}\frac{(1-\mathcal{L}_{\theta})^{Q}} {\Big{(}1+\frac{4\pi^{2}}{\hbar^{2}}\big{|}l\big{|}^{2}\Big{)}^{Q}}e^{2\pi \frac{i}{\hbar}l\cdot\theta}\Bigg{)}\sigma_{\hbar}(k,\theta)\mathrm{d}\theta\] \[= \Big{(}1+\frac{4\pi^{2}}{\hbar^{2}}\big{|}l\big{|}^{2}\Big{)}^{-Q }\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}l\cdot\theta}\big{(}1-\mathcal{L} _{\theta}\big{)}^{Q}\sigma_{\hbar}(k,\theta)\mathrm{d}\theta.\]
Hence, by the above, taking the modulus of \(\kappa(k,l)\) we have
\[|\kappa(k,l)|\leq\Big{(}1+\frac{4\pi^{2}}{\hbar^{2}}\big{|}l\big{|}^{2}\Big{)} ^{-Q}|\big{(}1-\mathcal{L}_{\theta}\big{)}^{Q}\sigma_{\hbar}(k,\theta)|\,,\]
which in turn by the assumption on the symbol \(\sigma_{\hbar}\) gives
\[\big{|}\kappa\Big{(}k,l\Big{)}\big{|}\leq C_{Q}\Big{(}1+\big{|}k\big{|}\Big{)} ^{\mu+2Q\delta}\Big{(}1+\frac{4\pi^{2}}{\hbar^{2}}\big{|}l\big{|}^{2}\Big{)}^{ -Q}\,,\]
for all \(Q\geq 0\). The latter gives the desired estimate if one takes into account (3.9). The proof of Theorem 3.7 is now complete.
Similarly to the classical cases, one can extract the symbol of a given semi-classical pseudo-differential operator on \(\hbar\mathbb{Z}^{n}\). The next result provides us with the corresponding formula.
**Proposition 3.9**.: _The symbol \(\sigma_{\hbar}\) of a semi-classical pseudo-difference operator \(T\) on \(\hbar\mathbb{Z}^{n}\) is given by_
\[\sigma_{\hbar}(k,\theta)=e^{-2\pi\frac{i}{\hbar}k\cdot\theta}Te_{\theta}(k), \tag{3.13}\]
_where \(e_{\theta}(k)=e^{2\pi\frac{i}{\hbar}k\cdot\theta},\) for all \(k\in\hbar\mathbb{Z}^{n}\) and for \(\theta\in\mathbb{T}^{n}\)._
Proof.: For \(\omega\in\mathbb{T}^{n}\) let \(e_{\omega}(l):=e^{2\pi\frac{i}{\hbar}l\cdot\omega}\) where \(l\in\hbar\mathbb{Z}^{n}\). Using (2.1), the Fourier transform of \(e_{\omega}\) is given by
\[\widehat{e_{\omega}}(\theta)=\sum_{l\in\hbar\mathbb{Z}^{n}}e^{-2\pi\frac{i}{ \hbar}l\cdot\theta}e^{2\pi\frac{i}{\hbar}l\cdot\omega},\]
Plugging in the last expression into the formula (2.4) for the symbol representation of the operator \(\operatorname{Op}_{\hbar}(\sigma_{\hbar})\) yields
\[\operatorname{Op}_{\hbar}(\sigma_{\hbar})e_{\omega}(k) = \int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}k\cdot\theta}\sigma_{ \hbar}(k,\theta)\widehat{e_{\omega}}(\theta)\mathrm{d}\theta\] \[= \int_{\mathbb{T}^{n}}\sum_{l\in\hbar\mathbb{Z}^{n}}e^{2\pi\frac{ i}{\hbar}k\cdot\theta}\sigma_{\hbar}(k,\theta)\left[e^{-2\pi\frac{i}{\hbar}l \cdot\theta}e^{2\pi\frac{i}{\hbar}l\cdot\omega}\right]\mathrm{d}\theta\] \[= \int_{\mathbb{T}^{n}}\sum_{l\in\hbar\mathbb{Z}^{n}}e^{-2\pi\frac{ i}{\hbar}(l-k)\cdot\theta}\sigma_{\hbar}(k,\theta)e^{2\pi\frac{i}{\hbar}l \cdot\omega}\mathrm{d}\theta\] \[= \sum_{l\in\hbar\mathbb{Z}^{n}}\widehat{\sigma_{\hbar}}(k,l-k)e^ {2\pi\frac{i}{\hbar}l\cdot\omega}\] \[= \sum_{m\in\hbar\mathbb{Z}^{n}}\widehat{\sigma_{\hbar}}(k,m)e^{2 \pi\frac{i}{\hbar}m\cdot\omega}e^{2\pi\frac{i}{\hbar}k\cdot\omega}\qquad( \text{where }m=l-k)\] \[= \sigma_{\hbar}(k,\omega)e^{2\pi\frac{i}{\hbar}k\cdot\omega},\]
where \(\widehat{\sigma_{\hbar}}\) stands for the toroidal Fourier transform of \(\sigma_{\hbar}\) in the second variable, and for the last equality we have used the formula (2.3). This gives the proof of formula (3.13).
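A quick numerical illustration of (3.13), again in one dimension and with illustrative parameters, is to apply it to the forward difference operator of (3.4): the recovered symbol is \((e^{2\pi i\theta}-1)/\hbar\) and does not depend on the lattice point \(k\).

```python
import numpy as np

hbar = 0.25

def T(g, k):
    """Forward difference (3.4) acting on a function g of the lattice variable."""
    return (g(k + hbar) - g(k)) / hbar

theta = np.linspace(0.0, 1.0, 7, endpoint=False)
for k in hbar * np.array([0, 3, -5]):
    e_theta = lambda t: np.exp(2j * np.pi * t * theta / hbar)          # e_theta(t)
    sigma = np.exp(-2j * np.pi * k * theta / hbar) * T(e_theta, k)     # formula (3.13)
    print(np.max(np.abs(sigma - (np.exp(2j * np.pi * theta) - 1.0) / hbar)))
```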
### Semi-classical amplitudes
Writing out the semi-classical Fourier transform (2.1) as an infinite sum suggests the following notation for the _amplitude representation_ of the pseudo-differential operator \(\operatorname{Op}_{\hbar}(\sigma_{\hbar})\):
\[\operatorname{Op}_{\hbar}(\sigma_{\hbar})f(k)=:\sum_{m\in\hbar\mathbb{Z}^{n}} \int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}(k-m)\cdot\theta}\sigma_{\hbar}(k, \theta)f(m)\mathrm{d}\theta\,. \tag{3.14}\]
Let us point out that the right-hand side of (3.14) should not be regarded as an integral operator, but rather as an operator arising via formal integration by parts. This consideration allows performing operations like exchange of summation and integral.
Formula (3.14) gives rise to a possible generalisation where we allow the symbol \(\sigma_{\hbar}\) to depend also on the variable \(m\in\hbar\mathbb{Z}^{n}\); such functions \(\sigma_{\hbar}\) shall be called _semi-classical amplitudes_. Formally we may also define operators of the form
\[Af(k)=\operatorname{Op}(a_{\hbar})f(k)=\sum_{m\in\hbar\mathbb{Z}^{n}}\int_{ \mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}(k-m)\cdot\theta}a_{\hbar}(k,m,\theta)f (m)\mathrm{d}\theta\,, \tag{3.15}\]
where \(a_{\hbar}:\hbar\mathbb{Z}^{n}\times\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n} \to\mathbb{C}\), for all \(f\in C^{\infty}(\hbar\mathbb{Z}^{n})\).
In the next definition we extend Definition 3.4 of the symbol classes \(S^{\mu}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\) to semi-classical amplitudes depending on two lattice variables, say \(k,m\in\hbar\mathbb{Z}^{n}\). The usefulness of these extended symbol classes, called _amplitude classes_, becomes apparent in Theorem 4.2 on the adjoint of a semi-classical pseudo-differential operator, since its symbol is given in terms of an amplitude.
**Definition 3.10** (Amplitude classes \(\mathcal{A}^{\mu_{1},\mu_{2}}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\hbar \mathbb{Z}^{n}\times\mathbb{T}^{n})\)).: Let \(\rho,\delta\in\mathbb{R}\). The _semi-classical amplitude class_\(\mathcal{A}^{\mu_{1},\mu_{2}}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\hbar \mathbb{Z}^{n}\times\mathbb{T}^{n})\) consists of the functions \(a_{\hbar}:\hbar\mathbb{Z}^{n}\times\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}\to \mathbb{C}\) for which we have \(a_{\hbar}(k,m,\cdot)\in C^{\infty}(\mathbb{T}^{n})\) for all \(k,m\in\hbar\mathbb{Z}^{n}\),
provided that for all multi-indices \(\alpha,\beta,\gamma\) there exists a positive constant \(C_{\alpha,\beta,\gamma}>0\) such that for some \(Q\in\mathbb{N}_{0}\) with \(Q\leq|\gamma|\) we have
\[|D_{\theta}^{(\gamma)}\Delta_{h,k}^{\alpha}\Delta_{h,m}^{\beta}a_{h}(k,m, \theta)|\leq C_{\alpha,\beta,\gamma}(1+|k|)^{\mu_{1}-\rho|\alpha|+\delta Q}(1+|m |)^{\mu_{2}-\rho|\beta|+\delta(|\gamma|-Q)}. \tag{3.16}\]
Such a function \(a_{h}\) is called a _semi-classical amplitude of order \((\mu_{1},\mu_{2})\) of type \((\rho,\delta)\)_. The operators with amplitudes in the amplitude class \(\mathcal{A}_{\rho,\delta}^{\mu_{1},\mu_{2}}(\hbar\mathbb{Z}^{n}\times\hbar \mathbb{Z}^{n}\times\mathbb{T}^{n})\) will be denoted by \(\operatorname{Op}(\mathcal{A}_{\rho,\delta}^{\mu_{1},\mu_{2}}(\hbar\mathbb{Z }^{n}\times\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}))\). Moreover, by setting \(Q=|\gamma|\) in (3.16) it is evident that
\[\operatorname{Op}(S_{\rho,\delta}^{\mu_{1}}(\hbar\mathbb{Z}^{n}\times\mathbb{ T}^{n}))\subset\operatorname{Op}(\mathcal{A}_{\rho,\delta}^{\mu_{1},\mu_{2}}( \hbar\mathbb{Z}^{n}\times\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}))\,.\]
On the other hand semi-classical pseudo-differential operators arising from amplitudes are also pseudo-differential operators with symbols from some appropriate \(S_{\rho,\delta}^{\mu}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\) class. In particular, we have the inclusion
\[\operatorname{Op}(\mathcal{A}_{\rho,\delta}^{\mu_{1},\mu_{2}}(\hbar\mathbb{Z }^{n}\times\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}))\subset\operatorname{Op}( S_{\rho,\delta}^{\mu_{1}+\mu_{2}}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}))\,,\]
that is proven in Theorem 3.14. For the proof of the latter, we first need an auxiliary result, see Lemma 3.12, which in turn makes use of a relation between generalized difference operators, see Definition 3.11, and the inverse Fourier transform in our setting.
**Definition 3.11** (Generalised semi-classical difference operators).: Let \(q\in C^{\infty}(\mathbb{T}^{n})\). Then for \(g:\hbar\mathbb{Z}^{n}\to\mathbb{C}\), the corresponding \(q\)-difference operator is defined by
\[\Delta_{h,q}g(k):=\frac{1}{\hbar}\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}k \cdot\theta}q(\theta)\widehat{g}(\theta)\mathrm{d}\theta. \tag{3.17}\]
Alternatively, one can get the following, useful for our purposes, expanded formula for (3.17) by writing out the Fourier transform of \(g\) using (2.1):
\[\Delta_{h,q}g(k)=\frac{1}{\hbar}\sum_{l\in\hbar\mathbb{Z}^{n}}\int_{\mathbb{T }^{n}}e^{2\pi\frac{i}{\hbar}(k-l)\cdot\theta}q(\theta)g(l)\mathrm{d}\theta= \frac{1}{\hbar}\sum_{l\in\hbar\mathbb{Z}^{n}}g(l)\mathcal{F}_{\hbar\mathbb{Z} ^{n}}^{-1}q(k-l)=\frac{1}{\hbar}(g\ast\mathcal{F}_{\hbar\mathbb{Z}^{n}}^{-1}q )(k). \tag{3.18}\]
Let us point out that, as in the case of the (standard) difference operators of Definition 3.1, the generalized \(q\)-difference operator can be extended to \(g\in\mathcal{S}^{\prime}(\hbar\mathbb{Z}^{n})\). Finally, we note that the function \(q\) does not have to be smooth, provided \(g\) and \(q\) behave suitably. For instance, formula (3.17) is well defined for \(g\in\ell^{2}(\hbar\mathbb{Z}^{n})\) and \(q\in L^{2}(\mathbb{T}^{n})\).
We can now state the following result on the behaviour of the \(\Delta_{h,q}\) acting on symbols in the classes \(S_{\rho,\delta}^{\mu}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\).
**Lemma 3.12**.: _Let \(0\leq\delta\leq 1\) and \(\mu\in\mathbb{R}\). Then, for \(\sigma_{h}\in S_{\rho,\delta}^{\mu}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\), \(q\in C^{\infty}(\mathbb{T}^{n})\) and any \(\beta\in\mathbb{N}_{0}^{n}\) we have_
\[|\Delta_{h,q}D_{h,\theta}^{(\beta)}\sigma_{h}(k,\theta)|\leq\frac{C_{q,\beta} }{\hbar}(1+|k|)^{\mu+\delta|\beta|}, \tag{3.19}\]
_for all \(k\in\hbar\mathbb{Z}^{n}\) and \(\theta\in\mathbb{T}^{n}\)._
Proof.: Using the expression (3.18), we can write
\[\Delta_{h,q}D^{(\beta)}_{h,\theta}\sigma_{h}(k,\theta) = \frac{1}{\hbar}\sum_{l\in\hbar\mathbb{Z}^{n}}\int_{\mathbb{T}^{n}}e ^{2\pi\frac{i}{\hbar}(k-l)\cdot\omega}q(\omega)D^{(\beta)}_{h,\theta}\sigma_{h} (l,\theta)\mathrm{d}\omega\] \[= \frac{1}{\hbar}D^{(\beta)}_{h,\theta}\sigma_{h}(k,\theta)\int_{ \mathbb{T}^{n}}q(\omega)\mathrm{d}\omega\] \[+ \frac{1}{\hbar}\sum_{\begin{subarray}{c}l\in\hbar\mathbb{Z}^{n} \\ l\neq k\end{subarray}}\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}(k-l)\cdot \omega}q(\omega)D^{(\beta)}_{h,\theta}\sigma_{h}(l,\theta)\mathrm{d}\omega\] \[=: T_{1}+T_{2},\]
where the first term is taken for \(l=k\). Thus, for the first term we have
\[|T_{1}|\leq\frac{C_{q,\beta}}{\hbar}(1+|k|)^{\mu+\delta|\beta|}\,,\]
by the assumption on the symbol \(\sigma_{h}\). Now, to estimate the second term, we first assume that \(\beta\in\mathbb{N}_{0}^{n}\) is such that \(\mu+\delta|\beta|\geq 0\). We rewrite the term \(T_{2}\) in terms of the \(M^{\mathrm{th}}\) power, with \(M\) to be chosen later, of the toroidal Laplace operator \(\mathcal{L}_{\omega}\) using integration by parts and the formula (3.12) as follows:
\[|T_{2}| =\frac{1}{\hbar}\left|\sum_{\begin{subarray}{c}l\in\hbar\mathbb{Z }^{n}\\ l\neq k\end{subarray}}\int_{\mathbb{T}^{n}}\frac{e^{2\pi\frac{i}{\hbar}(k-l) \cdot\omega}}{(2\pi\hbar^{-1})^{2M}|k-l|^{2M}}\left(\mathcal{L}_{\omega}^{M}q( \omega)\right)D^{(\beta)}_{\theta}\sigma_{h}(l,\theta)\mathrm{d}\theta\right|\] \[\leq\frac{1}{\hbar}\hbar^{2M}C_{q,\beta}\sum_{\begin{subarray}{c }l\in\hbar\mathbb{Z}^{n}\\ l\neq k\end{subarray}}\frac{1}{|k-l|^{2M}}(1+|l|)^{\mu+\delta|\beta|}\] \[\leq\frac{1}{\hbar}\hbar^{2M}C_{q,\beta}\sum_{m\neq 0}\frac{1}{|m|^{ 2M}}(1+|k-m|)^{\mu+\delta|\beta|}\qquad(\text{where }l=k-m)\] \[\leq\frac{1}{\hbar}\hbar^{2M}C_{q,\beta}\sum_{m\neq 0}\frac{1}{|m|^ {2M}}\Bigg{(}(1+|k|)^{\mu+\delta|\beta|}+|m|^{\mu+\delta|\beta|}\Bigg{)}\] \[=\frac{1}{\hbar}C_{q,\beta}\sum_{\tilde{m}\neq 0}\frac{1}{| \tilde{m}|^{2M}}\Bigg{(}(1+|k|)^{\mu+\delta|\beta|}+\hbar^{\mu+\delta|\beta|}| m|^{\mu+\delta|\beta|}\Bigg{)}\quad(\text{where }m=\hbar\tilde{m})\] \[\leq\frac{1}{\hbar}C_{q,\beta}(1+|k|)^{\mu}\quad(\text{since }\hbar^{ \mu+\delta|\beta|}\leq 1)\,,\]
where in the final estimate we have used the fact that \(\mu+\delta|\beta|\geq 0\), and \(M\) is chosen so that \(M>\frac{n+\mu+\delta|\beta|}{2}\) which allows for the series above to converge. On the other hand, for the case where \(\beta\) is such that \(\mu+\delta|\beta|<0\), we will use Peetre inequality; see [14, Proposition 3.3.31] which under suitable considerations implies:
\[(1+|k-m|)^{\mu+\delta|\beta|}\leq 2^{|\mu|+\delta|\beta|}(1+|k|)^{\mu+\delta| \beta|}(1+|m|)^{|\mu|+\delta|\beta|}\,,\]
where \(|\mu|\) stands for the absolute value of \(\mu\). Hence, for \(M\) such that \(2M-|\mu|-\delta|\beta|>n\), and reasoning as above, we have
\[|T_{2}|\leq\frac{1}{\hbar}C_{q,\beta}\sum_{m\neq 0}\frac{1}{|m|^{2M}}(1+|k-m|)^{ \mu+\delta|\beta|}\leq\frac{1}{\hbar}C_{q,\beta}(1+|k|)^{\mu+\delta|\beta|}\,.\]
Summarising the above we have
\[|\Delta_{h,q}D^{(\beta)}_{h,\theta}\sigma_{h}(k,\theta)|\leq\frac{C_{q,\beta} }{\hbar}(1+|k|)^{\mu+\delta|\beta|}+\frac{C_{q,\beta}}{\hbar}(1+|k|)^{\mu+ \delta|\beta|}\leq\frac{C_{q,\beta}}{\hbar}(1+|k|)^{\mu+\delta|\beta|}\,,\]
since \(\hbar\leq 1\). We have thus proved the estimate (3.19) in all cases, and the proof of Lemma 3.12 is complete.
Let us now present a toroidal Taylor expansion that is useful for our purposes; it is a re-scaled version of the toroidal Taylor expansion that appeared in [14, Theorem 3.4.4]:
**Theorem 3.13** (Equivalent formulation of the toroidal Taylor expansion on \(\mathbb{T}^{n}\)).: _For \(f:\mathbb{T}^{n}\to\mathbb{C}\), \(f\in C^{\infty}(\mathbb{T}^{n})\), we have the following equivalent formulation of the toroidal Taylor expansion:_
\[f(\theta)=\sum_{|\alpha|<N}\frac{\hbar^{-|\alpha|}}{\alpha!}(e^{2\pi\frac{i}{ \hbar}\theta}-1)^{\alpha}D^{(\alpha)}_{h,\omega}f(\omega)|_{\omega=0}+\sum_{| \alpha|=N}f_{\alpha}(\theta)(e^{2\pi\frac{i}{\hbar}\theta}-1)^{\alpha}\,, \tag{3.20}\]
_where \(D^{(\alpha)}_{h,\omega}\) is given in (3.7). The functions \(f_{\alpha}\in C^{\infty}(\mathbb{T}^{n})\), where \(|\alpha|\leq N\), are products of the one-dimensional functions \(f_{j}(\theta)\), \(\theta\in\mathbb{T}\), defined inductively by_
\[f_{j+1}(\theta):=\begin{cases}\frac{f_{j}(\theta)-f_{j}(0)}{e^{2\pi(i/\hbar) \theta}-1}&\text{if }\ \theta\neq 0,\\ D_{h,\theta}f_{j}(\theta)\,,&\text{if }\theta=0\,,\end{cases} \tag{3.21}\]
_where we have set \(f_{0}:=f\)._
Proof.: For simplicity we will prove (3.20) when \(n=1\). In this case (3.20) becomes
\[f(\theta)=\sum_{j=0}^{N-1}\frac{\hbar^{-j}}{j!}(e^{2\pi\frac{i}{\hbar}\theta}-1)^{j}D^{(j)}_{h,\omega}f(\omega)|_{\omega=0}+f_{N}(\theta)(e^{2\pi\frac{i}{\hbar}\theta}-1)^{N}\,. \tag{3.22}\]
For \(j\in\mathbb{N}_{0}\) we define
\[f_{j+1}(\theta):=\begin{cases}\frac{f_{j}(\theta)-f_{j}(0)}{e^{2\pi(i/\hbar) \theta}-1}&\text{if }\ \theta\neq 0,\\ D_{h,\theta}f_{j}(\theta)\,,&\text{if }\theta=0\,,\end{cases}\]
while if \(j=0\), then we set \(f_{0}(\theta):=f(\theta)\). Thus
\[f_{j}(\theta)=f_{j}(0)+f_{j+1}(\theta)(e^{2\pi(i/\hbar)\theta}-1)\,,\]
and iterating this identity we obtain
\[f(\theta)=\sum_{j=0}^{N-1}(e^{2\pi(i/\hbar)\theta}-1)^{j}f_{j}(0)+f_{N}(\theta )(e^{2\pi(i/\hbar)\theta}-1)^{N}\,. \tag{3.23}\]
From the latter expression we see that it is enough to prove that
\[f_{j}(0)=\frac{\hbar^{-j}}{j!}D^{(j)}_{\hbar,\theta}f(\theta)|_{\theta=0}\,.\]
It is clear that if \(j<j_{0}\), then \(D^{(j)}_{\hbar,\theta}(e^{2\pi(i/\hbar)\theta}-1)^{j_{0}}|_{\theta=0}=0\). On the other hand when \(j>j_{0}\), if we make the change of variable \(\frac{\theta}{\hbar}=\tilde{\theta}\), then we have
\[D^{(1)}_{\hbar,\theta}(e^{2\pi(i/\hbar)\theta}-1)^{j_{0}} = \frac{1}{\hbar}\left(\frac{1}{2\pi i}\frac{\partial}{\partial\theta}-j_{0}\right)\left(e^{2\pi\frac{i}{\hbar}\theta}-1\right)^{j_{0}}\] \[= \left(\frac{1}{2\pi i}\frac{\partial}{\partial\tilde{\theta}}-j_{0}\right)\left(e^{2\pi i\tilde{\theta}}-1\right)^{j_{0}}\] \[= j_{0}\left(e^{2\pi i\tilde{\theta}}-1\right)^{j_{0}-1}\,.\]
The latter implies that
\[\left[\prod_{i=1}^{j_{0}}\left(\frac{1}{2\pi i}\frac{\partial}{\partial\tilde {\theta}}-1\right)\right]\left(e^{2\pi i\tilde{\theta}}-1\right)^{j_{0}}=j_{0 }!\,,\]
which in turn gives
\[\left[\prod_{i=0}^{j_{0}}\left(\frac{1}{2\pi i}\frac{\partial}{\partial\tilde {\theta}}-1\right)\right]\left(e^{2\pi i\tilde{\theta}}-1\right)^{j_{0}}\big{|} _{\tilde{\theta}=0}=j_{0}!\,.\]
Hence we get \(\left[\prod_{i=0}^{j}\left(\frac{1}{2\pi i}\frac{\partial}{\partial\theta}-1 \right)\right]\left(e^{2\pi i\tilde{\theta}}-1\right)^{j_{0}}\big{|}_{\tilde{ \theta}=0}=j!\delta_{j,j_{0}}\), or after substitution, \(D^{(j)}_{\hbar,\theta}\left(e^{2\pi\frac{i}{\hbar}\theta}-1\right)^{j_{0}} \big{|}_{\theta=0}=\hbar^{-j}j!\delta_{j,j_{0}}\). Finally, an application of the operator \(D^{(j)}_{\hbar,\theta}\) to both sides of the equality (3.23), evaluated at \(\theta=0\), yields the claimed expression for \(f_{j}(0)\), as desired, and the proof is complete.
In the next result we see that the (semi-classical) amplitude representations of the form (3.15) are indeed (semi-classical) pseudo-differential operators. Particularly, if \(a_{\hbar}\in\mathcal{A}^{\mu_{1},\mu_{2}}_{\rho,\delta}(\hbar\mathbb{Z}^{n} \times\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\), then \(\mathrm{Op}(a_{\hbar})=\mathrm{Op}(\sigma_{\hbar,T})\) for some \(\sigma_{\hbar,T}\in S^{\mu_{1}+\mu_{2}}_{\rho,\delta}(\hbar\mathbb{Z}^{n} \times\mathbb{T}^{n})\). Formally we have:
**Theorem 3.14**.: _Let \(0\leq\delta<\rho\leq 1\). For \(a_{\hbar}\in\mathcal{A}^{\mu_{1},\mu_{2}}_{\rho,\delta}(\hbar\mathbb{Z}^{n} \times\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\) let the corresponding amplitude operator \(T\) be given by_
\[Tf(k)=\sum_{l\in\hbar\mathbb{Z}^{n}}\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}(k-l)\cdot\theta}a_{\hbar}(k,l,\theta)f(l)\mathrm{d}\theta. \tag{3.24}\]
_Then we have \(T=\mathrm{Op}(\sigma_{\hbar,T})\) for some \(\sigma_{\hbar,T}\in S^{\mu_{1}+\mu_{2}}_{\rho,\delta}(\hbar\mathbb{Z}^{n} \times\mathbb{T}^{n})\). Moreover,_
\[\sigma_{\hbar,T}(k,\theta)\sim\sum_{\alpha}\frac{1}{\alpha!}\Delta^{\alpha}_{ \hbar,l}D^{(\alpha)}_{\hbar,\theta}a_{\hbar}(k,l,\theta)\Big{|}_{l=k}\,; \tag{3.25}\]
_that is for all \(N\in\mathbb{N}\) we have_
\[\sigma_{\hbar,T}-\sum_{|\alpha|<N}\frac{1}{\alpha!}\Delta^{\alpha}_{\hbar,l}D^ {(\alpha)}_{\hbar,\theta}a_{\hbar}(k,l,\theta)\Big{|}_{l=k}\in S^{\mu_{1}+\mu_ {2}-N(\rho-\delta)}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\,. \tag{3.26}\]
Proof of Theorem 3.14.: We will apply Proposition 3.9, to find the formula for the symbol \(\sigma_{\hbar,T}\) of the operator \(T\) as in the hypothesis. We have
\[\sigma_{\hbar,T}(k,\theta) = e^{-2\pi\frac{i}{\hbar}k\cdot\theta}\sum_{l\in\hbar\mathbb{Z}^{n}}\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}(k-l)\cdot\omega}a_{\hbar}(k,l,\omega)e^{2\pi\frac{i}{\hbar}l\cdot\theta}\mathrm{d}\omega \tag{3.27}\] \[= \sum_{l\in\hbar\mathbb{Z}^{n}}\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}(k-l)\cdot(\omega-\theta)}a_{\hbar}(k,l,\omega)\mathrm{d}\omega\] \[= \int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}k\cdot(\omega-\theta)}\widehat{a_{\hbar}}(k,\omega-\theta,\omega)\mathrm{d}\omega\] \[= \int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}k\cdot\omega}\widehat{a_{\hbar}}(k,\omega,\omega+\theta)\mathrm{d}\omega\,,\]
where \(\widehat{a_{\hbar}}\) stands for the semi-classical Fourier transform of \(a_{\hbar}\) with respect to the second variable, and in the last equality we have replaced \(\omega-\theta\) by \(\omega\). Now, applying the Taylor expansion (3.20) in our setting to \(\widehat{a}_{\hbar}(k,\omega,\omega+\theta)\) in the third variable we get
\[\widehat{a}_{\hbar}(k,\omega,\omega+\theta)=\sum_{|\alpha|\leq N}\frac{\hbar^{-|\alpha|}}{\alpha!}\left(e^{2\pi\frac{i}{\hbar}\omega}-1\right)^{\alpha}D^{(\alpha)}_{\hbar,\theta}\widehat{a}_{\hbar}(k,\omega,\theta)+R_{0}\,, \tag{3.28}\]
where \(R_{0}\) is the remainder term and will be analysed in the sequel. Plugging (3.28) into (3.27) we obtain
\[\sigma_{\hbar,T}(k,\theta)=\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}k\cdot \omega}\sum_{|\alpha|\leq N}\frac{\hbar^{-|\alpha|}}{\alpha!}(e^{2\pi\frac{i}{ \hbar}\omega}-1)^{\alpha}D^{(\alpha)}_{\hbar,\theta}\widehat{a_{\hbar}}(k, \omega,\theta)\mathrm{d}\omega+R\,, \tag{3.29}\]
with \(R\) in terms of \(R_{0}\). Now since from (3.1) we have
\[\Delta^{\alpha}_{\hbar}g(k)=\frac{1}{\hbar^{|\alpha|}}\int_{\mathbb{T}^{n}}e^ {2\pi\frac{i}{\hbar}k\cdot y}(e^{2\pi\frac{i}{\hbar}y}-1)^{\alpha}\widehat{g} (y)\mathrm{d}y,\]
we obtain from (3.29) the following alternative expression for \(\sigma_{\hbar,T}\):
\[\sigma_{\hbar,T}(k,\theta)=\sum_{|\alpha|\leq N}\frac{1}{\alpha!}\Delta^{\alpha}_{\hbar,l}D^{(\alpha)}_{\hbar,\theta}a_{\hbar}(k,l,\theta)\Big{|}_{l=k}+R\,,\]
which shows (3.25). Now, proving (3.26) amounts to analysing the remainder \(R\), which is a sum of terms of the form
\[R_{j}(k,z)=\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}k\cdot y}(e^{2\pi\frac{i}{\hbar}y}-1)^{\alpha}b_{\hbar,j}(k,y,z)\mathrm{d}y,\]
where \(|\alpha|=N\) and, by (3.21), the \(b_{\hbar,j}\)'s are combinations of functions of the form
\[D^{\alpha_{0}}_{\hbar,\theta}\mathcal{F}_{2}a_{\hbar}(k,y,z)\]
for some \(|\alpha_{0}|\leq N\), multiplied by some smooth functions \(a_{j}\); here \(\mathcal{F}_{2}\) stands for the Fourier transform with respect to the second variable. Consequently, for any \(\beta\) we have that \(D^{(\beta)}_{z}R_{j}(k,z)\) are sums of terms of the form
\[\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}k\cdot y}a_{j}(y)(e^{2\pi\frac{i}{ \hbar}y}-1)^{\alpha}D^{(\beta)}_{\hbar,z}D^{(\alpha_{0})}_{\hbar,z}\mathcal{F} _{2}a_{\hbar}(k,y,z)\mathrm{d}y\,,\]
which in turn implies that \(D^{(\beta)}_{h,z}R_{j}(k,z)\) are the sums of terms of the form
\[\hbar\Delta_{h,a_{j}}\Delta^{\alpha}_{h,l}D^{(\alpha_{0}+\beta)}_{h,z}a_{h}(k,l, z)\Big{|}_{l=k}\,,\]
where the factor \(\hbar\) is due to the definition (3.17). Now, since \(a_{h}\in\mathcal{A}^{\mu_{1},\mu_{2}}_{\rho,\delta}\), an application of Lemma 3.12 yields that \(R_{j}(k,z)\) satisfies
\[|R_{j}(k,z)|\leq C(1+\big{|}k\big{|})^{\mu_{1}}(1+\big{|}k\big{|})^{\mu_{2}- \rho|\alpha|+\delta|\alpha_{0}|+\delta|\beta|}\,,\]
for some \(\beta\) as in (3.16). Taking \(|\alpha|=N\) and \(|\alpha_{0}|\leq N\) we obtain that
\[|R_{j}(k,z)|\leq C(1+\big{|}k\big{|})^{\mu_{1}+\mu_{2}-(\rho-\delta)N+\delta| \beta|}.\]
On the other hand, the terms \(\Delta^{\beta}_{h,k}R_{j}(k,z)\) can be represented as sums of terms of the form
\[\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}k\cdot w}(e^{2\pi iw}-1)^{\beta}a_ {j}(w)(e^{2\pi\frac{i}{\hbar}w}-1)^{\alpha}b_{j}(k,w,z)\mathrm{d}w,\]
where \(b_{j}\) and \(a_{j}\) are as above. Arguments similar to the ones above show that
\[|\Delta^{\beta}_{h,k}R_{j}(k,z)|\leq C(1+\big{|}k\big{|})^{\mu_{1}+\mu_{2}- \rho|\beta|-(\rho-\delta)N}.\]
Taking \(N\) large enough, and following standard arguments from the classical pseudo-differential calculus we deduce the expansion (3.25) and the proof is complete.
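For orientation, let us record the simplest instance of (3.26); this is a direct specialisation of the statement (under the usual convention that the zero-order difference and derivative operators are the identity) and not an additional claim. Taking \(N=1\), only the term with \(\alpha=0\) survives, so that

\[\sigma_{\hbar,T}(k,\theta)-a_{\hbar}(k,k,\theta)\in S^{\mu_{1}+\mu_{2}-(\rho-\delta)}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\,;\]

that is, to leading order the symbol of the amplitude operator is obtained by simply setting \(l=k\) in the amplitude.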
## 4. Semi-classical symbolic calculus
In this section we establish the symbolic calculus of semi-classical pseudo-differential operators on \(\hbar\mathbb{Z}^{n}\). In particular we develop the formulae for the composition of operators, and for the adjoint and transpose operators. At the end of the section we introduce the notion of ellipticity in our setting.
**Theorem 4.1** (Composition formula for \(\Psi_{h}DO\)s).: _Let \(0\leq\delta<\rho\leq 1.\) Let \(\sigma_{h}\in S^{\mu_{1}}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\) and \(\tau_{h}\in S^{\mu_{2}}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\). Then the composition \(\mathrm{Op}(\sigma_{h})\circ\mathrm{Op}(\tau_{h})\) is a pseudo-differential operator with symbol \(\varsigma_{h}\in S^{\mu_{1}+\mu_{2}}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times \mathbb{T}^{n})\), given by the asymptotic sum_
\[\varsigma_{h}(k,\theta)\sim\sum_{\alpha}\frac{1}{\alpha!}D^{(\alpha)}_{h, \theta}\sigma_{h}(k,\theta)\Delta^{\alpha}_{h,k}\tau_{h}(k,\theta). \tag{4.1}\]
Observe that the order of taking differences and derivatives in (4.1) is different from the analogous composition formulae on the classical cases \(\mathbb{R}^{n}\) and \(\mathbb{T}^{n}\), see [10, 11].
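For the reader's convenience we spell out the first terms of the asymptotic sum (4.1); this is merely an unfolding of the expansion, interpreted in the sense analogous to (3.26), and not an additional claim:

\[\varsigma_{h}(k,\theta)-\sigma_{h}(k,\theta)\tau_{h}(k,\theta)\in S^{\mu_{1}+\mu_{2}-(\rho-\delta)}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\,,\]

and, more precisely,

\[\varsigma_{h}(k,\theta)-\sigma_{h}(k,\theta)\tau_{h}(k,\theta)-\sum_{|\alpha|=1}D^{(\alpha)}_{h,\theta}\sigma_{h}(k,\theta)\,\Delta^{\alpha}_{h,k}\tau_{h}(k,\theta)\in S^{\mu_{1}+\mu_{2}-2(\rho-\delta)}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\,.\]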
Proof of Theorem 4.1.: The semi-classical pseudo-differential operators with symbols \(\sigma_{h}\) and \(\tau_{h}\) are given respectively by
\[\mathrm{Op}_{h}(\sigma_{h})f(k)=\sum_{m\in\hbar\mathbb{Z}^{n}}\int_{\mathbb{T }^{n}}e^{2\pi\frac{i}{\hbar}(k-m)\cdot\theta}\sigma_{h}(k,\theta)f(m)\mathrm{ d}\theta,\]
\[\mathrm{Op}_{h}(\tau_{h})g(m)=\sum_{l\in\hbar\mathbb{Z}^{n}}\int_{\mathbb{T }^{n}}e^{2\pi\frac{i}{\hbar}(m-l)\cdot\omega}\tau_{h}(m,\omega)g(l)\mathrm{ d}\omega\,,\]
where \(f,g\in\mathcal{S}(\hbar\mathbb{Z}^{n})\). Consequently we have
\[\mathrm{Op}_{\hbar}(\sigma_{\hbar})\big{(}\mathrm{Op}_{\hbar}(\tau_{\hbar})g\big{)}(k) = \sum_{m\in\hbar\mathbb{Z}^{n}}\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}(k-m)\cdot\theta}\sigma_{\hbar}(k,\theta)\mathrm{Op}(\tau_{\hbar})g(m)\mathrm{d}\theta\] \[= \sum_{m\in\hbar\mathbb{Z}^{n}}\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}(k-m)\cdot\theta}\sigma_{\hbar}(k,\theta)\left[\sum_{l\in\hbar\mathbb{Z}^{n}}\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}(m-l)\cdot\omega}\tau_{\hbar}(m,\omega)g(l)\mathrm{d}\omega\right]\mathrm{d}\theta\] \[= \sum_{l\in\hbar\mathbb{Z}^{n}}\sum_{m\in\hbar\mathbb{Z}^{n}}\int_{\mathbb{T}^{n}}\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}(k-m)\cdot\theta}\sigma_{\hbar}(k,\theta)e^{2\pi\frac{i}{\hbar}(m-l)\cdot\omega}\tau_{\hbar}(m,\omega)g(l)\mathrm{d}\omega\mathrm{d}\theta\] \[= \sum_{l\in\hbar\mathbb{Z}^{n}}\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}(k-l)\cdot\omega}\varsigma_{\hbar}(k,\omega)g(l)\mathrm{d}\omega,\]
where
\[\varsigma_{\hbar}(k,\omega) = \sum_{m\in\hbar\mathbb{Z}^{n}}\int_{\mathbb{T}^{n}}e^{2\pi\frac{ i}{\hbar}(k-m)\cdot(\theta-\omega)}\sigma_{\hbar}(k,\theta)\tau_{\hbar}(m, \omega)\mathrm{d}\theta\] \[= \sum_{m\in\hbar\mathbb{Z}^{n}}\int_{\mathbb{T}^{n}}e^{2\pi\frac{ i}{\hbar}k\cdot(\theta-\omega)}e^{-2\pi\frac{i}{\hbar}m\cdot(\theta-\omega)} \sigma_{\hbar}(k,\theta)\tau_{\hbar}(m,\omega)\mathrm{d}\theta\] \[= \int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}k\cdot(\theta-\omega)} \sigma_{\hbar}(k,\theta)\widehat{\tau}_{\hbar}(\theta-\omega,\omega)\mathrm{d}\theta\] \[= \int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}k\cdot\theta}\sigma_{ \hbar}(k,\omega+\theta)\widehat{\tau}_{\hbar}(\theta,\omega)\mathrm{d}\theta \qquad\text{(replace $\theta-\omega$ by $\theta$)}\,,\]
where \(\widehat{\tau}_{\hbar}\) denotes the (semi-classical) Fourier transform of \(\tau_{\hbar}(m,\omega)\) in the first variable.
Employing the toroidal Taylor expansion given by (3.20) on the symbol \(\sigma_{\hbar}(k,\omega+\theta)\) gives
\[\varsigma_{\hbar}(k,\theta) = \int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}k\cdot\omega}\sigma_{\hbar}(k,\theta+\omega)\widehat{\tau}_{\hbar}(\omega,\theta)\mathrm{d}\omega\] \[= \int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}k\cdot\omega}\sum_{|\alpha|<N}\frac{\hbar^{-|\alpha|}}{\alpha!}(e^{2\pi\frac{i}{\hbar}\omega}-1)^{\alpha}D^{(\alpha)}_{\hbar,\theta}\sigma_{\hbar}(k,\theta)\widehat{\tau}_{\hbar}(\omega,\theta)\mathrm{d}\omega+R\] \[= \sum_{|\alpha|<N}\frac{1}{\alpha!}D^{(\alpha)}_{\hbar,\theta}\sigma_{\hbar}(k,\theta)\Delta^{\alpha}_{\hbar,k}\tau_{\hbar}(k,\theta)+R,\]
where \(R\) is a remainder from the Taylor expansion and for the last equality we have used the expression (3.1). Now, since the difference operators satisfy the Leibniz rule we get
\[|D^{(\alpha)}_{\hbar,\theta}\sigma_{\hbar}(k,\theta)\Delta^{\alpha}_{\hbar,k}\tau_{\hbar}(k,\theta)|\leq C_{\alpha}(1+|k|)^{\mu_{1}+\delta|\alpha|}(1+|k|)^{\mu_{2}-\rho|\alpha|}\,,\]
that is we have
\[D^{(\alpha)}_{\hbar,\theta}\sigma_{\hbar}(k,\theta)\Delta^{\alpha}_{\hbar,k} \tau_{\hbar}(k,\theta)\in S^{\mu_{1}+\mu_{2}-(\rho-\delta)|\alpha|}_{\rho, \delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}).\]
Finally, to estimate the remainder \(R\) we follow the lines of the proof of Theorem 3.14. This completes the proof of Theorem 4.1.
In the following theorem we prove that in the lattice case \(\hbar\mathbb{Z}^{n}\) we still have the desired property for the adjoint of a semi-classical pseudo-differential operator. Before
doing so, let us point out that the adjoint operator makes sense in our setting for operators acting on the Hilbert space \(\ell^{2}(\hbar\mathbb{Z}^{n})\).
**Theorem 4.2** (Adjoint of a \(\Psi_{\hbar}DO\)).: _Let \(0\leq\delta<\rho\leq 1.\) Let \(\sigma_{\hbar}\in S^{\mu}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\). Then there exists a symbol \(\sigma_{\hbar}^{*}\in S^{\mu}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\) such that the adjoint operator \(\operatorname{Op}(\sigma_{\hbar})^{*}\) is a pseudo-difference operator with symbol \(\sigma_{\hbar}^{*}\); that is, we have \(\operatorname{Op}(\sigma_{\hbar})^{*}=\operatorname{Op}(\sigma_{\hbar}^{*})\). Moreover, we have the asymptotic expansion_
\[\sigma_{\hbar}^{*}(k,\theta)\sim\sum_{\alpha}\frac{1}{\alpha!}\Delta_{\hbar,k} ^{\alpha}D_{\hbar,\theta}^{(\alpha)}\overline{\sigma_{\hbar}(k,\theta)}. \tag{4.2}\]
Proof.: Let \(f,g\in\ell^{2}(\hbar\mathbb{Z}^{n})\). We have
\[\left(\operatorname{Op}_{\hbar}(\sigma_{\hbar})f,g\right)_{ \ell^{2}(\hbar\mathbb{Z}^{n})} = \sum_{k\in\hbar\mathbb{Z}^{n}}\operatorname{Op}(\sigma_{\hbar}) f(k)\overline{g(k)}\] \[= \sum_{k\in\hbar\mathbb{Z}^{n}}\sum_{l\in\hbar\mathbb{Z}^{n}}\int_ {\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}(k-l)\cdot\theta}\sigma_{\hbar}(k, \theta)f(l)\overline{g(k)}\mathrm{d}\theta\] \[= \sum_{l\in\hbar\mathbb{Z}^{n}}f(l)\overline{\left(\sum_{k\in \hbar\mathbb{Z}^{n}}\int_{\mathbb{T}^{n}}e^{-2\pi\frac{i}{\hbar}(k-l)\cdot \theta}\overline{\sigma_{\hbar}(k,\theta)}g(k)\mathrm{d}\theta\right)}\] \[= \sum_{l\in\hbar\mathbb{Z}^{n}}f(l)\overline{\operatorname{Op}( \sigma_{\hbar})^{*}g(l)}\,.\]
By the definition of the adjoint operator we must have
\[\operatorname{Op}_{\hbar}(\sigma_{\hbar})^{*}g(l)=\sum_{k\in\hbar\mathbb{Z}^ {n}}\int_{\mathbb{T}^{n}}e^{-2\pi\frac{i}{\hbar}(k-l)\cdot\theta}\overline{ \sigma_{\hbar}(k,\theta)}g(k)\mathrm{d}\theta\,,\]
or interchanging \(k\) with \(l\)
\[\operatorname{Op}_{\hbar}(\sigma_{\hbar})^{*}g(k)=\sum_{l\in\hbar\mathbb{Z}^ {n}}\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}(k-l)\cdot\theta}\overline{ \sigma_{\hbar}(l,\theta)}g(l)\mathrm{d}\theta\,,\]
i.e. \(\operatorname{Op}_{\hbar}(\sigma_{\hbar})^{*}\) is an amplitude operator with amplitude
\[a_{\hbar}(k,l,\theta)=\overline{\sigma_{\hbar}(l,\theta)}\in S^{\mu}_{\rho, \delta}(\hbar\mathbb{Z}^{n},\mathbb{T}^{n})=\mathcal{A}^{0,\mu}_{\rho,\delta }(\hbar\mathbb{Z}^{n}\times\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\,.\]
Hence \(\operatorname{Op}_{\hbar}(\sigma_{\hbar})^{*}=\operatorname{Op}_{\hbar}( \sigma_{\hbar}^{*})\), and Theorem 3.14 yields the asymptotic expansion
\[\sigma_{\hbar}^{*}(k,\theta)\sim\sum_{\alpha}\frac{1}{\alpha!}\Delta_{\hbar,l }^{\alpha}D_{\hbar,\theta}^{(\alpha)}\overline{\sigma_{\hbar}(l,\theta)}\Big{|} _{l=k}\,.\]
The proof of Theorem 4.2 is now complete.
Before turning to the analysis of the transpose (or algebraic adjoint) operator, let us first recall how it reads in our setting:
For \(f,g\in\mathcal{S}(\hbar\mathbb{Z}^{n})\), the transpose \(T^{t}\) of a linear operator \(T\) satisfies the distributional duality
\[\left\langle T^{t}f,g\right\rangle=\left\langle f,Tg\right\rangle;\]
that is for \(k\in\hbar\mathbb{Z}^{n}\) we have the equality
\[\sum_{k\in\hbar\mathbb{Z}^{n}}(T^{t}f)(k)g(k)=\sum_{k\in\hbar\mathbb{Z}^{n}}f (k)(Tg)(k).\]
**Theorem 4.3** (Transpose of a \(\Psi_{\hbar}DO\)).: _Let \(0\leq\delta<\rho\leq 1\) and let \(\sigma_{\hbar}\in S^{\mu}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\). Then there exists a symbol \(\sigma^{t}_{\hbar}\in S^{\mu}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T} ^{n})\) so that the transpose operator \(\mathrm{Op}(\sigma_{\hbar})^{\mathrm{t}}\) is a semi-classical pseudo-differential operator with symbol \(\sigma^{t}_{\hbar}\); i.e., we have \(\mathrm{Op}(\sigma_{\hbar})^{\mathrm{t}}=\mathrm{Op}(\sigma^{\mathrm{t}}_{ \hbar})\). The asymptotic formula for the symbol \(\sigma^{t}_{\hbar}\) is given by_
\[\sigma^{t}_{\hbar}(k,\theta)\sim\sum_{\alpha}\frac{1}{\alpha!}\Delta^{\alpha}_ {\hbar,k}D^{(\alpha)}_{\hbar,\theta}\sigma_{\hbar}(k,-\theta)\,. \tag{4.3}\]
Proof.: We have
\[\sum_{k\in\hbar\mathbb{Z}^{n}}f(k)(Tg)(k) = \sum_{k\in\hbar\mathbb{Z}^{n}}\sum_{l\in\hbar\mathbb{Z}^{n}}\int _{\mathbb{T}^{n}}f(k)e^{2\pi\frac{i}{\hbar}(k-l)\cdot\theta}\sigma_{\hbar}(k, \theta)g(l)\mathrm{d}\theta\] \[= \sum_{l\in\hbar\mathbb{Z}^{n}}g(l)\Bigg{(}\sum_{k\in\hbar \mathbb{Z}^{n}}\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}(k-l)\cdot\theta} \sigma_{\hbar}(k,\theta)f(k)\mathrm{d}\theta\Bigg{)}.\]
By the definition of the transpose operator we must have
\[T^{t}g(l)=\sum_{k\in\hbar\mathbb{Z}^{n}}\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{ \hbar}(k-l)\cdot\theta}\sigma_{\hbar}(k,\theta)g(k)\mathrm{d}\theta\,,\]
or equivalently
\[T^{t}g(l)=\sum_{k\in\hbar\mathbb{Z}^{n}}\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{ \hbar}(l-k)\cdot\theta}\sigma_{\hbar}(k,-\theta)g(k)\mathrm{d}\theta\,.\]
The last formula corresponds to an amplitude operator with amplitude \(a_{\hbar}(l,k,\theta)=\sigma_{\hbar}(k,-\theta)\in\mathcal{A}^{0,\mu}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\). Hence \(T^{t}=\mathrm{Op}(\sigma^{t}_{\hbar})\) for some symbol \(\sigma^{t}_{\hbar}\), and Theorem 3.14 gives
\[\sigma^{t}_{\hbar}(k,\theta)\sim\sum_{\alpha}\frac{1}{\alpha!}\Delta^{\alpha}_ {\hbar,k}D^{(\alpha)}_{\hbar,\theta}\sigma_{\hbar}(k,-\theta)\,.\]
The proof of Theorem 4.3 is now complete.
The following result on asymptotic sums of symbols is well known in the classical cases as well. It is a useful tool that can simplify the process of solving the so-called elliptic partial differential equations.
**Lemma 4.4** (Asymptotic sums of symbols of \(\Psi_{\hbar}\)DOs).: _Let \(1\geq\rho>\delta\geq 0\), and let \(\left\{\mu_{j}\right\}_{j=0}^{\infty}\subset\mathbb{R}\) be a decreasing sequence such that \(\mu_{j}\to-\infty\) as \(j\to\infty\). If \(\sigma_{\hbar,j}\in S^{\mu_{j}}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times \mathbb{T}^{n})\) for all \(j\in\mathbb{N}_{0}\), then there exists \(\sigma_{\hbar}\in S^{\mu_{0}}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{ T}^{n})\) such that_
\[\sigma_{\hbar}\sim\sum_{j=0}^{\infty}\sigma_{\hbar,j},\]
_that is for all \(N\in\mathbb{N}\) we have_
\[\sigma_{\hbar}-\sum_{j=0}^{N-1}\sigma_{\hbar,j}\in S^{\mu_{N}}_{\rho,\delta}( \hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\,.\]
Proof.: The proof is a direct consequence of [10, Theorem 4.4.1] taking into account that the symbol classes there are the same modulo swapping the order of variables.
Let us now introduce the notion of ellipticity for semi-classical pseudo-differential operators with symbols in the classes \(S^{\mu}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\). The following definition is an adaptation of the notion of ellipticity in the classical settings.
**Definition 4.5** (Elliptic operators).: A symbol \(\sigma_{\hbar}\in S^{\mu}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{ n})\) shall be called _elliptic_ (of order \(\mu\)) if there exist \(C>0\) and \(M>0\) such that
\[|\sigma_{\hbar}(k,\theta)|\geq C(1+|k|)^{\mu}\]
for all \(\theta\in\mathbb{T}^{n}\) and for \(|k|\geq M\), \(k\in\hbar\mathbb{Z}^{n}\). The semi-classical pseudo-differential operators with elliptic symbols shall also be called elliptic.
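As a simple illustration (added here only for orientation, and not taken from the cited references), consider the \(\theta\)-independent symbol \(\sigma_{\hbar}(k,\theta)=(1+|k|^{2})^{\mu/2}\). Since \(\tfrac{1}{2}(1+|k|)^{2}\leq 1+|k|^{2}\leq(1+|k|)^{2}\), we have

\[|\sigma_{\hbar}(k,\theta)|\geq\min\{1,2^{-\mu/2}\}\,(1+|k|)^{\mu}\quad\text{for all }k\in\hbar\mathbb{Z}^{n},\ \theta\in\mathbb{T}^{n}\,,\]

so the condition of Definition 4.5 holds for any \(M>0\); a routine computation (which we do not carry out here) also shows that this symbol belongs to \(S^{\mu}_{1,0}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\).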
In the next result we show that the ellipticity of a pseudo-differential operator on \(\hbar\mathbb{Z}^{n}\) is equivalent, as it happens also in the classical cases, to its invertibility in the algebra of operators \(\operatorname{Op}(S^{\infty}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}))/\operatorname{Op}(S^{-\infty}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}))\).3
Footnote 3: As usually, we define \(S^{\infty}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})=\bigcup_{\mu\in\mathbb{ R}}S^{\mu}_{1,0}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\).
The notion of a _parametrix_ in our setting follows the lines of the general theory and reads as:
**Definition 4.6** (parametrix).: The operator \(T\) is called the right (resp. left) _parametrix_ of \(S\) if \(ST-I\in\operatorname{Op}(S^{-\infty}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}))\) (resp. \(TS-I\in\operatorname{Op}(S^{-\infty}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}))\)), where \(I\) is the identity operator 4.
Footnote 4: The notion of a parametrix is applicable to all pseudo-differential operators on \(\hbar\mathbb{Z}^{n}\)
**Theorem 4.7** (The ellipticity of a \(\Psi_{\hbar}\)DO is equivalent to the existence of its parametrix).: _Let \(0\leq\delta<\rho\leq 1.\) An operator \(U\in\operatorname{Op}(S^{\mu}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{ T}^{n}))\) is elliptic if and only if there exists \(V\in\operatorname{Op}(S^{-\mu}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}))\) such that_
\[VU\backsim I\backsim UV\ \ \text{modulo}\ \ \operatorname{Op}(S^{-\infty}( \hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}))\,,\]
_i.e., the operator \(V\) is the left and right parametrix of \(U\)._
_Moreover, let \(U\sim\sum_{l=0}^{\infty}U_{l}\) be an expansion of the operator \(U\), where_
\[U_{l}\in\operatorname{Op}(S^{\mu-(\rho-\delta)l}_{\rho,\delta}(\hbar\mathbb{ Z}^{n}\times\mathbb{T}^{n}))\,.\]
_Then the corresponding asymptotic expansion of the operator \(V\) can be expressed via \(V\sim\sum_{j=0}^{\infty}V_{j}\) with_
\[V_{j}\in\operatorname{Op}(S^{-\mu-(\rho-\delta)j}_{\rho,\delta}(\hbar\mathbb{ Z}^{n}\times\mathbb{T}^{n}))\]
_can be obtained by setting \(\sigma_{\hbar,V_{0}}:=\frac{1}{\sigma_{\hbar,U_{0}}}\), and then recursively_
\[\sigma_{\hbar,V_{N}}(k,\theta)=\frac{-1}{\sigma_{\hbar,U_{0}}(k,\theta)}\sum_ {j=0}^{N-1}\sum_{l=0}^{N-1}\sum_{|\gamma|=N-j-l}\frac{1}{\gamma!}\Big{[}D^{( \gamma)}_{\hbar,\theta}\sigma_{\hbar,V_{j}}(k,\theta)\Big{]}\Delta^{\gamma}_ {\hbar,k}\sigma_{\hbar,U_{l}}(k,\theta). \tag{4.4}\]
Proof.: First we want to prove that, given the existence of an operator \(V\) as in the statement satisfying
\[I-UV=T\in\operatorname{Op}(S^{-\infty}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}))\,,\]
the ellipticity of the operator \(U\in\operatorname{Op}(S^{\mu}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^ {n}))\) can be deduced. By the composition formula, see Theorem 4.1, we get
\[1-\sigma_{h,U}(k,\theta)\sigma_{h,V}(k,\theta)\in S^{-(\rho-\delta)}_{\rho, \delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\,.\]
The latter means that there exists a constant \(C>0\) such that
\[|1-\sigma_{h,U}(k,\theta)\sigma_{h,V}(k,\theta)|\leq C(1+|k|)^{-(\rho-\delta)}.\]
By choosing \(M\) so that \(C(1+M)^{-(\rho-\delta)}<\frac{1}{2}\), the last estimate yields
\[|\sigma_{h,U}(k,\theta)\sigma_{h,V}(k,\theta)|\geq\frac{1}{2}, \quad\text{for all}\ \ |k|\geq M\,. \tag{4.5}\]
Thus by the assumption on \(\sigma_{h,U}\) we get
\[|\sigma_{h,U}(k,\theta)|\geq\frac{1}{2|\sigma_{h,V}(k,\theta)|}\geq\frac{1}{2C_{h,V}}(1+|k|)^{\mu}\,,\]
and we have proved that the symbol \(\sigma_{h,U}\) is elliptic of order \(\mu\).
Conversely, let us define
\[\sigma_{h,V_{0}}(k,\theta):=\frac{1}{\sigma_{h,U}(k,\theta)}.\]
By the analogue of [14, Lemma 4.9.4] in the lattice \(\hbar\mathbb{Z}^{n}\) setting, and assuming that \(|k|>M\), for \(M\) as in (4.5), we get \(\sigma_{h,V_{0}}\in S^{-\mu}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\). Hence by the composition formula
\[\sigma_{h,V_{0}U}=\sigma_{h,V_{0}}\sigma_{h,U}-\sigma_{h,T}\backsim 1-\sigma_{h,T},\]
for some \(T\in S^{-(\rho-\delta)}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\); that is \(V_{0}U=I-T\). The rest of the converse implication follows by the composition formula, see Theorem 4.1 and a functional analytic argument as appears in the proof of [14, Theorem 4.9.6]. It will then be omitted.
Finally let us sketch the proof of the formula (4.4). We note that \(I\backsim VU\) which implies that \(1\backsim\sigma_{h,VU}(k,\theta).\) An application of the composition formula as in Theorem 4.1 yields
\[1 \backsim \sum_{\gamma\geq 0}\frac{1}{\gamma!}\Big{[}D^{(\gamma)}_{h, \theta}\sigma_{h,V}(k,\theta)\Big{]}\Delta^{\gamma}_{h,k}\sigma_{h,U}(k,\theta) \tag{4.6}\] \[\backsim \sum_{\gamma\geq 0}\frac{1}{\gamma!}\Big{[}D^{(\gamma)}_{h, \theta}\sum_{j=0}^{\infty}\sigma_{h,V_{j}}(k,\theta)\Big{]}\Delta^{\gamma}_{h,k}\sum_{l=0}^{\infty}\sigma_{h,U_{l}}(k,\theta).\]
A combination of the formula (4.6) together with an argument similar to the one in the proof of [14, Theorem 4.9.13] for the formula for the parametrix on \(\mathbb{T}^{n}\), completes the proof.
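To make the recursion (4.4) concrete, let us record its first step; this is a direct substitution and involves nothing beyond (4.4) itself. For \(N=1\) the triple sum collapses to the single choice \(j=l=0\), \(|\gamma|=1\) (so that \(\gamma!=1\)), and hence

\[\sigma_{\hbar,V_{1}}(k,\theta)=\frac{-1}{\sigma_{\hbar,U_{0}}(k,\theta)}\sum_{|\gamma|=1}\Big{[}D^{(\gamma)}_{\hbar,\theta}\sigma_{\hbar,V_{0}}(k,\theta)\Big{]}\Delta^{\gamma}_{\hbar,k}\sigma_{\hbar,U_{0}}(k,\theta)\,,\]

with \(\sigma_{\hbar,V_{0}}=1/\sigma_{\hbar,U_{0}}\) as above.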
## 5. Link between toroidal and semi-classical quantizations
The toroidal quantization [10, 11] gives rise to further developments and applications; see e.g. [13, 14, 15, 16] to mention only a few. Therefore, it is important to stress its link with the semi-classical lattice quantization, which is exactly the topic of the current section. The idea is to establish a way in which results on \(\mathbb{T}^{n}\) can be transferred to \(\hbar\mathbb{Z}^{n}\) and vice versa. A similar investigation has been performed in [1] in the case of \(\mathbb{Z}^{n}\), and here we verify that the corresponding results remain true after the addition of the semi-classical parameter \(\hbar\). The importance of this link will be demonstrated in Section 6. Precisely, it provides us with a characterisation of compact operators on \(\ell^{2}(\hbar\mathbb{Z}^{n})\), see Corollary 6.4; the semi-classical version of the Gohberg lemma, see Corollary 6.5; and conditions for the semi-classical operators to belong to the Schatten-von Neumann classes, see Theorem 6.6.
For \(\tau_{\mathbb{T}^{n}}:\mathbb{T}^{n}\times\mathbb{Z}^{n}\to\mathbb{C}\), and for \(v\in C^{\infty}(\mathbb{T}^{n})\), recall the _toroidal quantization_
\[\operatorname{Op}_{\mathbb{T}^{n}}(\tau_{\mathbb{T}^{n}})u(\theta)=\sum_{k \in\mathbb{Z}^{n}}e^{2\pi i\theta\cdot k}\tau_{\mathbb{T}^{n}}(\theta,k)( \mathcal{F}_{\mathbb{T}^{n}}u)(k)\,. \tag{5.1}\]
To distinguish between the toroidal and the semi-classical lattice quantization as in (2.4), we will denote them by \(\operatorname{Op}_{\mathbb{T}^{n}}\) and \(\operatorname{Op}_{\hbar\mathbb{Z}^{n}}\) (or \(\operatorname{Op}_{\hbar}\)), respectively. In both cases, we will use the notation \(\overline{k},\overline{l},\cdots\) for elements of the lattice \(\mathbb{Z}^{n}\), and the notation \(\theta,\omega,\cdots\) for the elements of the torus \(\mathbb{T}^{n}\).
**Remark 5.1** (Relation between the toroidal and semi-classical Fourier transform).: Recall that the toroidal Fourier transform is defined by
\[\mathcal{F}_{\mathbb{T}^{n}}f(\overline{k})=\widehat{f}(\overline{k}):=\int_{ \mathbb{T}^{n}}e^{-i2\pi\theta\cdot\overline{k}}f(\theta)\,\mathrm{d}\theta\,,\]
where \(f\in C^{\infty}(\mathbb{T}^{n})\) and \(\overline{k}\in\mathbb{Z}^{n}\). In particular the operator \(\mathcal{F}_{\mathbb{T}^{n}}\) is a bijection, with inverse \(\mathcal{F}_{\mathbb{T}^{n}}^{-1}:\mathcal{S}(\mathbb{Z}^{n})\to C^{\infty}( \mathbb{T}^{n})\) given by
\[(\mathcal{F}_{\mathbb{T}^{n}}^{-1}f)(\theta)=\sum_{\overline{k}\in\mathbb{Z}^{n}}e^{2\pi i\theta\cdot\overline{k}}f(\overline{k})\,.\]
Recall now the semi-classical Fourier inversion formula as in (2.3). Observe that for \(\overline{k}\in\mathbb{Z}^{n}\):
\[\mathcal{F}_{\mathbb{T}^{n}}f(\overline{k}) = \int_{\mathbb{T}^{n}}e^{-2\pi i\theta\cdot\overline{k}}f(\theta) \mathrm{d}\theta \tag{5.2}\] \[= \int_{\mathbb{T}^{n}}e^{-2\pi\frac{i}{\hbar}\theta\cdot k}f(\theta )\mathrm{d}\theta\quad(\text{where}\,k=\hbar\overline{k}\in\hbar\mathbb{Z}^{n})\] \[= (\mathcal{F}_{\hbar\mathbb{Z}^{n}}^{-1}f)(-k)\] \[= (\mathcal{F}_{\hbar\mathbb{Z}^{n}}^{-1}f)(-\hbar\overline{k})\,.\]
From now on we will be using the notation \(\operatorname{Op}_{\hbar\mathbb{Z}^{n}}\) to denote a semi-classical pseudo-differential operator, and the notation \(\operatorname{Op}_{\mathbb{T}}\) for the toroidal pseudo-differential operator in order to distinguish between the two.
The next result allows us to reduce certain properties of semi-classical pseudo-differential operators to properties of toroidal pseudo-differential operators.
**Theorem 5.2**.: _For a function \(\sigma_{\hbar}:\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}\to\mathbb{C}\) we define \(\tau_{\hbar}:\mathbb{T}^{n}\times\mathbb{Z}^{n}\to\mathbb{C}\) by \(\tau_{\hbar}(\theta,\overline{k}):=\overline{\sigma_{\hbar}(-\hbar\overline{k},\theta)}\). Then we have the following relation_
\[\operatorname{Op}_{\hbar\mathbb{Z}^{n}}(\sigma_{\hbar})=\mathcal{F}_{\hbar \mathbb{Z}^{n}}^{-1}\circ\operatorname{Op}_{\mathbb{T}^{n}}(\tau_{\hbar})^{*} \circ\mathcal{F}_{\hbar\mathbb{Z}^{n}}, \tag{5.3}\]
_where \(\operatorname{Op}_{\mathbb{T}^{n}}(\tau_{\hbar})^{*}\) is the adjoint of the toroidal pseudo-differential operator \(\operatorname{Op}_{\mathbb{T}^{n}}(\tau_{\hbar})\) with symbol depending on the semi-classical parameter \(\hbar\). Moreover, we have_
\[\operatorname{Op}_{\mathbb{T}^{n}}(\tau_{\hbar})=\mathcal{F}_{\hbar\mathbb{Z} ^{n}}\circ\operatorname{Op}_{\hbar\mathbb{Z}^{n}}(\sigma_{\hbar})^{*}\circ \mathcal{F}_{\hbar\mathbb{Z}^{n}}^{-1}, \tag{5.4}\]
_where \(\operatorname{Op}_{\hbar\mathbb{Z}^{n}}(\sigma_{\hbar})^{*}\) is the adjoint of the semi-classical pseudo-difference operator \(\operatorname{Op}_{\hbar\mathbb{Z}^{n}}(\sigma_{\hbar})\)._
Proof of Theorem 5.2.: For \(\varphi\in C^{\infty}(\mathbb{T}^{n})\) and for \(\sigma_{\hbar}\) as in the hypothesis, consider the operator
\[T\varphi(k):=\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}k\cdot\theta}\sigma_{ \hbar}(k,\theta)\varphi(\theta)\mathrm{d}\theta\,,\]
where \(k=\hbar\overline{k}\in\hbar\mathbb{Z}^{n}\). Formula (2.4) rewritten in terms of the operator \(T\) yields
\[\operatorname{Op}_{\hbar\mathbb{Z}^{n}}(\sigma_{\hbar})=T\circ\mathcal{F}_{ \hbar\mathbb{Z}^{n}}. \tag{5.5}\]
The adjoint operator \(T^{*}\) must satisfy the relation
\[(T\varphi,\eta)_{\ell^{2}(\hbar\mathbb{Z}^{n})}=(\varphi,T^{*}\eta)_{L^{2}( \mathbb{T}^{n})}\,,\quad\varphi,\eta\in\ell^{2}(\hbar\mathbb{Z}^{n})\,. \tag{5.6}\]
Now, expanding the left-hand side of (5.6) we get
\[(T\varphi,\eta)_{\ell^{2}(\hbar\mathbb{Z}^{n})}\quad=\quad\sum_{k\in\hbar \mathbb{Z}^{n}}T\varphi(k)\overline{\eta(k)}\quad=\quad\sum_{k\in\hbar \mathbb{Z}^{n}}\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}k\cdot\theta}\sigma _{\hbar}(k,\theta)\varphi(\theta)\overline{\eta(k)}\mathrm{d}\theta\,,\]
or equivalently
\[(T\varphi,\eta)_{\ell^{2}(\hbar\mathbb{Z}^{n})}=\int_{\mathbb{T}^{n}}\varphi( \theta)\left(\sum_{k\in\hbar\mathbb{Z}^{n}}e^{2\pi\frac{i}{\hbar}k\cdot\theta} \sigma_{\hbar}(k,\theta)\overline{\eta(k)}\right)\mathrm{d}\theta\,.\]
The latter means that
\[\begin{split} T^{*}\eta(\theta)&=\sum_{k\in\hbar \mathbb{Z}^{n}}e^{-2\pi\frac{i}{\hbar}k\cdot\theta}\overline{\sigma_{\hbar}(k,\theta)}\eta(k)\\ &=\sum_{k\in\hbar\mathbb{Z}^{n}}e^{2\pi i\overline{k}\cdot\theta} \overline{\sigma_{\hbar}(-\hbar\overline{k},\theta)}\eta(-k)\quad(\text{where }k=\hbar \overline{k})\\ &=\sum_{k\in\hbar\mathbb{Z}^{n}}e^{2\pi i\overline{k}\cdot\theta} \tau_{\hbar}(\theta,\overline{k})\eta(-k)\\ &=\sum_{\overline{k}\in\mathbb{Z}^{n}}e^{2\pi i\overline{k}\cdot \theta}\tau_{\hbar}(\theta,\overline{k})\eta(-\hbar\overline{k})\,.\end{split} \tag{5.7}\]
Now since \(\mathcal{F}_{\hbar\mathbb{Z}^{n}}\) is a bijection, there exists \(v\) such that \(\mathcal{F}_{\hbar\mathbb{Z}^{n}}^{-1}v(k)=\eta(k)\). Using the relation (5.2) the latter implies that \(\mathcal{F}_{\mathbb{T}^{n}}v(-\overline{k})=\mathcal{F}_{\hbar\mathbb{Z}^{n}}^ {-1}v(k)\). Hence using the formula (5.1) and equalities (5.7) we can write
\[T^{*}\eta(\theta)=\sum_{k\in\hbar\mathbb{Z}^{n}}e^{2\pi i\overline{k}\cdot\theta}\tau_{\hbar}(\theta,\overline{k})\eta(-k)=\sum_{\overline{k}\in\mathbb{Z}^{n}}e^{2\pi i\overline{k}\cdot\theta}\tau_{\hbar}(\theta,\overline{k})\mathcal{F}_{\mathbb{T}^{n}}v(\overline{k})=\operatorname{Op}_{\mathbb{T}^{n}}(\tau_{\hbar})v(\theta)\,. \tag{5.8}\]
On the other hand, we have
\[\big(\mathrm{Op}_{\mathbb{T}^{n}}(\tau_{\hbar})(\mathcal{F}_{\hbar\mathbb{Z}^{n}}\eta)\big)(\theta) = \sum_{\overline{k}\in\mathbb{Z}^{n}}e^{2\pi i\overline{k}\cdot\theta}\tau_{\hbar}(\theta,\overline{k})\mathcal{F}_{\mathbb{T}^{n}}\mathcal{F}_{\hbar\mathbb{Z}^{n}}\eta(\overline{k}) \tag{5.9}\] \[= \sum_{\overline{k}\in\mathbb{Z}^{n}}e^{2\pi i\overline{k}\cdot\theta}\tau_{\hbar}(\theta,\overline{k})\mathcal{F}_{\hbar\mathbb{Z}^{n}}^{-1}\mathcal{F}_{\hbar\mathbb{Z}^{n}}\eta(-k)\] \[= \sum_{\overline{k}\in\mathbb{Z}^{n}}e^{2\pi i\overline{k}\cdot\theta}\tau_{\hbar}(\theta,\overline{k})\eta(-k)\] \[= \sum_{\overline{k}\in\mathbb{Z}^{n}}e^{2\pi i\overline{k}\cdot\theta}\tau_{\hbar}(\theta,\overline{k})\mathcal{F}_{\mathbb{T}^{n}}v(\overline{k})\] \[= \mathrm{Op}_{\mathbb{T}^{n}}(\tau_{\hbar})v(\theta)\,,\]
since by the above \(\mathcal{F}_{\mathbb{T}^{n}}v(\overline{k})=\eta(-k)\). Now a combination of (5.8) and (5.9) yields
\[T^{*}=\mathrm{Op}_{\mathbb{T}^{n}}(\tau_{h})\circ\mathcal{F}_{h\mathbb{Z}^{n}}. \tag{5.10}\]
Consequently, the unitarity of the Fourier transform implies that
\[T=\mathcal{F}_{h\mathbb{Z}^{n}}^{*}\circ\mathrm{Op}_{\mathbb{T}^{n}}(\tau_{h} )^{*}=\mathcal{F}_{h\mathbb{Z}^{n}}^{-1}\circ\mathrm{Op}_{\mathbb{T}^{n}}( \tau_{h})^{*}\,. \tag{5.11}\]
Thus, combining (5.11) with (5.5), we get
\[\mathrm{Op}_{h\mathbb{Z}^{n}}(\sigma_{h})=\mathcal{F}_{h\mathbb{Z}^{n}}^{-1} \circ\mathrm{Op}_{\mathbb{T}^{n}}(\tau_{h})^{*}\circ\mathcal{F}_{h\mathbb{Z}^ {n}}\,. \tag{5.12}\]
Formula (5.4) follows from (5.12) using similar arguments. The proof is now complete.
## 6. Applications
In this section we investigate conditions that guarantee the boundedness of semi-classical pseudo-differential operators on \(\ell^{2}(\hbar\mathbb{Z}^{n})\) and on weighted \(\ell^{p}(\hbar\mathbb{Z}^{n})\) spaces. Additionally, conditions for membership in the Schatten classes are studied, as well as a condition for the pseudo-differential operators to be Hilbert-Schmidt.
### Continuity of semi-classical pseudo-differential operators
In this subsection we show results on the boundedness of semi-classical pseudo-differential operators on different \(\ell^{p}(\hbar\mathbb{Z}^{n})\) spaces. In particular, Proposition 6.1 gives a necessary and sufficient condition on a semi-classical symbol \(\sigma_{h}\) for the corresponding pseudo-differential operator to be Hilbert-Schmidt. Curiously, it also gives a sufficient condition for the operator to be bounded from \(\ell^{p}(\hbar\mathbb{Z}^{n})\) to \(\ell^{q}(\hbar\mathbb{Z}^{n})\), where \((p,q)\) are conjugate exponents. In the special case where \(p=q=2\) the sufficient condition for the boundedness of \(\mathrm{Op}(\sigma_{h})\) becomes significantly more relaxed, in the sense that only finitely many derivatives of the symbol have to be bounded; see Theorem 6.2.
Before moving on to prove our main results, let us recall that a bounded operator \(T:H\to H\) acting on a Hilbert space \(H\) is called _Hilbert-Schmidt_, and we write \(T\in\mathscr{L}(H)\), if it has finite Hilbert-Schmidt norm, i.e., if
\[\|T\|_{\mathbf{hs}}^{2}:=\sum_{i\in\mathcal{I}}\|Te_{i}\|_{H}^{2}<\infty\,,\]
where \(\{e_{i}:i\in\mathcal{I}\}\) is an orthonormal basis of \(H\).
**Proposition 6.1**.: _The semi-classical pseudo-differential operator \(\operatorname{Op}_{h}(\sigma_{h}):\ell^{2}(\hbar\mathbb{Z}^{n})\to\ell^{2}(\hbar \mathbb{Z}^{n})\) is a Hilbert-Schmidt operator if and only if \(\sigma_{h}\in L^{2}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\). In this case, the Hilbert-Schmidt norm is given by_
\[\|\operatorname{Op}_{h}(\sigma_{h})\|_{\operatorname{HS}}=\|\sigma_{h}\|_{L^{2} (\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})}=\left(\sum_{k\in\hbar\mathbb{Z}^{n }}\int_{\mathbb{T}^{n}}|\sigma_{h}(k,\theta)|^{2}\mathrm{d}\theta\right)^{ \frac{1}{2}}. \tag{6.1}\]
_Furthermore, if \(\sigma_{h}\in L^{2}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\) then \(\operatorname{Op}(\sigma_{h}):\ell^{p}(\hbar\mathbb{Z}^{n})\to\ell^{q}(\hbar \mathbb{Z}^{n})\) is bounded for all \(1\leq p\leq 2\) and \(\frac{1}{p}+\frac{1}{q}=1\), and we get that_
\[\|\operatorname{Op}_{h}(\sigma_{h})\|_{\mathscr{L}(\ell^{p}(\hbar\mathbb{Z}^{ n})\to\ell^{q}(\hbar\mathbb{Z}^{n}))}\leq\|\sigma_{h}\|_{L^{2}(\hbar \mathbb{Z}^{n}\times\mathbb{T}^{n})}. \tag{6.2}\]
Proof of Proposition 6.1.: Our first claim on the Hilbert-Schmidt norm is evident if one takes into account the Plancherel formula in this setting. The proof of the boundedness result follows the lines of [1, Proposition 5.1] and will be omitted.
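As a quick illustration of (6.1) and (6.2) (an example added for orientation only), take the \(\theta\)-independent symbol \(\sigma_{\hbar}(k,\theta)=(1+|k|)^{-s}\) with \(s>n/2\). Then

\[\|\operatorname{Op}_{\hbar}(\sigma_{\hbar})\|_{\operatorname{HS}}^{2}=\sum_{k\in\hbar\mathbb{Z}^{n}}(1+|k|)^{-2s}=\sum_{\overline{k}\in\mathbb{Z}^{n}}(1+\hbar|\overline{k}|)^{-2s}<\infty\,,\]

since \(2s>n\). Note, however, that this quantity is finite only for each fixed \(\hbar>0\) and grows (roughly like \(\hbar^{-n}\)) as \(\hbar\to 0\), so the resulting bounds are not uniform in the semi-classical parameter.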
**Theorem 6.2**.: _Let \(\varkappa\in\mathbb{N}\) and \(\varkappa>n/2\). Assume that the symbol \(\sigma_{h}:\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}\to\mathbb{C}\) satisfies_
\[|D^{(\alpha)}_{h,\theta}\sigma_{h}(k,\theta)|\leq C,\quad\text{ for all }\ (k,\theta)\in\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}, \tag{6.3}\]
_for all \(|\alpha|\leq\varkappa\). Then the semi-classical pseudo-differential operator \(\operatorname{Op}(\sigma_{h})\) extends to a bounded operator on \(\ell^{2}(\hbar\mathbb{Z}^{n})\)._
Proof.: Let \(\sigma_{h}\) be as in the hypothesis. Then the symbol \(\tau_{h}\) related to \(\sigma_{h}\) as in Theorem 5.2 gives rise to a pseudo-differential operator \(\operatorname{Op}_{\mathbb{T}^{n}}(\tau_{h})\) which is bounded on \(L^{2}(\mathbb{T}^{n})\); see [16, Theorem 4.8.1]. On the other hand, taking into account that the operator \(\mathcal{F}_{\hbar\mathbb{Z}^{n}}\) is an isometry from \(\ell^{2}(\hbar\mathbb{Z}^{n})\) to \(L^{2}(\mathbb{T}^{n})\), and the relation (5.3) in Theorem 5.2, we see that \(\operatorname{Op}(\sigma_{h})\equiv\operatorname{Op}_{\hbar\mathbb{Z}^{n}}(\sigma_{h})\) is bounded on \(\ell^{2}(\hbar\mathbb{Z}^{n})\) if and only if \(\operatorname{Op}_{\mathbb{T}^{n}}(\tau_{h})\) is bounded on \(L^{2}(\mathbb{T}^{n})\). This completes the proof of Theorem 6.2.
### Compactness, Gohberg lemma, and Schatten-von Neumann classes
In this section we study the compactness of semi-classical pseudo-differential operators on \(\ell^{2}(\hbar\mathbb{Z}^{n})\), the distance between them and the space of compact operators on \(\ell^{2}(\hbar\mathbb{Z}^{n})\), and sufficient conditions for semi-classical pseudo-differential operators to belong to the Schatten classes; see Corollaries 6.4 and 6.5 and Theorem 6.6, respectively.
To ensure a self-contained presentation of our results, in the following remark we recall the necessary notions that are involved in the subsequent analysis.
**Remark 6.3**.: Let us recall some useful notions:
1. (Essential spectrum) Let \(T\) be a closed linear operator on a complex Hilbert space \(H\). The _essential spectrum_ of \(T\), usually denoted by \(\Sigma_{ess}(T)\) is the set of complex numbers \(\lambda\in\mathbb{C}\) such that \[T-\lambda I\] is not a Fredholm operator, where \(I\) is the identity operator.
2. (Schatten-von Neumann classes) Let \(T:H\to H\) be a compact (linear) operator, let \(|T|:=(T^{*}T)^{1/2}\) be the _absolute value of \(T\)_, and let \(s_{n}(T)\) be
the _singular values of \(T\)_, i.e., the eigenvalues of \(|T|\). We say that the operator \(T\) belongs to the _Schatten-von Neumann class of operators_\(S_{p}(H)\), where \(1\leq p<\infty\), if \[\|T\|_{S_{p}}:=\left(\sum_{k=1}^{\infty}(s_{k}(T))^{p}\right)^{\frac{1}{p}}< \infty\,.\] The space \(S_{p}\) is a Banach space if endowed with the natural norm \(\|\cdot\|_{S_{p}}\)
3. (Trace class operators) The Banach space \(S_{1}(H)\) is the space of _trace-class operators_, while for \(T\in S_{1}\) the quantity \[\operatorname{Tr}(T):=\sum_{n=1}^{\infty}(Te_{n},e_{n})\,,\] where \((e_{n})\) is an orthonormal basis in \(H\), is well-defined and shall be called the _trace \(\operatorname{Tr}(T)\) of \(T\)_.
In the sequel we denote by \(d\) the following quantity:
\[d:=\limsup_{|k|\to\infty}\sup_{\theta\in\mathbb{T}^{n}}|\sigma_{h}(k,\theta)|\,, \tag{6.4}\]
where \((k,\theta)\in\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}\). Let us now present the main results of this subsection:
**Corollary 6.4**.: _Let \(\sigma_{h}\in S^{0}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\). Then the semi-classical pseudo-differential operator \(\operatorname{Op}_{\hbar\mathbb{Z}^{n}}(\sigma_{h})\) is compact on \(\ell^{2}(\hbar\mathbb{Z}^{n})\) if and only if \(d=0\), where \(d\) is as in (6.4). Moreover, we have_
\[\Sigma_{ess}(\operatorname{Op}_{\hbar\mathbb{Z}^{n}}(\sigma_{h}))\subset\{ \lambda\in\mathbb{C}:|\lambda|\leq d\}.\]
Proof of Corollary 6.4.: The main idea is a combination, on the one hand, of the fact that compactness, Fredholmness, and the index are invariant under conjugation by unitary operators, together with the relation (5.3) in Theorem 5.2, and, on the other hand, of [11, Theorem 3.2] on toroidal pseudo-differential operators. The rest of the argument follows the lines of [1, Corollary 5.3] and is omitted.
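For instance (an illustration of the statement only), if a symbol \(\sigma_{\hbar}\in S^{0}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\) additionally satisfies \(|\sigma_{\hbar}(k,\theta)|\leq C(1+|k|)^{-s}\) for some \(s>0\), then \(d=0\) in (6.4), so that Corollary 6.4 yields the compactness of \(\operatorname{Op}_{\hbar\mathbb{Z}^{n}}(\sigma_{\hbar})\) on \(\ell^{2}(\hbar\mathbb{Z}^{n})\) together with

\[\Sigma_{ess}(\operatorname{Op}_{\hbar\mathbb{Z}^{n}}(\sigma_{\hbar}))\subset\{0\}\,.\]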
The next result gives a lower bound for the distance between a given operator and the space of compact operators on \(\ell^{2}(\hbar\mathbb{Z}^{n})\). Statements of this type were first shown by Gohberg in [10], and now bear his name. We refer to [10, 12] for such a result on the circle \(\mathbb{T}^{1}\), and to [11] on general compact Lie groups. The analogous result in the lattice case was given in [1].
**Corollary 6.5** (Gohberg lemma).: _Let \(\sigma_{h}:\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}\to\mathbb{C}\) be such that_
\[|\sigma_{h}(k,\theta)|\leq C,\quad|\nabla_{h,\theta}\sigma_{h}(k,\theta)|\leq C,\quad|\Delta_{h,q}\sigma_{h}(k,\theta)|\leq C(1+|k|)^{-\rho}, \tag{6.5}\]
_for some \(\rho>0\) and for all \(q\in C^{\infty}(\mathbb{T}^{n})\) with \(q(0)=0\) and all \((k,\theta)\in\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}\). Then for all compact operators \(K\) on \(\ell^{2}(\hbar\mathbb{Z}^{n})\) we have_
\[\|\operatorname{Op}_{\hbar\mathbb{Z}^{n}}(\sigma_{h})-K\|_{\mathcal{L}(\ell^{ 2}(\hbar\mathbb{Z}^{n}))}\geq d.\]
_In particular, this conclusion holds for any \(\sigma_{h}\in S^{0}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\)._
Proof.: The proof is a consequence of (5.3) in Theorem 5.2 and [11, Theorem 3.]. See also [1, Corollary 5.4].
The next theorem is an application of the developed calculus that presents the conditions ensuring that the corresponding operators belong to Schatten classes.
**Theorem 6.6**.: _Let \(0<p\leq 2\). We have the following implication_
\[\sum_{k\in\hbar\mathbb{Z}^{n}}\|\sigma_{\hbar}(k,\cdot)\|_{L^{2}(\mathbb{T}^{n} )}^{p}<\infty\Longrightarrow\text{Op}_{\hbar\mathbb{Z}^{n}}(\sigma_{\hbar}) \quad\text{is $p$-Schatten operator on}\quad\ell^{2}(\hbar\mathbb{Z}^{n})\,. \tag{6.6}\]
_In particular, if the left-hand side of (6.6) holds true for \(p=1\), then the operator \(\text{Op}_{\hbar\mathbb{Z}^{n}}(\sigma_{\hbar})\) is trace class, and its trace can be calculated as follows:_
\[\operatorname{Tr}(\text{Op}_{\hbar\mathbb{Z}^{n}}(\sigma_{\hbar}))=\sum_{k\in\hbar\mathbb{Z}^{n}}\int_{\mathbb{T}^{n}}\sigma_{\hbar}(k,\theta)\,d\theta=\sum_{j\in\mathcal{J}}\lambda_{j}\,, \tag{6.7}\]
_where the set \(\{\lambda_{j},j\in\mathcal{J}\}\) is the set of eigenvalues of \(\text{Op}_{\hbar\mathbb{Z}^{n}}(\sigma_{\hbar})\) (multiplicities counted)._
Proof of Theorem 6.6.: For the case \(p\in(0,1]\) the notions of \(p\)-nuclearity5 and of \(p\)-Schatten classes coincide; see [11] and [10, Section 6.3.2.11]. Taking this into account, the result for the case \(p\in(0,1]\) follows as a consequence of (5.3) and [1, Corollary 3.12]. For the trace class operators, using the expression (3.11) for the kernel, one can prove the first equality in (6.7). The second equality in (6.7) is the well-known Lidskii formula [10]. For the case where \(p\in[1,2]\) the result follows by interpolation using (6.1). This completes the proof.
Footnote 5: The notion of \(p\)-nuclearity was initiated by Grothendieck in [11]. We refer to the work [1] for a detailed discussion of it.
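To illustrate (6.6) and (6.7) with a simple (and admittedly artificial) example, take the \(\theta\)-independent symbol \(\sigma_{\hbar}(k,\theta)=e^{-|k|}\). Then \(\sum_{k\in\hbar\mathbb{Z}^{n}}\|\sigma_{\hbar}(k,\cdot)\|_{L^{2}(\mathbb{T}^{n})}=\sum_{k\in\hbar\mathbb{Z}^{n}}e^{-|k|}<\infty\) for every fixed \(\hbar>0\), so the left-hand side of (6.6) holds with \(p=1\), the operator \(\operatorname{Op}_{\hbar\mathbb{Z}^{n}}(\sigma_{\hbar})\) is trace class, and (6.7) gives

\[\operatorname{Tr}(\operatorname{Op}_{\hbar\mathbb{Z}^{n}}(\sigma_{\hbar}))=\sum_{k\in\hbar\mathbb{Z}^{n}}\int_{\mathbb{T}^{n}}e^{-|k|}\,\mathrm{d}\theta=\sum_{\overline{k}\in\mathbb{Z}^{n}}e^{-\hbar|\overline{k}|}\,.\]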
**Remark 6.7**.: In [16] the authors proved a result on the analysis of the Schatten classes of operators on locally compact separable unimodular groups of Type I, which in our case reads as follows: for \(p\in[2,\infty)\) and for \(p^{\prime}\) the conjugate exponent of \(p\) (i.e. \(\frac{1}{p}+\frac{1}{p^{\prime}}=1\)) we have
\[\sum_{k\in\hbar\mathbb{Z}^{n}}\|\sigma_{\hbar}(k,\cdot)\|_{L^{p^{\prime}}( \mathbb{T}^{n})}^{p^{\prime}}<\infty\Longrightarrow\text{Op}_{\hbar\mathbb{Z} ^{n}}(\sigma_{\hbar})\quad\text{is $p$-Schatten operator on}\quad\ell^{2}(\hbar\mathbb{Z}^{n})\,.\]
### Weighted \(\ell^{2}\)-boundedness
In this subsection, we present a result on the boundedness of semi-classical pseudo-differential operators on weighted \(\ell^{2}(\hbar\mathbb{Z}^{n})\) spaces defined below:
**Definition 6.8**.: (Weighted \(\ell^{p}_{s}(\hbar\mathbb{Z}^{n})\) space) For \(s\in\mathbb{R}\) and \(1\leq p<\infty\) we define the _weighted space_\(\ell^{p}_{s}(\hbar\mathbb{Z}^{n})\) as the space of all \(f:\hbar\mathbb{Z}^{n}\to\mathbb{C}\) such that
\[\|f\|_{\ell^{p}_{s}(\hbar\mathbb{Z}^{n})}:=\left(\sum_{k\in\hbar\mathbb{Z}^{n} }(1+|k|)^{sp}|f(k)|^{p}\right)^{1/p}<\infty. \tag{6.8}\]
It is easy to check that the symbol \(a_{\hbar,s}(k)=(1+|k|)^{s}\) belongs to the semi-classical symbol class \(S^{s}_{1,0}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\), and also that
\[f\in\ell^{p}_{s}(\hbar\mathbb{Z}^{n})\quad\text{if and only if}\quad\text{Op}(a_{\hbar,s})f\in\ell^{p}(\hbar\mathbb{Z}^{n})\,.\]
The latter observation gives rise to the following identification:
\[\ell^{p}_{s}(\hbar\mathbb{Z}^{n})=\text{Op}(a_{\hbar,-s})(\ell^{p}(\hbar\mathbb{Z}^{n})). \tag{6.9}\]
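For instance (a simple sanity check for fixed \(\hbar>0\) and \(n=1\)), the function \(f(k)=(1+|k|)^{-1}\) belongs to \(\ell^{2}_{s}(\hbar\mathbb{Z})\) precisely when

\[\sum_{k\in\hbar\mathbb{Z}}(1+|k|)^{2s}(1+|k|)^{-2}=\sum_{k\in\hbar\mathbb{Z}}(1+|k|)^{2(s-1)}<\infty\,,\]

that is, when \(s<\tfrac{1}{2}\).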
**Corollary 6.9**.: _Let \(r\in\mathbb{R}\) and let \(\sigma_{\hbar}\in S^{r}_{0,0}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\). Then the semi-classical pseudo-differential operator \(\operatorname{Op}(\sigma_{\hbar})\) is bounded from \(\ell_{s}^{2}(\hbar\mathbb{Z}^{n})\) to \(\ell_{s-r}^{2}(\hbar\mathbb{Z}^{n})\) for all \(s\in\mathbb{R}\)._
Proof.: If \(T=\operatorname{Op}(\sigma_{\hbar})\in\operatorname{Op}(S^{r}_{0,0}(\hbar \mathbb{Z}^{n}\times\mathbb{T}^{n}))\), then by using the composition formula as in Theorem 4.1 we have
\[P=\operatorname{Op}(a_{\hbar,s-r})\circ T\circ\operatorname{Op}(a_{\hbar,-s}) \in\operatorname{Op}(S^{0}_{0,0}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}))^{6}\]
and the operator \(P\) is by Theorem 6.2 bounded on \(\ell^{2}(\hbar\mathbb{Z}^{n})\). We can write
\[Tf=\operatorname{Op}(a_{\hbar,r-s})\circ P\circ\operatorname{Op}(a_{\hbar,s}) f\,,\]
where \(P\) is as above. Now, if \(f\in\ell_{s}^{2}(\hbar\mathbb{Z}^{n})\), then, since \(\operatorname{Op}(a_{\hbar,s})f,(P\circ\operatorname{Op}(a_{\hbar,s}))f\in\ell^{2}(\hbar\mathbb{Z}^{n})\), we also get that \(Tf\in\operatorname{Op}(a_{\hbar,r-s})(\ell^{2}(\hbar\mathbb{Z}^{n}))=\ell^{2}_{s-r}(\hbar\mathbb{Z}^{n})\). The proof is now complete in view of the identification (6.9).
### Garding and sharp Garding inequalities on \(\hbar\mathbb{Z}^{n}\)
Let us recall the following result on the torus \(\mathbb{T}^{n}\) as in [10, Corollary 6.2]:
**Corollary 6.10**.: _(Garding inequality on \(\mathbb{T}^{n}\)) Let \(0\leq\delta<\rho\leq 1\) and \(m>0\). Let \(B\in\operatorname{Op}_{\mathbb{T}^{n}}S^{2m}_{\rho,\delta}(\mathbb{T}^{n} \times\mathbb{Z}^{n})\) be an elliptic toroidal pseudo-differential operator such that \(\sigma_{B}(\theta,\overline{k})\geq 0\), for all \(\theta\in\mathbb{T}^{n}\) and co-finitely many \(\overline{k}\in\mathbb{Z}^{n}\). Then there exist \(C_{0},C_{1}>0\) such that for all \(f\in H^{m}(\mathbb{T}^{n})\) we have_
\[\operatorname{Re}(Bf,f)_{L^{2}(\mathbb{T}^{n})}\geq C_{0}||f||^{2}_{H^{m}( \mathbb{T}^{n})}-C_{1}||f||^{2}_{L^{2}(\mathbb{T}^{n})}.\]
Let us now show the semi-classical analogue of Garding inequality on \(\hbar\mathbb{Z}^{n}\). As there is no regularity concept on the lattice, the statement is given in terms of weighted \(\ell^{2}(\hbar\mathbb{Z}^{n})\)-spaces.
**Theorem 6.11** (Garding inequality on \(\hbar\mathbb{Z}^{n}\)).: _Let \(0\leq\delta<\rho\leq 1\) and \(m>0\). Let \(P\in\operatorname{Op}_{\hbar\mathbb{Z}^{n}}S^{2m}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\) be an elliptic semi-classical pseudo-differential operator such that \(\sigma_{\hbar,P}(k,\theta)\geq 0\) for all \(\theta\) and for co-finitely many \(k\in\hbar\mathbb{Z}^{n}\). Then there exist \(C_{0},C_{1}>0\) such that for all \(g\in\ell_{m}^{2}(\hbar\mathbb{Z}^{n})\) we have_
\[\operatorname{Re}(Pg,g)_{\ell^{2}(\hbar\mathbb{Z}^{n})}\geq C_{0}||g||^{2}_{ \ell_{m}^{2}(\hbar\mathbb{Z}^{n})}-C_{1}||g||^{2}_{\ell^{2}(\hbar\mathbb{Z}^{n })}. \tag{6.10}\]
Proof.: Let us define \(\tau_{\hbar}(\theta,\overline{k})=\overline{\sigma_{\hbar,P}(-k,\theta)}\), where \(k=\hbar\overline{k}\). Then, using Theorem 5.2 we have
\[P=\operatorname{Op}_{\hbar\mathbb{Z}^{n}}(\sigma_{\hbar,P})=\mathcal{F}^{-1}_{ \hbar\mathbb{Z}^{n}}\circ\operatorname{Op}_{\mathbb{T}^{n}}(\tau_{\hbar})^{* }\circ\mathcal{F}_{\hbar\mathbb{Z}^{n}}\,. \tag{6.11}\]
The latter implies that if \(\sigma_{\hbar,P}\geq 0\) is elliptic on \(\hbar\mathbb{Z}^{n}\), then also \(\tau_{\hbar}\geq 0\) is elliptic on \(\mathbb{T}^{n}\). Hence, using the Garding inequality on \(\mathbb{T}^{n}\), see Corollary 6.10, we get that for all \(f\in H^{m}(\mathbb{T}^{n})\)
\[\operatorname{Re}(\operatorname{Op}_{\mathbb{T}^{n}}(\tau_{\hbar} )^{*}f,f)_{L^{2}(\mathbb{T}^{n})} = \operatorname{Re}(\operatorname{Op}_{\mathbb{T}^{n}}(\tau_{\hbar} )f,f)_{L^{2}(\mathbb{T}^{n})} \tag{6.12}\] \[\geq C_{0}||f||^{2}_{H^{m}(\mathbb{T}^{n})}-C_{1}||f||^{2}_{L^{2}( \mathbb{T}^{n})}\] \[= C_{0}||g||^{2}_{\ell_{m}^{2}(\hbar\mathbb{Z}^{n})}-C_{1}||g||^{2}_ {\ell^{2}(\hbar\mathbb{Z}^{n})}\,,\]
where \(f\) is such that \(f=\mathcal{F}_{\hbar\mathbb{Z}^{n}}g\), so that
\[\|f\|_{H^{m}(\mathbb{T}^{n})}=\|g\|_{\ell^{2}_{m}(\hbar\mathbb{Z}^{n})}\qquad\text{and}\qquad\|f\|_{L^{2}(\mathbb{T}^{n})}=\|g\|_{\ell^{2}(\hbar\mathbb{Z}^{n})}.\]
Now, by (6.11) we can write
\[Pg=\mathcal{F}^{-1}_{\hbar\mathbb{Z}^{n}}\circ\operatorname{Op}_{\mathbb{T}^{ n}}(\tau_{h})^{*}\circ\mathcal{F}_{\hbar\mathbb{Z}^{n}}g=\mathcal{F}^{-1}_{ \hbar\mathbb{Z}^{n}}\circ\operatorname{Op}_{\mathbb{T}^{n}}(\tau_{h})^{*}f, \tag{6.13}\]
so that \(\mathcal{F}_{\hbar\mathbb{Z}^{n}}Pg=\operatorname{Op}_{\mathbb{T}^{n}}(\tau_{ h})^{*}f\). Thus, by (6.12) we have
\[\operatorname{Re}(\mathcal{F}_{\hbar\mathbb{Z}^{n}}Pg,\mathcal{F}_{\hbar\mathbb{Z}^{n}}g)_{L^{2}(\mathbb{T}^{n})}\geq C_{0}||g||^{2}_{\ell^{2}_{m}(\hbar\mathbb{Z}^{n})}-C_{1}||g||^{2}_{\ell^{2}(\hbar\mathbb{Z}^{n})}\,,\]
where the last can be rewritten as
\[\operatorname{Re}(\mathcal{F}^{*}_{\hbar\mathbb{Z}^{n}}\mathcal{F}_{\hbar\mathbb{Z}^{n}}Pg,g)_{\ell^{2}(\hbar\mathbb{Z}^{n})}\geq C_{0}||g||^{2}_{\ell^{2}_{m}(\hbar\mathbb{Z}^{n})}-C_{1}||g||^{2}_{\ell^{2}(\hbar\mathbb{Z}^{n})}\,.\]
Since \(\mathcal{F}^{*}_{\hbar\mathbb{Z}^{n}}\mathcal{F}_{\hbar\mathbb{Z}^{n}}=Id\), we obtain
\[\operatorname{Re}(Pg,g)_{\ell^{2}(\hbar\mathbb{Z}^{n})}\geq C_{0}||g||^{2}_{ \ell^{2}_{m}(\hbar\mathbb{Z}^{n})}-C_{1}||g||^{2}_{\ell^{2}(\hbar\mathbb{Z}^{n })}\,.\]
This completes the proof of Theorem 6.11.
Next we show the sharp Garding inequality on \(\hbar\mathbb{Z}^{n}\). Before doing so, let us recall how the sharp Garding inequality on compact Lie groups, see [11, Theorem 2.1], reads in the case of the torus \(\mathbb{T}^{n}\).
**Theorem 6.12** (Sharp Garding inequality on \(\mathbb{T}^{n}\)).: _Let \(B\in\operatorname{Op}_{\mathbb{T}^{n}}S^{m}(\mathbb{T}^{n}\times\mathbb{Z}^{n})\) be a toroidal pseudo-differential operator with symbol \(\sigma_{B}(\theta,\overline{k})\geq 0\) for all \((\theta,\overline{k})\in\mathbb{T}^{n}\times\mathbb{Z}^{n}\). Then there exists \(C<\infty\) such that_
\[\operatorname{Re}(Bg,g)_{L^{2}(\mathbb{T}^{n})}\geq-C\|g\|^{2}_{H^{\frac{m-1}{2}}( \mathbb{T}^{n})},\]
_for all \(g\in H^{\frac{m-1}{2}}(\mathbb{T}^{n})\)._
Let us now prove the analogous result in the semi-classical setting \(\hbar\mathbb{Z}^{n}\).
**Theorem 6.13** (Sharp Garding inequality on \(\hbar\mathbb{Z}^{n}\)).: _Let \(P\in\operatorname{Op}_{\hbar\mathbb{Z}^{n}}S^{m}(\hbar\mathbb{Z}^{n}\times \mathbb{T}^{n})\) be a semi-classical pseudo-differential operator with symbol \(\sigma_{\hbar,P}(k,\theta)\geq 0\) for all \((k,\theta)\in\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}\). Then there exists \(C<\infty\) such that_
\[\operatorname{Re}(Pg,g)_{\ell^{2}(\hbar\mathbb{Z}^{n})}\geq-C\|g\|^{2}_{\ell^{2}_{ \frac{m-1}{2}}(\hbar\mathbb{Z}^{n})}\]
_for all \(g\in\ell^{2}_{\frac{m-1}{2}}(\hbar\mathbb{Z}^{n})\)._
Proof.: Let \(\tau_{h}(\theta,\overline{k})=\overline{\sigma_{\hbar,P}(-k,\theta)}\), where \(k=\hbar\overline{k}\). Using Theorem 5.2 we can write
\[P=\operatorname{Op}_{\hbar\mathbb{Z}^{n}}(\sigma_{\hbar,P})=\mathcal{F}^{-1}_ {\hbar\mathbb{Z}^{n}}\circ\operatorname{Op}_{\mathbb{T}^{n}}(\tau_{h})^{*} \circ\mathcal{F}_{\hbar\mathbb{Z}^{n}}. \tag{6.14}\]
Following the lines of Theorem 6.11 and using the Sharp Garding inequality on \(\mathbb{T}^{n}\) we get
\[\operatorname{Re}(Pg,g)_{\ell^{2}(\hbar\mathbb{Z}^{n})} = \operatorname{Re}(\mathcal{F}^{-1}_{\hbar\mathbb{Z}^{n}}\! \operatorname{Op}_{\mathbb{T}^{n}}(\tau_{h})^{*}f,\mathcal{F}^{-1}_{\hbar \mathbb{Z}^{n}}f)_{\ell^{2}(\hbar\mathbb{Z}^{n})}\] \[= \operatorname{Re}(\operatorname{Op}_{\mathbb{T}^{n}}(\tau_{h})^{* }f,f)_{L^{2}(\mathbb{T}^{n})}\] \[\geq -C\|f\|^{2}_{H^{\frac{m-1}{2}}(\mathbb{T}^{n})}\] \[= -C\|g\|^{2}_{\ell^{2}_{\frac{m-1}{2}}(\hbar\mathbb{Z}^{n})}.\]
The proof of Theorem 6.13 is now complete.
### Existence and uniqueness of the solutions to parabolic equations on \(\hbar\mathbb{Z}^{n}\)
In this subsection we will apply the Garding inequalities in our semi-classical setting to prove the well-posedness of the classical parabolic equation
\[\begin{cases}\frac{\partial w}{\partial t}-Dw&=g,\qquad t\in[0,T],\quad T>0,\\ w(0)&=w_{0}\,,\end{cases} \tag{6.15}\]
where in this case the classical differential operator is replaced by a semi-classical pseudo-differential operator \(D\) with symbol in the class \(S^{m}_{1,0}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n})\).
**Theorem 6.14**.: _Let \(r>0\) and \(D\in\operatorname{Op}_{\hbar\mathbb{Z}^{n}}S^{r}_{1,0}(\hbar\mathbb{Z}^{n} \times\mathbb{T}^{n})\) be a semi-classical pseudo-differential operator. Assume also that there exist \(C_{0}>0\) and \(R>0\) such that for all \(\theta\in\mathbb{T}^{n}\), we have_
\[-\sigma_{\hbar,D}(k,\theta)\geq C_{0}|k|^{r}\qquad\text{for }|k|\geq R. \tag{6.16}\]
_If for \(w_{0}\) and \(g\) as in (6.15), we have \(w_{0}\in\ell^{2}(\hbar\mathbb{Z}^{n})\) and \(g\in L^{1}([0,T],\ell^{2}(\hbar\mathbb{Z}^{n}))\), then the equation (6.15) has a unique solution \(w\in C([0,T],\ell^{2}(\hbar\mathbb{Z}^{n}))\) that satisfies the estimate_
\[\|w(t)\|^{2}_{\ell^{2}(\hbar\mathbb{Z}^{n})}\leq C\Big{(}\|w_{0}\|^{2}_{\ell^{ 2}(\hbar\mathbb{Z}^{n})}+\int_{0}^{t}\|g(s)\|^{2}_{\ell^{2}(\hbar\mathbb{Z}^{ n})}ds\Big{)}\,, \tag{6.17}\]
_for some \(C>0\) and for all \(t\in[0,T]\)._
Proof.: Let \(w\) be the solution to the equation (6.15). If \(\sigma^{*}_{\hbar,D}(k,\theta)\) stands for the symbol of the adjoint operator \(D^{*}\), then by condition (6.16) there exists \(C^{\prime}_{0}>0\) such that
\[-(\sigma_{\hbar,D}+\sigma^{*}_{\hbar,D})(k,\theta)\geq C^{\prime}_{0}|k|^{r }\qquad\text{for }|k|\geq R.\]
Then, using the Garding inequality as in Theorem 6.11 and Theorem 4.2 on the adjoint operators, we get
\[-\Big{(}(D+D^{*})w,w\Big{)}_{\ell^{2}(\hbar\mathbb{Z}^{n})}\geq C_{1}\|w\|^{2 }_{\ell^{2}_{\frac{r}{2}}(\hbar\mathbb{Z}^{n})}-C_{2}\|w\|^{2}_{\ell^{2}( \hbar\mathbb{Z}^{n})}. \tag{6.18}\]
On the other hand we have
\[\frac{\partial}{\partial t}\|w\|^{2}_{\ell^{2}(\hbar\mathbb{Z}^{ n})} = \frac{\partial}{\partial t}\bigg{(}w(t),w(t)\bigg{)}_{\ell^{2}( \hbar\mathbb{Z}^{n})}=\bigg{(}\frac{\partial w}{\partial t},w\bigg{)}_{\ell^ {2}(\hbar\mathbb{Z}^{n})}+\bigg{(}w,\frac{\partial w}{\partial t}\bigg{)}_{ \ell^{2}(\hbar\mathbb{Z}^{n})} \tag{6.19}\] \[= \bigg{(}Dw+g,w\bigg{)}_{\ell^{2}(\hbar\mathbb{Z}^{n})}+\bigg{(} w,Dw+g\bigg{)}_{\ell^{2}(\hbar\mathbb{Z}^{n})}\] \[= \bigg{(}(D+D^{*})w,w\bigg{)}_{\ell^{2}(\hbar\mathbb{Z}^{n})}+2 \mathrm{Re}(w,g)_{\ell^{2}(\hbar\mathbb{Z}^{n})}.\]
Hence a combination of (6.18) together with (6.19) gives
\[\frac{\partial}{\partial t}\|w\|^{2}_{\ell^{2}(\hbar\mathbb{Z}^{ n})} \leq -C_{1}\|w(t)\|^{2}_{\ell^{2}_{\frac{r}{2}}(\hbar\mathbb{Z}^{n})}+C_ {2}\|w(t)\|^{2}_{\ell^{2}(\hbar\mathbb{Z}^{n})}+\|w(t)\|^{2}_{\ell^{2}(\hbar \mathbb{Z}^{n})}+||g||^{2}_{\ell^{2}(\hbar\mathbb{Z}^{n})}\] \[\leq (C_{2}+1)\|w(t)\|^{2}_{\ell^{2}(\hbar\mathbb{Z}^{n})}+\|g\|^{2}_{ \ell^{2}(\hbar\mathbb{Z}^{n})}\,.\]
An application of Gronwall's lemma to the latter gives
\[\|w(t)\|^{2}_{\ell^{2}(\hbar\mathbb{Z}^{n})}\leq C\Big{(}\|w_{0}\|^{2 }_{\ell^{2}(\hbar\mathbb{Z}^{n})}+\int_{0}^{t}\|g(s)\|^{2}_{\ell^{2}(\hbar \mathbb{Z}^{n})}ds\Big{)}\,,\]
and we have proved (6.17).
The existence of a solution \(w\in C([0,T],\ell^{2}(\hbar\mathbb{Z}^{n}))\) to the equation (6.15) follows by a modification of the standard Picard theorem.
To prove uniqueness, let \(w,v\) be two solutions of (6.15). Then, setting \(u:=w-v\), we have
\[\begin{cases}\frac{\partial u}{\partial t}-Du&=0,\;t\in[0,T],\\ u(0)&=0.\end{cases}\]
Now, the estimate (6.17) implies that \(\|u(t)\|_{\ell^{2}(\hbar\mathbb{Z}^{n})}=0\), which in turn gives that \(w(t)=v(t)\) for all \(t\in[0,T]\), completing the proof.
### Boundedness and compactness on \(\ell^{p}(\hbar\mathbb{Z}^{n})\)
In the result that follows we show the \(\ell^{p}(\hbar\mathbb{Z}^{n})\)-boundedness of a semi-classical pseudo-differential operator. Here the bound on the operator norm depends on a bound on the operator symbol, which does not necessarily need to be regular or obey a decay condition. An analogous result for pseudo-differential operators on the lattice \(\mathbb{Z}^{n}\) is established in [1, Proposition 5.12], while for the special case where \(n=1\) in [10].
**Proposition 6.15**.: _Let \(1\leq p<\infty.\) Let also \(\sigma_{\hbar}:\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}\to\mathbb{C}\) be a measurable function such that_
\[|(\mathcal{F}_{\mathbb{T}^{n}}\sigma_{\hbar})(k,m)|\leq C|\lambda(m)|,\quad \text{ for all }\;k,m\in\hbar\mathbb{Z}^{n},\]
_where \(C>0\) is a positive constant, \(\lambda\) is some function on \(\hbar\mathbb{Z}^{n}\) such that \(\lambda\in\ell^{1}(\hbar\mathbb{Z}^{n})\) and \(\mathcal{F}_{\mathbb{T}^{n}}\sigma_{\hbar}\) is the Fourier transform of \(\sigma_{\hbar}\) in the second variable. Then, \(\operatorname{Op}_{\hbar\mathbb{Z}^{n}}(\sigma_{\hbar}):\ell^{p}(\hbar \mathbb{Z}^{n})\to\ell^{p}(\hbar\mathbb{Z}^{n})\) is a bounded linear operator and its norm is bounded from above. In particular, we have_
\[\|\operatorname{Op}(\sigma_{\hbar})\|_{\mathscr{L}(\ell^{p}(\hbar\mathbb{Z}^ {n}))}\leq C\|\lambda\|_{\ell^{1}(\hbar\mathbb{Z}^{n})}.\]
Proof of Proposition 6.15.: For \(g\in\ell^{1}(\hbar\mathbb{Z}^{n})\), and for \(k,m\in\hbar\mathbb{Z}^{n}\) we can write
\[\operatorname{Op}_{\hbar\mathbb{Z}^{n}}(\sigma_{\hbar})g(k) = \sum_{m\in\hbar\mathbb{Z}^{n}}g(m)\int_{\mathbb{T}^{n}}e^{-2\pi \frac{i}{\hbar}(m-k)\cdot\theta}\sigma_{\hbar}(k,\theta)\mathrm{d}\theta\] \[= \sum_{m\in\hbar\mathbb{Z}^{n}}g(m)(\mathcal{F}_{\mathbb{T}^{n}} \sigma_{\hbar})(k,m-k)\] \[= \sum_{m\in\hbar\mathbb{Z}^{n}}g(m)(\mathcal{F}_{\mathbb{T}^{n}} \sigma_{\hbar})^{\sim}(k,k-m)\] \[= ((\mathcal{F}_{\mathbb{T}^{n}}\sigma_{\hbar})^{\sim}(k,\cdot)*g)( k)\,,\]
where we have defined
\[(\mathcal{F}_{\mathbb{T}^{n}}\sigma_{\hbar})^{\sim}(k,m):=(\mathcal{F}_{ \mathbb{T}^{n}}\sigma_{\hbar})(k,-m).\]
From the above we can estimate
\[\|\operatorname{Op}_{\hbar\mathbb{Z}^{n}}(\sigma_{\hbar})g\|_{\ell^{p}( \hbar\mathbb{Z}^{n})}^{p}=\sum_{k\in\hbar\mathbb{Z}^{n}}|((\mathcal{F}_{ \mathbb{T}^{n}}\sigma_{\hbar})^{\sim}(k,\cdot)*g)(k)|^{p}\leq\sum_{k\in\hbar \mathbb{Z}^{n}}((|(\mathcal{F}_{\mathbb{T}^{n}}\sigma_{\hbar})^{\sim}(k,\cdot) |*|g|)(k))^{p}. \tag{6.20}\]
Taking into account the assumption on \(\sigma_{\hbar}\), an application of Young's inequality for convolution yields
\[\sum_{k\in\hbar\mathbb{Z}^{n}}((|(\mathcal{F}_{\mathbb{T}^{n}}\sigma_{\hbar})^{ \sim}(k,\cdot)|\ast|g|)(k))^{p}\leq C^{p}\sum_{k\in\hbar\mathbb{Z}^{n}}\Big{(}( |\lambda|\!\ast\!|g|)(k)\Big{)}^{p}\leq C^{p}\|\lambda\|_{\ell^{1}(\hbar\mathbb{ Z}^{n})}^{p}\|g\|_{\ell^{p}(\hbar\mathbb{Z}^{n})}^{p}. \tag{6.21}\]
The latter combined with the density of \(\ell^{1}(\hbar\mathbb{Z}^{n})\) in \(\ell^{p}(\hbar\mathbb{Z}^{n})\), where \(1\leq p<\infty\), completes the proof.
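The key convolution estimate (6.21) is easy to test numerically. The following minimal Python sketch (illustrative only, not part of the paper; the sequence lengths and the exponent \(p\) are arbitrary choices) verifies Young's inequality \(\|\,|\lambda|\ast|g|\,\|_{\ell^{p}}\leq\|\lambda\|_{\ell^{1}}\|g\|_{\ell^{p}}\) for finitely supported sequences on a one-dimensional lattice.

```python
import numpy as np

# Numerical check of Young's inequality || |lambda| * |g| ||_p <= ||lambda||_1 ||g||_p
# for finitely supported sequences, as used in (6.21).
rng = np.random.default_rng(1)
lam = rng.normal(size=7)       # plays the role of lambda
g = rng.normal(size=50)        # plays the role of g
p = 3.0

conv = np.convolve(np.abs(lam), np.abs(g))            # (|lambda| * |g|)(k)
lhs = (conv ** p).sum() ** (1 / p)                    # || |lambda| * |g| ||_p
rhs = np.abs(lam).sum() * (np.abs(g) ** p).sum() ** (1 / p)
print(lhs <= rhs + 1e-12)                             # True
```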
In the next result we strengthen the assumption on the symbol \(\sigma_{\hbar}\) to guarantee that the corresponding semi-classical pseudo-difference operator \(\operatorname{Op}_{\hbar\mathbb{Z}^{n}}(\sigma_{\hbar}):\ell^{p}(\hbar \mathbb{Z}^{n})\to\ell^{p}(\hbar\mathbb{Z}^{n})\) is not only bounded but also compact.
**Theorem 6.16**.: _Let \(\sigma_{\hbar}\) and \(\lambda\) be as in the hypothesis of Proposition 6.15. Let also \(\omega\) be a positive function on \(\hbar\mathbb{Z}^{n}\). Suppose also that \(\sigma_{\hbar}\) satisfies_
\[|(\mathcal{F}_{\mathbb{T}^{n}}\sigma_{\hbar})(k,m)|\leq\omega(k)|\lambda(m)|, \quad\text{ for all }\ m,k\in\hbar\mathbb{Z}^{n},\]
_where_
\[\lim_{|k|\to\infty}\omega(k)=0.\]
_Then the pseudo-difference operator \(\operatorname{Op}_{\hbar\mathbb{Z}^{n}}(\sigma_{\hbar}):\ell^{p}(\hbar \mathbb{Z}^{n})\to\ell^{p}(\hbar\mathbb{Z}^{n})\) is a compact operator for all \(1\leq p<\infty\)._
Proof.: We will show that \(\operatorname{Op}_{\hbar\mathbb{Z}^{n}}(\sigma_{\hbar})\) is the limit (in the operator norm sense) of a sequence of compact operators \(\operatorname{Op}_{\hbar\mathbb{Z}^{n}}(\sigma_{\hbar,n})\) on \(\ell^{p}(\hbar\mathbb{Z}^{n})\). To this end, we define the sequence of the corresponding symbols \(\sigma_{\hbar,n}\):
\[\sigma_{\hbar,n}(k,\theta):=\begin{cases}\sigma_{\hbar}(k,\theta),&|k|\leq n, \\ \qquad 0,&|k|>n.\end{cases}\]
For \(g\in\ell^{1}(\hbar\mathbb{Z}^{n})\) we have
\[\begin{split}\big{(}\operatorname{Op}_{\hbar\mathbb{Z}}( \sigma_{\hbar})-\operatorname{Op}_{\hbar\mathbb{Z}}(\sigma_{\hbar,n})\big{)} g(k)&=\int_{\mathbb{T}^{n}}e^{2\pi\frac{i}{\hbar}k\cdot\theta}(\sigma_{ \hbar}-\sigma_{\hbar,n})(k,\theta)\widehat{g}(\theta)\mathrm{d}\theta\\ &=\sum_{m\in\hbar\mathbb{Z}^{n}}g(m)\int_{\mathbb{T}^{n}}e^{-2\pi \frac{i}{\hbar}(m-k)\cdot\theta}(\sigma_{\hbar}-\sigma_{\hbar,n})(k,\theta) \mathrm{d}\theta\\ &=\sum_{m\in\hbar\mathbb{Z}^{n}}g(m)(\mathcal{F}_{\mathbb{T}^{n}}( \sigma_{\hbar}-\sigma_{\hbar,n}))(k,m-k).\end{split} \tag{6.22}\]
Then arguing as we did in Proposition 6.15 we get
\[\|\big{(}\operatorname{Op}_{\hbar\mathbb{Z}}(\sigma_{\hbar})- \operatorname{Op}_{\hbar\mathbb{Z}}(\sigma_{\hbar,n})\big{)}g\|_{\ell^{p}( \hbar\mathbb{Z}^{n})}^{p} \leq \sum_{k\in\hbar\mathbb{Z}^{n}}\Bigg{(}\Big{(}\big{|}\big{(} \mathcal{F}_{\mathbb{T}^{n}}(\sigma_{\hbar}-\sigma_{\hbar,n})\big{)}^{\sim}(k,\cdot)\big{|}\ast\big{|}g\big{|}\Big{)}(k)\Bigg{)}^{p}\] \[\leq \sum_{|k|>n}\Bigg{(}\Big{(}\big{|}\big{(}\mathcal{F}_{ \mathbb{T}^{n}}\sigma_{\hbar}\big{)}^{\sim}(k,\cdot)\big{|}\ast\big{|}g\big{|} \Big{)}(k)\Bigg{)}^{p}\] \[\leq \sum_{|k|>n}\Bigg{(}\Big{(}\varepsilon\big{|} \lambda\big{|}\ast\big{|}g\big{|}\Big{)}(k)\Bigg{)}^{p},\]
where for the estimate in the last line we have used the condition on the symbol \(\sigma_{\hbar}\) as in the hypothesis, together with the fact that for \(\omega\) as in the hypothesis and for every \(\varepsilon>0\) there exists \(n_{0}\) such that \(|\omega(k)|<\varepsilon\) for all \(|k|>n_{0}\); the last estimate therefore holds for all \(n\geq n_{0}\). Now, an application of Young's inequality to the latter gives
\[\|\big{(}\mathrm{Op}_{\hbar\mathbb{Z}}(\sigma_{\hbar})-\mathrm{Op}_{ \hbar\mathbb{Z}}(\sigma_{\hbar,n})\big{)}g\|_{\ell^{p}(\hbar\mathbb{Z}^{n})}^{p} \leq \sum_{|k|>n}\Bigg{(}\Big{(}\varepsilon\big{|}\lambda\big{|}* \big{|}g\big{|}\Big{)}(k)\Bigg{)}^{p}\] \[\leq \varepsilon^{p}\|\,|\lambda|*|g|\,\|_{\ell^{p}(\hbar\mathbb{Z}^{n})}^{p}\] \[\leq \varepsilon^{p}\|\lambda\|_{\ell^{1}(\hbar\mathbb{Z}^{n})}^{p}\|g \|_{\ell^{p}(\hbar\mathbb{Z}^{n})}^{p}.\]
Finally by the density of \(\ell^{1}(\hbar\mathbb{Z}^{n})\) in \(\ell^{p}(\hbar\mathbb{Z}^{n})\) we obtain
\[\|\mathrm{Op}_{\hbar}(\sigma_{h})-\mathrm{Op}_{\hbar}(\sigma_{h,n})\|_{ \mathscr{L}(\ell^{p}(\hbar\mathbb{Z}^{n}))}\leq\varepsilon\|\lambda\|_{\ell^{ 1}(\hbar\mathbb{Z}^{n})}\,,\]
for all \(n\geq n_{0}\), and since \(\varepsilon>0\) was arbitrary, the proof is complete.
## 7. Approximation of the classical Euclidean case
In this section we recover known results on pseudo-differential operators in the Euclidean setting by allowing \(\hbar\to 0\) in the semi-classical setting. Observe that whenever \(\hbar\to 0\), the semi-classical setting \(\hbar\mathbb{Z}^{n}\) "approximates" the Euclidean space \(\mathbb{R}^{n}\). We shall use the notation \(x,\xi\) for elements in \(\mathbb{R}^{n}\), while for elements in the semi-classical setting \(\hbar\mathbb{Z}^{n}\) and for the toroidal elements we keep the same notation.
To start our analysis, note that the definition of the difference operators \(\Delta_{h,j}\) as in (3.3), when applied to functions on \(\mathbb{R}^{n}\), can be regarded as follows: for fixed \(x\in\mathbb{R}^{n}\) we have
\[\Delta_{h,j}f(x):=\frac{f(x+e_{j}\hbar)-f(x)}{\hbar}\,,\]
where \(e_{j}=(0,\ldots,0,1,0,\ldots,0)\) is the vector with \(1\) at the \(j^{th}\) position. Then, for a function \(f:\mathbb{R}^{n}\to\mathbb{R}\), in the limiting case when \(\hbar\to 0\) the difference \(\Delta_{h,j}f(x)\) "approximates" the corresponding classical partial derivative \(\frac{\partial}{\partial x_{j}}f(x)\), or more generally we have the "approximation" for \(x\in\mathbb{R}^{n}\):
\[\Delta_{h}^{\alpha}f(x)=\Delta_{h,1}^{\alpha_{1}}\cdots\Delta_{h,n}^{\alpha_{ n}}f(x)\longrightarrow\frac{\partial^{\alpha_{1}}}{\partial{x_{1}}^{\alpha_{1}}} \cdots\frac{\partial^{\alpha_{n}}}{\partial{x_{n}}^{\alpha_{n}}}f(x)=\partial _{x}^{\alpha}f(x)\,,\quad\alpha\in\mathbb{N}^{n}\,. \tag{7.1}\]
We note that in the expression (7.1) we abuse notation to describe the aforesaid notion of "approximation".
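As a quick numerical illustration of the "approximation" (7.1), the following minimal Python sketch (illustrative only, not part of the paper; the test function, the evaluation point and the step sizes are arbitrary choices) shows the forward difference converging to the partial derivative as \(\hbar\to 0\).

```python
import numpy as np

# Forward difference (f(x + h e_j) - f(x)) / h on R^n, as in the definition above.
def forward_difference(f, x, j, h):
    e_j = np.zeros_like(x)
    e_j[j] = 1.0
    return (f(x + h * e_j) - f(x)) / h

f = lambda x: np.sin(x[0]) * np.cos(x[1])        # smooth test function on R^2
df_dx0 = lambda x: np.cos(x[0]) * np.cos(x[1])   # exact partial derivative in x_0

x = np.array([0.3, 1.2])
for h in [1.0, 0.1, 0.01, 0.001]:
    err = abs(forward_difference(f, x, 0, h) - df_dx0(x))
    print(f"h = {h:7.3f}   error = {err:.2e}")   # the error decreases roughly like h
```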
On the other hand, to ensure that the dual space of \(\hbar\mathbb{Z}^{n}\) "approximates" when \(\hbar\to 0\) the dual space of \(\mathbb{R}^{n}\) (which is \(\mathbb{R}^{n}\) itself) we need to make the following change of variable: for \(\theta\in\mathbb{T}^{n}\) we set \(\omega=\frac{1}{\hbar}\theta\in\frac{\mathbb{T}^{n}}{\hbar}:=\mathbb{T}^{n}_{h}\). It is then clear that in the limiting case the rescaled torus \(\mathbb{T}^{n}_{h}\) "approximates" the Euclidean space \(\mathbb{R}^{n}\). With this change
of variable, the partial-type derivatives \(D^{(\beta)}_{h,\theta}\) introduced in Definition 3.3 become:
\[D^{(\beta)}_{h,\theta} = D^{(\beta_{1})}_{h,\theta_{1}}\times\cdots\times D^{(\beta_{n})}_{h, \theta_{n}}\] \[= \hbar^{\beta_{1}}\prod_{\ell=0}^{\beta_{1}-1}\left(\frac{1}{2\pi i }\frac{\partial}{\partial\theta_{1}}-\ell\right)\times\cdots\times\hbar^{\beta _{n}}\prod_{\ell=0}^{\beta_{n}-1}\left(\frac{1}{2\pi i}\frac{\partial}{ \partial\theta_{n}}-\ell\right)\] \[= \prod_{\ell=0}^{\beta_{1}-1}\left(\frac{1}{2\pi i}\frac{\partial }{\partial\omega_{1}}-\hbar\ell\right)\times\cdots\times\prod_{\ell=0}^{ \beta_{n}-1}\left(\frac{1}{2\pi i}\frac{\partial}{\partial\omega_{n}}-\hbar\ell\right)\] \[=: d^{(\beta_{1})}_{h,\omega_{1}}\times\cdots\times d^{(\beta_{n})} _{h,\omega_{n}}=d^{(\beta)}_{h,\omega}\,,\]
for some \(\beta\in\mathbb{N}^{n}\), where \(\omega=(\omega_{1},\cdots,\omega_{n})\in\mathbb{T}^{n}_{h}\). Hence, when \(\hbar\to 0\), we have the following "approximation" for \(\xi\in\mathbb{R}^{n}\) and for a function \(f:\mathbb{R}^{n}\to\mathbb{R}\):
\[d^{(\beta)}_{h,\omega}f(x)\longrightarrow\left(\frac{1}{2\pi i}\right)^{| \beta|}\partial^{\beta}_{\xi}f(x)\,, \tag{7.2}\]
where \(|\beta|=\beta_{1}+\cdots+\beta_{n}\) stands for the length of \(\beta\in\mathbb{N}^{n}\).
Let us now discuss in which sense the semi-classical classes of symbols "approximate" in the limiting case the usual Euclidean Hormander classes of symbols introduced by Hormander in [10] and bearing his name. To begin with let us recap both definitions.
**Semi-classical classes of symbols:** Let \(\rho,\delta,\mu\in\mathbb{R}\), and let \(\sigma_{\hbar}:\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}\to\mathbb{C}\). We say that \(\sigma_{\hbar}\in S^{\mu}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{ n})\) if for all \(\alpha,\beta\in\mathbb{N}^{n}_{0}\) there exists \(C_{\alpha,\beta}\) such that
\[|D^{(\beta)}_{h,\theta}\Delta^{\alpha}_{h,k}\sigma_{\hbar}(k,\theta)|\leq C_{ \alpha,\beta}(1+|k|)^{\mu-\rho|\alpha|+\delta|\beta|}\,. \tag{7.3}\]
**Hormander classes of symbols:** Let \(\mu\in\mathbb{R}\), \(\delta<1\), \(0\leq\delta\leq\rho\leq 1\) and \(\sigma:\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{C}\). We say that \(\sigma\in S^{\mu}_{\rho,\delta}(\mathbb{R}^{n}\times\mathbb{R}^{n})\) if for all \(\alpha,\beta\in\mathbb{N}^{n}_{0}\) there exists \(C_{\alpha,\beta}\) such that
\[|(\partial^{\alpha}_{\xi}\partial^{\beta}_{x}\sigma)(x,\xi)|\leq C_{\alpha, \beta}(1+|\xi|)^{\mu-\rho|\alpha|+\delta|\beta|}\,. \tag{7.4}\]
Let us now restate the condition (7.3) with respect to dual variable \(\omega\in\mathbb{T}^{n}_{\hbar}\) involving the partial-type derivatives \(d^{(\beta)}_{h,\omega}\):
\[\sigma_{\hbar}\in S^{\mu}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{ n}_{\hbar})\quad\text{if and only if}\quad|d^{(\beta)}_{h,\omega}\Delta^{(\alpha)}_{h,k}\sigma_{\hbar}(k,\omega)| \leq C_{\alpha,\beta}(1+|k|)^{\mu-\rho|\alpha|+\delta|\beta|}\,.\]
When \(\hbar\) approaches \(0\), the latter condition becomes:
\[\sigma_{0}\in\tilde{S}^{\mu}_{\rho,\delta}(\mathbb{R}^{n}\times\mathbb{R}^{n })\quad\text{if and only if}\quad|(\partial^{\beta}_{\xi}\partial^{\alpha}_{x} \sigma_{0})(x,\xi)|\leq C_{\alpha,\beta}(1+|x|)^{\mu-\rho|\alpha|+\delta|\beta| }\,, \tag{7.5}\]
where, with an abuse of notation, we have assumed that \(\sigma_{\hbar}(k,\omega)\xrightarrow[h\to 0]{}\sigma_{0}(x,\xi)\).
Observe that in the definition of the semi-classical classes of symbols (7.3) the order of derivatives does not follow the lines of the classical Hormander classes of symbols as in (7.4). This differentiation allows, after interchanging the roles of \(x\) and \(\xi\) in (7.5), for the following "approximation" of the above symbol classes in the two different settings, provided that \(0\leq\delta\leq\rho\leq 1\):
\[S^{\mu}_{\rho,\delta}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}_{\hbar}) \xrightarrow[h\to 0]{}\tilde{S}^{\mu}_{\rho,\delta}(\mathbb{R}^{n}\times \mathbb{R}^{n})\,,\]
where we use the notation \(\tilde{S}^{\mu}_{\rho,\delta}(\mathbb{R}^{n}\times\mathbb{R}^{n})\) for the associated symbol classes when the roles of \(x\) and \(\xi\) are interchanged; that is
\[\sigma\in\tilde{S}^{\mu}_{\rho,\delta}(\mathbb{R}^{n}\times\mathbb{R}^{n})\quad \text{if and only if}\quad|(\partial_{\xi}^{\beta}\partial_{x}^{\alpha}\sigma)(x, \xi)|\leq C_{\alpha,\beta}(1+|x|)^{\mu-\rho|\alpha|+\delta|\beta|}\,.\]
With the above considerations we also have that \(S^{-\infty}(\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}_{\hbar})\xrightarrow[h\to 0]{} \tilde{S}^{-\infty}(\mathbb{R}^{n}\times\mathbb{R}^{n})\); that is, in the limiting case, the smoothing semi-classical pseudo-differential operators as in Definition 3.6 can be considered to be negligible in the sense that when applied to distributions they produce rapidly decaying functions.
To give a meaning to the above "approximation" of symbol classes in the two settings, let us state how the composition formula, see Theorem 4.1, the formulae for the adjoint and transpose of a \(\Psi\)DO in the semi-classical setting, see Theorem 4.2 and Theorem 4.3, respectively, and the asymptotic sum of \(\Psi_{\hbar}\)DOs, see Lemma 4.4, become in the limiting case when \(\hbar\) approaches \(0\).
Below we discuss the above aspects of the symbolic calculus. With an abuse of notation we assume that for the \(\sigma_{\hbar},\tau_{\hbar}\) semi-classical symbols in the hypothesis of the aforesaid theorems, we have \(\sigma_{\hbar}(k,\omega)\xrightarrow[h\to 0]{}\sigma_{0}(x,\xi)\) and \(\tau_{\hbar}(k,\omega)\xrightarrow[h\to 0]{}\tau_{0}(x,\xi)\), where \((k,\omega)\in\hbar\mathbb{Z}^{n}\times\mathbb{T}^{n}_{\hbar}\) and \(x,\xi\in\mathbb{R}^{n}\).
**Composition formula when \(\hbar\to 0\):** As noted in the discussion that follows after Theorem 4.1 on the composition formula, the order of taking differences and derivatives in the corresponding asymptotic sum (4.1) is different from the one in the Euclidean case. However, this differentiation allows to recover the classical composition formula for \(\Psi\)DO in the Euclidean setting. Indeed, Theorem 4.1 when \(\hbar\to 0\) identifies with the Euclidean one and in particular can be regarded as: For \(\sigma_{0}\in\tilde{S}^{\mu_{1}}_{\rho,\delta}(\mathbb{R}^{n}\times\mathbb{R}^ {n})\) and \(\tau_{0}\in\tilde{S}^{\mu_{2}}_{\rho,\delta}(\mathbb{R}^{n}\times\mathbb{R}^ {n})\), the operator \(\operatorname{Op}(\sigma_{0})\circ\operatorname{Op}(\tau_{0})\) has symbol \(\varsigma_{0}\in\tilde{S}^{\mu_{1}+\mu_{2}}_{\rho,\delta}(\mathbb{R}^{n}\times \mathbb{R}^{n})\) given by the asymptotic sum
\[\varsigma_{0}(x,\xi)\sim\sum_{\alpha}\frac{(2\pi i)^{-|\alpha|}}{\alpha!}( \partial_{\xi}^{\alpha}\sigma_{0})(x,\xi)(\partial_{x}^{\alpha}\tau_{0})(x, \xi)\,,\]
where the factor \((2\pi i)^{-|\alpha|}\) is due to the "approximation" (7.2) of partial-type derivatives; the resulting expansion is exactly the composition formula for \(\Psi\)DO with symbols in the Hormander classes in the Euclidean setting; see Theorem 2.5.1 in [14].
**Adjoint operator when \(\hbar\to 0\):** Before stating how the Theorem 4.2 in the semi-classical setting reads in the limiting case, let us point out that, reasoning as above, the \(\ell^{2}(\hbar\mathbb{Z}^{n})\)-adjoint should be regarded as the \(L^{2}(\mathbb{R}^{n})\)-adjoint as \(\hbar\) approaches \(0\). In this sense, Theorem 4.2 in the limiting case can be viewed as: For \(\sigma_{0}\in S^{\mu}_{\rho,\delta}(\mathbb{R}^{n}\times\mathbb{R}^{n})\), there exists a symbol \(\sigma_{0}^{*}\in S^{\mu}_{\rho,\delta}(\mathbb{R}^{n}\times\mathbb{R}^{n})\) such that \(\operatorname{Op}(\sigma_{0})^{*}=\operatorname{Op}(\sigma_{0}^{*})\), where \(\operatorname{Op}(\sigma_{0})^{*}\) is the \(L^{2}(\mathbb{R}^{n})\)-adjoint of \(\operatorname{Op}(\sigma_{0})\), and we have the asymptotic expansion
\[\sigma_{0}^{*}(x,\xi)\sim\sum_{\alpha}\frac{(2\pi i)^{-|\alpha|}}{\alpha!} \partial_{\xi}^{\alpha}\partial_{x}^{\alpha}\overline{\sigma_{0}(x,\xi)}\,,\]
where as before the factor \((2\pi i)^{-|\alpha|}\) is due to (7.2) so that the above formula agrees with the Euclidean one; see Theorem 2.5.13 in [14].
**Transpose operator when \(\hbar\to 0\):** When \(\hbar\) approaches \(0\), Theorem 4.3 agrees with the corresponding result in the Euclidean setting, see Section 2.5 in [14], and can be regarded as follows: For \(\sigma_{0}\in S^{\mu}_{\rho,\delta}(\mathbb{R}^{n}\times\mathbb{R}^{n})\), there exists a symbol \(\sigma_{0}^{t}\in S^{\mu}_{\rho,\delta}(\mathbb{R}^{n}\times\mathbb{R}^{n})\) so that \(\operatorname{Op}(\sigma_{0})^{t}=\operatorname{Op}(\sigma_{0}^{t})\), and we have the following asymptotic expansion
\[\sigma_{0}^{t}(x,\xi)\sim\sum_{\alpha}\frac{(2\pi i)^{-|\alpha|}}{\alpha!} \partial_{\xi}^{\alpha}\partial_{x}^{\alpha}[\sigma_{0}(x,-\xi)]\,.\]
**Asymptotic sums when \(\hbar\to 0\):** When \(\hbar\) approaches \(0\), Lemma 4.4 can be viewed as follows: let \(\big{\{}\mu_{j}\big{\}}_{j=0}^{\infty}\subset\mathbb{R}\) be a decreasing sequence such that \(\mu_{j}\to-\infty\) as \(j\to\infty\), and let \(\sigma_{0,j}\in S^{\mu_{j}}_{\rho,\delta}(\mathbb{R}^{n}\times\mathbb{R}^{n})\) for all \(j\in\mathbb{N}_{0}\). Then, there exists \(\sigma_{0}\in S^{\mu_{0}}_{\rho,\delta}(\mathbb{R}^{n}\times\mathbb{R}^{n})\) such that
\[\sigma_{0}\sim\sum_{j=0}^{\infty}\sigma_{0,j},\]
which means that for all \(N\in\mathbb{N}\) we have
\[\sigma_{0}-\sum_{j=0}^{N-1}\sigma_{0,j}\in S^{\mu_{N}}_{\rho,\delta}(\mathbb{R }^{n}\times\mathbb{R}^{n})\,.\]
The latter agrees with Proposition 2.5.33 in [14].
In the last part of the current subsection we discuss the \(\ell^{2}(\hbar\mathbb{Z}^{n})\)-boundedness of semi-classical operators in the limiting case when \(\hbar\) approaches zero, where the latter statement translates into \(L^{2}(\mathbb{R}^{n})\)-boundedness.
Let us first recall a celebrated result on the boundedness of pseudo-differential operators in the Euclidean setting, see e.g. the monograph of Stein [11].
**Theorem 7.1**.: _If \(\sigma\in S^{0}_{\rho,\delta}(\mathbb{R}^{n}\times\mathbb{R}^{n})\), then \(\operatorname{Op}_{\mathbb{R}^{n}}(\sigma)\) is a bounded operator from \(L^{2}(\mathbb{R}^{n})\) to \(L^{2}(\mathbb{R}^{n})\)._
**On the boundedness of \(\Psi_{\hbar}\)DO when \(\hbar\to 0\):** Let us first see how Theorem 6.2 translates in the limiting case: Let \(\kappa\in\mathbb{N}\), \(\kappa>n/2\), and let \(\sigma_{0}:\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{C}\) be a symbol satisfying
\[|\partial_{\xi}^{\alpha}\sigma_{0}(x,\xi)|\leq C\,,\quad\text{for all }x,\xi\in \mathbb{R}^{n}\,, \tag{7.6}\]
where \(|\alpha|\leq\kappa\). Then, \(\operatorname{Op}_{\mathbb{R}^{n}}(\sigma_{0})\) extends to a bounded operator on \(L^{2}(\mathbb{R}^{n})\). Interestingly, going back to the analogous result on the Euclidean setting, we see that Theorem 7.1 implies the limiting condition (7.6) for all \(\alpha\in\mathbb{N}^{n}\). Indeed, let \(\sigma_{0}\in S^{0}_{\rho,\delta}(\mathbb{R}^{n}\times\mathbb{R}^{n})\). Then, by the assumption on \(\sigma_{0}\) and inequality (7.4), we have
\[|\partial_{\xi}^{\alpha}\sigma_{0}(x,\xi)|\leq C(1+|\xi|)^{-\rho|\alpha|}\leq C \,,\quad\text{for all }\alpha\in\mathbb{N}^{n}\,.\]
Conversely, condition (7.6) implies the assumption of Theorem 7.1 provided that \(\delta|\beta|\geq\rho|\alpha|\).
## 8. Examples
In this last section we consider certain examples of difference equations where semi-classical pseudo-differential operators are involved. In particular, in the subsequent examples, making use of the analysis above we study: the order of the corresponding semi-classical symbol, the boundedness of the operator, the ellipticity of the symbol and the existence, or even the exact formula, of the parametrix.
In what follows we denote by \(v_{j}\) the unit vector \(v_{j}=(0,\ldots,0,1,0,\ldots,0)\in\mathbb{Z}^{n}\), where \(1\) is the \(j^{th}\) entry of the vector.
**Example 8.1**.: Below we list several cases of semi-classical difference equations:
1. (Case of non-elliptic bounded operator of zero order) Let us define the operator \(D_{j}\) by \[D_{j}f(k)=f(k+\hbar v_{j})-f(k)\,,\quad\text{where}\quad k\in\hbar\mathbb{Z}^{ n}\,.\] For \(\theta\in\mathbb{T}^{n}\), let \(e_{\theta}:\hbar\mathbb{Z}^{n}\to\mathbb{C}\) be the function given by \(e_{\theta}(k):=e^{2\pi\frac{i}{\hbar}k\cdot\theta}\,.\) Then since \[D_{j}e_{\theta}(k)=e^{2\pi\frac{i}{\hbar}(k+\hbar v_{j})\cdot\theta}-e^{2\pi \frac{i}{\hbar}k\cdot\theta},\] by Proposition 3.9 the symbol of \(D_{j}\) is given by \[\sigma_{\hbar,D_{j}}(k,\theta)=e^{2\pi iv_{j}\cdot\theta}-1=e^{2\pi i\theta_{ j}}-1.\] Thus the operator \(D_{j}\) is not elliptic (its symbol vanishes at \(\theta_{j}=0\)); we have \(\operatorname{Op}(\sigma_{\hbar,D_{j}})\in\operatorname{Op}(S^{0}(\hbar \mathbb{Z}^{n}\times\mathbb{T}^{n}))\), and by Corollary 6.9 it is bounded from \(\ell^{2}(\hbar\mathbb{Z}^{n})\) to \(\ell^{2}(\hbar\mathbb{Z}^{n})\). (A small numerical check of this symbol computation is sketched after this example.)
2. (Case of elliptic bounded operator of positive order) Let us define the operator \[L_{j}f(k)=|k|^{r}(f(k+\hbar v_{j})+a)-|k|^{s}(f(k-\hbar v_{j})+b)\,,\] for some \(a,b\in\mathbb{R}\) such that \(|a|,|b|\geq 1\). Its symbol is given by \[\sigma_{\hbar,L_{j}}(k,\theta)=|k|^{r}(e^{2\pi i\theta_{j}}+e^{-2\pi\frac{i}{ \hbar}k\cdot\theta}a)-|k|^{s}(e^{-2\pi i\theta_{j}}+e^{-2\pi\frac{i}{\hbar}k \cdot\theta}b)\,.\] We have \(\sigma_{\hbar,L_{j}}(k,\theta)\in S^{\max\{s,r\}}(\hbar\mathbb{Z}^{n}\times \mathbb{T}^{n})\). This symbol is elliptic of order \(r\) if \(r\geq s\), and non-elliptic otherwise. Consequently, from Corollary 6.9 and for \(r\geq s\), the operator \(L_{j}\) is bounded from the weighted space \(\ell^{2}_{t+r}(\hbar\mathbb{Z}^{n})\) to the weighted space \(\ell^{2}_{t}(\hbar\mathbb{Z}^{n})\), for any \(t\in\mathbb{R}\).
3. (Case of elliptic bounded operator of zero order) Let us define the operator \[Tf(k):=\sum_{j=1}^{n}\left(f(k+\hbar v_{j})-f(k-\hbar v_{j})\right)+cf(k)\,, \quad\text{where}\quad c\in\mathbb{C}\,.\] The explicit formula of its symbol can be found as follows: \[\sigma_{\hbar,T}(k,\theta)=\sum_{j=1}^{n}\left(e^{2\pi i\theta_{j}}-e^{-2\pi i \theta_{j}}\right)+c=2i\sum_{j=1}^{n}\sin(2\pi\theta_{j})+c\in S^{0}(\hbar \mathbb{Z}^{n}\times\mathbb{T}^{n})\,,\] and the symbol is elliptic in the case where \(\operatorname{Re}c\neq 0\) or \(\operatorname{Im}c\notin[-2n,2n]\). Under such assumptions on \(c\) the inverse operator \(T^{-1}\) is also of order zero, and its symbol, which depends only on the toroidal variable \(\theta\), is given by \[\sigma_{\hbar,T^{-1}}(\theta)=\frac{1}{2i\sum_{j=1}^{n}\sin(2\pi\theta_{j})+c}\,.\]
Hence the solution to the equation
\[Tf(k)=g(k)\,, \tag{8.1}\]
is given by
\[f(k)=T^{-1}g(k)=\operatorname{Op}(\sigma_{\hbar,T^{-1}})g(k)=\int_{\mathbb{T}^{n}}e^{2 \pi\frac{i}{\hbar}k\cdot\theta}\frac{1}{2i\sum_{j=1}^{n}\sin(2\pi\theta_{j})+c }\widehat{g}(\theta)\,\mathrm{d}\theta\,.\]
By Corollary 6.9 the operators \(T,T^{-1}\) are bounded from \(\ell_{s}^{2}(\hbar\mathbb{Z}^{n})\) to \(\ell_{s}^{2}(\hbar\mathbb{Z}^{n})\), which implies that if \(g\in\ell_{s}^{2}(\hbar\mathbb{Z}^{n})\) then for the solution to the equation (8.1) we also have \(f\in\ell_{s}^{2}(\hbar\mathbb{Z}^{n})\).
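The symbol computations in items (1) and (3) above can be spot-checked numerically. The following minimal Python sketch (illustrative only, not part of the paper; the values of \(\hbar\), \(n\), \(c\) and the sample points are arbitrary choices) verifies \(D_{j}e_{\theta}(k)=\sigma_{\hbar,D_{j}}(k,\theta)e_{\theta}(k)\) and \(Te_{\theta}(k)=\sigma_{\hbar,T}(k,\theta)e_{\theta}(k)\), with the sign convention for \(T\) used above.

```python
import numpy as np

# e_theta(k) = exp(2*pi*i/hbar * k.theta) for k in hbar*Z^n, theta in T^n = [0,1)^n.
hbar, n, j = 0.5, 3, 0
rng = np.random.default_rng(0)
k = hbar * rng.integers(-5, 6, size=n)      # a sample point of hbar*Z^n
theta = rng.random(n)                       # a sample point of T^n
v = np.eye(n)                               # unit vectors v_1, ..., v_n
e = lambda y: np.exp(2j * np.pi / hbar * y @ theta)

# (1)  D_j e_theta(k) = (exp(2*pi*i*theta_j) - 1) e_theta(k)
lhs = e(k + hbar * v[j]) - e(k)
print(np.isclose(lhs, (np.exp(2j * np.pi * theta[j]) - 1) * e(k)))          # True

# (3)  T e_theta(k) = (2i sum_j sin(2*pi*theta_j) + c) e_theta(k), where
#      T f(k) = sum_j (f(k + hbar v_j) - f(k - hbar v_j)) + c f(k)
c = 3.0
Te = sum(e(k + hbar * v[m]) - e(k - hbar * v[m]) for m in range(n)) + c * e(k)
print(np.isclose(Te, (2j * np.sum(np.sin(2 * np.pi * theta)) + c) * e(k)))  # True
```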
|
2308.06883 | Superselection sectors in the 3d Toric Code | We rigorously define superselection sectors in the 3d (spatial dimensions)
Toric Code Model on the infinite lattice $\mathbb{Z}^3$. We begin by
constructing automorphisms that correspond to infinite flux strings, a
phenomenon that's only possible in open manifolds. We then classify all ground
state superselection sectors containing infinite flux strings, and find a rich
structure that depends on the geometry and number of strings in the
configuration. In particular, for a single infinite flux string configuration
to be a ground state, it must be monotonic. For configurations containing
multiple infinite flux strings, we define "infinity directions" and use that to
establish a necessary and sufficient condition for a state to be in a ground
state superselection sector. Notably, we also find that if a state contains
more than 3 infinite flux strings, then it is not in a ground state
superselection sector. | Siddharth Vadnerkar | 2023-08-14T01:28:27Z | http://arxiv.org/abs/2308.06883v1 | # Superselection sectors in the 3d Toric Code
###### Abstract
We rigorously define superselection sectors in the 3d (spatial dimensions) Toric Code Model on the infinite lattice \(\mathbb{Z}^{3}\). We begin by constructing automorphisms that correspond to infinite flux strings, a phenomenon that's only possible in open manifolds. We then classify all ground state superselection sectors containing infinite flux strings, and find a rich structure that depends on the geometry and number of strings in the configuration. In particular, for a single infinite flux string configuration to be a ground state, it must be monotonic. For configurations containing multiple infinite flux strings, we define "infinity directions" and use that to establish a necessary and sufficient condition for a state to be in a ground state superselection sector. Notably, we also find that if a state contains more than 3 infinite flux strings, then it is not in a ground state superselection sector.
###### Contents
* 1 Introduction
* 2 3d Toric Code model, superselection sectors, main results
* 2.1 3d Toric Code model
* 2.2 Constructing excited states
* 2.3 Superselection sectors
* 2.4 Main results
* 3 Purely charged/uncharged ground states
* 3.1 Finite string/surface operators
* 3.2 Constructing a charged sector \(\mathcal{H}_{\epsilon}\)
* 4 1 string configurations
* 4.1 Building an infinite flux string state
* 4.2 Necessary and sufficient conditions for a ground state
* 4.3 Infinity directions and a classification
* 5 2 string configurations
* 5.1 Performing surgery
* 5.2 Distinct solutions of the ground state condition
* 5.3 Classification of 2 infinite flux strings
* 5.3.1 When \(D(\overline{\gamma}_{1})=\{(r,\sigma),(r,\overline{\sigma})\}\)
* 5.3.2 When \(D(\overline{\gamma}_{1})=\{(r,\sigma),(r^{\prime},\sigma^{\prime})\}\) with \(r\neq r^{\prime}\)
* 6 Ground States on 3+ string configurations
* 6.1 3 string configurations
* 6.2 4+ string configurations
* 7 Discussion and outlook
* A Purity of the ground states of the 3d Toric Code
* A.1 \(\omega\) is pure
* B Lattice facts
* C Acknowledgements
## 1 Introduction
In recent years, the subject of topological phases has exploded in popularity. A number of works have been written exploring various facets of these phases. Topological phases exhibit robust ground state degeneracy, one that is independent of the microscopics of the system. They also have topological excitations like anyons and flux strings. These excitations can be used for topological quantum computation [21]. Various systems like the Fractional Quantum Hall state and the spin-liquid states exhibit the hallmark features of topological phases and have been discovered in nature [21, 22].
In particular, Kitaev's Toric Code [16] is an interesting 2d toy model that exhibits various features of topological phases. This model has been instrumental in understanding the nature of topological phases, as one can explicitly compute many quantities in it such as explicit string operators that detect and move topological charges, or the ground state degeneracy on a discrete manifold.
Kitaev originally studied this model on a discrete torus, obtaining a ground state degeneracy of 4. He then extended this analysis to find the ground state degeneracy on (the discretization of) a closed manifold of arbitrary genus. It was shown later that the ground state superselection sectors also exist when this model is placed on an open manifold such as a discretization \(\mathbb{Z}^{2}\) of the 2d plane \(\mathbb{R}^{2}\)[23].
Several generalisations of this model exist. In particular, the 3d Toric Code is a model in 3+1 spacetime dimensions. It is of theoretical interest with respect to error-correcting codes [20]. And a variety of interesting topological features of this model have been demonstrated [14, 21]. The ground state degeneracy of this model has already been shown on closed 3d manifolds with an arbitrary genus (see for example [20]). In this work, we analyse the ground state superselection sector structure of the 3d Toric Code on \(\mathbb{Z}^{3}\), which is a discretization of the open manifold \(\mathbb{R}^{3}\). We will now omit the usage of "the discretization of" and assume it implicitly, as the model is only defined on discrete lattices. Understanding this structure is important as it results in consistency conditions for when fusion and braiding of string-like objects can occur in non-compact manifolds.
Our reason for studying the 3d Toric Code on an open manifold is that it exhibits certain structures that are not found in the closed manifolds: infinite flux strings. In a closed manifold, there are 2 types of excitations in the 3d Toric Code - flux strings and charges. While charges are topological particle-like excitations, flux strings are topological string-like excitations. The flux strings must be closed and finite in energy. However in open manifolds they are not required to be closed. We call such excitations infinite flux strings, and they are "infinite energy"
excitations. Due to this, they cannot be physically obtained from the ground state using local operators. However they are still a physically relevant topic of study as they may be interpreted as boundary conditions of the model.
In studying these excitations, we find a rich structure. We find that some configurations of infinite string excitations are stable in the sense that their energy cannot be decreased arbitrarily, while some others are unstable. The configurations that are stable belong to ground state superselection sectors, which we precisely define in section 2. The aim of this paper will be to classify all ground state superselection sectors of the 3d Toric Code.
The layout of this paper is as follows: In section 2 we briefly recall the 3d Toric Code on \(\mathbb{Z}^{3}\) and construct automorphisms corresponding to infinite flux strings. We then present a summary of the main results of this paper. In section 3 we sketch the construction of charged sectors in the 3d Toric Code. In sections 4,5,6 we first construct infinite flux string sectors. Then we tackle the question of the necessary and sufficient conditions for a ground state configuration. We then introduce "Infinity directions" and use that as a basis for proving these conditions. Finally we classify all possible ground state sectors for any configuration with arbitrary number of flux strings.
## 2 3d Toric Code model, superselection sectors, main results
### 3d Toric Code model
We begin by describing the 3d Toric Code in the \(C^{*}\) algebraic framework [1]. Consider the lattice \(\Gamma=\mathbb{Z}^{3}\) which is an oriented cell complex. Denote the set of vertices, oriented edges, oriented faces in this cell complex as \(\mathcal{V}(\Gamma),\mathcal{E}(\Gamma),\mathcal{F}(\Gamma)\) respectively. We fix an orientation of the edges as follows: all edges are pointing in the positive \(x,y,z\) direction. Let \(\partial\) denote the standard boundary map of this cell complex. We place a qubit on each edge \(e\in\mathcal{E}(\Gamma)\), so there is an edge Hilbert space \(\mathcal{H}_{e}=\mathbb{C}^{2}\) with observables \(\mathcal{B}(\mathcal{H}_{e})=M_{2}(\mathbb{C})\). Let \(\Lambda_{f}\) denote the set of all finite subsets of \(\mathcal{E}(\Gamma)\). Then \(\mathcal{H}_{\Lambda}=\otimes_{e\in\Lambda}\mathcal{H}_{e}\) for \(\Lambda\in\Lambda_{f}\). We can define an algebra on this space as \(\mathfrak{A}_{\Lambda}=\mathcal{B}(\mathcal{H}_{\Lambda})\).
Let \(\Lambda_{1},\Lambda_{2}\in\Lambda_{f}\) and \(\mathfrak{A}_{\Lambda_{1}},\mathfrak{A}_{\Lambda_{2}}\) be algebras such that \(\Lambda_{1}\subset\Lambda_{2}\). \(\iota:\mathfrak{A}_{\Lambda_{1}}\hookrightarrow\mathfrak{A}_{\Lambda_{2}}\) is the embedding such that
\[\iota(\mathfrak{A}_{\Lambda_{1}})=\cdots\mathds{1}\otimes\mathds{1}\otimes \mathfrak{A}_{\Lambda_{1}}\otimes\mathds{1}\otimes\mathds{1}\cdots\]
where \(\mathds{1}\) is on all edges \(e\in\Lambda_{2}\setminus\Lambda_{1}\). We can now define
\[\mathfrak{A}_{loc}:=\bigcup_{\Lambda\in\Lambda_{f}}\mathfrak{A}_{\Lambda} \qquad\mathfrak{A}:=\overline{\mathfrak{A}_{loc}}\]
where \(\mathfrak{A}\) is the \(C^{*}\) algebra for the 3d Toric Code, the algebra of quasi-local operators in the 3D Toric Code.
The Hamiltonian for the 3D Toric Code inside a finite region \(\Lambda\) is given by:
\[H_{\Lambda} =\sum_{v\in\Lambda}(\mathds{1}-A_{v})+\sum_{f\in\Lambda}(\mathds{1 }-B_{f}) \tag{1}\] \[A_{v} :=\prod_{v\in e}\sigma_{e}^{x}\qquad B_{f}:=\prod_{e\in f}\sigma_{ e}^{z} \tag{2}\]
The operator \(A_{v}\) acts on the 6 edges that surround \(v\). There are 3 different \(B_{f}\) operators corresponding to the orientation of the face \(f\). Each \(B_{f}\) operator acts on the 4 edges that make up
the face \(f\). \(A_{v}\) and \(B_{f}\) operators commute. They both square to \(\mathds{1}\), thus having eigenvalues \(\pm 1\).
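The commutation of the star and plaquette terms can be made concrete: in \(\mathbb{Z}^{3}\) a vertex star and a face share either \(0\) or exactly \(2\) edges, so it suffices to check that \(\sigma^{x}\otimes\sigma^{x}\) commutes with \(\sigma^{z}\otimes\sigma^{z}\) on the shared pair. The following minimal Python sketch (illustrative only, not part of the paper) verifies this, together with \(A_{v}^{2}=B_{f}^{2}=\mathds{1}\) on their supports.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])    # sigma^x
Z = np.array([[1, 0], [0, -1]])   # sigma^z

XX = np.kron(X, X)   # restriction of A_v to the two edges it shares with a face f
ZZ = np.kron(Z, Z)   # restriction of B_f to the same two edges

print(np.allclose(XX @ ZZ, ZZ @ XX))    # True: A_v and B_f commute
print(np.allclose(XX @ XX, np.eye(4)))  # True: A_v squares to the identity
print(np.allclose(ZZ @ ZZ, np.eye(4)))  # True: B_f squares to the identity
```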
Since the interactions \(A_{v},B_{f}\) are translation invariant, there exists an action \(\alpha_{t}\) of \(\mathbb{R}\) on \(\mathfrak{A}\) describing the dynamics of the system, as well as a derivation \(\delta\) that is the generator of the dynamics [1].
Let us recall some properties of this model. Refer to [1, 1, 1, 2] for a full treatment in the case of the 2d Toric Code, which proceeds similarly. The Hamiltonian 1 is not convergent in norm, but still generates dynamics through a derivation, \(\delta(O):=\lim_{\Lambda\to\mathcal{E}(\Gamma)}i[H_{\Lambda},O]\) for all \(O\in\mathfrak{A}_{loc}\). \(\delta\) can be extended to a densely defined unbounded *-derivation on \(\mathfrak{A}\).
In the infinite lattice setting, states are described by positive linear functionals \(\omega:\mathfrak{A}\to\mathbb{C}\). Given \((\omega,\mathfrak{A})\), we can construct a unique GNS triple \((\pi,\mathcal{H},\Omega)\) up to unitary equivalence, such that \(\omega(O)=\langle\Omega,\pi(O)\Omega\rangle\). Here \(O\in\mathfrak{A}\), \(\pi\) is a representation of \(\mathfrak{A}\) on a Hilbert space \(\mathcal{H}\), and \(\Omega\in\mathcal{H}\) is a cyclic vector.
**Definition 2.1** (_ground state_).: A state \(\omega\) is a ground state if for all \(O\in\mathfrak{A}\) we have
\[-i\omega(O^{\dagger}\delta(O))\geq 0\]
The energy \(E\) of a finite region \(\Lambda\) is given by \(E_{\Lambda}:=\omega(H_{\Lambda})\).
_Remark_.: This definition of a ground state is equivalent to saying \(\omega(O^{\dagger}H_{\Lambda}O)/\omega(O^{\dagger}O)\geq\omega(H_{\Lambda})\) for all finite regions \(\Lambda\). This implies if the energy of a state \(\omega\) inside any finite region \(\Lambda\) is the lowest possible energy, then \(\omega\) is a ground state.
Choose a state \(\omega\) such that \(\omega(A_{v})=\omega(B_{f})=1\) for all \(v,f\). Call its GNS triple \((\pi_{\omega},\mathcal{H}_{\omega},\Omega_{\omega})\). \(\Omega_{\omega}\) then satisfies \(\pi_{\omega}(A_{v})\Omega_{\omega}=\pi_{\omega}(B_{f})\Omega_{\omega}=\Omega_ {\omega}\). We have the following fact for \(\omega\), which will be proved in the appendix A:
**Theorem 2.2**.: \(\omega\) _is the unique pure frustration free translation invariant ground state of the 3dTC._
### Constructing excited states
Let \(\partial_{0}e_{i}\) (\(\partial_{1}e_{i}\)) denote the start (end) vertex of \(e_{i}\), and let the boundary map \(\partial e_{i}:=\partial_{1}e_{i}-\partial_{0}e_{i}\).
**Definition 2.3** (_Finite path_).: A finite set \(\gamma:=\{e_{i}\}_{i=0}^{l-1}\subset\mathcal{E}(\Gamma)\) is a finite path on the lattice if it satisfies \(\partial_{1}e_{i}=\partial_{0}e_{i+1}\) for \(0\leq i<l-1\) and does not self-intersect (there does not exist \(\gamma^{\prime}\subset\gamma\) such that \(\sum_{e^{\prime}\in\gamma^{\prime}}\partial e^{\prime}=0\)). We call \(|\gamma|\) the length of the finite path. \(\gamma\) is a finite _open_ path if \(\gamma\) is a finite path and satisfies \(\sum_{i=0}^{l-1}\partial e_{i}=\partial_{1}e_{l-1}-\partial_{0}e_{0}\neq 0\). We denote the start of the path as \(\partial_{0}\gamma=\partial_{0}e_{0}\) and the end of the path as \(\partial_{1}\gamma=\partial_{1}e_{l-1}\). Together we refer to the start and end of \(\gamma\) collectively as \(\partial\gamma\). \(\gamma_{c}\) is a finite _closed_ path if \(\gamma_{c}\) is a finite path and satisfies \(\sum_{i=0}^{l-1}\partial e_{i}=0\) and thus has no start or end. A finite path \(\gamma\) is called _trivial_ if \(\gamma=\emptyset\).
Any finite self-intersecting path can be decomposed into an open path and finitely many closed paths. So we consider only paths that don't self intersect.
One can build charged states by considering the charge operators
\[F_{\gamma}=\prod_{e\in\gamma}\sigma_{e}^{z}\in\mathfrak{A}\]
where \(\gamma\) is a finite open path on the lattice. Consider an inner automorphism \(\alpha_{\gamma}(O):=F_{\gamma}OF_{\gamma}\). The representation \(\pi_{\omega}\circ\alpha_{\gamma}\) then gives an excited state \(\omega_{\gamma}:=\langle\Omega_{\omega},\pi_{\omega}\circ\alpha_{\gamma}(\cdot) \Omega_{\omega}\rangle\) in the same GNS Hilbert space. \(A_{v}\) commutes with \(F_{\gamma}\) for all \(v\notin\partial\gamma\). If \(v\in\partial\gamma\), we have \(A_{v}F_{\gamma}=-F_{\gamma}A_{v}\) implying \(\omega_{\gamma}(A_{v})=-1\). So the state has 2 excitations, one at each endpoint of \(\gamma\). These excitations are called charges.
One can similarly introduce another kind of excitation called a flux string. We first define \(\overline{\Gamma}=\mathbb{Z}^{3}\) as the dual lattice to \(\Gamma\) (indeed it is a cell complex dual to \(\Gamma\)). Denote the set of vertices, edges, faces in \(\overline{\Gamma}\) as \(\mathcal{V}(\overline{\Gamma}),\mathcal{E}(\overline{\Gamma}),\mathcal{F}( \overline{\Gamma})\). Of particular relevance is the fact that each edge \(e\in\mathcal{E}(\Gamma)\) has a unique dual face \(f\in\mathcal{F}(\overline{\Gamma})\) and similarly each face \(f\in\mathcal{F}(\Gamma)\) has a unique dual edge \(e\in\mathcal{E}(\overline{\Gamma})\). Flux loops can be defined as dual-paths on the lattice, or equivalently as paths on the dual lattice. We will adopt the latter nomenclature. To avoid confusion regarding the lattice a path or surface belongs to, we will use an overline \((\overline{\cdot})\) when talking about paths and surfaces on a dual lattice.
Let the boundary map on \(f\in\mathcal{F}(\Gamma)\) be given by \(\partial f=\sum_{e\in f}e\).
**Definition 2.4** (_Surface_).: A finite set \(S=\{f_{i}\}\subset\mathcal{F}(\Gamma)\) is a finite surface if it satisfies \(\sum_{i}\partial f_{i}=\sum_{e\in\gamma_{c}}e\) for some finite closed non self-intersecting path \(\gamma_{c}\), and does not self-intersect (there does not exist \(S^{\prime}\subset S\) such that \(\sum_{f^{\prime}\in S^{\prime}}\partial f^{\prime}=0\)). \(\gamma_{c}\) is called the boundary of surface \(S\) and denoted as \(\partial S\). A dual surface is similarly defined on \(\overline{\Gamma}\). A surface \(S\) is called an _open_ surface if its boundary \(\gamma_{c}\) is a non-trivial path. It is called a _closed_ surface if its boundary \(\gamma_{c}\) is a trivial path.
Any self-intersecting surface can be decomposed into a open surface and finitely many closed surfaces. So we consider only surfaces that don't self-intersect. Similarly for surfaces with a self-intersecting boundary.
Consider the flux string operator
\[F_{\overline{S}}=\prod_{e\perp\overline{S}}\sigma_{e}^{x}\in\mathfrak{A}\]
where by \(e\perp\overline{S}\) we mean \(e\) is dual to a given \(\overline{f}\in\overline{S}\). Analogous to the case of charged excitations, consider an inner automorphism \(\alpha_{\overline{\gamma}}(O):=F_{\overline{S}}OF_{\overline{S}}\) such that \(\overline{\gamma}=\partial\overline{S}\). \(\pi_{\omega}\circ\alpha_{\overline{\gamma}}\) then gives us an excited state \(\omega_{\overline{\gamma}}:=\langle\Omega_{\omega},\pi_{\omega}\circ\alpha_{ \overline{\gamma}}(\cdot)\Omega_{\omega}\rangle\). We have \(B_{f}F_{\overline{S}}=-F_{\overline{S}}B_{f}\) if \(f\) is dual to some \(\overline{e}\in\overline{\gamma}\). They commute otherwise. This implies for such \(f\), \(\omega_{\overline{\gamma}}(B_{f})=-1\). So \(F_{\overline{S}}\) produces excitations along \(\overline{\gamma}\). So the energy of the excited state is proportional to the size of the boundary \(\overline{\gamma}\).
### Superselection sectors
**Definition 2.5** (_Equivalence_).: Given two representations \(\pi_{1},\pi_{2}\) of \(\mathfrak{A}\) on \(\mathcal{H}_{1},\mathcal{H}_{2}\) respectively, \(\pi_{1},\pi_{2}\) are equivalent if there exists a bounded linear unitary map \(U:\mathcal{H}_{1}\to\mathcal{H}_{2}\) such that \(\pi_{1}=U\pi_{2}U^{\dagger}\). We denote \([\pi]\) as the equivalence class of \(\pi\). We say the states \(\omega_{1},\omega_{2}\) are equivalent if their corresponding representations \(\pi_{1},\pi_{2}\) are equivalent.
In general, there are many equivalence classes of representations. A lot of such representations are physically uninteresting due to a variety of reasons (for example, the energy may be unbounded [10]). To restrict to a physically interesting class of representations, we additionally employ a _superselection criterion_. This criterion tells us which representations we should select.
Doplicher-Haag-Roberts (DHR) analysis in algebraic quantum field theory shows that with a physically motivated superselection criterion one can recover all the physically relevant properties like braiding and fusion of charges [1, 1]. A similar analysis has been done for the 2d Quantum Double models [14, 15].
We choose the following superselection criterion for selecting representations. Consider a continuous manifold \(\mathbb{R}^{3}\) and a differentiable curve \(L\) in it. Let plane \(P\) be the plane perpendicular to the tangent vector of \(L\) at point \(p\). Let \(\vec{f}(p)\) be a vector function that gives a "framing" vector \(\vec{f}(p)\) lying inside plane \(P\) for every point \(p\in L\) and varies smoothly from a point \(p\in L\) to any other point \(q\in L\). Figure 1 visualises this construction.
We adopt the vector notation for points on the plane \(P\). Let the point \(\vec{p}\) be the origin of plane \(P\). We can then define \(\Delta_{p,\vec{f}(p),\theta}\) as an infinite triangular section of the plane \(P\), with the starting vertex \(\vec{p}\in L\) and consisting of all points \(\vec{k}\in P\) such that \(\cos(\theta/2)\leq\frac{(\vec{k}-\vec{p})\cdot\vec{f}(p)}{|\vec{k}-\vec{p}|\,|\vec{f}(p)|}\leq 1\). We impose \(\theta<\pi\) to ensure the section is triangular (refer to figure 1). We define an infinite wedge as \(W_{L,\vec{f}(L),\theta}=\mathcal{E}(\Gamma)\bigcap(\bigcup_{p\in L}\Delta_{p,\vec{f}(p),\theta})\), i.e., the set of all edges \(e\in\mathcal{E}(\Gamma)\) that lie inside \(\bigcup_{p\in L}\Delta_{p,\vec{f}(p),\theta}\).
Now let \(W=\bigcup_{i}W_{i}\) be the union of a finite number of non-intersecting wedges \(W_{i}\) defined as above (keeping the variables implicit), and let \(W^{c}\) be the complementary region to \(W\). Let an irreducible representation \(\pi^{\prime}\) be such that there exists \(W\) with
\[[\pi_{\omega}\upharpoonright\mathfrak{A}_{W^{c}}]=[\pi^{\prime}\upharpoonright \mathfrak{A}_{W^{c}}]\]
Then we select all representations \(\pi\) which satisfy \([\pi]=[\pi^{\prime}]\). Here \(\pi_{\omega}\) is the irreducible representation of the unique frustration-free ground state.
_Remark_.: The superselection criterion is chosen to allow for representations with finitely many infinite flux strings (to be defined below). The criterion also allows for representations with finitely many charges (to be defined below).
We call the GNS Hilbert spaces of inequivalent representations \(\pi\) satisfying the superselection criterion superselection sectors. We will refer to superselection sectors simply as sectors. We can construct sectors with different charges by considering the automorphism
\[\alpha_{v}(O):=\lim_{n\to\infty}F_{\gamma_{v;n}}OF_{\gamma_{v;n}}\qquad O\in \mathfrak{A} \tag{3}\]
where \(\gamma_{v;n}\) is a path starting at \(v\in\mathcal{V}(\Gamma)\) and stretching straight down to \(v-n\hat{z}\in\mathcal{V}(\Gamma)\). The representation \(\pi_{v}:=\pi_{\omega}\circ\alpha_{v}\) defines a representation of \(\mathfrak{A}\) onto a new sector we call \(\mathcal{H}_{\epsilon}\). Call
the corresponding state \(\omega_{v}\). It is a ground state, although of the sector \(\mathcal{H}_{\epsilon}\). A general charged state having charges at \(v_{1},\cdots v_{N}\) would be given by \(\omega_{v_{1},\cdots,v_{N}}:=\langle\Omega_{\omega},\pi_{v_{1},\cdots,v_{N}}( \cdot)\Omega_{\omega}\rangle\), where \(\pi_{v_{1},\cdots,v_{N}}:=\pi_{\omega}\circ\alpha_{v_{1}}\circ\cdots\circ \alpha_{v_{N}}\).
We can also construct an infinite flux string state through a similar construction. Consider an automorphism -
\[\alpha_{\overline{\gamma}}(O):=\lim_{n\to\infty}F_{\overline{S}_{\overline{ \gamma}_{n}}}OF_{\overline{S}_{\overline{\gamma}_{n}}}\qquad O\in\mathfrak{A} \tag{4}\]
where \(\overline{S}_{\overline{\gamma}_{n}}\) is a surface such that a finite path \(\overline{\gamma}_{n}\subset\partial\overline{S}_{\overline{\gamma}_{n}}\) and \(\overline{\gamma}=\lim_{n\to\infty}\overline{\gamma}_{n}\) is an infinite path in \(\overline{\Gamma}\) (defined below). \(\pi_{\overline{\gamma}}:=\pi_{\omega}\circ\alpha_{\overline{\gamma}}\) defines a representation of \(\mathfrak{A}\) to a new sector \(\mathcal{H}_{\overline{\gamma}}\) with \(\overline{\gamma}\) as an infinite flux string. Call the corresponding state \(\omega_{\overline{\gamma}}\). Again, composing multiple such automorphisms gives us a state \(\omega_{\overline{\gamma}_{1},\cdots,\overline{\gamma}_{N}}:=\langle\Omega_{ \omega},\pi_{\overline{\gamma}_{1},\cdots,\overline{\gamma}_{N}}(\cdot)\Omega_ {\omega}\rangle\) where \(\pi_{\overline{\gamma}_{1},\cdots,\overline{\gamma}_{N}}:=\pi_{\omega}\circ \alpha_{\overline{\gamma}_{1}}\circ\cdots\circ\alpha_{\overline{\gamma}_{N}}\).
**Theorem 2.6**.: _All ground states of the form \(\omega_{v_{1},\cdots,v_{n};\overline{\gamma}_{1},\cdots,\overline{\gamma}_{m}}\) (including \(\omega\), the unique translation invariant ground state) are pure._
We will prove this theorem in appendix A. Purity of states is important since it implies the corresponding GNS representation is irreducible ([11], theorem 2.5.14), and one can then talk about equivalence of irreducible representations to be selected by the superselection criterion. A superposition of states in different sectors is necessarily a mixed state.
### 2.4 Main results
We now provide a summary of the main results in the paper.
A state is purely charged if it only has flux excitations with a finite boundary. A purely charged sector is the sector containing only purely charged states.
**Theorem 2.7**.: _There are only 2 purely charged ground state sectors, given by \(\mathcal{H}_{\omega},\mathcal{H}_{\epsilon}\)._
We will call a state charged if it lies in the \(\mathcal{H}_{\epsilon}\) sector, and uncharged otherwise.
**Definition 2.8** (_Infinite path_).: An infinite path in \(\overline{\Gamma}\) is a function \(\overline{\gamma}:\mathbb{Z}\to\mathcal{E}(\overline{\Gamma})\) such that for any \(a,b\in\mathbb{Z}\), \(\overline{\gamma}[a,b]:=\{\overline{\gamma}(t)\}_{t=a}^{t=b}\) is a finite path. We say \(\overline{e}\in\overline{\gamma}\) if there exists \(t\in\mathbb{Z}\) such that \(\overline{\gamma}(t)=\overline{e}\).
_Remark_.: The definition of a finite path necessarily forces an infinite path to be non self-intersecting.
_Remark_.: An infinite flux string is mathematically the automorphism \(\alpha_{\overline{\gamma}}\) on an infinite path \(\overline{\gamma}\) in \(\overline{\Gamma}\).
Each oriented edge can point in the \((r,\sigma)\) direction, where \(r\in\{x,y,z\}\) and \(\sigma\in\{\pm\}\). Let \(\mathcal{E}^{r,\sigma}\) denote the set of all edges pointing in the \((r,\sigma)\) direction. Then we have \(\mathcal{E}(\overline{\Gamma})=\cup_{r,\sigma}\mathcal{E}^{r,\sigma}\).
**Definition 2.9** (_Monotonicity_).: An infinite path \(\overline{\gamma}\) is monotonic if for each \(r\) there exists \(\sigma_{r}\in\{\pm\}\) such that \(\overline{\gamma}\subseteq\cup_{r}\mathcal{E}^{r,\sigma_{r}}\).
_Remark_.: Examples of a monotonic and a non-monotonic path are shown in figure 2.
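Monotonicity is straightforward to test algorithmically: for each principal direction one records the sign of the steps the path takes along it and checks that the sign never flips. The following minimal Python sketch (illustrative only, not part of the paper; the sample paths are arbitrary) checks the condition of Definition 2.9 on a finite list of unit steps in \(\mathbb{Z}^{3}\).

```python
# A finite segment of a (dual) path, given as a list of unit steps in Z^3.
# Monotonicity (Definition 2.9): for each axis r there is a single sign sigma_r
# such that every step the path takes along axis r has sign sigma_r.
def is_monotonic(steps):
    sign_per_axis = {}
    for step in steps:
        axis = next(i for i, s in enumerate(step) if s != 0)   # axis used by this step
        sign = step[axis]
        if sign_per_axis.setdefault(axis, sign) != sign:
            return False                                       # both signs occur on one axis
    return True

staircase = [(1, 0, 0), (0, 1, 0), (1, 0, 0), (0, 1, 0)]   # a monotonic staircase
backtrack = [(1, 0, 0), (0, 1, 0), (-1, 0, 0)]             # steps in both +x and -x
print(is_monotonic(staircase), is_monotonic(backtrack))    # True False
```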
In what follows, we implicitly assume \(\overline{\gamma}\) is an infinite path, unless stated otherwise.
**Theorem 2.10**.: _A state \(\omega_{\overline{\gamma}}\) is a ground state iff \(\overline{\gamma}\) is a monotonic path in \(\overline{\Gamma}\)._
Performing local operations on an infinite flux string will not change its sector. Thus it is not the microscopic behaviour of \(\overline{\gamma}\) that matters to understand the sector it lies in, but the behaviour of \(\overline{\gamma}\) in the macroscopic or infinite limit. To this end, we define a few key terms.
**Definition 2.11** (_Infinity direction_).: Pick \(n_{0}\in\mathbb{Z}\) and a path \(\overline{\gamma}\).
\[D_{+}(\overline{\gamma}):=\left\{(i,\sigma)\bigg{|}i\in\{x,y,z\},\sigma\in\{\pm \},\#\{n\in\mathbb{Z}|n>n_{0},\overline{\gamma}(n)\subseteq\mathcal{E}^{i, \sigma}\}=\infty\right\}\]
\(D_{+}(\overline{\gamma})\) is called the set of positive infinity directions. Similarly,
\[D_{-}(\overline{\gamma}):=\left\{(j,\tau)\bigg{|}j\in\{x,y,z\},\tau\in\{\pm\}, \#\{n\in\mathbb{Z}|n<n_{0},-\overline{\gamma}(n)\subseteq\mathcal{E}^{j, \tau}\}=\infty\right\}\]
where \(-\overline{\gamma}(n)\) is the edge \(\overline{\gamma}(n)\) but pointing in the opposite direction. \(D_{-}(\overline{\gamma})\) is called the set of negative infinity directions. \(D(\overline{\gamma}):=D_{+}(\overline{\gamma})\cup D_{-}(\overline{\gamma})\) is the set of infinity directions.
_Remark_.: The definition of infinity directions is independent of the choice of \(n_{0}\), as can be checked by relabelling \(n\mapsto n+t\).
**Definition 2.12** (_Path equivalence_).: Let \(\overline{\gamma}_{1},\overline{\gamma}_{2}\) be two paths. Define
\[r=\min\{t\,|\,\overline{\gamma}_{1}(t)\notin\overline{\gamma}_{2}\}\] \[s=\max\{t\,|\,\overline{\gamma}_{1}(t)\notin\overline{\gamma}_{2}\}\]
Then \(\overline{\gamma}_{1},\overline{\gamma}_{2}\) are path equivalent if \(-\infty<r,s<\infty\).
An example of two path equivalent configurations is given in figure 2(a).
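Path equivalence concerns only the tails of the two paths, so it can be probed numerically by truncating both paths to a large box and asking whether all disagreements stay well away from the box boundary. The sketch below is a heuristic illustration only; `box_radius`, `margin`, and the edge encoding are assumptions of the sketch, not part of the definition.

```python
def path_equivalent_in_box(edges1, edges2, box_radius, margin=2):
    """Heuristic test of definition 2.12 on truncated data.

    edges1, edges2: sets of edges, each edge a pair of integer vertices
    (x, y, z); the sets are the restrictions of two infinite paths to the
    cube max(|x|,|y|,|z|) <= box_radius.  If every disagreeing edge sits
    at least `margin` away from the cube's boundary, the disagreement is
    plausibly confined to a finite region, which is the content of path
    equivalence.  This is a heuristic, not a proof.
    """
    def far_from_boundary(edge):
        return all(max(abs(c) for c in v) <= box_radius - margin for v in edge)

    disagreement = edges1 ^ edges2       # symmetric difference of edge sets
    return all(far_from_boundary(e) for e in disagreement)

line  = {((0, 0, t), (0, 0, t + 1)) for t in range(-10, 10)}
shift = {((1, 0, t), (1, 0, t + 1)) for t in range(-10, 10)}
print(path_equivalent_in_box(line, line, 10))    # True  (identical paths)
print(path_equivalent_in_box(line, shift, 10))   # False (differ out to the boundary)
```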
**Theorem 2.13**.: _Let \(\{\overline{\gamma}_{n}\}_{n=1}^{N}\) be a set of monotonic paths. Then \(\omega_{\overline{\gamma}_{1},\cdots,\overline{\gamma}_{N}}\) lies in a ground state sector iff \(\bigcap_{i=1}^{N}D(\overline{\gamma}_{i})=\emptyset\)._
**Theorem 2.14**.: _All inequivalent ground states with 1 infinite flux string are labelled by \((g\in\mathbb{Z}_{2},\overline{\gamma}\in\mathcal{M})\) where \(g\) indicates if the ground state is charged or uncharged, \(\mathcal{M}\) is the set of all path inequivalent monotonic paths._
Figure 2: Examples of monotonic and non-monotonic paths
**Definition 2.15** (_Half-infinite path_).: A positive half-infinite path on \(\overline{\Gamma}\) is a function \(\overline{\gamma}_{v}:\mathbb{Z}_{+}\to\mathcal{E}(\overline{\Gamma})\) such that for any \(a,b\in\mathbb{Z}_{+}\), \(\overline{\gamma}_{v}[a,b]:=\{\overline{\gamma}_{v}(t)\}_{t=a}^{t=b}\) is a finite path. We call \(\partial_{0}\overline{\gamma}_{v}=v\in\mathcal{V}(\overline{\Gamma})\) the starting vertex of \(\overline{\gamma}_{v}\). A negative half-infinite path is similarly defined as a function \(\overline{\gamma}_{v}:\mathbb{Z}_{-}\to\mathcal{E}(\overline{\Gamma})\) that has \(\partial_{1}\overline{\gamma}_{v}=v\) as its ending vertex.
Let \(\overline{\gamma}_{v}^{(r,\sigma)}\) with \(r\in\{x,y,z\},\sigma\in\{\pm\}\) be a positive/negative half-infinite path that has its start/end point as \(v\) and \(\overline{\gamma}_{v}^{(r,\sigma)}(t)\in\mathcal{E}^{r,\sigma}\). This defines a half-infinite path parallel to the positive/negative \(r\) principal direction and starting/ending at \(v\).
Let \(p_{(r_{1},\sigma_{1},\tau_{1}),(r_{2},\sigma_{2},\tau_{2})}\) denote an infinite monotonic path such that \(p_{(r_{1},\sigma_{1},\tau_{1}),(r_{2},\sigma_{2},\tau_{2})}(t)\in\overline{ \gamma}_{\tau_{1}}^{(r_{1},\sigma_{1})}\) for all \(t>t_{+}\), and \(p_{(r_{1},\sigma_{1},\tau_{1}),(r_{2},\sigma_{2},\tau_{2})}(t)\in\overline{ \gamma}_{\tau_{2}}^{(r_{2},\sigma_{2})}\) for all \(t<t_{-}\) for integer constants \(t_{\pm}\). Let \(\mathcal{P}_{(r_{1},\sigma_{1},\tau_{1}),(r_{2},\sigma_{2},\tau_{2})}\) denote the set of infinite paths that are path equivalent to \(p_{(r_{1},\sigma_{1},\tau_{1}),(r_{2},\sigma_{2},\tau_{2})}\).
**Theorem 2.16**.: _All inequivalent ground states containing 3 infinite flux strings \(\overline{\gamma}_{i}\) (for \(i=1,2,3\)) and a number of charges are labelled by \((g\in\mathbb{Z}_{2},\overline{\gamma}_{i}\in\mathcal{P}_{(r_{i},\sigma_{i},\tau_{i}),(r_{i},\overline{\sigma}_{i},\tau_{i}^{\prime})})\) where \(r_{i}\) are unique elements of \(\{x,y,z\}\) and \(g\) indicates if the ground state is charged or uncharged._
**Theorem 2.17**.: _There does not exist any ground state with a configuration of infinite flux strings \(\{\overline{\gamma}_{n}\}_{n=1}^{N}\) for \(N\geq 4\)._
We wish to comment that a classification also exists for sectors containing 2 infinite flux strings, though it is more involved and needs to be split into smaller cases; for this reason we do not state it in the summary. We will work out the classification fully in section 5.
## 3 Purely charged/uncharged ground states
### Finite string/surface operators
We first study the case when we have an uncharged ground state. This state \(\omega\) is defined by \(\omega(A_{v})=\omega(B_{f})=1\) for all \(v,f\). It is not hard to see that this state has the lowest possible energy, since it has eigenvalue \(+1\) for all \(A_{v},B_{f}\). The GNS triple for this state is denoted by \((\pi_{\omega},\mathcal{H}_{\omega},\Omega_{\omega})\), with \(\omega(O)=\langle\Omega_{\omega},\pi_{\omega}(O)\Omega_{\omega}\rangle\). We take \(\Omega_{\omega}\) as the vacuum vector, defined by the property \(\pi_{\omega}(A_{v})\Omega_{\omega}=\pi_{\omega}(B_{f})\Omega_{\omega}=\Omega_{\omega}\) for all \(v,f\).
**Lemma 3.1**.: _Equivalence of representations implies the corresponding GNS vectors lie in the same sector._
Proof.: Let \(\omega_{1},\omega_{2}\) be two states of \(\mathfrak{A}\) with the GNS triples \((\pi_{1},\mathcal{H}_{1},\Omega_{1}),(\pi_{2},\mathcal{H}_{2},\Omega_{2})\) respectively. Equivalence of representations implies the existence of a unitary map \(U:\mathcal{H}_{1}\to\mathcal{H}_{2}\) such that \(\pi_{2}=U\pi_{1}U^{\dagger}\). Then we have,
\[\omega_{1}(O) =\langle\Omega_{1},\pi_{1}(O)\Omega_{1}\rangle\qquad O\in\mathfrak{A}\] \[=\langle\Omega_{1},U^{\dagger}\pi_{2}(O)U\Omega_{1}\rangle\] \[=\langle U\Omega_{1},\pi_{2}(O)U\Omega_{1}\rangle\] \[=\omega_{2}(O)\]
This gives a new GNS triple \((\pi_{2},\mathcal{H}_{2},U\Omega_{1})\) for \(\omega_{1}\); since such a triple is unique up to equivalence, the two GNS vectors live in the same sector.
A 2-charge state \(\omega_{\gamma}=\langle\Omega_{\omega},\pi_{\omega}\circ\alpha_{\gamma}(\cdot)\Omega_{\omega}\rangle\) can be built for a finite path \(\gamma\) following section 2.2. The unitary \(U=\pi_{\omega}(F_{\gamma})\) gives us the equivalence relation \(\pi_{\omega}=U(\pi_{\omega}\circ\alpha_{\gamma})U^{\dagger}\). Thus the two states lie in the same sector \(\mathcal{H}_{\omega}\). An analogous analysis exists for the state with finite flux excitations. We recall \(\omega_{\overline{\gamma}}\) as the state with the representation \(\pi_{\overline{\gamma}}\) corresponding to a finite flux string along \(\overline{\gamma}\) as defined in section 2.2.
**Proposition 3.2**.: _The energy of a state \(\omega_{\overline{\gamma}}\) having a single finite flux string is proportional to \(|\overline{\gamma}|\). The energy of a flux string inside region \(\Lambda\) is proportional to the number of edges in \(\overline{\gamma}\) that lie inside \(\Lambda\)._
Proof.: Consider \(\omega_{\overline{\gamma}}\) having a finite flux string, with \(\overline{S}\) being the bounding surface of \(\overline{\gamma}\). We have \(B_{f}F_{\overline{S}}=-F_{\overline{S}}B_{f}\) if \(f\) is dual to some \(\overline{e}\in\overline{\gamma}\), and the two operators commute otherwise. This implies that for such \(f\), \(\omega_{\overline{\gamma}}(B_{f})=-1\). Consider a finite region \(\overline{\Lambda}\subset\overline{\Gamma}\) such that \(\overline{S}\) is entirely contained in \(\overline{\Lambda}\). Then the energy in \(\overline{\Lambda}\) is given by \(\omega_{\overline{\gamma}}(H_{\overline{\Lambda}})=2|\overline{\gamma}|\). If \(\overline{\Lambda}\) does not entirely contain \(\overline{S}\), then the same calculation gives \(\omega_{\overline{\gamma}}(H_{\overline{\Lambda}})=2|\overline{\gamma}|_{\overline{\Lambda}}\), where \(|\overline{\gamma}|_{\overline{\Lambda}}:=|\overline{\gamma}\setminus\mathcal{E}(\overline{\Lambda}^{c})|\) is the number of edges of \(\overline{\gamma}\) inside \(\overline{\Lambda}\) and \(\overline{\Lambda}^{c}\subset\overline{\Gamma}\) is the complement of \(\overline{\Lambda}\).
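As a purely arithmetical illustration of proposition 3.2, the energy a region assigns to a finite flux string is twice the number of edges of \(\overline{\gamma}\) lying inside that region. A minimal sketch, assuming edges carry arbitrary hashable labels and the region is given as a membership predicate:

```python
def flux_string_energy(gamma_edges, in_region):
    """Energy omega_gamma(H_Lambda) = 2 * |gamma|_Lambda from proposition 3.2.

    gamma_edges: iterable of edge labels of the (closed) flux string.
    in_region:   predicate deciding whether an edge lies inside Lambda.
    """
    return 2 * sum(1 for e in gamma_edges if in_region(e))

# Boundary of a single dual face: four edges.  A region containing the
# whole face sees energy 8; one containing half of it sees energy 4.
face_boundary = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(flux_string_energy(face_boundary, lambda e: True))       # 8
print(flux_string_energy(face_boundary, lambda e: e[0] == 0))  # 4
```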
**Proposition 3.3**.: _Finite flux strings are always closed paths on the dual lattice._
Proof.: Let \(\overline{S}\) be a surface with the smallest boundary. It contains only a single face, and \(|\overline{S}|=1\). The boundary of a face is a closed path. Since all surfaces are products of faces, the boundary of products of faces is given by pathwise addition of boundaries of the individual faces. But the pathwise addition of closed paths is always closed. It follows that since finite flux strings are the boundary excitations of a surface operator \(F_{\overline{S}}\), they are closed paths on the dual lattice \(\overline{\Gamma}\).
### Constructing a charged sector \(\mathcal{H}_{\epsilon}\)
We can construct states in a charged sector by considering automorphisms \(\alpha_{v}\) defined in section 2. We restate the definition here:
\[\alpha_{v}(O)=\lim_{n\to\infty}F_{\gamma_{v}(n)}OF_{\gamma_{v}(n)}\qquad O\in \mathfrak{A} \tag{5}\]
Where \(\gamma_{v}(n)\) is a path that starts at \(v\), stretches down in the \(-\hat{z}\) direction and ends at \(v-n\hat{z}\).
_Remark_.: The particular limit of \(n\to\infty\) was an arbitrary choice. In the physics literature this is referred to as a gauge choice. In principle, any sufficiently nice path 1 stretching to infinity in an arbitrary direction would have worked.
Footnote 1: The representation \(\pi_{v}\) should lie inside a 3d conelike region. For a thorough 2d treatment, refer to [10].
_Remark_.: We prove the convergence of \(\alpha_{v}\) in lemma B.1 in appendix B.
**Theorem 3.4** ([10], proposition 3.2.8).: _Let \(\mathfrak{A}\) be a quasilocal algebra of some spin system, and suppose \(\omega_{1},\omega_{2}\) are pure states on \(\mathfrak{A}\). Then the following criteria are equivalent:_
* _The corresponding GNS representations_ \(\pi_{1},\pi_{2}\) _are equivalent._
* _For each_ \(\epsilon>0\)_, there is a_ \(\Lambda_{\epsilon}\in\Lambda_{f}\) _such that_ \[|\omega_{1}(O)-\omega_{2}(O)|<\epsilon||O||\] _for all_ \(O\in\mathfrak{A}_{\Lambda}\) _and_ \(\Lambda\) _a finite region in_ \(\Lambda_{\epsilon}^{c}\)_._
**Lemma 3.5**.: _Representations \(\pi_{v}:=\pi_{\omega}\circ\alpha_{v},\pi_{\omega}\) are inequivalent_
Proof.: Consider \(\omega_{v}:=\omega\circ\alpha_{v}\). Let its GNS representation be \(\pi_{v}\). We will use theorem 3.4 to prove this lemma. Consider a spherical region \(\Lambda_{\epsilon}\) centered at \(v\). We can then consider \(A=F_{\overline{S}}\in\mathfrak{A}_{\Lambda}\) as a flux operator on a closed surface \(\overline{S}\) going around \(\Lambda_{\epsilon}\), as shown in figure 3(a). Here \(\Lambda\) is a finite region in \(\Lambda_{\epsilon}^{c}\). We then have:
\[|\omega\circ\alpha_{v}(F_{\overline{S}})-\omega(F_{\overline{S}})| =|\lim_{n\to\infty}\omega(F_{\gamma_{v}(n)}F_{\overline{S}}F_{ \gamma_{v}(n)})-\omega(F_{\overline{S}})|\] \[=|\lim_{n\to\infty}-\omega(F_{\overline{S}}F_{\gamma_{v}(n)}F_{ \gamma_{v}(n)})-\omega(F_{\overline{S}})|\] \[=|-\omega(F_{\overline{S}})-\omega(F_{\overline{S}})|\] \[=2||F_{\overline{S}}||\]
This is independent of \(\epsilon\), so by theorem 3.4, \([\pi_{v}]\neq[\pi_{\omega}]\). This concludes our proof.
These automorphisms are involutory (\(\alpha_{v}^{2}=\mathds{1}\)) and translation covariant (\(T_{x}\circ\alpha_{v}=\alpha_{v+x}\circ T_{x}\)) for \(x\in\mathcal{V}\). Different automorphisms are related to each other via a unitary transformation:
\[U\alpha_{v}U^{\dagger}=\alpha_{v+x}\]
where \(U=F_{\gamma(v,v+x)}\) and \(\gamma(v,v+x)\) is a path from \(v\) to \(v+x\).
There is a family of purely charged ground states in \(\mathcal{H}_{\epsilon}\), given by
\[\omega_{v}(O):=\langle\Omega_{\omega},\pi_{v}(O)\Omega_{\omega}\rangle\]
These ground states are distinguished by \(A_{v^{\prime}}\): \(\omega_{v}(A_{v^{\prime}})=1-2\delta_{v,v^{\prime}}\).
_Remark_.: \(\alpha_{v}\circ\alpha_{v^{\prime}}=\alpha_{v^{\prime}}\circ\alpha_{v}\) since charge operators commute. Similarly, \(\alpha_{\overline{\gamma}}\circ\alpha_{\overline{\gamma}^{\prime}}=\alpha_{ \overline{\gamma}^{\prime}}\circ\alpha_{\overline{\gamma}}\) since the flux strings commute.
_Remark_.: \(\alpha_{v}\circ\alpha_{\overline{\gamma}}=\alpha_{\overline{\gamma}}\circ \alpha_{v}\) for a finite flux string \(\overline{\gamma}\). This is a special case of lemma 4.12, which we prove in section 4.
We now recall and prove the theorem 2.7 for purely charged states.
**Theorem 2.7**.: _There are only 2 purely charged ground state sectors, given by \(\mathcal{H}_{\omega},\mathcal{H}_{\epsilon}\)._
Proof.: Let a general state with \(n\) charges and \(m\) finite flux strings \(\overline{\gamma}_{i}\) be given by
\[\omega_{v_{1},\cdots,v_{n};\overline{\gamma}_{1},\cdots,\overline{\gamma}_{m}}(O):=\langle\Omega_{\omega},\pi_{v_{1},\cdots,v_{n};\overline{\gamma}_{1},\cdots,\overline{\gamma}_{m}}(O)\Omega_{\omega}\rangle\qquad O\in\mathfrak{A}\]
where \(\pi_{v_{1},\cdots,v_{n};\overline{\gamma}_{1},\cdots,\overline{\gamma}_{m}} :=\pi_{\omega}\circ\alpha_{v_{1}}\circ\cdots\circ\alpha_{v_{n}}\circ\alpha_{ \overline{\gamma}_{1}}\circ\cdots\circ\alpha_{\overline{\gamma}_{m}}\).
We have \([\pi_{v_{1},\cdots,v_{n};\overline{\gamma}_{1},\cdots,\overline{\gamma}_{m}} ]=[\pi_{v_{1},\cdots,v_{n}}]\) as the flux strings are finite. If \(n\) is even, then \([\pi_{v_{1},\cdots,v_{n}}]=[\pi_{\omega}]\) with the unitary given by \(U=\pi_{\omega}(\prod_{i=2}^{n}F_{\gamma(v_{i},v_{1})})\). If \(n\) is odd, \([\pi_{v_{1},\cdots,v_{n}}]=[\pi_{v}]\) with the unitary given by \(U=\pi_{\omega}(\prod_{i=1}^{n}F_{\gamma(v_{i},v)})\). Since \([\pi_{\omega}]\neq[\pi_{v}]\), there are two possible ground state sectors: \(\mathcal{H}_{\epsilon},\mathcal{H}_{\omega}\).
## 4 1 string configurations
Through the remainder of this paper, we will work with the dual lattice \(\overline{\Gamma}\) as it is more convenient for constructing the infinite flux string states. Since a dual-path in \(\Gamma\) is just a path in \(\overline{\Gamma}\), we will refer to \(\overline{\gamma}\in\overline{\Gamma}\) as paths for the sake of brevity. However, to not confuse the reader, we still denote objects in the dual lattice by \(\overline{(\cdot)}\).
In this section, we will aim to first build an infinite flux string state \(\omega_{\overline{\gamma}}\) using infinite surface automorphisms. We will then focus on determining the necessary and sufficient conditions for \(\omega_{\overline{\gamma}}\) to be a ground state. Finally we will try to classify the ground state sectors with a single path \(\overline{\gamma}\).
### Building an infinite flux string state
Building a new sector is a little more involved for flux strings. Let's first understand how to build a state in the new sector using an example. Consider an infinite path \(\overline{\gamma}\) as defined in definition 2.8 going in the \(+\hat{z}\) direction, as depicted in figure 4(a). We will refer to figure 4(b) for the proceeding construction of the infinite flux string \(\overline{\gamma}\).
We start by defining a sequence of finite surfaces \(\overline{S}_{\overline{\gamma}_{n}}\) that have a finite section \(\overline{\gamma}_{n}\) of \(\overline{\gamma}\) as a part of their boundary. We also have \(\overline{S}_{\overline{\gamma}_{n}}\subset\overline{S}_{\overline{\gamma}_{n+1}}\). Upon taking the limit \(n\to\infty\) we obtain \(\overline{\gamma}\) as the only finite boundary of \(\lim_{n\to\infty}\overline{S}_{\overline{\gamma}_{n}}\).
Consider now the automorphism
\[\alpha_{\overline{\gamma}}(O):=\lim_{n\to\infty}F_{\overline{S}_{\overline{ \gamma}_{n}}}OF_{\overline{S}_{\overline{\gamma}_{n}}}\qquad O\in\mathfrak{A}\]
we can obtain a new state \(\omega_{\overline{\gamma}}:=\langle\Omega_{\omega},\pi_{\overline{\gamma}}( \cdot)\Omega_{\omega}\rangle\) where \(\pi_{\overline{\gamma}}:=\pi_{\omega}\circ\alpha_{\overline{\gamma}}\).
_Remark_.: We prove the convergence of \(\alpha_{\overline{\gamma}}\) in lemma B.3 in appendix B.
**Lemma 4.1**.: _Representations \(\pi_{\overline{\gamma}},\pi_{\omega}\) are inequivalent._
Proof.: Consider \(\omega_{\overline{\gamma}}:=\omega\circ\alpha_{\overline{\gamma}}\). Let its GNS representation be \(\pi_{\overline{\gamma}}\). Consider a cylindrical region \(\Lambda_{\epsilon}\) centered around \(\overline{\gamma}\). We can then consider \(A=F_{\gamma_{c}}\in\mathfrak{A}_{\Lambda}\) as a charge operator on a closed string \(\gamma_{c}\) going around \(\Lambda_{\epsilon}\), as shown in figure 3(b). Here \(\Lambda\) is a finite region in \(\Lambda_{\epsilon}^{c}\). We then have:
\[|\omega\circ\alpha_{\overline{\gamma}}(F_{\gamma_{c}})-\omega(F_ {\gamma_{c}})| =|\lim_{n\to\infty}\omega(F_{\overline{S}_{\overline{\gamma}_{n}}}F _{\gamma_{c}}F_{\overline{S}_{\overline{\gamma}_{n}}})-\omega(F_{\gamma_{c}})|\] \[=|\lim_{n\to\infty}-\omega(F_{\gamma_{c}}F_{\overline{S}_{\overline {\gamma}_{n}}}F_{\overline{S}_{\overline{\gamma}_{n}}})-\omega(F_{\gamma_{c}})|\] \[=|-\omega(F_{\gamma_{c}})-\omega(F_{\gamma_{c}})|\] \[=2||F_{\gamma_{c}}||\]
This is independent of \(\epsilon\), so by theorem 3.4, \([\pi_{\overline{\gamma}}]\neq[\pi_{\omega}]\). This concludes our proof.
\(\omega_{\overline{\gamma}}\) thus lies in a new sector, which we denote by \(\mathcal{H}_{\overline{\gamma}}\).
The bounding surface needs to be defined for the specific infinite flux line \(\overline{\gamma}\). This is known as choosing the gauge. Picking a surface is a matter of choice, as the operators in \(\mathfrak{A}\) cannot physically detect the bounding surface. This can always be done as long as one can find a 3d wedge that \(\overline{\gamma}\) lies entirely within. In contrast to the case of charged excitations, there is no universally "nice" gauge choice for a generic infinite path \(\overline{\gamma}\).
### Necessary and sufficient conditions for a ground state
**Lemma 4.2**.: _If two paths \(\overline{\gamma}_{1},\overline{\gamma}_{2}\) are path equivalent, then \([\pi_{\overline{\gamma}_{1}}]=[\pi_{\overline{\gamma}_{2}}]\)._
Proof.: If \(\overline{\gamma}_{1},\overline{\gamma}_{2}\) are path equivalent, then we can construct a finite surface \(\overline{S}\) whose boundary is \(\partial\overline{S}=(\overline{\gamma}_{1}\cup\overline{\gamma}_{2})\setminus (\overline{\gamma}_{1}\cap\overline{\gamma}_{2})\). We then have the required unitary \(U=\pi_{\omega}(F_{\overline{S}})\) such that \(\pi_{\overline{\gamma}_{1}}=U\pi_{\overline{\gamma}_{2}}U^{\dagger}\).
**Lemma 4.3**.: _If two paths \(\overline{\gamma}_{1},\overline{\gamma}_{2}\) are path equivalent, then one can choose a finite region \(\overline{\Lambda}\) which satisfies \(\overline{\gamma}_{1}(t)\in\overline{\gamma}_{2}\) for all \(\overline{\gamma}_{1}(t)\in\mathcal{E}(\overline{\Lambda}^{C})\). The energy difference \(\Delta E\) between \(\omega_{\overline{\gamma}_{1}},\omega_{\overline{\gamma}_{2}}\) is given by \(\Delta E=\omega_{\overline{\gamma}_{1}}(H_{\overline{\Lambda}})-\omega_{ \overline{\gamma}_{2}}(H_{\overline{\Lambda}})\). \(\Delta E\) is finite and independent of the choice of region \(\overline{\Lambda}\)._
Proof.: Let \(\overline{\gamma}_{1},\overline{\gamma}_{2}\) be path equivalent. Then there exist \(r,s\) with \(r<s\) such that for all \(t<r,t>s\), \(\overline{\gamma}_{1}(t)\in\overline{\gamma}_{2}\). We can similarly define \(r^{\prime},s^{\prime}\) with \(r^{\prime}<s^{\prime}\) such that \(\overline{\gamma}_{2}(t^{\prime})\in\overline{\gamma}_{1}\) for all \(t^{\prime}<r^{\prime},t^{\prime}>s^{\prime}\). Choose a finite region \(\overline{\Lambda}\) such that \(\overline{\gamma}_{1}(t),\overline{\gamma}_{2}(t^{\prime})\in\mathcal{E}( \overline{\Lambda})\) for all \(r<t<s\) and \(r^{\prime}<t^{\prime}<s^{\prime}\) and \(\overline{\gamma}_{1}(t),\overline{\gamma}_{2}(t^{\prime})\in\mathcal{E}( \overline{\Lambda}^{C})\) otherwise. This is the required region where \(\overline{\gamma}_{1},\overline{\gamma}_{2}\) share all edges outside of \(\overline{\Lambda}\). It is minimal in the sense that only the edges in \(\overline{\gamma}_{1},\overline{\gamma}_{2}\) that are different are in \(\mathcal{E}(\overline{\Lambda})\) and the shared edges are in \(\mathcal{E}(\overline{\Lambda}^{C})\).
Choose a region \(\overline{\Lambda}^{\prime}\supset\overline{\Lambda}\). This region will also satisfy \(\overline{\gamma}_{1}(t)\in\overline{\gamma}_{2}\) for all \(\overline{\gamma}_{1}(t)\in\mathcal{E}(\overline{\Lambda}^{\prime})\). We can always split this into two disjoint subsets: \(\overline{\Lambda}^{\prime}=\overline{\Lambda}\cup\overline{\Lambda}^{\prime \prime}\). The energy difference inside \(\overline{\Lambda}^{\prime}\) is given by
\[\Delta E_{\overline{\Lambda}^{\prime}} =\omega_{\overline{\gamma}_{1}}(H_{\overline{\Lambda}^{\prime}}) -\omega_{\overline{\gamma}_{2}}(H_{\overline{\Lambda}^{\prime}})\] \[=|\overline{\gamma}_{1}|_{\overline{\Lambda}^{\prime}}-|\overline {\gamma}_{2}|_{\overline{\Lambda}^{\prime}}\] \[=|\overline{\gamma}_{1}|_{\overline{\Lambda}}-|\overline{\gamma}_{ 2}|_{\overline{\Lambda}}\]
Where \(|\overline{\gamma}|_{\overline{\Lambda}}\) was defined in proposition 3.2. The last equality follows from the fact that \(\overline{\gamma}_{1},\overline{\gamma}_{2}\) share all their edges inside \(\overline{\Lambda}^{\prime\prime}\). So the energy difference between \(\omega_{\overline{\gamma}_{1}},\omega_{\overline{\gamma}_{2}}\) is independent of the region \(\overline{\Lambda}^{\prime}\), and is finite.
We can now begin the program of proving theorem 2.10. Recall definition 2.9 of a monotonic infinite path. Figure 6 shows some examples of finite sections of monotonic and non-monotonic paths.
**Lemma 4.4**.: _Consider a monotonic infinite path \(\overline{\gamma}\). Choose a finite cuboidal region \(\overline{\Lambda}\) such that \(\partial_{0}\overline{\gamma}(n)\) and \(\partial_{1}\overline{\gamma}(m)\) lie on the longest diagonal of \(\overline{\Lambda}\) for arbitrary integer constants \(m,n\). Inside \(\overline{\Lambda}\), all monotonic deformations \(\overline{\gamma}^{\prime}\) of \(\overline{\gamma}\) have the same energy. All such \(\omega_{\overline{\gamma}^{\prime}}\) have the lowest energy inside \(\overline{\Lambda}\)._
Proof.: WLOG assume \(\overline{\Lambda}\) is a cuboidal region \([0,a]\times[0,b]\times[0,c]\). Consider a finite monotonic path such that \(\partial_{0}\overline{\gamma}(0)=(0,0,0)\) and \(\partial_{1}\overline{\gamma}(N)=(a,b,c)\). Let \(N_{r}=\#\{\overline{\gamma}(t)|\overline{\gamma}(t)\in\mathcal{E}^{r,\pm}\}\) for \(r\in\{x,y,z\}\). To go from \((0,0,0)\) to \((a,b,c)\) one needs at least \(a+b+c\) edges.
Since \(\overline{\gamma}\) is monotonic in the \(x,y,z\) directions, it takes the fewest edges to get to \((a,b,c)\), so \(N_{x}=a,N_{y}=b,N_{z}=c\). If there were any edges that do not go towards \((a,b,c)\), as is the case in a non-monotonic path, the path would be longer. All monotonic deformations \(\overline{\gamma}^{\prime}\) of \(\overline{\gamma}\) inside \(\overline{\Lambda}\) also use the fewest edges on account of being monotonic. From proposition 3.2, \(\omega_{\overline{\gamma}}(H_{\overline{\Lambda}})\) is proportional to \(|\overline{\gamma}|_{\overline{\Lambda}}\). As all \(\overline{\gamma}^{\prime}\) already have the fewest edges, we have that the \(\omega_{\overline{\gamma}^{\prime}}\) all have the lowest energy inside \(\overline{\Lambda}\) if the endpoints are held fixed.
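The counting step in the proof, namely that any monotonic lattice path from \((0,0,0)\) to \((a,b,c)\) uses exactly \(a+b+c\) edges and that no path can do better, can be sanity-checked by brute force on a small cuboid. The sketch below is only such a check; it assumes nothing beyond the cubic-lattice adjacency.

```python
from collections import deque
from itertools import product

def shortest_path_length(target, box):
    """BFS shortest edge-path length from (0,0,0) to `target` inside `box`."""
    start = (0, 0, 0)
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    seen, frontier, dist = {start}, deque([start]), {start: 0}
    while frontier:
        v = frontier.popleft()
        if v == target:
            return dist[v]
        for s in steps:
            w = tuple(v[i] + s[i] for i in range(3))
            if w not in seen and all(0 <= w[i] <= box[i] for i in range(3)):
                seen.add(w)
                dist[w] = dist[v] + 1
                frontier.append(w)
    return None

# The graph distance from the origin to (a,b,c) equals a+b+c, the number
# of edges any monotonic path uses.
for a, b, c in product(range(3), repeat=3):
    assert shortest_path_length((a, b, c), (2, 2, 2)) == a + b + c
print("minimal length equals a+b+c on the 2x2x2 cuboid")
```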
**Theorem 4.5**.: _For a given path \(\overline{\gamma}\), \(\omega_{\overline{\gamma}}\) is not a ground state if \(\overline{\gamma}\) is non-monotonic._
Proof.: Suppose \(\overline{\gamma}\) is a non-monotonic path inside a finite region \(\overline{\Lambda}\) (and possibly outside \(\overline{\Lambda}\) as well). Then one can instead consider a path \(\overline{\gamma}^{\prime}\) which is the same as \(\overline{\gamma}\) outside \(\overline{\Lambda}\), but monotonic inside \(\overline{\Lambda}\). This can always be done because there always exists at least one monotonic path between any two points inside \(\overline{\Gamma}\). Due to proposition 3.2, \(\omega_{\overline{\gamma}^{\prime}}\) has a lower energy than \(\omega_{\overline{\gamma}}\), so \(\omega_{\overline{\gamma}}\) is not a ground state.
Let a non-monotonic path \(\overline{\gamma}\) be given. We explicitly sketch the construction of a shorter path \(\overline{\gamma}^{\prime}\). We will call this construction the "straightening procedure", or "straightening" for short.
Let \(\mu,\nu\in\{x,y,z\}\) be two directions. We can divide the straightening procedure into 3 cases of increasing complexity:
(I) First consider the case where \(\overline{\gamma}\) is parallel to one of the principal planes, call it the \(\nu\)-plane, such that \(\nu\) is the normal to the plane. Assume that it is non-monotonic only along the \(\mu\) direction. Then there must exist two edges \(\overline{\gamma}(m),\overline{\gamma}(n)\) such that \((\partial_{0}\overline{\gamma}(m))_{\mu}=(\partial_{1}\overline{\gamma}(n))_{\mu}\). Then we can construct a new path \(\overline{\gamma}^{\prime}\) such that \(\overline{\gamma}^{\prime}(t)\in\overline{\gamma}\) for all \(t<m,t>n\), and it is monotonic between the points \(\partial_{0}\overline{\gamma}(m),\partial_{1}\overline{\gamma}(n)\). \(\overline{\gamma}^{\prime}\) is path equivalent to \(\overline{\gamma}\) and is monotonic in this region. Thus \(\omega_{\overline{\gamma}^{\prime}}\) must have a lower energy than \(\omega_{\overline{\gamma}}\), so the latter could not have been a ground state.
Figure 6: An example of different monotonic and non-monotonic paths between two endpoints of \(\overline{\Lambda}\). \(\overline{\gamma}_{3}\) is non-monotonic while \(\overline{\gamma}_{1},\overline{\gamma}_{2}\) are monotonic.
Figure 7: Straightening an example path that is non-monotonic in 2 directions (x,z in this figure). First project to the \(x-y\) plane. Then straighten in that plane. Then lift back up to the entire lattice.
(II) Now we consider the case where \(\overline{\gamma}\) is still non-monotonic along the \(\mu\) direction, but is now no longer restricted to lie parallel to the \(\nu\) plane. In this case we can define a projection \(P_{\nu}\) that projects \(\overline{\gamma}\) on the \(\nu\) plane. This can be done by simply throwing out the edges in the \(\nu\) direction. The path \(\overline{\gamma}_{\nu}\) thus obtained may now be finite. We can apply the same procedure as (I) to obtain a monotonic path \(\overline{\gamma}^{\prime}_{\nu}\). We can then lift this path back into the \(\nu\) direction by reintroducing the edges we threw out to obtain \(\overline{\gamma}^{\prime}\), which is path equivalent to \(\overline{\gamma}\) but is monotonic inside this region.
(III) If \(\overline{\gamma}\) is non-monotonic along more than one direction, then one can always choose a smaller region where it is monotonic in 2 of the 3 principal directions.
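A minimal sketch of the net effect of the straightening procedure on a finite segment: repeatedly applying cases (I)-(III) inside a region replaces the segment by a monotonic segment with the same endpoints, i.e. one realising the same net displacement with the fewest edges. The step encoding, and the particular axis-by-axis ordering of the output, are illustrative choices of the sketch.

```python
def straighten_segment(segment):
    """Replace a finite segment of a path by a monotonic segment with the
    same endpoints (the net outcome of repeatedly applying the
    straightening procedure inside the region containing the segment).

    `segment` is a list of unit steps (r, sigma), r in {"x","y","z"},
    sigma in {+1,-1}.  The returned segment realises the same net
    displacement using the minimal number of edges.
    """
    net = {"x": 0, "y": 0, "z": 0}
    for r, sigma in segment:
        net[r] += sigma
    straightened = []
    for r in ("x", "y", "z"):   # one arbitrary monotonic ordering of the steps
        sign = 1 if net[r] >= 0 else -1
        straightened += [(r, sign)] * abs(net[r])
    return straightened

# A U-shaped detour (down, across, up) straightens to a single step.
print(straighten_segment([("z", -1), ("x", 1), ("z", 1)]))  # [('x', 1)]
```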
**Corollary 4.6**.: _We can use the straightening procedure to construct a state \(\omega_{\overline{\gamma}^{\prime}}\) with lower energy inside \(\overline{\Lambda}\) than a state \(\omega_{\overline{\gamma}}\) with a non-monotonic path \(\overline{\gamma}\). If we end up with a monotonic path \(\overline{\gamma}^{\prime}\) after straightening a finite number of times in different regions, then \(\overline{\gamma}^{\prime}\) is path equivalent to \(\overline{\gamma}\), and by lemma 4.2 \(\omega_{\overline{\gamma}}\) and \(\omega_{\overline{\gamma}^{\prime}}\) are equivalent._
Proof.: Let \(\overline{\gamma}^{\prime}\) be a result of straightening \(\overline{\gamma}\) N times. Let \(\overline{\Lambda}_{n}\) be the finite region that encompasses the section of \(\overline{\gamma}\) that was straightened in the \(n\)th time. Then we can consider a region \(\overline{\Lambda}=\cup_{n=1}^{N}\overline{\Lambda}_{n}\). Since \(\overline{\gamma}^{\prime}\) was obtained from \(\overline{\gamma}\) through modifications inside \(\overline{\Lambda}\), we have \(\overline{\gamma}^{\prime}(r)\in\overline{\gamma}\) for all \(\overline{\gamma}^{\prime}(r)\notin\mathcal{E}(\overline{\Lambda})\). Hence \(\overline{\gamma}^{\prime}\) is path equivalent to \(\overline{\gamma}\).
Let us now recall and prove theorem 2.10:
**Theorem 2.10**.: _A state \(\omega_{\overline{\gamma}}\) is a ground state iff \(\overline{\gamma}\) is a monotonic path in \(\overline{\Gamma}\)._
Proof.: Consider a path \(\overline{\gamma}\). If it is non-monotonic, theorem 4.5 shows \(\omega_{\overline{\gamma}}\) is not a ground state. If it is monotonic, then from lemma 4.4 it has the lowest energy inside any given cuboidal region \(\overline{\Lambda}\) such that \(\partial_{0}\overline{\gamma}(n)\), \(\partial_{1}\overline{\gamma}(m)\) lie on the endpoints of a longest diagonal of \(\overline{\Lambda}\). Since we can freely choose said \(\overline{\Lambda}\), \(\omega_{\overline{\gamma}}\) must be a ground state.
Conversely, if \(\omega_{\overline{\gamma}}\) is a ground state, then it must have the lowest energy and thus the fewest edges in any finite region. Through lemma 4.4 we know that a path that takes the fewest edges to get from start to end point inside \(\overline{\Lambda}\) must be monotonic. So \(\omega_{\overline{\gamma}}\) being a ground state implies \(\overline{\gamma}\) is monotonic.
As an example of the straightening procedure, let us explicitly start from a non-monotonic state and construct a new state with lower energy. Consider a state \(\omega_{\overline{\gamma}}\) with a U shaped path \(\overline{\gamma}\) as shown in figure 8. Straighten \(\overline{\gamma}\) inside the region \(\overline{\Lambda}\) as shown in the figure to build a new state \(\omega_{\overline{\gamma}(n)}\) with a path \(\overline{\gamma}(n)\). We can indefinitely straighten it by choosing a different region \(\overline{\Lambda}(n)\). The limit in \(n\) of this procedure will give us \(\omega\), the state with no flux strings. However, the representations \(\pi_{\overline{\gamma}}\) and \(\pi_{\omega}\) are inequivalent by lemma 4.1. So \(\omega_{\overline{\gamma}}\) does in fact belong to a different sector. However, this is not a ground state sector, as the energy of \(\omega_{\overline{\gamma}}\) can be lowered indefinitely.
Figure 8: We can keep straightening this configuration
### Infinity directions and a classification
We now turn to the classification of ground states. We start by considering infinity directions (recall definition 2.11) as our tool for understanding which states can be reduced to ground states.
**Lemma 4.7**.: _The state \(\omega_{\overline{\gamma}}\) does not have a canonical labelling of positive or negative infinity directions. Positive and negative infinity directions are thus interchangeable for any path \(\overline{\gamma}\)._
Proof.: Consider a path \(\overline{\gamma}\) whose edges are labelled by an integer parameter \(t\). The transformation \(t\mapsto-t\) reverses the orientation of \(\overline{\gamma}\) and maps \(D_{+}(\overline{\gamma})\) to \(D_{-}(\overline{\gamma})\) and \(D_{-}(\overline{\gamma})\) to \(D_{+}(\overline{\gamma})\). However the orientation of \(\overline{\gamma}\) is irrelevant since \(\alpha_{\overline{\gamma}}^{\dagger}=\alpha_{\overline{\gamma}}\). So there is no canonical labelling and \(D_{-}(\overline{\gamma})\) and \(D_{+}(\overline{\gamma})\) are interchangeable.
**Lemma 4.8**.: _Let \(D(\overline{\gamma})\) be the infinity directions for the path \(\overline{\gamma}\). One can always choose a finite region \(\overline{\Lambda}\) such that for all \(\overline{\gamma}(t)\in\mathcal{E}(\overline{\Lambda}^{c})\) we have \(\overline{\gamma}(t)\in\cup_{(r,\sigma)\in D(\overline{\gamma})}\mathcal{E}^{r,\sigma}\)._
Proof.: Since for all \((r,\sigma)\notin D(\overline{\gamma})\) we have \(\#\{\overline{\gamma}(t)|\overline{\gamma}(t)\in\mathcal{E}^{r,\sigma}\}<\infty\), we can choose a finite region \(\overline{\Lambda}\) such that \(\overline{\gamma}(t)\in\mathcal{E}(\overline{\Lambda})\) for all \(\overline{\gamma}(t)\in\cup_{(r,\sigma)\notin D(\overline{\gamma})}\mathcal{E}^{r,\sigma}\). It follows that for all \(\overline{\gamma}(t)\in\mathcal{E}(\overline{\Lambda}^{c})\) we have \(\overline{\gamma}(t)\in\cup_{(r,\sigma)\in D(\overline{\gamma})}\mathcal{E}^{r,\sigma}\).
**Lemma 4.9**.: \(\omega_{\overline{\gamma}}\) _is in a ground state sector iff \(D_{+}(\overline{\gamma})\cap D_{-}(\overline{\gamma})=\emptyset\)_
Proof.: Consider a state \(\omega_{\overline{\gamma}}\) such that \(\overline{\gamma}\) has a direction \((r,\sigma)\in D_{+}(\overline{\gamma})\cap D_{-}(\overline{\gamma})\).
There will then exist an integer \(c=(\partial_{0}\overline{\gamma}(n))_{r}=(\partial_{1}\overline{\gamma}(m))_{r}\). This means \(\overline{\gamma}\) is non-monotonic in at least the \(r\) direction. We can then straighten it to a path \(\overline{\gamma}^{\prime}\) with a lower energy. But since there are infinitely many such edges, one can always consider another integer \(c^{\prime}=(\partial_{0}\overline{\gamma}(n^{\prime}))_{r}=(\partial_{1}\overline{\gamma}(m^{\prime}))_{r}\) such that \(\sigma c^{\prime}>\sigma c\). Thus we can construct paths of lower energy indefinitely, implying we can never reach a path \(\overline{\gamma}^{\prime}\) for which \(\omega_{\overline{\gamma}^{\prime}}\) is a ground state.
We can easily see the converse from theorem 2.10. If \(\omega_{\overline{\gamma}}\) is a ground state, it must be monotonic. So there exists \(\sigma_{r}\in\{\pm\}\) such that \(\overline{\gamma}\subseteq\cup_{r=1}^{3}\mathcal{E}^{r,\sigma_{r}}\). Then \(D_{+}(\overline{\gamma})\subseteq\{(r,\sigma_{r})\}_{r\in\{x,y,z\}}\). Whereas \(D_{-}(\overline{\gamma})\subseteq\{(r,\sigma_{r}^{c})\}_{r\in\{x,y,z\}}\) where \(\sigma_{r}^{c}\in\{\pm\}\) is the complement of \(\sigma_{r}\). It follows that \(D_{+}(\overline{\gamma})\cap D_{-}(\overline{\gamma})=\emptyset\).
**Definition 4.10** (_Pathologically non-monotonic_).: A path \(\overline{\gamma}\) is pathologically non-monotonic if there exists an infinite sequence \(\{\overline{\gamma}_{n}\}_{n=1}^{\infty}\) such that \(\overline{\gamma}_{1}=\overline{\gamma}\), \(\overline{\gamma}_{n}\) is path equivalent to \(\overline{\gamma}_{m}\) for all \(m,n\in\mathbb{Z}_{+}\), and we can straighten \(\overline{\gamma}_{n}\) to \(\overline{\gamma}_{n+1}\) for all \(n\geq 1\) resulting in a state \(\omega_{\overline{\gamma}_{n+1}}\) with lower energy than \(\omega_{\overline{\gamma}_{n}}\).
**Corollary 4.11**.: _If \(\overline{\gamma}\) is pathologically non-monotonic, then \(D_{+}(\overline{\gamma})\cap D_{-}(\overline{\gamma})\neq\emptyset\). Consequently, \(\omega_{\overline{\gamma}}\) is not in a ground state sector._
Proof.: If \(\overline{\gamma}\) is pathologically non-monotonic, then we have an infinite sequence \(\{\overline{\gamma}_{n}\}\) such that \(\overline{\gamma}_{n}\) can be straightened to \(\overline{\gamma}_{n+1}\) such that \(\omega_{\overline{\gamma}_{n+1}}\) has a lower energy than \(\omega_{\overline{\gamma}_{n}}\). Denote by \((r_{n},\sigma_{n})\) the direction along which we straighten \(\overline{\gamma}_{n}\) to \(\overline{\gamma}_{n+1}\).
Let \(N_{(r,\sigma)}=\#\{(r_{n},\sigma_{n})|n\in\mathbb{Z}_{+},r_{n}=r,\sigma_{n}=\sigma\}\), where \(r\in\{x,y,z\},\sigma\in\pm\). Since \(\#\{(r_{n},\sigma_{n})\}=\infty\), there must exist at least one direction \((r,\sigma)\) such that \(N_{(r,\sigma)}=\infty\).
Pick \(\overline{\gamma}_{m}\) such that \((r_{m},\sigma_{m})=(r,\sigma)\). Follow the straightening procedure from \(\overline{\gamma}\) to \(\overline{\gamma}_{m}\). Since \(\overline{\gamma}\) can be straightened to \(\overline{\gamma}_{m}\), there must exist two edges \(\overline{\gamma}_{m}(a),\overline{\gamma}_{m}(b)\in\overline{\gamma}\) such that \((\partial_{0}\overline{\gamma}_{m}(a))_{r}=(\partial_{1}\overline{\gamma}_{m}( b))_{r}\). Let \(g,h\in\mathbb{Z}\) be such that \(\overline{\gamma}(g)=\overline{\gamma}_{m}(a),\overline{\gamma}(h)=\overline{ \gamma}_{m}(b)\). Now pick an edge \(\overline{\gamma}(t)\) of \(\overline{\gamma}\) such that \(g<t<h\).
Since \(N_{(r,\sigma)}=\infty\), we must have another \(\overline{\gamma}_{m^{\prime}}\) with \(m^{\prime}>m\) such that \(\overline{\gamma}\) can be straightened to it. We now have \(\overline{\gamma}_{m^{\prime}}(a^{\prime}),\overline{\gamma}_{m^{\prime}}(b^{\prime})\in\overline{\gamma}\) such that \((\partial_{0}\overline{\gamma}_{m^{\prime}}(a^{\prime}))_{r}=(\partial_{1}\overline{\gamma}_{m^{\prime}}(b^{\prime}))_{r}\). Let \(g^{\prime},h^{\prime}\in\mathbb{Z}\) be such that \(\overline{\gamma}(g^{\prime})=\overline{\gamma}_{m^{\prime}}(a^{\prime}),\overline{\gamma}(h^{\prime})=\overline{\gamma}_{m^{\prime}}(b^{\prime})\). We then have \(g^{\prime}<t<h^{\prime}\). As \(\overline{\gamma}_{m}\) can also be straightened to \(\overline{\gamma}_{m^{\prime}}\), we have \(g^{\prime}<g,h^{\prime}>h\). This implies the direction \((r,\sigma)\) belongs to both \(D_{+}(\overline{\gamma}),D_{-}(\overline{\gamma})\). Hence \(D_{+}(\overline{\gamma})\cap D_{-}(\overline{\gamma})\neq\emptyset\).
Using lemma 4.9, it follows that \(\omega_{\overline{\gamma}}\) is not in a ground state sector. This concludes our proof.
**Lemma 4.12**.: \(\alpha_{v}\circ\alpha_{\overline{\gamma}}=\alpha_{\overline{\gamma}}\circ \alpha_{v}\)_, that is the automorphisms \(\alpha_{v},\alpha_{\overline{\gamma}}\) commute._
Proof.: Let \(O\in\mathfrak{A}\). Then we have,
\[\alpha_{v}\circ\alpha_{\overline{\gamma}}(O) =\lim_{n\to\infty}\lim_{m\to\infty}F_{\gamma_{v}(n)}F_{\overline{S}_{\overline{\gamma}(m)}}OF_{\overline{S}_{\overline{\gamma}(m)}}F_{\gamma_{v}(n)}\] \[=(-1)^{2\{\text{linking number}\}}\lim_{n\to\infty}\lim_{m\to\infty}F_{\overline{S}_{\overline{\gamma}(m)}}F_{\gamma_{v}(n)}OF_{\gamma_{v}(n)}F_{\overline{S}_{\overline{\gamma}(m)}}\] \[=\alpha_{\overline{\gamma}}\circ\alpha_{v}(O)\]
Where we have used the property that \(F_{\gamma_{v}(n)}F_{\overline{S}_{\overline{\gamma}(m)}}=(-1)^{\{\text{linking number}\}}F_{\overline{S}_{\overline{\gamma}(m)}}F_{\gamma_{v}(n)}\).
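The sign bookkeeping here is purely combinatorial: assuming, as for the string and surface operators used above, that the two operators anticommute once for every edge on which both act (so that a single pass reproduces the \((-1)^{\{\text{linking number}\}}\) factor), the conjugation in the proof moves the string operator past the surface operator twice and contributes no net sign. A minimal sketch of this counting; the operator content itself is abstracted away and is an assumption of the sketch.

```python
def commutation_sign(string_edges, surface_edges, passes=1):
    """Sign picked up when moving a string operator past a surface operator.

    Assumes the two operators anticommute once per edge in the
    intersection of their supports.  `passes=2` corresponds to the
    conjugation in the proof of lemma 4.12, where the string operator is
    moved past the surface operator on both sides of O.
    """
    shared = len(set(string_edges) & set(surface_edges))
    return (-1) ** (shared * passes)

# One shared edge: a single pass anticommutes, a double pass does not,
# matching the factor (-1)^(2 * linking number) = +1 above.
print(commutation_sign({"e0"}, {"e0", "e1"}, passes=1))  # -1
print(commutation_sign({"e0"}, {"e0", "e1"}, passes=2))  #  1
```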
Lemma 4.12 implies that one does not need to worry about the order of the automorphisms, and to build a state with charges \(v_{i}\) and fluxes \(\overline{\gamma}_{i}\), one may choose a convention to first apply \(\alpha_{\overline{\gamma}_{i}}\), then \(\alpha_{v_{i}}\).
We now use infinity directions to attempt to classify the different ground state sectors of the 3d Toric Code. Let us recall and prove theorem 2.14:
**Theorem 2.14**.: _All inequivalent ground states with 1 infinite flux string are labelled by \((g\in\mathbb{Z}_{2},\overline{\gamma}\in\mathcal{M})\) where \(g\) indicates if the ground state is charged or uncharged, \(\mathcal{M}\) is the set of all path inequivalent monotonic paths._
Proof.: Consider a state \(\omega_{\{v_{i}\},\overline{\gamma}}\) with a number of charges at \(v_{i}\) and an infinite flux string denoted by the path \(\overline{\gamma}\). We can immediately see (with the same reasoning as theorem 2.7) that \([\pi_{\{v_{i}\},\overline{\gamma}}]=[\pi_{\overline{\gamma}}]\) if \(|\{v_{i}\}|\) is even, and \([\pi_{\{v_{i}\},\overline{\gamma}}]=[\pi_{v,\overline{\gamma}}]\) if \(|\{v_{i}\}|\) is odd. \(g\in\mathbb{Z}_{2}\) is the parity of \(|\{v_{i}\}|\), indicating if the state is charged. From theorem 2.10, if \(\omega_{\overline{\gamma}}\) is a ground state then necessarily \(\overline{\gamma}\in\mathcal{M}^{\prime}\), where \(\mathcal{M}^{\prime}\) is the set of all monotonic paths. From lemma 4.2, any two states are equivalent if their paths are path equivalent. So to find all inequivalent ground states, we need \(\overline{\gamma}\in\mathcal{M}^{\prime}/\sim=:\mathcal{M}\), where \(\sim\) is the relation of path equivalence. Hence the inequivalent ground states are completely labelled by \((g\in\mathbb{Z}_{2},\,\overline{\gamma}\in\mathcal{M})\).
## 5 2 string configurations
Using the formalism developed in section 4 we can readily tackle the classification of 2 infinite flux string states after first defining the concepts of surgery and truncation. From theorem 2.10 we already know that for \(\omega_{\overline{\gamma}_{1},\overline{\gamma}_{2}}\) to be a ground state, we have a prerequisite condition that \(\overline{\gamma}_{1},\overline{\gamma}_{2}\) must individually be monotonic. However there are additional conditions that we will explore now.
### Performing surgery
Consider as an example the state \(\omega_{\overline{\gamma}_{1},\,\overline{\gamma}_{2}}\) with \(\overline{\gamma}_{1},\overline{\gamma}_{2}\) as two infinite monotonic paths as shown in figure 8(a). Let us choose a rectangular finite surface \(\overline{S}\) as shown in figure 8(b) such that we get \(\omega_{\overline{\gamma},\overline{\gamma}_{1},\overline{\gamma}_{2}}=\omega_{\overline{\gamma}_{1}^{\prime},\overline{\gamma}_{2}^{\prime}}\), where \(\overline{\gamma}_{1}^{\prime},\overline{\gamma}_{2}^{\prime}\) are two disconnected U shaped paths and \(\overline{\gamma}=\partial\overline{S}\). Since \(\overline{\gamma}_{1}^{\prime},\overline{\gamma}_{2}^{\prime}\) are both pathologically non-monotonic, by corollary 4.11 \(\mathcal{H}_{\overline{\gamma}_{1},\overline{\gamma}_{2}}\) is not a ground state sector.
The above example can be called a "surgery" as we are in effect cutting the strings and reattaching them differently. In general we may perform surgery on any N infinite path configuration. It amounts to the following algorithm:
* Let \(\{\overline{\gamma}_{n}\}_{n=1}^{N}\) be a set of non-overlapping paths in the configuration.
* Consider a finite simply connected convex surface \(\overline{S}\) such that \(\partial\overline{S}\cap\overline{\gamma}_{n}\neq\emptyset\) for at least a single \(n\).
* Performing surgery geometrically amounts to considering the set \((\partial\overline{S}\bigcup_{n=1}^{N}\overline{\gamma}_{n})\setminus\bigcup_ {n=1}^{N}(\partial\overline{S}\cap\overline{\gamma}_{n})\)
* The set \((\partial\overline{S}\bigcup_{n=1}^{N}\overline{\gamma}_{n})\setminus\bigcup_ {n=1}^{N}(\partial\overline{S}\cap\overline{\gamma}_{n})\) is a set of new paths \(\{\overline{\gamma}_{n}^{\prime}\}_{n=1}^{N}\). We prove this fact in theorem B.5.
_Remark_.: \(\overline{S}\) is finite, so surgery is implemented by the local operator \(F_{\overline{S}}\in\mathfrak{A}\) and does not change the sector of the configuration.
_Remark_.: The set \((\partial\overline{S}\bigcup_{n=1}^{N}\overline{\gamma}_{n})\setminus\bigcup_ {n=1}^{N}(\partial\overline{S}\cap\overline{\gamma}_{n})\) is specifically considered to reflect the \(\mathbb{Z}_{2}\) grading of the flux operators \(F_{\overline{S}}\). This property geometrically means that two flux strings going over the same edge cancel out on that edge.
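Concretely, because of the \(\mathbb{Z}_{2}\) grading just mentioned, the edge content of the post-surgery configuration is the mod-2 sum (symmetric difference) of \(\partial\overline{S}\) with the union of the \(\overline{\gamma}_{n}\). A minimal sketch of this bookkeeping, assuming each path and \(\partial\overline{S}\) are given as sets of unoriented edge labels; splitting the resulting edge set back into individual paths \(\{\overline{\gamma}^{\prime}_{n}\}\) (theorem B.5) is not attempted here.

```python
def surgery(boundary_edges, path_edge_sets):
    """Edge content of a flux-string configuration after surgery.

    boundary_edges:  set of edges of the boundary of the finite surface S.
    path_edge_sets:  list of edge sets, one per (non-overlapping) path.
    Returns (dS u U gamma_n) \ U (dS n gamma_n), i.e. the mod-2 sum of
    the boundary with the union of the paths.
    """
    union = set().union(*path_edge_sets)
    return boundary_edges ^ union   # symmetric difference = mod-2 addition

# Toy version of the two-string example above: two straight strings cut
# and re-joined by a rectangle whose long sides overlap the strings; the
# overlapping edges cancel and the connector edges re-join the loose ends.
string1 = {("a", t) for t in range(6)}
string2 = {("b", t) for t in range(6)}
rectangle = {("a", 2), ("a", 3), ("b", 2), ("b", 3), ("left",), ("right",)}
print(sorted(surgery(rectangle, [string1, string2])))
```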
**Theorem 5.1**.: _A state \(\omega_{\overline{\gamma}_{1},\overline{\gamma}_{2}}\) for \(\overline{\gamma}_{1},\overline{\gamma}_{2}\) as monotonic paths is in a ground state sector iff \(D(\overline{\gamma}_{1})\cap D(\overline{\gamma}_{2})=\emptyset\)_
Proof.: We first choose a convenient prescription for \(D_{\pm}(\overline{\gamma}_{1}),D_{\pm}(\overline{\gamma}_{2})\) taking advantage of lemma 4.7 such that if there exists \((r,\sigma)\in D(\overline{\gamma}_{1})\cap D(\overline{\gamma}_{2})\), then \((r,\sigma)\in D_{+}(\overline{\gamma}_{1}),D_{+}(\overline{\gamma}_{2})\). Since \(\overline{\gamma}_{1},\overline{\gamma}_{2}\) are monotonic, we have from lemma 4.9,
\[D_{+}(\overline{\gamma}_{1})\cap D_{-}(\overline{\gamma}_{1}) =\emptyset D_{+}(\overline{\gamma}_{2})\cap D_{-}(\overline{\gamma}_{2}) =\emptyset \tag{6}\]
Let us perform surgery on \(\overline{\gamma}_{1},\overline{\gamma}_{2}\). Consider a finite region \(\overline{\Lambda}\) in accordance with lemma 4.8. Choose a finite \(\overline{S}\) within \(\overline{\Lambda}\) such that \(\partial\overline{S}\cap\overline{\gamma}_{i}\neq\emptyset\) with \(i=1,2\). If the paths are non-overlapping
then we will have \(\overline{S}\setminus(\overline{\gamma}_{1}\cup\overline{\gamma}_{2})\neq\emptyset\). If the paths are overlapping then we can always consider instead a finite deformation of these paths which makes them non-overlapping. Performing surgery will give us paths \(\overline{\gamma}^{\prime}_{1},\overline{\gamma}^{\prime}_{2}\) such that (up to relabelling of \(D_{\pm}(\overline{\gamma}^{\prime}_{i})\) using lemma 4.7) \(D_{+}(\overline{\gamma}^{\prime}_{1})=D_{+}(\overline{\gamma}_{1}),D_{-}(\overline{\gamma}^{\prime}_{1})=D_{+}(\overline{\gamma}_{2})\) and \(D_{+}(\overline{\gamma}^{\prime}_{2})=D_{-}(\overline{\gamma}_{1}),D_{-}(\overline{\gamma}^{\prime}_{2})=D_{-}(\overline{\gamma}_{2})\). If as previously assumed \((r,\sigma)\in D_{+}(\overline{\gamma}_{1}),D_{+}(\overline{\gamma}_{2})\), then \(D_{+}(\overline{\gamma}^{\prime}_{1})\cap D_{-}(\overline{\gamma}^{\prime}_{1})\neq\emptyset\), so by lemma 4.9 \(\overline{\gamma}^{\prime}_{1}\) is pathologically non-monotonic and its energy can be lowered indefinitely. So \(\omega_{\overline{\gamma}_{1},\overline{\gamma}_{2}}\) is not in a ground state sector.
To prove the converse, we have \(D(\overline{\gamma}_{1})\cap D(\overline{\gamma}_{2})=\emptyset\). Since \(\overline{\gamma}_{1},\overline{\gamma}_{2}\) are monotonic, we also have eqn 6. So even after performing surgery, we still have \(D(\overline{\gamma}^{\prime}_{1})\cap D(\overline{\gamma}^{\prime}_{2})=\emptyset\). So we can only lower the energy of \(\omega_{\overline{\gamma}^{\prime}_{1},\overline{\gamma}^{\prime}_{2}}\) at most in a finite region \(\overline{\Lambda}\). It follows that \(\omega_{\overline{\gamma}_{1},\overline{\gamma}_{2}}\) is in a ground state sector.
Indeed, for a state \(\omega_{\overline{\gamma}_{1},\overline{\gamma}_{2}}\) where \(\overline{\gamma}_{1},\overline{\gamma}_{2}\) are two infinite monotonic paths, \(\omega_{\overline{\gamma}_{1},\overline{\gamma}_{2}}\) lies in a ground state sector iff \(D(\overline{\gamma}_{1})\cap D(\overline{\gamma}_{2})=\emptyset\). However, we have multiple cases corresponding to the different infinity directions of \(\overline{\gamma}_{1},\overline{\gamma}_{2}\), and we can say a little more about these cases if we consider them individually.
To classify the \(2\) infinite flux string case, we will have to first understand the solutions of \(D(\overline{\gamma}_{1})\cap D(\overline{\gamma}_{2})=\emptyset\). We call the equation \(D(\overline{\gamma}_{1})\cap D(\overline{\gamma}_{2})=\emptyset\) together with \(D_{+}(\overline{\gamma}_{i})\cap D_{-}(\overline{\gamma}_{i})=\emptyset\) the ground state condition, GSC in short.
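Since the GSC involves only the finite data \(D_{\pm}(\overline{\gamma}_{1}),D_{\pm}(\overline{\gamma}_{2})\subseteq\{x,y,z\}\times\{\pm\}\), its solutions can be enumerated by brute force. The sketch below does this, encoding the monotonicity constraint from lemma 4.9 (per axis, \(D_{+}\) and \(D_{-}\) each contain at most one sign, with opposite signs if both are present); treating every such pair as realisable by an actual path is an assumption of the sketch. The output reproduces the size profiles of cases I-IV below.

```python
from itertools import product

AXES = ("x", "y", "z")

def monotone_direction_sets():
    """All (D_plus, D_minus) pairs allowed for a single monotonic path:
    per axis, D_plus and D_minus each contain at most one sign, with
    opposite signs if both are present; both sets must be non-empty."""
    per_axis = [("none",), ("plus", +1), ("plus", -1),
                ("minus", +1), ("minus", -1), ("both", +1), ("both", -1)]
    for combo in product(per_axis, repeat=3):
        d_plus, d_minus = set(), set()
        for axis, opt in zip(AXES, combo):
            if opt[0] in ("plus", "both"):
                d_plus.add((axis, opt[1]))
            if opt[0] == "minus":
                d_minus.add((axis, opt[1]))
            if opt[0] == "both":
                d_minus.add((axis, -opt[1]))
        if d_plus and d_minus:
            yield frozenset(d_plus), frozenset(d_minus)

profiles = set()
for dp1, dm1 in monotone_direction_sets():
    for dp2, dm2 in monotone_direction_sets():
        d1, d2 = dp1 | dm1, dp2 | dm2
        if not d1 & d2:                      # the GSC for two strings
            profiles.add(tuple(sorted((len(d1), len(d2)))))
print(sorted(profiles))  # [(2, 2), (2, 3), (2, 4), (3, 3)] -> cases I-IV
```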
### Distinct solutions of the ground state condition
Let's divide the solutions into cases with increasing \(|D(\overline{\gamma}_{1})|,|D(\overline{\gamma}_{2})|\). We also have the conditions \(D_{+}(\overline{\gamma}_{i})\cap D_{-}(\overline{\gamma}_{i})=\emptyset\), and further \(|D_{\pm}(\overline{\gamma}_{i})|>0\) with \(i=1,2\). Let \(\sigma\in\{\pm\}\). Define \(\overline{\sigma}=\{\pm\}\setminus\sigma\).
We supplement each solution with a figure. We interpret the figures in the following way: there are \(6\) axes, each corresponding to a direction \((r,\sigma)\). The infinity directions for \(\overline{\gamma}_{1}\) are given in red, while the ones for \(\overline{\gamma}_{2}\) are given in blue. A red shaded plane or region means all infinity directions touching it belong to \(\overline{\gamma}_{1}\) and similarly for the blue shaded region they belong to \(\overline{\gamma}_{2}\). A shaded kite means it has \(2\) infinity directions touching it, and a shaded cube means it has \(3\) infinity directions touching it.
Case I: \(|D(\overline{\gamma}_{i})|=2\): This is the easiest case. We first assume \(D(\overline{\gamma}_{1})=\{(r_{1},\sigma_{1}),(r_{2},\sigma_{2})\}\) and \(D(\overline{\gamma}_{2})=\{(r_{3},\sigma_{3}),(r_{4},\sigma_{4})\}\). Solving for GSC gives us \(r_{4}=r_{1},\sigma_{4}=\overline{\sigma}_{1}\). Notice that after surgery, simplification and relabelling, we can always obtain \(D(\overline{\gamma}_{1})=\{(r_{1},\sigma_{1}),(r_{1},\overline{\sigma}_{1})\}\). Figure 10 shows the solutions for case I.
Case II: \(|D(\overline{\gamma}_{1})|=2,|D(\overline{\gamma}_{2})|=3\): In this case, we have \(D(\overline{\gamma}_{1})=\{(r_{1},\sigma_{1}),(r_{2},\sigma_{2})\}\) and \(D(\overline{\gamma}_{2})=\{(r_{3},\sigma_{3}),(r_{4},\sigma_{4}),(r_{5},\sigma_ {5})\}\). Solving out for GSC will give us \(3\) separate solutions:
* A: \(r_{1}\neq r_{2}\), \(r_{4}=r_{1},\sigma_{4}=\overline{\sigma}_{1}\), \(r_{5}=r_{2},\sigma_{5}=\overline{\sigma}_{2}\). Figure 11 shows \(3\) solutions for case A. There are \(3\) additional solutions that are mirrored along the \(x-z\) plane.
Figure 10: Solution for case I
* B: \(r_{1}\neq r_{2}\), \(r_{4}=r_{3},\sigma_{4}=\overline{\sigma}_{3}\), \(r_{5}=r_{1},\sigma_{5}=\overline{\sigma}_{1}\). Figure 11(a) shows the solutions of case \(B\).
* C: \(r_{1}=r_{2},\sigma_{2}=\overline{\sigma}_{1}\), \(r_{4}=r_{3},\sigma_{4}=\overline{\sigma}_{3}\). Figure 11(b) (left) shows the solutions for case C. After performing surgery it can be turned into 11(b) (right). So we can always reduce case II to one where \(\overline{\gamma}_{1}\) has an L shape.
Case III: \(|D(\overline{\gamma}_{1})|=2\), \(|D(\overline{\gamma}_{2})|=4\): In this case, we have \(D(\overline{\gamma}_{1})=\{(r_{1},\sigma_{1}),(r_{2},\sigma_{2})\}\) and \(D(\overline{\gamma}_{2})=\{(r_{3},\sigma_{3}),(r_{4},\sigma_{4}),(r_{5}, \sigma_{5}),(r_{6},\sigma_{6})\}\). Solving out for GSC gives us 2 distinct solutions:
* A: \(r_{2}=r_{1}\), \(\sigma_{2}=\overline{\sigma}_{1}\), \(r_{4}=r_{3}\), \(\sigma_{4}=\overline{\sigma}_{3}\), \(r_{6}=r_{5}\), \(\sigma_{6}=\overline{\sigma}_{5}\). Figure 13 shows the solutions to this case.
* B: \(r_{3}=r_{1}\), \(\sigma_{3}=\overline{\sigma}_{1}\), \(r_{4}=r_{2}\), \(\sigma_{4}=\overline{\sigma}_{2}\), \(r_{6}=r_{5}\), \(\sigma_{6}=\overline{\sigma}_{5}\). There are two distinct solutions for this case corresponding to different sets \(D_{\pm}(\overline{\gamma}_{2})\), shown in Figure 13.
Case IV: \(|D(\overline{\gamma}_{1})|=3\), \(|D(\overline{\gamma}_{2})|=3\): In this case we have \(D(\overline{\gamma}_{1})=\{(r_{1},\sigma_{1}),(r_{2},\sigma_{2}),(r_{3},\sigma_{3})\}\) and \(D(\overline{\gamma}_{2})=\{(r_{4},\sigma_{4}),(r_{5},\sigma_{5}),(r_{6},\sigma_{6})\}\). Solving out for GSC gives us only a single solution:
* A: \(r_{4}=r_{1}\), \(\sigma_{4}=\overline{\sigma}_{1}\), \(r_{3}=r_{2}\), \(\sigma_{3}=\overline{\sigma}_{2}\), \(r_{6}=r_{5}\), \(\sigma_{6}=\overline{\sigma}_{5}\). Figure 14 shows the possible solutions for this case. But notice that figure 14 (left) after surgery can be reduced to case IIIA, while 14 (right) can be reduced to case IIIB. So there are no distinct solutions in this case that haven't been covered in previous cases.
Note that we will also have discrete \(\pi/2\) rotations and reflections about the \(x,y,z\) axes of figures 10-14 as solutions.
Figure 11: Solutions for case II.A (left), B (center), C (right).
Figure 13: Solutions for case III.A (left) and III.B (center),(right).
### Classification of 2 infinite flux strings
We see from section 5.2 that all possible cases of infinity directions are covered in cases I, II, III. Notice that in all cases, after surgery, simplification and relabelling, we have \(|D(\overline{\gamma}_{1})|=2\). We will divide the classification according to whether \(D(\overline{\gamma}_{1})=\{(r,\sigma),(r,\overline{\sigma})\}\) or \(D(\overline{\gamma}_{1})=\{(r,\sigma),(r^{\prime},\sigma^{\prime})\}\) with \(r\neq r^{\prime}\).
Let us first establish some useful definitions for the classification. Recall definition 2.15 of a half-infinite path \(\overline{\gamma}_{v}^{(r,\sigma)}\) as having its start/end point as \(v\) and \(\overline{\gamma}_{v}^{(r,\sigma)}(t)\in\mathcal{E}^{r,\sigma}\).
Let \(p_{(r_{1},\sigma_{1},\tau_{1}),(r_{2},\sigma_{2},\tau_{2})}\) denote a monotonic path such that
\[p_{(r_{1},\sigma_{1},\tau_{1}),(r_{2},\sigma_{2},\tau_{2})}(t) \in\overline{\gamma}_{\tau_{1}}^{(r_{1},\sigma_{1})} t>t_{+} \tag{7}\] \[-p_{(r_{1},\sigma_{1},\tau_{1}),(r_{2},\sigma_{2},\tau_{2})}(t) \in\overline{\gamma}_{\tau_{2}}^{(r_{2},\sigma_{2})} t<t_{-} \tag{8}\]
And let \(\mathcal{P}_{(r_{1},\sigma_{1},\tau_{1}),(r_{2},\sigma_{2},\tau_{2})}\) denote the set of paths that are path equivalent to \(p_{(r_{1},\sigma_{1},\tau_{1}),(r_{2},\sigma_{2},\tau_{2})}\).
Before the classification of the 2 string sectors we will need a few results:
**Proposition 5.2**.: _If \(\overline{\gamma}\) is a monotonic path with \(D(\overline{\gamma})=\{(r_{1},\sigma_{1}),(r_{2},\sigma_{2})\}\), then we have \(\overline{\gamma}\in\mathcal{P}_{(r_{1},\sigma_{1},\tau_{1}),(r_{2},\sigma_{2 },\tau_{2})}\)_
Proof.: Since \(\overline{\gamma}\) is monotonic, \(D_{+}(\overline{\gamma})=\{(r_{1},\sigma_{1})\},D_{-}(\overline{\gamma})=\{(r_{2},\sigma_{2})\}\) (up to relabelling of \(D_{\pm}(\overline{\gamma})\)). Using lemma 4.8 we have a region \(\overline{\Lambda}\) such that every \(\overline{\gamma}(t)\in\mathcal{E}(\overline{\Lambda}^{c})\) satisfies either \(\overline{\gamma}(t)\in\mathcal{E}^{r_{1},\sigma_{1}}\) or \(-\overline{\gamma}(t)\in\mathcal{E}^{r_{2},\sigma_{2}}\).
So there exist \(t_{\pm}\) such that \(\overline{\gamma}(t)\in\overline{\gamma}_{\tau_{1}}^{(r_{1},\sigma_{1})}\) for all \(t>t_{+}\) and \(-\overline{\gamma}(t)\in\overline{\gamma}_{\tau_{2}}^{(r_{2},\sigma_{2})}\) for all \(t<t_{-}\) for particular \(\tau_{1},\tau_{2}\). This implies \(\overline{\gamma}\in\mathcal{P}_{(r_{1},\sigma_{1},\tau_{1}),(r_{2},\sigma_{2 },\tau_{2})}\).
Let \(\mathcal{Q}_{\tau}^{D(\overline{\gamma})}\) denote the set of paths \(\overline{\gamma}\) that have infinity directions \(D(\overline{\gamma})\) and contain a half-infinite path \(\overline{\gamma}_{\tau}^{(r,\sigma)}\subset\overline{\gamma}\) for some \((r,\sigma)\in D(\overline{\gamma})\).
We also let \(\mathcal{R}^{D(\overline{\gamma})}\) denote the set of paths \(\overline{\gamma}\) that have infinity directions \(D(\overline{\gamma})\), with no additional restrictions.
#### 5.3.1 When \(D(\overline{\gamma}_{1})=\{(r,\sigma),(r,\overline{\sigma})\}\)
* \(D(\overline{\gamma}_{2})=\{(r_{1},\sigma_{1}),(r_{2},\sigma_{2})\}\): Here GSC implies \(r\neq r_{1},r\neq r_{2}\). In this case the sectors are labelled by \[(g\in\mathbb{Z}_{2},\overline{\gamma}_{1}\in\mathcal{P}_{(r,\sigma,\tau),(r,\overline{\sigma},\tau^{\prime})},\overline{\gamma}_{2}\in\mathcal{P}_{(r_{1},\sigma_{1},\tau_{1}),(r_{2},\sigma_{2},\tau_{2})})\] \(\overline{\gamma}_{1},\overline{\gamma}_{2}\) are labelled using proposition 5.2. We include \(g\in\mathbb{Z}_{2}\) to take into account if the sector is charged or uncharged (which are the only two possibilities, by the same reasoning as theorem 2.14). We will omit this reasoning for the subsequent cases. There is a special subcase when \(r_{1}=r_{2}\). Then GSC imposes \(\sigma_{1}=\overline{\sigma}_{2}\) and we have a slightly tighter constraint, \(\overline{\gamma}_{2}\in\mathcal{P}_{(r_{1},\sigma_{1},\tau_{1}),(r_{1},\overline{\sigma}_{1},\tau_{2})}\).
Figure 14: Solutions for case IV.A.
* \(D(\overline{\gamma}_{2})=\{(r_{1},\sigma_{1}),(r_{2},\sigma_{2}),(r_{3},\sigma_{3}),(r_{4},\sigma_{4})\}\): Here GSC implies \(r_{2}=r_{1},\sigma_{2}=\overline{\sigma}_{1}\), \(r_{4}=r_{3},\sigma_{4}=\overline{\sigma}_{3}\), with \(r,r_{1},r_{3}\) pairwise distinct. This simplifies to \(D(\overline{\gamma}_{2})=\{(r_{1},\sigma_{1}),(r_{1},\overline{\sigma}_{1}),(r_{3},\sigma_{3}),(r_{3},\overline{\sigma}_{3})\}\). For this case we have \[(g\in\mathbb{Z}_{2},\overline{\gamma}_{1}\in\mathcal{P}_{(r,\sigma,\tau),(r,\overline{\sigma},\tau^{\prime})},\overline{\gamma}_{2}\in\mathcal{R}^{D(\overline{\gamma}_{2})})\] Here \(\overline{\gamma}_{2}\in\mathcal{R}^{D(\overline{\gamma}_{2})}\) follows by definition, and the rest of the labelling is exactly the same as in the previous case.
#### 5.3.2 When \(D(\overline{\gamma}_{1})=\{(r,\sigma),(r^{\prime},\sigma^{\prime})\}\) with \(r\neq r^{\prime}\)
* \(D(\overline{\gamma}_{2})=\{(r_{1},\sigma_{1}),(r_{2},\sigma_{2}),(r_{3},\sigma_{3})\}\): Here GSC implies \(r_{2}=r_{1}\), \(\sigma_{2}=\overline{\sigma}_{1}\), with \(r_{1}\neq r,r^{\prime}\) and \((r_{3},\sigma_{3})\notin D(\overline{\gamma}_{1})\). This simplifies to \(D(\overline{\gamma}_{2})=\{(r_{1},\sigma_{1}),(r_{1},\overline{\sigma}_{1}),(r_{3},\sigma_{3})\}\). In this case we have the labelling \[(g\in\mathbb{Z}_{2},\overline{\gamma}_{1}\in\mathcal{P}_{(r,\sigma,\tau),(r^{\prime},\sigma^{\prime},\tau^{\prime})},\overline{\gamma}_{2}\in\mathcal{Q}^{D(\overline{\gamma}_{2})}_{\tau_{2}})\] The labelling of \(\overline{\gamma}_{1}\) is of course given by proposition 5.2. With the simplification of \(D(\overline{\gamma}_{2})\), it needs to be divided into \(D_{\pm}(\overline{\gamma}_{2})\). Since \(D_{\pm}(\overline{\gamma}_{2})\neq\emptyset\), we are left only with the possibility that \(|D_{+}(\overline{\gamma}_{2})|=2\) and \(|D_{-}(\overline{\gamma}_{2})|=1\) (up to relabelling of \(D_{\pm}(\overline{\gamma}_{2})\)). This implies \(\overline{\gamma}_{\tau}^{(s,\rho)}\subset-\overline{\gamma}_{2}\) for \((s,\rho)\in D_{-}(\overline{\gamma}_{2})\). So it follows that \(\overline{\gamma}_{2}\in\mathcal{Q}^{D(\overline{\gamma}_{2})}_{\tau_{2}}\).
* \(D(\overline{\gamma}_{2})=\{(r_{1},\sigma_{1}),(r_{2},\sigma_{2}),(r_{3}, \sigma_{3})\}\) GSC implies \(r_{1}=r,\sigma_{1}=\overline{\sigma},r_{3}=r_{2},\sigma_{3}=\overline{\sigma }_{2}\). In this case we have the labelling of sector as \[(g\in\mathbb{Z}_{2},\overline{\gamma}_{1}\in\mathcal{P}_{(r,\sigma,\tau),(r^{ \prime},\sigma^{\prime},\tau^{\prime})},\overline{\gamma}_{2}\in\mathcal{Q} _{\tau^{\prime\prime}})\].
* \(D(\overline{\gamma}_{2})=\{(r_{1},\sigma_{1}),(r_{2},\sigma_{2}),(r_{3},\sigma_ {3}),(r_{4},\sigma_{4})\}\) GSC gives us \(r_{1}=r,\sigma_{1}=\overline{\sigma}\),\(r_{2}=r^{\prime},\sigma_{2}=\overline{\sigma}^{\prime}\),\(r_{4}=r_{3},\sigma_{4}=\overline{\sigma}_{3}\). This simplifies \(D(\overline{\gamma}_{2})\) to \[D(\overline{\gamma}_{2})=\{(r,\overline{\sigma}),(r^{\prime},\overline{\sigma} ^{\prime}),(r_{3},\sigma_{3}),(r_{3},\overline{\sigma}_{3})\}\] Now we have two possibilities: \(|D_{+}(\overline{\gamma}_{2})|=3,|D_{-}(\overline{\gamma}_{2})|=1\) or \(|D_{+}(\overline{\gamma}_{2})|=2,|D_{-}(\overline{\gamma}_{2})|=2\) (upto relabelling of course). When \(|D_{+}(\overline{\gamma}_{2})|=3,|D_{-}(\overline{\gamma}_{2})|=1\) our labelling of sector is \[(g\in\mathbb{Z}_{2},\overline{\gamma}_{1}\in\mathcal{P}_{(r,\sigma,\tau),(r^{ \prime},\sigma^{\prime},\tau^{\prime})},\overline{\gamma}_{2}\in\mathcal{Q} ^{D(\overline{\gamma}_{2})}_{\tau_{2}})\]. When \(|D_{+}(\overline{\gamma}_{2})|=2,|D_{-}(\overline{\gamma}_{2})|=2\) our labelling of sector is \[(g\in\mathbb{Z}_{2},\overline{\gamma}_{1}\in\mathcal{P}_{(r,\sigma,\tau),(r^{ \prime},\sigma^{\prime},\tau^{\prime})},\overline{\gamma}_{2}\in\mathcal{R} ^{D(\overline{\gamma}_{2})})\] The labelling \(\overline{\gamma}_{2}\in\mathcal{R}^{D(\overline{\gamma}_{2})}\) follows straightforwardly from the definition.
## 6 Ground States on \(3+\) string configurations
### 3 string configurations
The classification of states with 3 infinite flux strings proceeds similarly to that of 2 strings, but turns out to be much simpler.
**Theorem 2.13**.: _Let \(\{\overline{\gamma}_{n}\}_{n=1}^{N}\) be a set of monotonic paths. Then \(\omega_{\overline{\gamma}_{1},\cdots,\overline{\gamma}_{N}}\) lies in a ground state sector iff \(\bigcap_{i=1}^{N}D(\overline{\gamma}_{i})=\emptyset\)._
Proof.: The proof proceeds in the same way as theorem 5.1. If after performing surgery we find any \(\overline{\gamma}_{n}^{\prime}\) that's pathologically non-monotonic, then \(\omega_{\overline{\gamma}_{1},\cdots,\overline{\gamma}_{N}}\) is not in a ground state sector.
For the case of 3 infinite flux strings, we must have \(|D(\overline{\gamma}_{i})|=2\) for \(i=1,2,3\). Let \(D(\overline{\gamma}_{i})=\{(r_{2i-1},\sigma_{2i-1}),(r_{2i},\sigma_{2i})\}\). Then solving for GSC leads to 3 cases:
* case A: \(r_{2i-1}=r_{2i}\), \(\sigma_{2i-1}=\overline{\sigma}_{2i}\)
* case B: \(r_{3}=r_{1}\), \(\sigma_{3}=\overline{\sigma}_{1}\), \(r_{4}=r_{2}\), \(\sigma_{4}=\overline{\sigma}_{2}\), \(r_{6}=r_{5}\), \(\sigma_{6}=\overline{\sigma}_{5}\)
* case C: \(r_{3}=r_{1}\), \(\sigma_{3}=\overline{\sigma}_{1}\), \(r_{5}=r_{2}\), \(\sigma_{5}=\overline{\sigma}_{2}\), \(r_{6}=r_{4}\), \(\sigma_{6}=\overline{\sigma}_{4}\)
All 3 cases are shown in figure 15. Cases B and C can, after surgery and relabelling, be reduced to case A.
**Theorem 2.16**.: _All inequivalent ground states containing 3 infinite flux strings \(\overline{\gamma}_{i}\) (for \(i=1,2,3\)) and a number of charges are labelled by \((g\in\mathbb{Z}_{2},\overline{\gamma}_{i}\in\mathcal{P}_{(r_{i},\sigma_{i}, \tau_{i}),(r_{i},\overline{\sigma}_{i},\tau_{i}^{\prime})})\) where \(r_{i}\) are unique elements of \(\{x,y,z\}\) and \(g\) indicates if the ground state is charged or uncharged._
Proof.: We already know from solving for GSC that all paths \(\overline{\gamma}_{i}\) can be simplified to the case of \(D(\overline{\gamma}_{i})=\{(r_{i},\sigma_{i}),(r_{i},\overline{\sigma}_{i})\}\). Using proposition 5.2 we know \(\overline{\gamma}_{i}\in\mathcal{P}_{(r_{i},\sigma_{i},\tau_{i}),(r_{i},\overline{\sigma}_{i},\tau_{i}^{\prime})}\). Once again, \(g\) indicates whether the sector is charged or uncharged, using the same reasoning as theorem 2.10. Here we have simply proven that \(\mathcal{H}_{\{\overline{\gamma}_{i}\}}\) is a ground state sector. If \(\omega_{\{\overline{\gamma}_{i}\}}\in\mathcal{H}_{\{\overline{\gamma}_{i}\}}\) is not a ground state, then using lemma 4.8 we can choose \(\overline{\Lambda}\) outside which \(\overline{\gamma}_{i}(t)\in\cup_{(r,\sigma)\in D(\overline{\gamma}_{i})}\mathcal{C}^{r,\sigma}\). Using surgery and straightening inside \(\overline{\Lambda}\) will lead to a new configuration of paths \(\{\overline{\gamma}_{i}^{\prime}\}\) and give us a ground state \(\omega_{\{\overline{\gamma}_{i}^{\prime}\}}\in\mathcal{H}_{\{\overline{\gamma}_{i}\}}\).
### \(4+\) string configurations
Classification of \(4+\) strings is the easiest.
**Theorem 2.17**.: _There does not exist any ground state with a configuration of infinite flux strings \(\{\overline{\gamma}_{n}\}_{n=1}^{N}\) for \(N\geq 4\)._
Proof.: From theorem 2.13, applied to each pair of paths, we must have \(D(\overline{\gamma}_{i})\cap D(\overline{\gamma}_{j})=\emptyset\) for \(i\neq j\). For a monotonic path \(\overline{\gamma}_{i}\), \(|D(\overline{\gamma}_{i})|\geq 2\). If the sets \(D(\overline{\gamma}_{i})\) are pairwise disjoint, then \(|\bigcup_{i=1}^{N}D(\overline{\gamma}_{i})|\geq 2N\geq 8\). However, for \(\Gamma=\mathbb{Z}^{3}\) there are only 6 distinct infinity directions, given by \((r,\sigma)\) with \(r\in\{x,y,z\}\) and \(\sigma\in\{\pm\}\). So for \(N\geq 4\) at least two of the paths must share an infinity direction, and no ground state exists. This concludes our proof.
Figure 15: 3 strings case A (left), B (middle), C (right)
## 7 Discussion and outlook
We have found configurations containing infinite flux strings to be genuine infinite energy physical states of the 3d Toric Code Model on a \(\mathbb{Z}^{3}\) lattice. These configurations cannot be obtained by the action on the translation invariant ground state \(\omega\) of any operator belonging to \(\mathfrak{A}\), the algebra of quasi-local operators in the Toric Code model. However, they are stable ground states of the model.
In the bulk of this paper, we have established a necessary and sufficient criterion for a configuration with multiple infinite flux strings to belong to a ground state sector. The criterion is a twofold statement: (a) the infinite flux strings must individually be path equivalent to a monotonic infinite flux string, (b) they must not have any infinity directions in common. If either of these conditions is not met, we have devised a "straightening procedure" in a finite region to obtain an explicit new state with a lower energy. If after applying the straightening procedure a finite amount of times one obtains a ground state, then this configuration lies in a ground state sector.
We then classified the different ground state sectors for different infinite flux string configurations, and find in particular, that any configuration with more than 3 infinite flux strings cannot be a ground state.
It is easy to generalise the construction to arbitrary finite Abelian groups, and also for different types of infinite oriented lattices. It would be interesting to understand the infinite flux strings from the perspective of fusion 2-categories. While the fusion 2-category structure has been explored in the case of the 3d Toric Code Model on a 3-Torus [13], it remains an open question to define string-braiding and fusion for infinite flux strings. Since the infinite flux string is not "transportable" (in the sense that a translated infinite flux string configuration is not path-equivalent to the original infinite flux string), braiding or fusion of 2 infinite flux strings is a seemingly meaningless question. To resolve this difficulty will be the scope of our work in the near future.
## Appendix A Purity of the ground states of the 3d Toric Code
In this section we will prove the purity of all ground states of the form \(\omega_{v_{1},\cdots,v_{n};\overline{\gamma}_{1},\cdots,\overline{\gamma}_{m}}\), working closely along the lines of [1].
### \(\omega\) is pure
We first begin by using the string-net construction of this ground state. Here, in order to avoid confusion between state vectors and states, we choose to denote states as \(\psi\), vector states as \(\ket{\psi}\), and write \(\left\langle\psi,\phi\right\rangle\) for the inner product.
Define \(\Pi_{n}^{V}\) as the set containing all vertices in \(\mathcal{V}(\Gamma)\) inside the \(n\times n\times n\) region centered at the origin. Let \(\Pi_{n}^{E}\) be the region consisting of all edges with at least one boundary vertex in \(\Pi_{n}^{V}\), \(\Pi_{n}^{F}\) be the set of all \(f\) with at least one vertex in \(\Pi_{n}^{V}\), \(\partial\Pi_{n}^{F}\) as the boundary edges of \(\Pi_{n}^{F}\).
Let \(\mathcal{H}_{n}:=\otimes_{e\in\Pi_{n}^{E}\cup\partial\Pi_{n}^{F}}\mathcal{H}_{e}\). Let \(B^{n}:=\prod_{f\in\Pi_{n}^{F}}(1+B_{f})/2\) and \(A^{n}:=\prod_{v\in\Pi_{n}^{V}}(1+A_{v})/2\) be projectors acting on \(\mathcal{H}_{n}\). Define \(\mathcal{W}_{n}:=B^{n}\mathcal{H}_{n}\) and \(\mathcal{V}_{n}:=A^{n}B^{n}\mathcal{H}_{n}\). So a vector \(\ket{\psi}\in\mathcal{V}_{n}\) satisfies \(A_{v}\ket{\psi}=B_{f}\ket{\psi}=\ket{\psi}\) for all \(v\in\Pi_{n}^{V},f\in\Pi_{n}^{F}\), while \(\ket{\psi}\in\mathcal{W}_{n}\) satisfies \(B_{f}\ket{\psi}=\ket{\psi}\) for all \(f\in\Pi_{n}^{F}\).
Start with a product vector state \(\left|\Omega_{n}^{0}\right\rangle\in\mathcal{H}_{n}\) with \(\sigma_{e}^{z}\left|\Omega_{n}^{0}\right\rangle=\left|\Omega_{n}^{0}\right\rangle\) for all \(e\in\Pi_{n}^{E}\). We then have \(B_{f}\!\left|\Omega_{n}^{0}\right\rangle=\left|\Omega_{n}^{0}\right\rangle\) for all \(f\in\Pi_{n}^{F}\), so \(\left|\Omega_{n}^{0}\right\rangle\in\mathcal{W}_{n}\).
Let us define a higher dimensional string net state (also called surface net state) \(\left|\alpha\right\rangle\) on \(\Pi_{n}^{V}\). First, we consider the set \(\overline{\mathcal{S}}\) as the set of dual (not necessarily connected) surfaces lying entirely in \(\Pi_{n}^{E}\). Let \(\alpha\in\overline{\mathcal{S}}\). Then the surface net state \(\left|\alpha\right\rangle\in\mathcal{H}_{n}\) associated to \(\alpha\) is given as follows:
\[\sigma_{e}^{z}\left|\alpha\right\rangle =-\left|\alpha\right\rangle\qquad e\perp\alpha\] \[\sigma_{e}^{z}\left|\alpha\right\rangle =\left|\alpha\right\rangle\qquad e\not\perp\alpha\] \[B_{f}\left|\alpha\right\rangle =\left|\alpha\right\rangle\qquad f\in\Pi_{n}^{F}\]
Here \(e\perp\alpha\) is understood to have the same meaning as in the definition of the flux string operator. The last condition is known as the "no flux condition". It forces \(\alpha\in\overline{\mathcal{C}}\), the set of closed dual surfaces lying entirely in \(\Pi_{n}^{E}\). \(\left|\alpha\right\rangle\) can be prepared from the product vector state \(\left|\Omega_{n}^{0}\right\rangle\) in the following way: \(\left|\alpha\right\rangle=\prod_{e\in\alpha}\sigma_{e}^{x}\left|\Omega_{n}^{0}\right\rangle\). We define the collection of surface-nets as \(\mathcal{P}_{n}:=\{\left|\alpha\right\rangle\}\).
Let \(\mathcal{G}_{n}\) be the set of gauge transformations on \(\mathcal{H}_{n}\) generated by \(A_{v}\) for all \(v\in\Pi_{n}^{V}\). We notice,
\[\left|\alpha\right\rangle =\prod_{e\in\alpha}\sigma_{e}^{x}\left|\Omega_{n}^{0}\right\rangle\] \[=\prod_{v\in V_{\alpha}}A_{v}\left|\Omega_{n}^{0}\right\rangle\] \[=G_{\alpha}\left|\Omega_{n}^{0}\right\rangle\qquad\qquad G_{ \alpha}:=\prod_{v\in V_{\alpha}}A_{v}\]
Where \(V_{\alpha}\) is the set of vertices lying inside \(\alpha\). Clearly, \(G_{\alpha}\in\mathcal{G}_{n}\). Since \(\left[A_{v},B_{f}\right]=0\), \(\left[G_{\alpha},B_{f}\right]=0\) for all \(f\in\Pi_{n}^{F}\) and \(G_{\alpha}\in\mathcal{G}_{n}\). We thus have \(\left|\alpha\right\rangle\in\mathcal{W}_{n}\).
**Lemma A.1**.: _For surface nets \(\left|\alpha\right\rangle=G_{\alpha}\left|\Omega_{n}^{0}\right\rangle,\left| \alpha^{\prime}\right\rangle=G_{\alpha^{\prime}}\left|\Omega_{n}^{0}\right\rangle\) there exists a unique \(G\in\mathcal{G}_{n}\) such that \(\left|\alpha\right\rangle=G\left|\alpha^{\prime}\right\rangle\), and \(\left\langle\alpha,\alpha^{\prime}\right\rangle=\delta_{G_{\alpha},G_{\alpha^ {\prime}}}\)_
Proof.: First, notice that since \(\mathcal{G}_{n}\) is generated by \(A_{v}\) for \(v\in\Pi_{n}\), \(G_{1},G_{2}\in\mathcal{G}_{n}\implies G_{1}G_{2}=G_{2}G_{1}\in\mathcal{G}_{n}\). So \(\mathcal{G}_{n}\) forms an abelian group, with \(G^{-1}:=G^{\dagger}\). We can show existence of \(G\) by noting that \(\left|\alpha\right\rangle=G_{\alpha}G_{\alpha^{\prime}}^{\dagger}\left|\alpha^ {\prime}\right\rangle\).
To show uniqueness, we first see that if \(\left|\alpha\right\rangle=G_{1}\left|\alpha^{\prime}\right\rangle=G_{2}\left| \alpha^{\prime}\right\rangle\) then \(\left|\alpha^{\prime}\right\rangle=G_{1}^{\dagger}G_{2}\left|\alpha^{\prime} \right\rangle=G\left|\alpha^{\prime}\right\rangle\). So \(G_{\alpha^{\prime}}\left|\Omega_{n}^{0}\right\rangle=GG_{\alpha^{\prime}} \left|\Omega_{n}^{0}\right\rangle\) implying \(G=G_{1}^{\dagger}G_{2}=1\).
We show \(\left\langle\alpha,\alpha^{\prime}\right\rangle=\delta_{G_{\alpha},G_{\alpha^ {\prime}}}\) by first observing that \(\left\langle\Omega_{n}^{0},\Omega_{n}^{0}\right\rangle=1\) and \(\left\langle\Omega_{n}^{0},G\Omega_{n}^{0}\right\rangle=0\) for \(G\in\mathcal{G}_{n}\) such that \(G\neq 1\). Then, \(\left\langle\alpha,\alpha^{\prime}\right\rangle=\left\langle\Omega_{n}^{0},G_{ \alpha}^{\dagger}G_{\alpha^{\prime}}\Omega_{n}^{0}\right\rangle=\delta_{G_{ \alpha},G_{\alpha^{\prime}}}\) thus completing the proof.
_Remark_.: \(\left|\mathcal{G}_{n}\right|=2^{n^{3}}\) as it is generated by \(A_{v}\) for all \(v\in\Pi_{n}^{V}\).
Consider a surface net \(\left|\alpha\right\rangle\in\mathcal{H}_{n}\) such that \(\prod_{e\in\partial\Pi_{n}^{F}}\sigma_{e}^{z}\left|\alpha\right\rangle=\left|\alpha\right\rangle\). There can be many ways in which this condition can be satisfied. We call \(b(\alpha)\subset\partial\Pi_{n}^{F}\) a boundary condition of \(\alpha\) if \(\sigma_{e}^{z}\left|\alpha\right\rangle=-\left|\alpha\right\rangle\) exactly for the edges \(e\in b(\alpha)\), so that the condition \(\prod_{e\in\partial\Pi_{n}^{F}}\sigma_{e}^{z}\left|\alpha\right\rangle=\left|\alpha\right\rangle\) is satisfied. We now find it useful to divide \(\mathcal{P}_{n}\) into \(\mathcal{P}_{n}^{b}:=\{\left|\alpha\right\rangle\in\mathcal{P}_{n}\,|\,b(\alpha)=b\subset\partial\Pi_{n}^{F}\}\).
Define \(\mathcal{W}_{n}^{b}:=P_{b}\mathcal{W}_{n}\) where \(P_{b}\) projects to edges in the specific boundary configuration \(b\) along \(\Pi_{n}^{F}\).
**Lemma A.2**.: \(\mathcal{W}_{n}^{b}=span\{\left|\alpha\right\rangle|\left|\alpha\right\rangle \in\mathcal{P}_{n}^{b}\}\)_. Moreover, \(\mathcal{W}_{n}=\bigoplus_{b}\mathcal{W}_{n}^{b}\)_
Proof.: First, we notice that \(\mathcal{W}_{n}=span\{\left|\alpha\right\rangle|\left|\alpha\right\rangle\in \mathcal{P}_{n}\}\), since if there exists a product vector state \(\left|\psi\right\rangle\notin\mathcal{P}_{n}\) then there exists \(f\in\Pi_{n}^{F}\) such that \(B_{f}\left|\psi\right\rangle=0\) and so \(\left|\psi\right\rangle\notin\mathcal{W}_{n}\).
Next, \(\mathcal{W}_{n}^{b}\subset\mathcal{W}_{n}\). We also have \(\sum_{b}P_{b}=1\), and \(P_{b^{\prime}}\mathcal{P}_{n}^{b}=\delta_{b,b^{\prime}}\mathcal{P}_{n}^{b}\). So \(\left|\alpha\right\rangle\in\mathcal{P}_{n}^{b}\) clearly lies in \(\mathcal{W}_{n}^{b}\). If \(b(\alpha_{1})=b_{1},b(\alpha_{2})=b_{2}\) with \(b_{1}\neq b_{2}\), then we have \(\left\langle\alpha_{1}|\alpha_{2}\right\rangle=0\). So \(\mathcal{W}_{n}^{b}=span\{\left|\alpha\right\rangle|\left|\alpha\right\rangle \in\mathcal{P}_{n}^{b}\}\). This implies \(\mathcal{W}_{n}=\sum_{b}P_{b}\mathcal{W}_{n}^{b}=\bigoplus_{b}\mathcal{W}_{n} ^{b}\).
_Remark_.: The action of \(\mathcal{G}_{n}\) leaves \(\mathcal{W}_{n}^{b}\) invariant, since \(\partial\Pi_{n}^{F}\cap\Pi_{n}^{E}=\emptyset\).
Let \(\left|\eta_{n}^{b}\right\rangle\) be a vector given by
\[\left|\eta_{n}^{b}\right\rangle:=\frac{1}{|\mathcal{P}_{n}^{b}|^{1/2}}\sum_{ \alpha\in\mathcal{P}_{n}^{b}}\left|\alpha\right\rangle \tag{9}\]
**Proposition A.3**.: \(\mathcal{V}_{n}=A^{n}\mathcal{W}_{n}\) _is one dimensional and spanned by \(\left|\eta_{n}^{b}\right\rangle\)_
Proof.: We first notice that
\[A^{n}=\prod_{v\in\Pi_{n}^{V}}\frac{1+A_{v}}{2}=\frac{1}{|\mathcal{G}_{n}|}\sum_{\{g_{v}\}\in\{0,1\}^{\Pi_{n}^{V}}}\prod_{v\in\Pi_{n}^{V}}A_{v}^{g_{v}}=\frac{1}{|\mathcal{G}_{n}|}\sum_{U\in\mathcal{G}_{n}}U\]
Then for any \(\left|\alpha\right\rangle\in\mathcal{P}_{n}^{b}\) we know that \(\left|\alpha\right\rangle\in\mathcal{W}^{n}\). Since \(\{\left|\alpha\right\rangle\}\) is a good basis for \(\mathcal{W}^{n}\) we may just work with \(\left|\alpha\right\rangle\). Applying \(A^{n}\) on \(\left|\alpha\right\rangle\) we get:
\[A^{n}\left|\alpha\right\rangle=\frac{1}{|\mathcal{G}_{n}|}\sum_{U\in\mathcal{ G}_{n}}U\left|\alpha\right\rangle=\frac{1}{|\mathcal{G}_{n}|}\sum_{\alpha^{ \prime}\in\mathcal{P}_{n}^{b}}\left|\alpha^{\prime}\right\rangle=\frac{| \mathcal{P}_{n}^{b}|^{1/2}}{|\mathcal{G}_{n}|}\left|\eta_{n}^{b}\right\rangle \tag{10}\]
So \(A^{n}\left|\alpha\right\rangle\propto\left|\eta_{n}^{b}\right\rangle\) and hence \(\mathcal{V}^{n}=A^{n}\mathcal{W}^{n}\) is one-dimensional and spanned by \(\left|\eta_{n}^{b}\right\rangle\). This concludes the proof.
_Remark_.: \(|\mathcal{P}_{n}^{b}|\) is independent of \(b\), since there exists a bijective unitary map \(U_{b}:\mathcal{P}_{n}^{b}\rightarrow\mathcal{P}_{n}^{0}\) supported on the boundary \(\partial\Pi_{n}^{F}\) given by
\[U_{b}=\prod_{e\in\partial\Pi_{n}^{F}}(\sigma_{e}^{x})^{(1-\sigma_{e}^{z})/2}\]
Here \(0\) is the trivial boundary condition.
**Lemma A.4**.: _If \(O\in\mathfrak{A}_{\Pi_{n-1}}\) then we have that_
\[\eta_{n}^{b}(O):=\left\langle\eta_{n}^{b},O\eta_{n}^{b}\right\rangle\]
_is independent of \(b\)._
Proof.: Since there always exists a unique unitary \(U_{b}\) supported on \(\partial\Pi_{n}^{F}\) (see remark above) such that \(\left|\alpha\right\rangle=U_{b}\left|\beta\right\rangle\) such that \(\alpha\in\mathcal{P}_{n}^{0},\beta\in\mathcal{P}_{n}^{b}\), we then have:
\[U_{b}\left|\eta_{n}^{0}\right\rangle=\left|\eta_{n}^{b}\right\rangle\]
Since \(U_{b}\) is supported on \(\partial\Pi_{n}^{F}\) we have \([U_{b},O]=0\). Thus we have
\[\eta_{n}^{b}(O)=\left\langle\eta_{n}^{0},U_{b}^{\dagger}OU_{b}\eta_{n}^{0} \right\rangle=\eta_{n}^{0}(O)\]
This means an operator \(O\in\mathfrak{A}_{\Pi_{n-1}}\) does not see the boundary condition \(b\). This concludes the proof.
**Corollary A.5**.: \(\eta_{m}^{b}\upharpoonright_{\mathfrak{A}_{\Pi_{n}}}=\eta_{m}^{0}\upharpoonright_{ \mathfrak{A}_{\Pi_{n}}}\) _for \(m\geq n+1\)_
We now shorten the notation to say \(\psi|_{m}:=\psi\upharpoonright_{\mathfrak{A}_{\Pi_{m}^{E}}}\) for any state \(\psi\).
**Lemma A.6**.: _Let \(\omega\) be the frustration free translation invariant ground state of the 3dTC satisfying \(\omega(A_{v})=\omega(B_{f})=1\) for all \(v\in\mathcal{V}(\Gamma),f\in\mathcal{F}(\Gamma)\). Then we have \(\omega|_{n}=\eta_{m}^{0}|_{n}\) for any \(n\) and any \(m\geq n+1\)_
Proof.: We have the convex decomposition of \(\omega\) into countably many pure states \(\omega_{i}\): \(\omega=\sum_{i}\lambda_{i}\omega_{i}\). Since \(|\omega_{i}(A_{v})|\leq||A_{v}||=1\) and \(0\leq\lambda_{i}\leq 1\) we have \(\omega_{i}(A_{v})=1\) and similarly \(\omega_{i}(B_{f})=1\) for all \(v\in\Pi_{n}^{V},f\in\Pi_{n}^{F}\). So for a GNS vector \(\Omega_{i}\) of \(\omega_{i}\) we have \(A_{v}\Omega_{i}=B_{f}\Omega_{i}=\Omega_{i}\). This means \(\Omega_{i}\in\mathcal{V}_{n}\). Since \(\mathcal{V}_{n}\) is 1-dimensional, we have \(\omega_{i}|_{n}=\eta_{m}^{0}|_{n}\) for all \(n\); since this holds for every \(i\), we also get \(\omega|_{n}=\eta_{m}^{0}|_{n}\).
We can now extend \(\eta_{n}^{0}\) to the whole observable algebra \(\mathfrak{A}\) and call it \(\omega_{n}\).
**Lemma A.7**.: _The sequence of states \(\omega_{n}\) converges in the weak\(-^{*}\) topology to a state \(\omega\)._
Proof.: To show convergence, let \(O\in\mathfrak{A}_{\Pi_{n}^{E}}\); then we have \(\omega_{m}(O)=\omega_{n+1}(O)\) for all \(m\geq n+1\). Since \(n\) was arbitrarily chosen, the sequence \(\omega_{n}(O)\) converges for every local observable \(O\). Since the set of all local observables is dense in \(\mathfrak{A}\) we have that \(\omega_{n}\) converges in the weak\(-^{*}\) topology to a state \(\omega\).
**Theorem 2.2**.: \(\omega\) _is the unique pure frustration free translation invariant ground state of the 3dTC._
Proof.: To show uniqueness, we consider another frustration free ground state \(\omega^{\prime}\). Since it agrees with \(\omega\) on all local observables, it is the same state as \(\omega\).
To show frustration freeness and purity, we see that \(\omega\) is the unique ground state with \(0\) energy. Hence it is frustration free. Since \(\omega\) is a unique frustration free ground state, \(\omega\) must be pure.
To show translation invariance, consider \(T_{x}\) being a translation operator. We have:
\[\omega(O^{\dagger}\delta_{x}(O)): =\lim_{\Lambda\to\mathcal{E}(\Gamma)}\omega(O^{\dagger}[T_{x}^{ \dagger}H_{\Lambda}T_{x},O])\] \[=\lim_{\Lambda\to\mathcal{E}(\Gamma)}\omega(O^{\dagger}[H_{ \Lambda},O])\] \[=\omega(O^{\dagger}\delta(O))\geq 0\]
So the ground state of the translated Hamiltonian is the same as the ground state \(\omega\) of the original Hamiltonian, which is a unique frustration free ground state. Thus \(\omega\) is translation invariant. This concludes our proof.
**Theorem 2.6**.: _All ground states of the form \(\omega_{v_{1},\cdots,v_{n};\overline{\gamma}_{1},\cdots,\overline{\gamma}_{m}}\) (including \(\omega\), the unique translation invariant ground state) are pure._
Proof.: Let's begin with the trivial case when there are no charges or strings in the ground state configuration (\(m=n=0\)). In this case, theorem 2.2 tells us that \(\omega\) is pure. We have \(\alpha_{v_{1},\cdots,v_{n};\overline{\gamma}_{1},\cdots,\overline{\gamma}_{m}} :=\alpha_{v_{1}}\circ\cdots\circ\alpha_{v_{n}}\circ\alpha_{\overline{\gamma}_ {1}}\circ\cdots\circ\alpha_{\overline{\gamma}_{m}}\) and all \(\alpha_{v},\alpha_{\overline{\gamma}}\) are automorphisms. The composition of automorphisms is an automorphism. Since automorphisms map pure states to pure states, we have \(\omega_{v_{1},\cdots,v_{n};\overline{\gamma}_{1},\cdots,\overline{\gamma}_{m}} :=\omega\circ\alpha_{v_{1},\cdots,v_{n};\overline{\gamma}_{1},\cdots, \overline{\gamma}_{m}}\) is a pure state.
## Appendix B Lattice facts
Here we list some miscellaneous but important lattice facts.
**Lemma B.1**.: _Let \(\alpha_{v;n}(O):=F_{\gamma_{v;n}}OF_{\gamma_{v;n}}\). \(\alpha_{v;n}(O)\) converges strongly to \(\alpha_{v}(O)\)._
Proof.: Let \(O\in\mathfrak{A}_{m}\) where \(\mathfrak{A}_{m}\) is the restriction of \(\mathfrak{A}\) to an \(m\times m\times m\) region in \(\Gamma\). Now let \(n^{\prime}>n>m\). Then we have,
\[\alpha_{v;n^{\prime}}(O) =F_{\gamma_{v;n^{\prime}}}OF_{\gamma_{v;n^{\prime}}}\] \[=F_{\gamma_{v;n}}F_{\gamma_{n;n^{\prime}}}OF_{\gamma_{n;n^{\prime}}}F_{\gamma_{v;n}}\] \[=F_{\gamma_{v;n}}OF_{\gamma_{v;n}}\] \[=\alpha_{v;n}(O)\] Here \(\gamma_{n;n^{\prime}}\) denotes the segment of \(\gamma_{v;n^{\prime}}\) beyond \(\gamma_{v;n}\); it lies outside the support of \(O\), so \(F_{\gamma_{n;n^{\prime}}}\) commutes with \(O\) and squares to the identity.
So \(\alpha_{v}(O)\) converges for all \(n\geq m\) for any \(O\in\mathfrak{A}_{m}\). Since \(m\) was otherwise arbitrary, \(\alpha_{v}(O)\) converges for any local operator \(O\in\mathfrak{A}_{loc}\). \(\mathfrak{A}_{loc}\) is dense in \(\mathfrak{A}\) so \(\alpha_{v}(O)\) converges for all \(O\in\mathfrak{A}\).
**Lemma B.2**.: \(\alpha_{v}\) _is independent of the orientation of the path \(\gamma_{v;n}\)._
Proof.: We have the following property of the charge operators \(F_{\gamma_{v;n}}^{\dagger}=F_{-\gamma_{v;n}}=F_{\gamma_{v;n}}\). Then we have for any \(O\in\mathfrak{A}\),
\[\alpha_{v}(O) =\lim_{n\to\infty}F_{\gamma_{v;n}}OF_{\gamma_{v;n}}\qquad O\in \mathfrak{A}\] \[=\lim_{n\to\infty}F_{-\gamma_{v;n}}OF_{-\gamma_{v;n}}\] \[=\alpha_{v}(O)\]
**Lemma B.3**.: _Let \(\alpha_{\overline{S}_{\overline{\gamma}_{n}}}(O):=F_{\overline{S}_{\overline {\gamma}_{n}}}OF_{\overline{S}_{\overline{\gamma}_{n}}}\). \(\alpha_{\overline{S}_{\overline{\gamma}_{n}}}(O)\) strongly converges to \(\alpha_{\overline{\gamma}}(O)\)._
Proof.: Let \(O\in\mathfrak{A}_{m}\) where \(\mathfrak{A}_{m}\) is the restriction of \(\mathfrak{A}\) to an \(m\times m\times m\) region in \(\Gamma\). Now let \(n^{\prime}>n>m\). Then we have,
\[\alpha_{\overline{S}_{\overline{\gamma}_{n^{\prime}}}}(O) =F_{\overline{S}_{\overline{\gamma}_{n^{\prime}}}}OF_{\overline{S}_{\overline{\gamma}_{n^{\prime}}}}\] \[=F_{\overline{S}_{\overline{\gamma}_{n}}}F_{\overline{S}_{(\overline{\gamma}_{n^{\prime}}-\overline{\gamma}_{n})}}OF_{\overline{S}_{(\overline{\gamma}_{n^{\prime}}-\overline{\gamma}_{n})}}F_{\overline{S}_{\overline{\gamma}_{n}}}\] \[=F_{\overline{S}_{\overline{\gamma}_{n}}}OF_{\overline{S}_{\overline{\gamma}_{n}}}\] \[=\alpha_{\overline{S}_{\overline{\gamma}_{n}}}(O)\]
So \(\alpha_{\overline{S}_{\overline{\gamma}}}(O)\) converges for all \(n\geq m\) for any \(O\in\mathfrak{A}_{m}\). Since \(m\) was otherwise arbitrary, \(\alpha_{\overline{S}_{\overline{\gamma}}}(O)\) converges for any local operator \(O\in\mathfrak{A}_{loc}\). \(\mathfrak{A}_{loc}\) is dense in \(\mathfrak{A}\) so \(\alpha_{\overline{S}_{\overline{\gamma}}}(O)\) converges for all \(O\in\mathfrak{A}\).
**Lemma B.4**.: \(\alpha_{\overline{\gamma}}=\alpha_{-\overline{\gamma}}\) _where \(-\overline{\gamma}\) is the path with a reverse orientation._
Proof.: We have \(F_{\overline{S}_{\overline{\gamma}}}^{\dagger}=F_{\overline{S}_{-\overline{ \gamma}}}=F_{\overline{S}_{\overline{\gamma}}}\) where \(\overline{S}_{-\overline{\gamma}}\) is the surface with reverse boundary orientation. So we have for any \(O\in\mathfrak{A}\),
\[\alpha_{\overline{\gamma}}(O) =\lim_{n\to\infty}F_{\overline{S}_{\overline{\gamma}_{n}}}OF_{ \overline{S}_{\overline{\gamma}_{n}}}\qquad O\in\mathfrak{A}\] \[=\lim_{n\to\infty}F_{\overline{S}_{-\overline{\gamma}_{n}}}OF_{ \overline{S}_{-\overline{\gamma}_{n}}}\] \[=\alpha_{-\overline{\gamma}}(O)\]
**Theorem B.5**.: _The set \((\partial\overline{S}\bigcup_{n=1}^{N}\overline{\gamma}_{n})\setminus\bigcup_{n=1} ^{N}(\partial\overline{S}\cap\overline{\gamma}_{n})\) is a set of paths._
Proof.: For simplicity we will only consider the case when \(\partial\overline{S}\) overlaps with each \(\overline{\gamma}_{n}\) once and only once, and that the paths \(\overline{\gamma}_{n}\) do not overlap with each other. These generalisations are straightforward. From lemma B.4 we know that the orientation of any path \(\overline{\gamma}\) is irrelevant for the physical objects \(\alpha_{\overline{\gamma}}\). We may then freely change the orientation of the paths \(\overline{\gamma}_{n}\). We first change the orientations of \(\overline{\gamma}_{n}\) such that \(\partial\overline{S}\) has the reverse orientation to \(\overline{\gamma}_{n}\) at its overlap with \(\overline{\gamma}_{n}\) for all \(n\).
Denote the set \(\partial\overline{S}\cap\overline{\gamma}_{n}\) by \(\overline{\rho}_{n}\). We now divide the paths \(\overline{\gamma}_{n}\) into 3 paths each, starting from their beginning: \(\overline{\epsilon}_{n},\overline{\rho}_{n},\overline{\sigma}_{n}\). Here \(\overline{\epsilon}_{n}\) consists of edges in \(\overline{\gamma}_{n}\) before it overlaps with \(\partial\overline{S}\), \(\overline{\rho}_{n}\) is the set of all edges of \(\overline{\gamma}_{n}\) that overlap with \(\partial\overline{S}\), and \(\overline{\sigma}_{n}\) consists of all the edges after the overlap. Using the sets \(\overline{\rho}_{n}\) we can partition \(\partial\overline{S}\) into sets \(\{\overline{\theta}_{i},\overline{\rho}_{i}\}_{i=1}^{N}\) where \(\overline{\theta}_{n}\) is the set of edges of \(\partial\overline{S}\) between \(\overline{\rho}_{n}\) and \(\overline{\rho}_{n+1}\). We illustrate this in figure 16 for the case of 3 paths.
Let us now discard the orientation of the paths, as we are considering the paths purely as sets. Then we have
\[\partial\overline{S}\bigcup_{n=1}^{N}\overline{\gamma}_{n} =\sum_{n}\overline{\epsilon}_{n}+\overline{\rho}_{n}+\overline{ \sigma}_{n}+\overline{\theta}_{n}\] \[\bigcup_{n=1}^{N}(\partial\overline{S}\cap\overline{\gamma}_{n}) =\sum_{n}\overline{\rho}_{n}\] \[\implies(\partial\overline{S}\bigcup_{n=1}^{N}\overline{\gamma}_{n })\setminus\bigcup_{n=1}^{N}(\partial\overline{S}\cap\overline{\gamma}_{n}) =\sum_{n}\overline{\epsilon}_{n}+\overline{\sigma}_{n}+\overline{ \theta}_{n}\] \[=\sum_{n}\overline{\epsilon}_{n}+\overline{\theta}_{n}+\overline {\sigma}_{n+1\bmod N}\] \[=\sum_{n}\overline{\gamma}_{n}^{\prime}\]
Where \(\overline{\gamma}_{n}^{\prime}=\overline{\epsilon}_{n}+\overline{\theta}_{n}+ \overline{\sigma}_{n+1\bmod N}\).
To show that \(\overline{\gamma}_{n}^{\prime}\) are paths, we restore the orientations and notice that \(\overline{\theta}_{n}\) is a path from \(\partial\overline{\epsilon}_{n}\) to \(\partial\overline{\sigma}_{n+1\bmod N}\), where \(\partial\overline{\epsilon}_{n},\partial\overline{\sigma}_{n}\) are the finite end and start vertices of paths \(\overline{\epsilon}_{n},\overline{\sigma}_{n}\) respectively. So \(\overline{\gamma}_{n}^{\prime}\) has a consistent orientation. This concludes our proof.
## Acknowledgements
We would like to thank Bruno Nachtergaele for the initial idea for the paper and insightful discussions. We would also like to thank Alex Bols for a detailed sketch of the proof of purity of the frustration free ground state of the 3d TC. We were supported by the NSF grant DMS-2108390.
|
2302.02466 | An uncertainty principle for Möbius inversion on posets | We give conditions for a locally finite poset $P$ to have the property that
for any functions $f:P\to {\bf C}$ and $g:P\to {\bf C}$ not identically zero
and linked by the M\"obius inversion formula, the support of at least one of
$f$ and $g$ is infinite. This generalises and gives an entirely poset-theoretic
proof of a result of Pollack. Various examples and non-examples are discussed. | Marcel K. Goh | 2023-02-05T19:33:17Z | http://arxiv.org/abs/2302.02466v2 | # An uncertainty principle for Mobius inversion on posets
###### Abstract
We give conditions for a locally finite poset \(P\) to have the property that for any functions \(f:P\to\mathbf{C}\) and \(g:P\to\mathbf{C}\) not identically zero and linked by the Mobius inversion formula, the support of at least one of \(f\) and \(g\) is infinite. This generalises and gives an entirely poset-theoretic proof of a result of Pollack. Various examples and non-examples are discussed.
MSC: 06A07
## 1 Introduction
In harmonic analysis, the celebrated uncertainty principle states that a function and its Fourier transform cannot both have small support, where the notion of "small" is made precise in various ways by different theorems. A 2011 paper of P. Pollack proves an analogue of this principle in which the Fourier transform is replaced by the number-theoretic Mobius transform [1]. Concretely, it is shown that the support of an arithmetic function and that of its Mobius transform cannot both be finite. The proof is short and uses basic properties of complex power series. Stronger theorems related to the asymptotic densities of the functions involved were subsequently proved by P. Pollack and C. Sanna [2].
One is led to wonder whether Pollack's original result, which can be expressed in entirely poset-theoretic terms, has a purely poset-theoretic proof. In this note, we show that such a proof does indeed exist, and that it is general enough to also apply to the poset of all subsets of \(\mathbf{N}\) ordered by inclusion.
## 2 The main theorem
In this section we establish definitions and notation before proceeding with the proof of our main theorem. Details concerning these definitions can be found in any introductory text on enumerative combinatorics; see, e.g., R. P. Stanley's textbook [4].
Our main object of interest is a poset \((P,\leq)\). An _interval_ in a poset is a subset of the form \([x,y]=\{z\in P:x\leq z\leq y\}\) for some elements \(x,y\in P\). A poset is said to be _locally finite_ if every interval in it is finite. An element \(\widehat{0}\) such that \(\widehat{0}\leq x\) for all \(x\in P\) is called a _bottom element_. For any element \(x\in P\), the _principal order ideal generated by \(x\)_ is the set \(\{y\in P:y\leq x\}\). This is denoted by \(\langle x\rangle\) for short.
The _Mobius function_\(\mu_{P}\) of \(P\) is a function from the intervals of \(P\) to the complex numbers obtained by setting \(\mu_{P}([x,x])=1\) for all \(x\in P\) and recursively putting
\[\mu_{P}([x,y])=-\sum_{x\leq z<y}\mu_{P}(x,z)\]
for all \(x\leq y\) in \(P\). For brevity we write \(\mu_{P}(x,y)\) instead of \(\mu_{P}([x,y])\). The Mobius inversion theorem states that if \(f\) is a function from \(P\) to \({\bf C}\) and \(g:P\to{\bf C}\) is given by
\[g(y)=\sum_{x\leq y}f(x)\]
for all \(y\in P\), then for all \(y\in P\) we have
\[f(y)=\sum_{x\leq y}\mu_{P}(x,y)g(x).\]
We shall say in this case that \(g\) is the _Mobius transform_ of \(f\).
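As a quick sanity check on these definitions, the following Python sketch (an illustration added here, not part of the original note) computes \(\mu_{P}\) by the recursion above on the divisors of \(12\) ordered by divisibility, and verifies the inversion formula for an arbitrarily chosen \(f\); the poset and the test function are purely illustrative.

```python
from functools import lru_cache

elements = [1, 2, 3, 4, 6, 12]        # divisors of 12, a finite (hence locally finite) poset
def leq(x, y):                        # x <= y in this poset iff x divides y
    return y % x == 0

@lru_cache(maxsize=None)
def mobius(x, y):
    # mu(x, x) = 1 and mu(x, y) = -sum_{x <= z < y} mu(x, z) for x < y.
    if x == y:
        return 1
    return -sum(mobius(x, z) for z in elements if leq(x, z) and leq(z, y) and z != y)

# An arbitrary f and its Mobius transform g(y) = sum_{x <= y} f(x).
f = {x: x * x + 1 for x in elements}
g = {y: sum(f[x] for x in elements if leq(x, y)) for y in elements}

# Mobius inversion recovers f from g: f(y) = sum_{x <= y} mu(x, y) g(x).
assert all(f[y] == sum(mobius(x, y) * g[x] for x in elements if leq(x, y))
           for y in elements)
```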
Last but not least, for a function \(f:P\to{\bf C}\) the _support_ of \(f\) is defined to be the set of \(x\in P\) such that \(f(x)\neq 0\). Our main theorem gives conditions on a poset under which \(f\) and \(g\) defined as above cannot both have finite support.
Theorem 1: Let \(P\) be a locally finite poset with bottom element \(\widehat{0}\). Suppose that for every \(y\in P\) and finite \(S\subseteq P\), there exist infinitely many \(z>y\) such that
1. \(\bigl{(}\langle z\rangle\setminus\langle y\rangle\bigr{)}\cap S=\emptyset\);
2. for all \(x\leq y\), \(\mu_{P}(x,y)\mu_{P}(y,z)=\mu_{P}(x,z)\); and
3. \(\mu_{P}(y,z)\neq 0\).
Then for any \(f:P\to{\bf C}\) that is not identically zero, the support of \(f\) and the support of its Mobius transform \(g\) cannot both be finite.
_Proof._ If \(g\) has infinite support we are done, so suppose that \(g\) has finite support; let this be our choice of \(S\), and choose \(y\) with \(f(y)\neq 0\). By hypothesis there exist infinitely many \(z\) satisfying the conditions above. Fix an arbitrary such \(z\) and expand
\[f(z)=\sum_{x\in\langle z\rangle}\mu_{P}(x,z)g(x)=\sum_{x\in\langle y\rangle} \mu_{P}(x,z)g(x)+\sum_{x\in\langle z\rangle\setminus\langle y\rangle}\mu_{P}(x,z)g(x).\]
By the first condition on \(z\), the second summation is zero, and by the second and third conditions we have
\[f(z)=\sum_{x\in\langle y\rangle}\mu_{P}(x,y)\mu_{P}(y,z)g(x)=\mu_{P}(y,z)f(y) \neq 0.\]
Hence we see that \(f(z)\neq 0\) for infinitely many \(z\).
## 3 Examples
Let us say that a poset \(P\)_adheres to the uncertainty principle_ if for all pairs of functions \(f,g:P\to\mathbf{C}\) linked by the Mobius inversion formula, at most one of \(f\) and \(g\) has finite support. In this section we show that our theorem applies not only to the divisibility poset treated by Pollack's paper, but also to the poset of all finite subsets of \(\mathbf{N}\).
### The divisibility poset
Let \(P\) be the set \(\mathbf{N}\) of natural numbers, partially ordered by divisibility. The Mobius function is \(\mu_{P}(x,y)=\mu(y/x)\), where \(\mu:\mathbf{N}\to\{-1,0,1\}\) is given by
\[\mu(n)=\left\{\begin{array}{ll}(-1)^{k},&\mbox{if $n$ is the product of $k$ distinct primes;}\\ 0,&\mbox{if $n$ is divisible by a perfect square.}\end{array}\right..\]
This is a _multiplicative_ function, in the sense that \(\mu(mn)=\mu(m)\mu(n)\) whenever \(\gcd(m,n)=1\). We use this in our proof of Pollack's result on arithmetic functions.
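For readers who wish to experiment, the snippet below (our illustration, adequate only for small \(n\) since it uses trial division) computes the classical Mobius function and checks multiplicativity on one coprime pair.

```python
def mu(n):
    # Classical Mobius function: (-1)^k if n is a product of k distinct primes,
    # 0 if some prime factor is repeated; mu(1) = 1 (the empty product).
    k, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:       # repeated prime factor
                return 0
            k += 1
        d += 1
    if n > 1:                    # one leftover prime factor
        k += 1
    return (-1) ** k

# Multiplicativity on coprime arguments, e.g. mu(6) * mu(35) == mu(210).
assert mu(6) * mu(35) == mu(210)
```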
**Proposition 2** (_Pollack,_ 2011).: Let \(f:\mathbf{N}\to\mathbf{C}\) be a function not identically zero and let \(g:\mathbf{N}\to\mathbf{C}\) be given by
\[g(n)=\sum_{d|n}f(d),\]
so that
\[f(n)=\sum_{d|n}\mu(n/d)g(d).\]
Then the support of \(f\) and the support of \(g\) cannot both be finite.
_Proof._ Let \(y\in\mathbf{N}\) be given and let \(S\subseteq\mathbf{N}\) be an arbitrary finite set. Let \(Q\) be the set of all primes that do not divide \(y\) and do not divide any member of \(S\); this set is of course infinite, as is the set \(Z=\{yq:q\in Q\}\). Let \(z\in Z\) and let \(q=z/y\in Q\). The order ideal \(\langle z\rangle\) is the set of all divisors of \(z\), namely
\[\bigcup_{d|y}\{d,qd\},\]
which means that \(\langle z\rangle\setminus\langle y\rangle=\{qd:d\mid y\}\). But \(q\) divides everything in this set, which means that \(S\) does not intersect it. Next, observe that for all \(d\) dividing \(y\),
\[\mu_{P}(d,y)\mu_{P}(y,z)=\mu(y/d)\mu(z/y)=\mu(y/d)\mu(q)=\mu(qy/d)=\mu(z/d)= \mu_{P}(d,z),\]
since \(\mu\) is a multiplicative function. Lastly, note that \(\mu(z/y)=\mu(q)=-1\neq 0\). Thus by Theorem 1 we are done.
Our proof only requires elementary number theory and the properties of posets, but a downside is that it uses the fact that there are infinitely many primes. Hence we cannot use Proposition 2 to give an alternative proof of the infinitude of primes, as Pollack did in his paper.
### Finite multisets of a countably infinite set
A _multiset on a ground set_\(X\) is a function \(m:X\to{\bf N}\). A multiset \(m\) is said to be _finite_ if \(\sum_{x\in X}m(x)<\infty\). We can place a partial order on the set of all finite multisets of \(X\) by saying that \(m\leq m^{\prime}\) if for all \(x\in X\), \(m(x)\leq m^{\prime}(x)\). When \(X\) is countably infinite, this is isomorphic to the divisibility poset above, since every integer \(n\) can be uniquely represented by the multiset \(m_{n}\) mapping a prime \(p\) to the exponent of \(p\) in the factorisation of \(n\). Thus we see that the poset of all finite multisets on a countably infinite set adheres to the uncertainty principle.
### Finite sets of natural numbers
Having dealt with multisets, we now turn to ordinary sets. Let \(P\) be the collection \({\cal F}\) of all finite subsets of \({\bf N}\), ordered by inclusion. In this context the Mobius function \(\mu_{P}\) is given by \(\mu_{P}(S,T)=(-1)^{|T|-|S|}\) for all \(S\subseteq T\). It is not difficult to show that this poset satisfies the hypotheses of Theorem 1.
**Proposition 3**: Let \(f:{\cal F}\to{\bf C}\) and let \(g:{\cal F}\to{\bf C}\) be given by
\[g(y)=\sum_{x\subseteq y}f(x),\]
so that
\[f(y)=\sum_{x\subseteq y}(-1)^{|y|-|x|}g(x).\]
Then the support of \(f\) and the support of \(g\) cannot both be finite.
_Proof._ Let \(y\) be arbitrary and let \(S\) be any finite collection of sets in \({\cal F}\). Let \(Q=({\bf N}\setminus y)\setminus\bigcup S\); since \(y\) is finite and \(S\) is a finite collection of finite sets, \(Q\) is infinite. Let \(Z=\{y\cup\{q\}:q\in Q\}\). To show that any \(z\in Z\) satisfies condition (i) of Theorem 1, note that any member of \(\langle z\rangle\setminus\langle y\rangle\) must contain the element \(q\), which we know does not belong to any member of \(S\). Condition (ii) is easy, since for _any_ chain of sets \(x\subseteq y\subseteq z\), we have
\[\mu_{P}(x,y)\mu_{P}(y,z)=(-1)^{|y|-|x|}(-1)^{|z|-|y|}=(-1)^{|z|-|x|}=\mu_{P}(x,z).\]
Condition (iii) is even more trivial, since the image of \(\mu_{P}\) is \(\{\pm 1\}\).
Of course there is nothing special about \({\bf N}\) here; our proof applies just as well to any other countably infinite set.
## 4 Non-examples
It is worth noting that many infinite posets do not adhere to the uncertainty principle. Perhaps the simplest example is given by the set \({\bf N}\) of natural numbers
under the usual definition of \(\leq\). On this poset it is easy to check that if we let \(g(1)=1\), \(g(2)=-1\), and \(g(n)=0\) for all \(n\geq 3\), then for \(f(n)=\sum_{m\leq n}g(m)\) we have \(f(1)=1\) and \(f(n)=0\) for all \(n\geq 2\).
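This computation is easily verified directly; the few lines of Python below (added purely for illustration) reproduce it for the first several values of \(n\).

```python
def g(n):
    return 1 if n == 1 else (-1 if n == 2 else 0)

# f is the summatory transform of g over the usual order on N.
def f(n):
    return sum(g(m) for m in range(1, n + 1))

assert f(1) == 1
assert all(f(n) == 0 for n in range(2, 50))   # both supports are finite
```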
The following proposition gives a simple necessary condition for a poset \(P\) to adhere to the uncertainty principle.
**Proposition 4.** For a locally finite poset \(P\) to adhere to the uncertainty principle it is necessary that for all \(x\in P\), the set
\[S_{x}=\{y\in P:\mu_{P}(x,y)\neq 0\}\]
is infinite.
_Proof._ Suppose there is \(x\in P\) such that \(S_{x}\) is finite. Let
\[g(y)=\cases{1,&if $y=x$;\cr 0,&otherwise.\cr}\]
Clearly the support of \(g\) is finite. Then since
\[f(y)=\sum_{z\leq y}\mu_{P}(z,y)g(z)=\mu_{P}(x,y)\]
for all \(y\geq x\) and \(f(y)=0\) for \(y\not\geq x\), we conclude that the support of \(f\) is contained in \(S_{x}\). Hence \(P\) does not adhere to the uncertainty principle.
Thus the fact that \(P=({\bf N},\leq)\) does not adhere to the uncertainty principle is explained by the formula
\[\mu_{P}(m,n)=\cases{1,&if $m=n$;\cr-1,&if $m+1=n$;\cr 0,&otherwise}\]
which implies that, for example, \(\mu_{P}(1,x)\) is nonzero only for \(x\in\{1,2\}\).
**Arbitrary convolutions.** Let \({\rm Int}(P)\) denote the set of all intervals in a poset \(P\). When \(P\) is locally finite we define the _incidence algebra_ to be the algebra (over the complex field) of all functions from \({\rm Int}(P)\) to \({\bf C}\), with multiplication given by the convolution operation
\[(\alpha*\beta)(x,y)=(\alpha*\beta)\bigl{(}[x,y]\bigr{)}=\sum_{x\leq z\leq y} \alpha(x,z)\beta(z,y).\]
The identity element of the incidence algebra is the function \(\delta_{P}(x,y)\) defined by
\[\delta_{P}(x,y)=\cases{1,&if $x=y$;\cr 0,&otherwise.\cr}\]
The constant function \(\zeta_{P}(x,y)=1\) is called the _zeta function_, and the Mobius inversion formula is equivalent to the statement that \(\mu_{P}\) and \(\zeta_{P}\) are inverses to one another in the incidence algebra; that is, \(\mu*\zeta=\zeta*\mu=\delta\), where here (and in the rest of the section) we dispense with subscripts for brevity. (When \(P\) has bottom element \(\widehat{0}\) we associate to any function \(f:P\to{\bf C}\) the function \(f:{\rm Int}(P)\to{\bf C}\) given by \(f(\widehat{0},x)=f(x)\) and \(f(z,x)=0\) for all \(z\neq\widehat{0}\).)
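As a computational aside (not part of the original argument), the convolution can be implemented directly for a finite poset; the sketch below represents elements of the incidence algebra as dictionaries keyed by intervals and checks that \(\delta\) acts as a two-sided identity on the chain \(\{1,2,3\}\).

```python
def convolve(alpha, beta, elements, leq):
    # (alpha * beta)(x, y) = sum over x <= z <= y of alpha(x, z) * beta(z, y)
    out = {}
    for x in elements:
        for y in elements:
            if leq(x, y):
                out[(x, y)] = sum(alpha[(x, z)] * beta[(z, y)]
                                  for z in elements if leq(x, z) and leq(z, y))
    return out

# Example on the chain {1, 2, 3} with the usual order.
elements = [1, 2, 3]
def leq(x, y):
    return x <= y

intervals = [(x, y) for x in elements for y in elements if leq(x, y)]
delta = {(x, y): 1 if x == y else 0 for (x, y) in intervals}
zeta = {(x, y): 1 for (x, y) in intervals}

assert convolve(delta, zeta, elements, leq) == zeta   # delta is a left identity
assert convolve(zeta, delta, elements, leq) == zeta   # and a right identity
```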
A 2014 paper of C. Sanna [3] generalises the density result of Pollack and Sanna found in [2] to arbitrary Dirichlet convolutions. In this vein, we formulate a more general definition of our own. We shall say that \(P\) adheres to the uncertainty principle _with respect to \((\alpha,\beta)\)_ if \(\alpha\) and \(\beta\) are inverses of one another in the incidence algebra of \(P\) and whenever \(f:P\to{\bf C}\) and \(g:P\to{\bf C}\) are functions linked by the relations
\[f(y)=\sum_{x\leq y}\alpha(x,y)g(x)\qquad\hbox{and}\qquad g(y)=\sum_{x\leq y} \beta(x,y)f(x),\]
at most one of \(f\) and \(g\) can have finite support. Thus our previous definition of adhering to the uncertainty principle is the same as doing so with respect to \((\mu,\zeta)\) under our new definition.
It is trivial to adapt the proof of Proposition 4 to the more general case.
**Proposition 5.** Let \(P\) be a locally finite poset and let \(\alpha\) and \(\beta\) be inverse to one another in the incidence algebra of \(P\). Then for \(P\) to adhere to the uncertainty principle with respect to \((\alpha,\beta)\), it is necessary that for all \(x\in P\), the sets
\[S_{x}=\{y\in P:\alpha(x,y)\neq 0\}\qquad\hbox{and}\qquad T_{x}=\{y\in P: \beta(x,y)\neq 0\}\]
are both infinite.
_Proof._ Let \(x\in P\) and suppose one of the sets above is finite. Define the function \(f\) or \(g\) (depending on whether \(T_{x}\) or \(S_{x}\) is finite) accordingly and proceed as in the proof of the previous proposition.
We note here that Theorem 1 can be generalised in a similar fashion. Observe also that Proposition 4 only had to consider \(\mu\) because \(\zeta\) has infinite support for any infinite poset. The necessary condition put forth by Proposition 4 is of course related to condition (iii) found in Theorem 1. The other two hypotheses of that theorem were added for convenience but it is not as obvious that they should be related to the problem in a fundamental way. Thus we have the following hopeful conjecture, which we formulate in the more general context pertaining to Proposition 5.
**Conjecture 6.** Let \(P\) be a locally finite poset with bottom element and let \(\alpha\) and \(\beta\) be inverses to one another in the incidence algebra of \(P\). Then \(P\) adheres to the uncertainty principle with respect to \((\alpha,\beta)\) if and only if for all \(x\in P\), the sets
\[S_{x}=\{y\in P:\alpha(x,y)\neq 0\}\qquad\hbox{and}\qquad T_{x}=\{y\in P: \beta(x,y)\neq 0\}\]
are both infinite.
## Acknowledgements
The author wishes to thank Carlo Sanna for suggesting the generalisation of Proposition 4, which places Conjecture 6 in a more general setting.
|
2303.03445 | How Auditing Methodologies Can Impact Our Understanding of YouTube's
Recommendation Systems | Data generated by audits of social media websites have formed the basis of
our understanding of the biases presented in algorithmic content recommendation
systems. As legislators around the world are beginning to consider regulating
the algorithmic systems that drive online platforms, it is critical to ensure
the correctness of these inferred biases. However, as we will show in this
paper, doing so is a challenging task for a variety of reasons related to the
complexity of configuration parameters associated with the audits that gather
data from a specific platform.
Focusing specifically on YouTube, we show that conducting audits to make
inferences about YouTube's recommendation systems is more methodologically
challenging than one might expect. There are many methodological decisions that
need to be considered in order to obtain scientifically valid results, and each
of these decisions incur costs. For example, should an auditor use (expensive
to obtain) logged-in YouTube accounts while gathering recommendations from the
algorithm to obtain more accurate inferences? We explore the impact of this and
many other decisions and make some startling discoveries about the
methodological choices that impact YouTube's recommendations. Taken all
together, our research suggests auditing configuration compromises that YouTube
auditors and researchers can use to reduce audit overhead, both economically
and computationally, without sacrificing accuracy of their inferences.
Similarly, we also identify several configuration parameters that have a
significant impact on the accuracy of measured inferences and should be
carefully considered. | Sarmad Chandio, Daniyal Pirwani Dar, Rishab Nithyanand | 2023-03-06T19:08:51Z | http://arxiv.org/abs/2303.03445v1 | # How Audit Methodologies Can Impact Our Understanding of YouTube's Recommendation Systems
###### Abstract
Data generated by audits of social media websites have formed the basis of our understanding of the biases presented in algorithmic content recommendation systems. As legislators around the world are beginning to consider regulating the algorithmic systems that drive online platforms, it is critical to ensure the correctness of these inferred biases. However, as we will show in this paper, doing so is a challenging task for a variety of reasons related to the complexity of configuration parameters associated with the audits that gather data from a specific platform.
Focusing specifically on YouTube, we show that conducting audits to make inferences about YouTube's recommendation systems is more methodologically challenging than one might expect. There are many methodological decisions that need to be considered in order to obtain scientifically valid results, and each of these decisions incur costs. For example, should an auditor use (expensive to obtain) logged-in YouTube accounts while gathering recommendations from the algorithm to obtain more accurate inferences? We explore the impact of this and many other decisions and make some startling discoveries about the methodological choices that impact YouTube's recommendations. Taken all together, our research suggests auditing configuration compromises that YouTube auditors and researchers can use to reduce audit overhead, both economically and computationally, without sacrificing accuracy of their inferences. Similarly, we also identify several configuration parameters that have a significant impact on the accuracy of measured inferences and should be carefully considered.
## 1 Introduction
**Auditing content recommendation systems is becoming increasingly important.** As social media platforms and the algorithms they employ continue to have an increasing impact on our socio-political realities, auditing them (accurately) has become an increasingly important task for many reasons. After all, these audits, often focused on algorithmic recommendation systems, play a significant role in drafting effective regulation around online platforms and algorithms [20] and developing a better understanding of the role of algorithms in political polarization [1], spread of misinformation [14], and other societal behaviors. For example, focusing on the YouTube platform, prior work has uncovered several concerning aspects of the algorithmic recommendation systems such as their propensity to create filter-bubbles [15], recommend age-inappropriate content [21], misinformation [16, 17], and even extremist content [18, 19]. However, these works often appear to contradict each other -- e.g., prior work has shown that YouTube recommendation systems cause a mainstream effect (i.e., promoting popular content over niche content) [10] while also showing their tendency to promote niche and extremist content [18]. Formulating effective regulation and developing a meaningful understanding of the impact of algorithms on society is challenging in these scenarios where contradictory findings from algorithm audit studies are commonplace. _This work (1) shows that auditing methodologies are one source for such contradictory results and (2) suggests approaches to reduce their occurrence._
**Conceptually, designing a recommendation algorithm audit is simple.** Due to the opacity of the algorithms being audited, researchers rely on what is referred to as the "sock-puppet" audit approach [2]. Here, the audit can be generalized into a simple three-step process.
_Create sock-puppets_. Sock-puppets or personas that aim to impersonate real human users are created. The goal is to use automation tools, typically web crawlers, to provide the underlying recommendation system a set of interactions from which it may learn certain characteristics about the sock-puppet. In the context of YouTube, this may involve having the sock-puppet load a set of videos (referred to as the training set) that provide a base from which the recommendation algorithms learn user behaviors and preferences.
_Measure the recommendation tree._ In this step, a seed interaction that generates recommendations is performed by the sock-puppet. This set of recommendations forms the first layer of the recommendation tree. Often, the recommendations themselves are interacted with recursively to form a deeper tree of recommendations. Applied to YouTube, this step involves providing the sock-puppet with a seed video from which all recommendations are gathered. This is followed by loading the videos associated with each of these recommendations to recursively fill out the recommendation tree (a sketch of this expansion is given after the list of steps below).
_Hypothesis testing._ Finally, given a (set of) recommendation tree(s) associated with sock-puppets of different characteristics, hypotheses about the underlying recommendation algorithm are tested and inferences about them are made.
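To make the tree-building step concrete, the sketch below illustrates the recursive expansion described above. It is our own pseudocode-style illustration rather than code from any particular audit; `get_recommendations` is an assumed stand-in for whatever browser-automation routine an auditor uses to load a watch page and scrape its recommendations, and the `depth`/`breadth` parameters correspond to how much of the tree is explored.

```python
def build_recommendation_tree(seed_video, get_recommendations, depth=3, breadth=5):
    """Breadth-first expansion of a recommendation tree rooted at seed_video.

    get_recommendations: callable mapping a video id to the list of recommended
    video ids observed on that video's watch page (crawler logic, assumed here).
    """
    tree = {seed_video: []}              # maps each video id to its recommended children
    frontier, visited = [seed_video], {seed_video}
    for _ in range(depth):
        next_frontier = []
        for video in frontier:
            recs = get_recommendations(video)[:breadth]
            tree[video] = recs
            for rec in recs:
                if rec not in visited:
                    visited.add(rec)
                    tree.setdefault(rec, [])
                    next_frontier.append(rec)
        frontier = next_frontier
    return tree
```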
**In practice, algorithm audits are challenging and can force methodological compromises.** Although simple at first glance, there are several key decisions in each step of the previously described process that are often overlooked. For example, when conducting crawls to construct sock-puppets, researchers are faced with the decisions of what videos to use as part of their sock-puppet training set, how many videos to include in this training set, and what video to use as their seed, amongst others. The uncertainty about the impact that each video might have on the gathered recommendations makes these decisions challenging. Complicating matters, even when rigorous and sound rationale are applied to the above questions, are the high dollar and computational costs associated with methodological rigor.
_Methodological compromises due to high dollar costs._ Online platforms, including YouTube, make it difficult to automate the creation of the number of accounts required for a meaningful audit -- they serve CAPTCHAs to (or perform outright blocking of) web automation tools seeking to create accounts and often require verified phone numbers for each account. The costs of circumventing these challenges can be prohibitively high and force compromises that may impact the validity of their inferences. For example, researchers may simply associate each sock-puppet with a unique browser (cookie) and bypass the difficulties (and high costs) associated with obtaining verifiable phone numbers for each sock-puppet. However, such circumvention is often done in the hopes that the accuracy of any inferences drawn from the audit are not harmed -- i.e., they operate on the assumption that YouTube's recommendations treat logged-in users in the same way as non-logged-in users with YouTube's cookie in their browser.
_Methodological compromises due to high computational costs._ Further, crawling videos is computationally expensive and time consuming when crawlers encounter large numbers of hour-long (or longer) videos. This, combined with the need to gather large amounts of data for statistically sound hypothesis testing, can require 1000's of hours of machine time for a single audit. This poses another dilemma: should one pay the high computational costs associated with watching the entirety of each video and should all paths of the recommendation tree be traversed to make valid conclusions? Although the alternatives of simply sampling sections of the tree and not watching videos to completion are more tractable, it remains unclear if they have an impact on the subsequent recommendations gathered by the crawl.
Simply put, _there are currently no best-practices or guidelines for sock-puppet-style audits on platforms such as YouTube_. In this paper, we specifically focus on YouTube and seek to fill this gap by answering the following research questions.
**RQ1. What is the relationship between sock-puppet training set, recommendation seed, and recommendation trees? (SS3)** We begin by studying the impact that the training set and seed have on the recommendation trees they generate. Specifically, we conduct an experiment in which we train four sets of sock-puppets using all combinations of two distinctly different seeds and training sets. We then analyze the recommendation trees they generate to understand how recommendations change with alterations to the seed and training set.
**RQ2. What is the impact of reducing dollar costs during audits? (SS4)** We investigate the consequences of one of the most commonly observed cost-saving measures adopted by YouTube auditors -- avoiding the use of real YouTube accounts for each sock-puppet and instead relying on browser cookies to leak a sock-puppet's identity to YouTube. We conduct this analysis by comparing the recommendation trees generated by four sets of sock-puppets that reflect commonly observed practices in audit research. These sets of sock-puppets are identical in every way except for their method of maintaining YouTube account 'state'.
**RQ3. What is the impact of reducing computational costs during audits? (SS5)** Finally, we consider the consequences of compromises that are associated with reducing computational costs. We specifically focus on the impact of time spent on each sock-puppet training video and the depth/breadth of recommendation tree exploration. We do so by training sets of sock-puppets that "watch" videos to varying levels of completion and measuring the differences in their gathered recommendation trees. We then study the characteristics of the nodes sampled from all recommendation trees gathered in our study to identify differences in their properties based on their location in the tree.
## 2 Methodology
In total, we conduct eleven experiments (_Cf._ Table 1) in which we alter specific audit parameters. We use the gathered recommendation trees from these audits to identify the impacts of varying parameters. In this section, we describe our audit configuration selections and analysis methodologies.
### Configuration parameters
**Sock-puppet training sets** (\(T_{\text{niche}}\), \(T_{\text{main}}\)). In all 11 experiments, we begin by training our sock-puppets with the videos contained in either \(T_{\text{niche}}\) or \(T_{\text{main}}\). A previous study [1] shows that 22 videos are enough to personalize the YouTube video recommendations, whereas we decided to use 32 videos for each of our training sets.
_The niche training set (\(T_{\text{niche}}\))._ The videos in \(T_{\text{niche}}\) were manually curated to represent fringe (e.g., conspiracy theories) and relatively unpopular (lower number of views) content. The videos in this set were chosen from fringe subreddits such as _r/climateskeptics_ and _r/theworldisflat_, amongst others. Videos in this set were advocating for the position associated with the subreddit topic (e.g., pro flat-earth). On average, videos in \(T_{\text{niche}}\) had received 25K views.
_The mainstream training set (\(T_{\text{main}}\))._ Each video in \(T_{\text{main}}\) was curated to cover the same topic as its niche counterpart, except that they were sourced from a YouTube search of the topic (e.g., 'flat earth debunked'). The most popular videos from the search results (in terms of views) were added to \(T_{\text{main}}\). The videos in \(T_{\text{main}}\) represent the 'mainstream' views and advocate for the widely accepted world view of the topics (e.g., the earth is not flat). On average, videos in \(T_{\text{main}}\) had 5.9M views.
This approach of training set construction offers two sharply differing inputs to the recommendation algorithms so that any effect of training set on recommendations is measurable.
**Recommendation tree seeds** (\(s_{\text{niche}}\) and \(s_{\text{main}}\)). Seed videos are the starting point from which recommendation trees are gathered (i.e., the root of the recommendation tree). Similar to our training sets, we used one of two seeds (\(s_{\text{niche}}\) and \(s_{\text{main}}\)) which were selected based on the intuition that they would have sharply differing impacts on the recommendation tree. The \(s_{\text{niche}}\) video used in our experiments was a fringe political video on the topic of illegal immigration with 7.1K views and the \(s_{\text{main}}\) video was a very popular mainstream video focused on 'Slapgate' [22] with over 3.8M views. The topics of the seed videos were intentionally chosen (1) to not overlap with any of the videos from \(T_{\text{main}}\) or \(T_{\text{niche}}\) so that effects from the training sets could be distinguished from those of the seed, and (2) to not overlap with each other to maximize any measurable differences between their recommendation trees. Observing an absence of differences in recommendation trees generated from \(s_{\text{niche}}\) and \(s_{\text{main}}\) would indicate that the seed has a marginal influence on the observed recommendations.
**Account status** (\(A_{\text{full}}\), \(A_{\text{cookies}}\), and \(A_{\text{clear}}\)). To improve our understanding of whether the recommendation algorithm works differently when audits operate under different YouTube account assumptions, we gather recommendation trees using three different types of account assumptions. (1) \(A_{\text{full}}\) represents audits in which crawlers are logged into freshly created YouTube accounts before training and recommendation gathering begins. This is representative of the ideal case where each sock-puppet has its own fresh YouTube account. (2) \(A_{\text{cookies}}\) represents audits where crawlers are not logged in but maintain YouTube's cookies in their browser throughout the crawl. This is representative of the most common crawls observed in audit literature. (3) \(A_{\text{clear}}\) represents audits where crawlers conduct crawls while logged in and clear their watch history before the same account is used for another crawl. This approach is used to allow account reuse by different sock-puppets.
**Watch times** (\(W_{\text{100pc}}\), \(W_{\text{50pc}}\), \(W_{\text{25pc}}\), \(W_{\text{10pc}}\)). Training sock-puppets can be computationally expensive owing to the long lengths of videos typically contained within the training sets. To understand whether videos in the training set need to be watched to completion, we gather recommendation trees from four crawlers all configured identically except that they watch each of the training videos to different levels of completion before moving on to the next video. \(W_{\text{100pc}}\), \(W_{\text{50pc}}\), \(W_{\text{25pc}}\), and \(W_{\text{10pc}}\) watch videos to 100%, 50%, 25%, and 10% of completion, respectively.
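To make the watch-time configurations concrete, the sketch below shows one way a sock-puppet could 'watch' a training video to a configurable fraction of its duration with Selenium. The fixed settle time, the HTML5 `<video>` selector, and the assumption that the player exposes its duration are illustrative choices, not a description of our exact crawler.

```python
# Illustrative sketch (assumptions noted above), not the authors' exact crawler.
import time
from selenium import webdriver

def watch_fraction(driver: webdriver.Chrome, video_url: str, fraction: float) -> None:
    """Load a YouTube video and 'watch' it for `fraction` of its duration."""
    driver.get(video_url)
    time.sleep(5)  # assumed settle time for the player to load
    # Read the video duration (seconds) from the HTML5 player element.
    duration = driver.execute_script("return document.querySelector('video').duration;")
    # Start playback, then wait for the configured fraction of the duration.
    driver.execute_script("document.querySelector('video').play();")
    time.sleep(duration * fraction)

# Example: a W_25pc sock-puppet watches each training video to 25% completion.
# driver = webdriver.Chrome()
# for url in training_set_urls:   # training_set_urls is a hypothetical list
#     watch_fraction(driver, url, fraction=0.25)
```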
**Interactions** (\(I_{\text{get}}\), \(I_{\text{click}}\)). Programming crawlers to perform actual clicks on hyperlinks is a challenging task due to difficulties with reliability. A commonly used alternative is to instead obtain links by parsing the DOM and having the browser load the link of interest. Unfortunately, the absence of actual clicks is also a signature used by common bot-detection tools and may result in server-side differential treatment [23, 14, 15, 16]. We conduct an experiment to understand whether clicking on recommended videos impacts subsequent recommendations. \(I_{\text{click}}\) represents an audit in which each crawler actually performs a mouse click on videos to load them during the recommendation tree crawl. \(I_{\text{get}}\) represents an audit in which each crawler simply obtains the video's URL from the DOM and instructs the browser to load that URL.
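The two interaction mechanics can be illustrated as follows; the XPath used to locate recommendation links is an assumption (YouTube's DOM changes frequently), so this is a sketch of the idea rather than a production crawler.

```python
# Minimal sketch of the I_get and I_click mechanics; the XPath is assumed.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains

REC_XPATH = "//ytd-compact-video-renderer//a[@id='thumbnail']"  # assumed selector

def open_recommendation_get(driver: webdriver.Chrome, rank: int) -> None:
    """I_get: parse the recommendation URL from the DOM and load it directly."""
    links = driver.find_elements(By.XPATH, REC_XPATH)
    url = links[rank].get_attribute("href")
    driver.get(url)

def open_recommendation_click(driver: webdriver.Chrome, rank: int) -> None:
    """I_click: scroll to the recommendation and perform an actual mouse click."""
    links = driver.find_elements(By.XPATH, REC_XPATH)
    target = links[rank]
    driver.execute_script("arguments[0].scrollIntoView();", target)
    ActionChains(driver).move_to_element(target).click().perform()
```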
**Breadth of exploration** (\(P_{\text{left}}\), \(P_{\text{right}}\)). YouTube's recommendations are dynamically loaded and recommendation options often continue to appear while a user scrolls down the page. This increases the width of the recommendation tree at each level. In our pilot tests, we observed that the maximum number of recommendations was at least 40 for each video (and much higher in many cases). We conduct analyses on the videos that appear at the top of the recommendation list during a recommendation tree crawl (i.e., the left-most path in the tree denoted by \(P_{\text{left}}\)) and those that appear at the bottom of the recommendation list (i.e., the right-most path in the tree denoted by \(P_{\text{right}}\)).
**Depth of exploration** (\(D_{\text{top}}\), \(D_{\text{bottom}}\)). Finally, we consider the importance of performing deep crawls on measured characteristics of the recommendation tree. We do this by analyzing the characteristics of all videos observed after just loading the seed video (i.e., the \(1^{st}\) level in the recommendation tree denoted by \(D_{\text{top}}\)) and comparing them with the characteristics of all videos observed at the \(10^{th}\) level of the tree (i.e., the bottom of _our_ gathered recommendation trees denoted by \(D_{\text{bottom}}\)).
### Data gathering
**Minimizing the influence of latent confounding variables.** Recommendation trees are influenced by a large number of variables, some in researchers' control (e.g., our configuration parameters) and others not. In our study, we make a best-effort attempt to minimize these latent effects with the following approaches.
_Accounting for updates to the search index._ Due to large amounts of new content being created on YouTube, there are continuous changes to the search index and recommendation candidate lists. Therefore, two crawls gathering recommendations at time periods that are far spaced apart, may not be comparable due to vastly different recommendation possibilities. We mitigate such impacts by synchronizing the crawls conducted in each experiment such that for every crawler using one configuration to gather a recommendation tree, there is another synchronized crawler using the alternate configuration to gather the comparison recommendation tree. This synchronization is done at the node level -- i.e., we ensure that each tree arrives at the exact same node position in its respective recommendation tree within,
at most, a few seconds of its counterpart. Therefore, trees gathered using alternate configurations of the same parameter are comparable.
_Accounting for distributed infrastructure and effects of geolocation._ As shown in prior work [1], web servers may be distributed across a wide region and servers in different locations or data centers may have inconsistencies in their search indices or perform geo-specific recommendations. To mitigate these effects on our gathered trees, we conduct all our data gathering experiments from the same location and use a static DNS entry for YouTube which ensures that all our content requests and interactions with the platform are served by web servers, at the very least, in the same region.
_Accounting for A/B testing._ Platforms have been known to conduct A/B testing on their users while testing new features or algorithm updates [12]. We make a best-effort attempt to mitigate the effects of such testing by gathering data from _at least eight identical and synchronized crawls_ for each parameter tested in our study.
**Collecting recommendation trees.** Once a sock-puppet has been trained and has a seed video, we begin exploration of the recommendation tree. Unfortunately, complete exploration of a recommendation tree is infeasible due to the need for one sock-puppet for each configuration being tested, for each tree being gathered, and for each path being traversed. This is necessary because prior watched videos will impact future recommendations, and therefore a sock-puppet can only perform one-way (downward) traversals of the recommendation tree. Further, we collect at least 40 recommendations for each video. Therefore, a recommendation tree of depth \(n\) will have at least \(40^{n}\) paths from root to leaf node, each needing a unique sock-puppet. In our traversals of the tree, we explored five unique paths -- the left-most path (comprised of the first recommendation at each node), the right-most path (comprised of the last recommendation at each node), and three pre-selected paths from the middle (sampled with Zipfian weights to account for a preference for videos higher in the recommendation list). We explore each of these paths simultaneously, using a unique but identically trained, configured, and seeded sock-puppet dedicated to each, to a depth of 10 and record all recommendations along the way. We stitch these paths and observations together to obtain a subset of the complete recommendation tree upon which our analysis is conducted. We gather at least four such trees for each parameter configuration while ensuring synchronization with alternately configured audits. An example of such a tree is shown in Figure 1. In this figure, each path represents the set of videos that form the sock-puppets, nodes represent videos, and directed edges between any two nodes (\(parent\), \(child\)) indicate that the \(child\) was recommended after direct interaction with the \(parent\). The root node of this tree represents the seed video used to generate the first set of recommendations.
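As an illustration of the path pre-selection described above, the following sketch draws one 'middle' path with Zipfian (1/rank) weights, assuming 40 recommendations per node and a depth of 10; it is not our exact implementation.

```python
# Pre-selecting traversal paths for one recommendation tree (illustrative sketch).
import numpy as np

def sample_middle_path(n_recs: int = 40, depth: int = 10, seed: int = 0) -> list[int]:
    """Return one pre-selected path: the recommendation rank to follow at each level."""
    rng = np.random.default_rng(seed)
    ranks = np.arange(1, n_recs + 1)
    weights = 1.0 / ranks          # Zipfian (1/rank) preference for top recommendations
    weights /= weights.sum()
    return [int(rng.choice(ranks, p=weights)) for _ in range(depth)]

# Five paths per tree: left-most, right-most, and three Zipf-sampled middle paths.
paths = [[1] * 10, [40] * 10] + [sample_middle_path(seed=s) for s in range(3)]
```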
### Recommendation tree characteristics
In our analysis, we focus on studying the popularity, channel diversity, and topics of videos observed in a recommendation tree. We select these parameters since platform audits often focus on them (or their variations) to identify echo-chamber, rabbit-holing, or mainstreaming effects caused by recommendation algorithms.
**Popularity of recommended content.** Popularity of recommended content, measured using video views as a proxy, can capture the algorithm's tendency to recommend niche or mainstream content. We record the distribution of views observed in recommended videos at each node. A recommendation tree largely containing videos with low popularity at each node suggests the tendency to recommend niche content for the associated sock-puppet configuration. Conversely, a tree largely containing videos with high popularity at each node suggests the tendency to recommend mainstream content for the associated sock-puppet configuration. Significant differences in the within- and across-group differences between the trees generated by two configurations would suggest that one of the two configurations tends to more mainstream (popular) recommendations than the other.
**Channel diversity of recommended content.** Each video about a topic reflects the perspective of the channel that uploaded it. Therefore, we use the diversity of channels in the trees as a proxy for the range of perspectives provided by the recommended content. We measure this by recording
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Question** & **Parameter** & **Configurations** & **\#trees** & **\#videos** \\ \hline
\multirow{2}{*}{RQ1} & Training set & \(T_{\text{main}}\), \(T_{\text{niche}}\) & 16 & 32K \\
 & Seed video & \(s_{\text{main}}\), \(s_{\text{niche}}\) & 16 & 32K \\ \hline
\multirow{2}{*}{RQ2} & \multirow{2}{*}{Accounts} & \(A_{\text{full}}\), \(A_{\text{cookies}}\) & 8 & 14K \\
 & & \(A_{\text{full}}\), \(A_{\text{clear}}\) & 8 & 13K \\ \hline
\multirow{6}{*}{RQ3} & \multirow{3}{*}{Watch Time} & \(W_{\text{100pc}}\), \(W_{\text{50pc}}\) & 8 & 15K \\
 & & \(W_{\text{50pc}}\), \(W_{\text{25pc}}\) & 8 & 15K \\
 & & \(W_{\text{25pc}}\), \(W_{\text{10pc}}\) & 8 & 15K \\
 & Interaction & \(I_{\text{get}}\), \(I_{\text{click}}\) & 8 & 16K \\
 & Breadth & \(P_{\text{left}}\), \(P_{\text{right}}\) & all* & 69K \\
 & Depth & \(D_{\text{top}}\), \(D_{\text{bottom}}\) & all* & 35K \\ \hline \hline
\end{tabular}
*Data from all trees were used in analysis for these parameters.
\end{table}
Table 1: **Experiments.** The ‘Parameter’ column indicates the parameter whose values were modified in each experiment and the ‘Configuration’ column indicates the values assigned to the parameter in the experiment. The ‘# trees’ column indicates the total number of recommendation trees gathered for analysis and the ‘# videos’ column indicates the total number of recommendations observed in these trees.
Figure 1: A recommendation tree generated from five sock-puppets. Flat arrows show the unique recommendation path taken by each sock-puppet. Starting from the seed, each node represents a video recommended by the parent node.
the entropy of channels observed in the recommended content at each node. A recommendation tree with a high entropy of recommended content at each node indicates high recommendation diversity and suggests the absence of a rabbit-holing effect. Significant differences in the within- and across-group differences between the trees generated by two configurations would suggest that one of the two configurations tends to show less diverse recommendations than the other.
**Semantic similarity of recommended content.** We extract the titles and text descriptions associated with each video observed in our tree. We then perform standard NLP preprocessing operations on these texts (i.e., tokenization, stopword & URL removal, removal of tokens observed in more than 50% of our dataset, and lemmatization). We combine these processed texts for all videos observed at each node in a tree and use them as a representation of the topics observed in the recommendations at that node. In order to perform semantic comparisons between any two nodes, we use the SpaCy document similarity method [14], which uses a bag-of-words approach to compute the cosine similarity between the averaged word vectors of two texts. While this approach is limited in that it does not capture the polarity of content (e.g., anti-vaccine text and text from arguments refuting the anti-vaccine texts will have high similarity), our manual validation found that it captured the similarity of the topics in texts. We also tested other semantic (Latent Semantic Indexing [15]) and lexicographic (Latent Dirichlet Allocation [13]) approaches for measuring similarity but found them to perform poorly at capturing topical similarity for our dataset. In a pilot study, pairs of videos deemed to be very similar (by SpaCy's docsim, LDA, and LSA) were randomly sampled and their texts were manually evaluated to verify similarity. Based on this, we identified SpaCy's docsim as the best-performing approach. Significant differences in the within- and across-group differences between trees generated by two configurations suggest that one of the two configurations results in measurably different recommended topics.
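A minimal sketch of this node-level text representation and similarity computation is shown below; it assumes the `en_core_web_md` model (which ships with word vectors) and simplifies the preprocessing (the corpus-frequency filter is omitted).

```python
# Sketch of the node-level document representation and SpaCy similarity.
import spacy

nlp = spacy.load("en_core_web_md")  # requires: python -m spacy download en_core_web_md

def node_doc(descriptions: list[str]):
    """Combine titles/descriptions of all videos recommended at a node into one Doc."""
    tokens = []
    for text in descriptions:
        for tok in nlp(text.lower()):
            if tok.is_stop or tok.is_punct or tok.like_url:
                continue
            tokens.append(tok.lemma_)
    return nlp(" ".join(tokens))

def node_similarity(descs_a: list[str], descs_b: list[str]) -> float:
    """Cosine similarity between the averaged word vectors of two nodes."""
    return node_doc(descs_a).similarity(node_doc(descs_b))
```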
### Comparison of Audit Configurations
Each of our experiments result in two sets of recommendation trees -- one set for each audit parameter configuration being tested (e.g., \(A_{\text{full}}\) and \(A_{\text{cookies}}\)). Trees in each set are gathered in synchronization with each other. Given these sets of recommendation trees, we compute the _across-group_ (e.g., between \(A_{\text{full}}\) and its synchronized \(A_{\text{cookies}}\) tree) and _within-group_ (e.g., between two synchronized \(A_{\text{full}}\) (or, \(A_{\text{cookies}}\)) trees) differences along the three dimensions described in SS2.3. We describe this process below.
**Recording characteristics of a recommendation tree node.** Let \(n_{ij}\) denote a traversed node (i.e., viewed video) located on path \(P_{i}\) and at depth \(j\) in a recommendation tree and \(r_{ijk}\) denote the \(k^{th}\) recommendation observed at \(n_{ij}\). At each node \(n_{ij}\) we record: (1) a popularity scalar value \(pop(n_{ij})\) = \(\mu(views(r_{ij,1})\ldots views(r_{ij,40}))\) representing the view counts of all observed recommended videos at this node; (2) a channel entropy scalar value \(div(n_{ij})\) = \(entropy(channel(r_{ij,1}),\ldots,channel(r_{ij,40}))\) representing the diversity of channels in the recommended videos at this node; and (3) a document vector \(doc(n_{ij})\) = \(docvec(desc(r_{ij,1}),\ldots,desc(r_{ij,40}))\) which represents the document vector associated with the video descriptions obtained from all recommended videos at this node.
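The per-node popularity and channel-diversity statistics defined above can be computed as in the following sketch; the field names in the recommendation records (`views`, `channel`) are illustrative assumptions.

```python
# Sketch of pop(n_ij) (mean view count) and div(n_ij) (channel entropy, in bits)
# over the ~40 recommendations observed at a node.
from collections import Counter
from math import log2

def pop(recs: list[dict]) -> float:
    """Mean view count of the recommendations observed at a node."""
    return sum(r["views"] for r in recs) / len(recs)

def div(recs: list[dict]) -> float:
    """Shannon entropy (bits) of the channel distribution at a node."""
    counts = Counter(r["channel"] for r in recs)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())
```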
**Comparing characteristics of recommendation trees.** Given two recommendation trees \(T\) and \(T^{\prime}\), we compute the differences in characteristics in a node position-dependent manner -- i.e., we compute differences in the popularity value, channel entropy, and document vector for each node position in \(T\) and \(T^{\prime}\). These differences are computed as follows:
\[\delta_{pop}(T,T^{\prime})=mean([\forall i,\forall j:pop(n_{ij})-pop(n^{\prime}_{ij})])\]
\[\delta_{div}(T,T^{\prime})=mean([\forall i,\forall j:div(n_{ij})-div(n^{\prime}_{ij})])\]
\[\delta_{sem}(T,T^{\prime})=mean([\forall i,\forall j:docsim(doc(n_{ij}),doc(n^{\prime}_{ij}))])\]
These values effectively capture the mean node-to-node differences between \(T\) and \(T^{\prime}\). This node-to-node comparison is possible because all trees gathered in our study traversed the same set of paths in the recommendation tree. Maintaining this node position dependence in tree comparisons is important because it handles differences in characteristics that might arise from the position of a node in the recommendation tree. For example, comparing the top recommendation at depth=1 from \(T\) with the \(40^{th}\) recommendation at depth=10 from \(T^{\prime}\) could result in misattributing differences in tree characteristics that arise from changes in recommendation ranks to the impact of an audit configuration change.
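A sketch of this node-position-dependent comparison is given below; it assumes each tree is stored as a mapping from a (path, depth) key to the node's pre-computed `pop`/`div` values and its SpaCy document (see the earlier sketches).

```python
# Node-position-dependent comparison of two synchronized trees T and T'.
import numpy as np

def tree_deltas(T: dict, T_prime: dict) -> tuple[float, float, float]:
    """Return (delta_pop, delta_div, delta_sem) as defined above."""
    pop_diffs, div_diffs, sims = [], [], []
    for key in T:                          # key = (path index i, depth j)
        a, b = T[key], T_prime[key]
        pop_diffs.append(a["pop"] - b["pop"])
        div_diffs.append(a["div"] - b["div"])
        sims.append(a["doc"].similarity(b["doc"]))
    return float(np.mean(pop_diffs)), float(np.mean(div_diffs)), float(np.mean(sims))
```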
**Computing within- and across-group differences.** Given two auditing configurations \(\mathcal{C}\) and \(\mathcal{C}^{\prime}\) which generate the sets of trees \(\mathcal{T}\) and \(\mathcal{T}^{\prime}\), respectively, we compute: (1) the _within-group differences_ as the distribution of differences in characteristics observed between trees within \(\mathcal{T}\) and \(\mathcal{T}^{\prime}\); and (2) the _across-group differences_ as the distribution of differences observed between trees across \(\mathcal{T}\) and \(\mathcal{T}^{\prime}\). These are denoted by:
\[\Delta_{x}^{within}(\mathcal{T})=[\forall(T_{i},T_{j})\in( \mathcal{T}\times\mathcal{T}):\delta_{x}(T_{i},T_{j})]\] \[\Delta_{x}^{across}(\mathcal{T},\mathcal{T}^{\prime})=[\forall( T_{i},T_{j})\in(\mathcal{T}\times\mathcal{T}^{\prime}):\delta_{x}(T_{i},T_{j})]\] \[\forall x\in\{pop,div,sem\}\]
The within-group differences, computed over all trees generated with identical audit configurations, allow us to establish a _baseline_ of characteristic variations caused by factors outside the control of the auditor (e.g., probabilistic recommendation algorithm, A/B testing, etc.). The across-group differences showcase the differences caused by the change in audit configuration _and_ external factors.
**Quantifying the impact of audit parameter configurations.** Given distributions \(\Delta_{x}^{within}\) and \(\Delta_{x}^{across}\) associated with configurations (\(\mathcal{C}\), \(\mathcal{C}^{\prime}\)), we use bootstrapping with 1M samples [1, 13] to create 95% confidence intervals around the mean within- and across-group differences. We also use these bootstrapped samples to compute 95% confidence intervals around the effect size -- i.e., _the difference between the within- and across-group differences bootstrap samples_. Let [\(CI_{lower}\), \(CI_{upper}\)] be the \(N\)% confidence interval for the effect size.
We say that the effect is statistically significant at this confidence level if and only if \((CI_{lower}\leq CI_{upper}<0)\) or \((CI_{upper}\geq CI_{lower}>0)\) -- i.e., _iff N% of the bootstrapped samples have observed effect sizes of the same polarity_. In our work, we report the 95% and 99% confidence intervals for effect sizes. We also report the average effect size as the mean of all effect sizes observed in the bootstrap samples.
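The following sketch illustrates this bootstrap procedure; the number of resamples is reduced for brevity, and the sign convention for the effect size (across-group minus within-group mean difference) is an assumption about how the definition above is operationalized.

```python
# Bootstrap confidence interval around the effect size (illustrative sketch).
import numpy as np

def effect_size_ci(within: np.ndarray, across: np.ndarray,
                   n_boot: int = 100_000, alpha: float = 0.05, seed: int = 0):
    rng = np.random.default_rng(seed)
    effects = np.empty(n_boot)
    for b in range(n_boot):
        w = rng.choice(within, size=len(within), replace=True)
        a = rng.choice(across, size=len(across), replace=True)
        effects[b] = a.mean() - w.mean()   # assumed sign convention
    lower, upper = np.quantile(effects, [alpha / 2, 1 - alpha / 2])
    significant = (upper < 0) or (lower > 0)   # CI excludes zero
    return lower, upper, float(effects.mean()), significant
```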
## 3 Training sets and seeds
**Experiment setup.** Our goal is to measure the impact of training sets and seeds on the characteristics of recommendation trees generated by an audit. To accomplish this, we gathered 32 recommendation trees from four different audit configurations: eight trees each from an audit using \(T_{\text{main}}\) and \(s_{\text{main}}\), \(T_{\text{main}}\) and \(s_{\text{niche}}\), \(T_{\text{niche}}\) and \(s_{\text{main}}\), and \(T_{\text{niche}}\) and \(s_{\text{niche}}\). We split each of these into two sets of four and refer to them as (\(\mathcal{T}_{\text{main,main}}\), \(\mathcal{T}_{\text{main,main}}^{\prime}\)), (\(\mathcal{T}_{\text{main,niche}}\), \(\mathcal{T}_{\text{main,niche}}^{\prime}\)), (\(\mathcal{T}_{\text{niche,main}}\), \(\mathcal{T}_{\text{niche,main}}^{\prime}\)), and (\(\mathcal{T}_{\text{niche,niche}}\), \(\mathcal{T}_{\text{niche,niche}}^{\prime}\)), respectively. These trees were gathered in synchrony (_Cf._ SS2.2) in order to facilitate accurate within- and across-group comparisons (_Cf._ SS2.4). By splitting each of our sets of eight trees into two sets of four, we avoid reusing trees for testing multiple hypotheses.
_Measuring impact of a training set change._ To uncover the impact of the training set used in an audit on the characteristics of recommendation trees, we compute the means, 95% and 99% confidence intervals associated with the within-group differences, across-group differences, and effect sizes (_Cf._ SS2.4) obtained from two analyses: (1) comparing \(\mathcal{T}_{\text{main,main}}\) with \(\mathcal{T}_{\text{niche,main}}\) -- i.e., using the same mainstream seed while varying the training set; and (2) comparing \(\mathcal{T}_{\text{main,niche}}\) with \(\mathcal{T}_{\text{niche,niche}}\) -- i.e., using the same niche seed while varying the training set.
_Measuring impact of a seed change._ We repeat our methodology for the following analyses: (1) comparing \(\mathcal{T}_{\text{main,main}}^{\prime}\) with \(\mathcal{T}_{\text{main,niche}}^{\prime}\) -- i.e., varying the seed while using a mainstream training set focused on controversial topics; and (2) comparing \(\mathcal{T}_{\text{niche,main}}^{\prime}\) with \(\mathcal{T}_{\text{niche,niche}}^{\prime}\) -- i.e., varying the seed while maintaining a fringe and controversial training set.
**Results.** Our results are summarized in Table 2. In general, we find that altering the characteristics of the training set or the seed _always_ impacts the popularity of the videos observed in an audit. This, however, is not the case for the channel diversity and semantics. More specifically, our analysis yields the following insights.
_There appears to be strong evidence of a 'recency bias' in recommendations._ Paying attention to the bottom two rows of Table 2, we see that the effects of altering the seed from a niche video to a mainstream video are nearly always statistically significant and of high magnitude, with only one exception when channel diversity is recorded using \(T_{\text{main}}\) for training. The (significant) effects on the popularity and entropy of recommended videos are also higher than the effects observed on alterations of the training set (top two rows). The most notable effects of altering seeds are in the 'popularity' dimension, where the mean effect of switching a seed video from niche to mainstream results in video recommendations that, on average, have 1.51M and 2.87M more views when trained with \(T_{\text{main}}\) and \(T_{\text{niche}}\), respectively. We only find marginal (yet significant) changes in the semantics of recommended videos, however -- i.e., recommendations are between 1-5% less semantically similar after switching seeds from mainstream to niche. This suggests that, independently of the training set used, the choice of seed can drastically alter the characteristics of a recommendation tree and the audit inferences. Extrapolating this finding suggests that the most recent video will have an outsized impact on future recommendations.
_Channel diversity is not always dependent on the training set and seed._ Our analysis shows that channel diversity is largely unaffected by the choice of training set and seed. Only one exception occurs: when seeds are altered for a \(T_{\text{niche}}\) training set audit (_Cf._ row four in Table 2). Here we see that the effect of switching from \(s_{\text{main}}\) to \(s_{\text{niche}}\) reduces the channel diversity by an average of 0.28 (entropy in bits) at each node. While it appears that this finding lends credence to claims of the algorithm's rabbit-holing tendencies, it is important to note that this decrease only appears when the audit has only interacted with fringe content (in the training set and the seed). Given that the effect disappears when any other interaction occurs, this finding could be explained by the small number of creators addressing the topic of the niche content.
**Takeaways.** Taken together, these results put a different perspective on YouTube's recommendation system and the
\begin{table}
\begin{tabular}{l l l l l l l l l l l l} \multicolumn{2}{c}{**Parameters**} & \multicolumn{3}{c}{**Video Popularity (Views in millions)**} & \multicolumn{3}{c}{**Channel Diversity (Entropy in bits)**} & \multicolumn{3}{c}{**Content Semantics (Similarity score)**} \\ Fixed & Varied & \(\mu_{\text{views}}\) & Effect (95\% CI) & Effect (99\% CI) & \(\mu_{\text{niche}}\) & \(\mu_{\text{ungroup}}\) & Effect(95\% CI) & Effect (99\% CI) & \(\mu_{\text{niche}}\) & Effect(95\% CI) & Effect(99\% CI) & \(\mu_{\text{direct}}\) \\ \hline \(s_{\text{min}}\) & \(7.15\) & **[0.34, 1.33]** & **[0.19, 1.49]** & 0.84 & 3.63 & [-0.16, 0.17] & [-0.21, 0.22] & 0.00 & [-0.02, -0.00] & [-0.02, 0.00] & -0.01 \\ \(s_{\text{niche}}\) & \(T_{\text{niche}}\) & \(4.94\) & **[0.34, 1.33]** & **[0.19, 1.49]** & 0.84 & 3.49 & [-0.16, 0.17] & [-0.21, 0.22] & 0.00 & [-0.02, -0.00] & -0.01 \\ \(s_{\text{niche}}\) & \(T_{\text{niche}}\) & \(4.32\) & **[0.16, 2.19]** & **[1.35, 2.30]** & 1.82 & 3.38 & [-0.27, 0.19] & [-0.34, 0.26] & -0.04 & **[-0.04, -0.02]** & **[-0.05, -0.01]** & -0.03 \\ \(T_{\text{min}}\) & \(s_{\text{niche}}\) & \(1.80\) & **[0.16, 2.19]** & **[1.35, 2.30]** & 1.82 & 3.38 & [-0.27, 0.19] & [-0.34, 0.26] & -0.04 & **[-0.04, -0.02]** & **[-0.05, -0.01]** & -0.03 \\ \(T_{\text{min}}\) & \(s_{\text{niche}}\) & \(1.78\) & **[0.73, 2.31]** & **[0.50, 2.56]** & 1.51 & 3.17 & [-0.14, 0.14] & [-0.18, 0.18] & 0.00 & **[-0.04, -0.02]** & **[-0.04, -0.02]** & -0.03 \\ \(T_{\text{niche}}\) & \(s_{\text{niche}}\) & \(4.78\) & **[0.73, 2.31]** & **[0.50, 2.56]** & 1.51 & 2.97 & [-0.14, 0.14] & [-0.18, 0.18] & 0.00 & **[-0.04, -0.02]** & **[-0.04, -0.02]** & -0.03 \\ \(T_{\text{niche}}\) & \(s_{\text{niche}}\) & \(4.93\) & **[0.16, 3.05]** & **[2.62, 3.11]** & 2.87 & 4.02 & 4.02 & **[0.12, 0.45]** & **[0.07, 0.51]** & 0.28 & **[-0.05, -0.02]** & **[-0.05, -0.01]** & -0.03 \\ \(s_{\text{niche}}\) & \(s_{\text{niche}}\) & \(1.72\) & **[0.68, 3.05]** & **[2.62, 3.11]** & 2.87 & 3.44 & **[0.12, 0.45]** & **[0.07, 0.51]** & 0.28 & **[-0.05, -0.02]** & **[-0.05, -0.01]** & -0.03 \\ \hline \end{tabular}
\end{table}
Table 2: Impact of changes caused by varying training sets (top 2 rows) and seeds (bottom 2 rows). Columns represent the mean node values observed in each group for a particular characteristic, the 95% and 99% confidence intervals for the measured effect sizes (i.e., difference between within- and across-group differences; _Cf._ §2.4), and the mean effect size. Values in bold indicate a statistically significant effect size at the corresponding confidence level.
audits that study it. Not only do researchers need to pay particular attention to training and seeding, but also must understand that their measurements of recommended videos are heavily dependent on the _most recent_ nodes already traversed by their sock-puppets. Specifically, it appears that the recency bias can lead to a single video overwhelming the effects of a large number of prior videos -- thus impacting the final inferences from the audit. Generally, we recommend that audit inferences (e.g., presence of a mainstream effect) are conditioned: (1) on the specific characteristics of the training set and seed; and (2) on the specific strategies used to select nodes from a recommendation tree.
## 4 Dollar-Cost Saving Configurations
**Experiment setup.** In this section, we focus on understanding the impact of commonly used sock-puppet account management strategies on the recommendation trees generated by them.
_Measuring the effectiveness of cookie-based sock-puppets._ To identify the differences between cookie-based sock-puppets and those using real accounts, we gathered four recommendation trees for \(\mathcal{T}_{\text{full}}\) and \(\mathcal{T}_{\text{cookies}}\) each. All the parametric configurations for these two sets were kept identical, except that \(\mathcal{T}_{\text{full}}\) used a logged-in profile while \(\mathcal{T}_{\text{cookies}}\) was not logged in but maintained YouTube cookies. Both \(\mathcal{T}_{\text{full}}\) and \(\mathcal{T}_{\text{cookies}}\) used the (\(T_{\text{main}}\), \(s_{\text{main}}\)) training set and seed.
_Measuring the effectiveness of clearing account history._ To verify whether clearing account history does indeed purge the watch-history effect (i.e., whether, even after deleting watch history, a user keeps getting similar recommendations), we collected four recommendation trees for \(\mathcal{T}_{\text{full}}^{\prime}\) and \(\mathcal{T}_{\text{clear}}\) each. Both \(\mathcal{T}_{\text{full}}^{\prime}\) and \(\mathcal{T}_{\text{clear}}\) used logged-in profiles and the (\(T_{\text{main}}\), \(s_{\text{main}}\)) training set and seed. However, before collecting recommendations based on seed \(s_{\text{main}}\), the watch history of \(\mathcal{T}_{\text{clear}}\) was deleted.
To gain insights into the measurable effects of different account management strategies, we compute the means, 95% and 99% confidence intervals associated with within-group differences, across-group differences, and effect sizes.
**Results.** The results are summarized in Table 3. Our analysis yielded two conclusive results.
_Audits do not need fresh accounts for each sock-puppet._ First, focusing on the impact of changing between a sock-puppet with a logged-in YouTube account (\(\mathcal{T}_{\text{full}}\)) and one which only maintains its browser cookies (\(\mathcal{T}_{\text{cookies}}\)), we found that there were no significant differences in any measured characteristics of their recommendations. This presents significant cost-saving opportunities that arise from being able to associate a sock-puppet with a browser instance rather than having to navigate the barriers associated with automating account creation and phone number verification.
_The potential for account reuse by clearing history._ There is a significant difference in popularity and content semantics for \(\mathcal{T}_{\text{full}}\) sock-puppets when compared with identically configured and synchronized \(\mathcal{T}_{\text{clear}}\) sock-puppets, suggesting that, by clearing history, \(\mathcal{T}_{\text{clear}}\) has purged the popularity-context and topic-context (picked up during the training phase) which \(\mathcal{T}_{\text{full}}\) still maintains. Simply put, by clearing account history one might be able to reuse an account for a large-scale study -- particularly where popularity and content semantics are being measured (e.g., in audits quantifying mainstreaming and rabbit-holing effects). However, we do not claim that clearing watch history is equivalent to getting a fresh account (a fresh account would mean Google does not have any data stored for the profile at the back end, which we did not check for).
**Takeaways.** These findings present an opportunity for auditors to save the substantial dollar costs involved in account creation and curation. We have shown that a browser that maintains YouTube cookies is as good as a YouTube account. Furthermore, account reuse (after clearing history) is a viable option for auditors studying the platform's popularity and content-semantics characteristics.
## 5 Computational Compromises
**Experiment setup.** In this section, we analyze the impact of three compromises that may be made to save computational resources: (1) watching only a pre-determined fraction of each video in the recommendation tree; (2) using the driver.get(URL) method of Selenium rather than automating user clicks on recommended videos; and (3) performing low-depth and narrow-breadth audits.
_Measuring impact of video watch times._ To answer the question of whether audits need to 'watch' videos to completion, we gathered and analyzed four recommendation trees in which the audit 'watched' all videos to completion (\(\mathcal{T}_{\text{w=100}}\)), eight trees in which the audit only 'watched' videos to 50% of their total duration (\(\mathcal{T}_{\text{w=50}}\), \(\mathcal{T}_{\text{w=50}}^{\prime}\)), and four trees in which the audit only 'watched' videos to 25% of their total duration (\(\mathcal{T}_{\text{w=25}}\)). All these audits used the (\(T_{\text{main}}\), \(s_{\text{main}}\)) training set and seed.
_Measuring impact of interaction mechanics._ We gathered four recommendation trees where the audit actually
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Parameters**} & \multicolumn{3}{c}{**Video Popularity (Views in millions)**} & \multicolumn{3}{c}{**Channel Diversity (Entropy in bits)**} & \multicolumn{3}{c}{**Content Semantics (Similarity score)**} \\ & \(\mu_{\text{visens}}\) & Effect (95\% CI) & Effect (99\% CI) & \(\mu_{\text{effiner}}\) & \(\mu_{\text{effiner}}\) & Effect (95\% CI) & Effect (99\% CI) & \(\mu_{\text{effiner}}\) & Effect (95\% CI) & Effect (99\% CI) & Effect (99\% CI) & \(\mu_{\text{effiner}}\) \\ \hline \(A_{\text{call}}\) & 9.20 & [-0.90, 0.99] & [-1.20, 1.29] & 0.05 & 3.36 & 3.36 & -0.11, 0.31] & [-0.18, 0.36] & -0.01 & [-0.02, -0.00] & [-0.03, 0.00] & -0.01 \\ \(A_{\text{cookies}}\) & 7.72 & & & & 3.57 & & & & & & \\ \hline \(A_{\text{clear}}\) & 12.34 & **[1.82, 3.46]** & **[1.54, 3.70]** & 2.65 & 3.55 & 2.86 & [-0.03, 0.53] & [-0.12, 0.62] & 0.26 & **[-0.05, -0.01]** & **[-0.05, -0.01]** & **-0.03** \\ \(A_{\text{full}}\) & 8.47 & & & & & & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Impact of changes caused by varying login status (row 1) and purging watch history (row 2). Columns represent the mean node values observed in each group for a particular characteristic, the 95% and 99% confidence intervals for the measured effect sizes (i.e., difference between within- and across-group differences; _Cf._ §2.4), and the mean effect size. Values in bold indicate a statistically significant effect size at the corresponding confidence level.
located and clicked the recommended video links (\(\mathcal{T}_{\text{click}}\)) and four trees where the audit simply identified the URL of the recommended videos and fetched the video with a driver.get(URL) command (\(\mathcal{T}_{\text{get}}\)). Both sets of audits used the (\(T_{\text{main}}\), \(s_{\text{main}}\)) training set and seed.
_Measuring the impact of crawl-breadth and -depth._ We analyzed the characteristics of the leftmost and rightmost paths of all 96 recommendation trees gathered in this study (\(\mathcal{T}_{\text{left}}\) and \(\mathcal{T}_{\text{right}}\)). These correspond to the paths obtained from only clicking the top and bottom recommendation at each video, respectively. We also analyzed the characteristics of the recommendations observed at depth 1 and 10 for all 96 trees obtained in this study (\(\mathcal{T}_{\text{top}}\) and \(\mathcal{T}_{\text{bottom}}\)).
Like before, in each of these analyses, we compute the means, 95% and 99% confidence intervals associated with within-group differences, across-group differences, and effect sizes.
**Results.** Our results are shown in Table 4. Notably, besides configurations with varying crawl depth, none of our changes yielded statistically significant differences in their measured recommendation characteristics. This has several key implications for auditors.
_Videos do not need to be watched to completion._ In all our audit configurations that varied video watch-time fractions, there was no statistical relationship between changes in the characteristics of recommended videos and the audit's configured watch fraction. This is a surprising finding which suggests that even watching 10% of a video impacts the subsequent recommendations to no different extent than watching 100%. Upon further investigation, we discovered evidence showing that YouTube only requires a watch time of 30 seconds for a 'view' to be registered [12, 20]. Based on these previous findings, we hypothesize that this same 30-second watch threshold is also used to determine whether a video should impact subsequent recommendations. Since the videos in our recommendation trees were much longer than 300 seconds (with many being between 20-60 minutes long), even watching 10% of the video would register as a 'view'. This finding that videos do not need to be watched to any specific fraction of completion, but rather to a fixed watch-time threshold, presents a promising (accuracy-independent) computational cost-saving avenue for future auditors.
_It is unnecessary to automate clicks on recommended videos._ Our analysis showed no statistically significant differences between any recommendation tree characteristics observed in \(\mathcal{T}_{\text{get}}\) and \(\mathcal{T}_{\text{click}}\). This suggests that using browser automation tools (e.g., Selenium webdriver's action chains [2]) to explicitly click on video links is unnecessary. Without sacrificing the accuracy of audit inferences, this allows auditors to replace an approach for navigating to subsequent recommendations that is computationally expensive, unreliable, and demanding of programmer effort with the simple and reliable approach of programming browsers to fetch specific URLs obtained from the DOM.
_Crawl depth impacts recommendation characteristics._ Our analysis of the impact of crawl depth yields statistically significant results for all recommendation tree characteristics. Specifically, we notice that nodes at the top of the recommendation tree generally appear to be significantly more popular and more diverse than, and less semantically similar to, recommendations at the bottom of the tree. This finding once again showcases the possibility of a strong recency bias that impacts recommendations. Interestingly, we do not see statistically significant differences between the highest- and lowest-ranked recommended videos -- suggesting that auditors need to pay specific attention to the depth of their crawls.
**Takeaways.** Our analysis yields two significant computational cost-savings for researchers. Specifically, finding that videos do not need to be watched to completion and that clicking on videos causes no different outcomes than simply 'getting' the URL associated with the video reduces the computational and engineering overhead associated with an audit. In addition, our work highlights that different depths of a recommendation tree could result in different recommendation characteristics. To account for these effects, it is important that any inferences from an audit are conditioned on the depth of the trees that were used.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Parameters**} & \multicolumn{3}{c}{**Video Popularity (Views in millions)**} & \multicolumn{3}{c}{**Channel Diversity (Entropy in bits)**} & \multicolumn{3}{c}{**Content Semantics (Similarity score)**} \\ & \(\mu_{\text{visens}}\) & Effect (95\% CI) & Effect (99\% CI) & \(\mu_{\text{differs}}\) & \(\mu_{\text{differs}}\) & Effect (95\% CI) & Effect (99\% CI) & \(\mu_{\text{differs}}\) & Effect (95\% CI) & Effect (99\% CI) & \(\mu_{\text{differs}}\) \\ \hline \(W_{\text{top}}\) & 7.69 & [-0.86, 0.28] & [-1.04, 0.46] & -0.29 & 3.74 & [-0.23, 0.22] & [-0.30, 0.29] & 0.00 & [-0.03, -0.00] & [-0.03, 0.00] & -0.01 \\ \(W_{\text{95\text{top}}}\) & 7.84 & [-3.52, 1.53] & [-4.28, 2.34] & -1.03 & 3.21 & [-0.41, 0.15] & [-0.50, 0.25] & -0.13 & [-0.03, 0.00] & [-0.03, 0.01] & -0.01 \\ \hline \(W_{\text{25\text{top}}}\) & 9.55 & [-3.52, 1.53] & [-4.28, 2.34] & -1.03 & 3.60 & [-0.41, 0.15] & [-0.50, 0.25] & -0.13 & [-0.03, 0.00] & [-0.03, 0.01] & -0.01 \\ \hline \(W_{\text{25\text{top}}}\) & 14.11 & [-2.10, 0.43] & [-2.51, 0.83] & -0.85 & 3.61 & [-0.56, 0.12] & [-0.67, 0.24] & -0.22 & [-0.03, 0.00] & [-0.04, 0.01] & -0.01 \\ \(W_{\text{19\text{top}}}\) & 13.59 & [-0.59, 0.64] & [-0.79, 0.82] & 0.02 & 3.79 & [-0.20, 0.11] & [-0.24, 0.16] & -0.04 & [-0.02, 0.00] & [-0.02, 0.00] & -0.01 \\ \(I_{\text{left}}\) & 6.93 & [-0.65, 0.98] & [-0.91, 1.25] & 0.16 & 3.72 & [-0.03, 0.20] & [-0.07, 0.24] & 0.08 & **[-0.03, -0.02]** & **[-0.03, -0.02]** & -0.02 \\ \(P_{\text{right}}\) & 7.47 & [-0.65, 0.98] & [-0.91, 1.25] & 0.16 & 3.33 & [-0.03, 0.20] & [-0.07, 0.24] & 0.08 & **[-0.03, -0.02]** & **[-0.03, -0.02]** & -0.02 \\ \hline \(D_{\text{top}}\) & 13.73 & **[5.04, 6.67]** & **[4.78, 6.92]** & 5.86 & 4.59 & 3.12 & **[1.05, 1.24]** & **[1.02, 1.26]** & 1.14 & **[-0.02, -0.01]** & **[-0.03, -0.01]** & -0.02 \\ \(D_{\text{bottom}}\) & 5.97 & & & & & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 4: Impact of changes caused by varying video watch times (rows 1-3), interaction mechanisms (row 4), recommendation selection strategy (row 4), and crawl depth (last row). Columns represent the mean node values observed in each group for a particular characteristic, the 95% and 99% confidence intervals for the measured effect sizes, and the mean effect size. Values in bold indicate a statistically significant effect size at the corresponding confidence level.
## 6 Related work
**Audits of YouTube's recommendation system.** This paper was inspired by a recent influx of YouTube audit research which often showed contrary results. For instance, Lutz et al. (2021) provided evidence of the absence of a rabbit-holing effect while demonstrating a mainstreaming effect for a variety of political ideologies. Other work (Ledwich and Zaitsev 2019; Munger and Phillips 2022; Hosseinnardi et al. 2021; Makhotrykh and Urman 2020) has also challenged the notion of rabbit-holing on YouTube and shown evidence of recommendations swaying users towards mainstream and neutral content. Contrary to these findings, Haroon et al. (2022) provided evidence that YouTube pushes users towards increasingly biased and radical political content on 'up-next' and homepage recommendations. These findings are complementary to another body of work (Bryant 2020; Ribeiro et al. 2020; Tomlein et al. 2021; Kirdemir and Agarwal 2021; Papadamou et al. 2019, 2021) which has argued that YouTube recommendations have promoted polarization in the political, scientific, and health-related domains. Unlike these previous efforts, our goal is not to support or undermine specific theories about YouTube's tendency to impact polarization. Rather, we aim to uncover the possible reasons for these differences and provide guidelines to avoid such confusion and contradictions within the auditing community. More recently, Hussein et al. (2020) showed how the demographics of a user profile altered recommendations from YouTube. In a study focusing on YouTube's demonetization algorithm, Dunna et al. (2022) found evidence that the recommendation and demonetization algorithms were linked. There are also numerous publications from Google describing the recommendation algorithm used for YouTube. These have suggested the use of user profiles, watch histories, video watch times, and click-through rates as features in their content-ranking algorithm (Zhao et al. 2019; Tang et al. 2019; Fu et al. 2016; Covington, Adams, and Sargin 2016; Zhao et al. 2015; Brodersen, Scellato, and Wattenhofer 2012; Davidson et al. 2010). These descriptions informed our choice of audit parameters.
**Improving the reliability of crawler-based research.** There have been similar efforts to ours in the Internet measurement community. These have largely focused on facilitating more reliable and reproducible research in the realm of Web measurement and privacy. Yadav and Goyal (2015) studied a set of open-source web crawlers and showcased how each was suitable for different use cases. More recently, Ahmad et al. (2020) showed the impact that different crawlers had on measurement and security research inferences. Along similar lines, Zeber et al. (2020) and Jueckstock et al. (2021) also showed how the choice of crawler and configuration could harm the repeatability of an experiment. Our work extends these efforts by identifying platform-specific audit challenges.
## 7 Concluding Remarks
**Limitations.** Fundamentally, our work is a best-effort study to understand the impact of different audit methodological decisions on recommendations gathered from YouTube -- one of the most-streamed video-hosting platforms (Duo 2023). While our audit has a YouTube-limited scope, it helps pave the way for other auditing studies across different video platforms. Thus, our study is not without limitations. First, we ourselves are computationally and economically limited and had to make decisions about which crawl parameters to explore. This impacted our ability to (1) perform exploration of more paths in each recommendation tree; (2) conduct more than eight synchronized tree explorations; and (3) explore recommendation trees to a greater depth. We take care to mitigate any incorrect inferences that might result from these limitations by only performing like-for-like node- and position-dependent comparisons and ensuring that any differences measured in our study account for the general probabilistic nature of the recommendation algorithm by measuring across-group differences and comparing them with within-group differences. Second, there are latent effects that cannot be controlled from our external vantage point, which is effectively measuring a black-box system. We do our best to identify several of these (e.g., A/B testing, data center location, measurement location, etc.) and attempt to counter each of them. However, it is possible that unaccounted effects might still impact our results. Finally, we acknowledge that our choice of training set and seed video might ultimately not be sufficient to observe all effects of interactions on the recommendation system. Regardless, we provide useful data points for consideration to a community grappling with a large number of contradictory results.
**Conclusions.** This work showcased the effect of audit configurations on the characteristics of the recommendation trees generated by them. Specifically, we showed that although training sets do have a statistical impact on recommendations, their effects can be significantly dampened by a 'recency bias' in YouTube's recommendations (SS3). Therefore, specific care needs to be taken when selecting videos to view in an audit. More importantly, these decisions need to be disclosed and any audit inferences _must_ be conditioned on them. Our analysis of different types of auditing profiles (SS4) showed that the expensive task of obtaining clean YouTube accounts does not yield significantly different outcomes than simply maintaining the YouTube cookie for the entire duration of an audit. Further, our findings also suggest that account reuse is possible by using the 'clear history' feature provided by YouTube. Finally, our analyses of various computational compromises in audits (SS5) show that audits do not need to watch a specific fraction of a video for it to impact subsequent recommendations (rather, a preset threshold appears sufficient), that challenging automation tasks such as programming cursor clicks on videos do not need to be performed by auditors, and that the depth of a crawl can impact characteristics of the recommendation tree (and should therefore be used to condition any reported inferences from audits).
2310.14001 | Toward Stronger Textual Attack Detectors | The landscape of available textual adversarial attacks keeps growing, posing
severe threats and raising concerns regarding the deep NLP system's integrity.
However, the crucial problem of defending against malicious attacks has only
drawn the attention of the NLP community. The latter is nonetheless
instrumental in developing robust and trustworthy systems. This paper makes two
important contributions in this line of search: (i) we introduce LAROUSSE, a
new framework to detect textual adversarial attacks and (ii) we introduce
STAKEOUT, a new benchmark composed of nine popular attack methods, three
datasets, and two pre-trained models. LAROUSSE is ready-to-use in production as
it is unsupervised, hyperparameter-free, and non-differentiable, protecting it
against gradient-based methods. Our new benchmark STAKEOUT allows for a robust
evaluation framework: we conduct extensive numerical experiments which
demonstrate that LAROUSSE outperforms previous methods, and which allows to
identify interesting factors of detection rate variations. | Pierre Colombo, Marine Picot, Nathan Noiry, Guillaume Staerman, Pablo Piantanida | 2023-10-21T13:01:29Z | http://arxiv.org/abs/2310.14001v1 | # Toward Stronger Textual Attack Detectors
###### Abstract
The landscape of available textual adversarial attacks keeps growing, posing severe threats and raising concerns regarding the deep NLP system's integrity. However, the crucial problem of defending against malicious attacks has only drawn the attention in the NLP community. The latter is nonetheless instrumental in developing robust and trustworthy systems. This paper makes two important contributions in this line of search: _(i)_ we introduce LAROUSE, a new framework to detect textual adversarial attacks and _(ii)_ we introduce STAKEOUT, a new benchmark composed of nine popular attack methods, three datasets, and two pre-trained models. LAROUSE is ready-to-use in production as it is unsupervised, hyperparameter-free, and non-differentiable, protecting it against gradient-based methods. Our new benchmark STAKEOUT allows for a robust evaluation framework: we conduct extensive numerical experiments which demonstrate that LAROUSE outperforms previous methods, and which allows to identify interesting factors of detection rate variations.
## 1 Introduction
Despite the high performances of deep learning techniques for Natural Language Processing (NLP) applications, the trained models remain vulnerable to adversarial attacks [1, 13] which limits their adoption for critical applications. In the context of NLP, for a given model and a given textual input, an adversarial example is a carefully constructed modification of the initial text such that it is semantically similar to the original text while affecting the model's prediction. The ability to design adversarial examples [1, 12, 13] raises serious concerns regarding the security of NLP systems. It is, therefore, crucial to develop proper strategies that are available to deal with these threats [21].
Perhaps surprisingly, if the research community has invested considerable efforts to design efficient attacks, there are only a few works that address the issue of preventing them. One can distinguish two lines of research: _detection_ methods that aim at discriminating between regular input and attacks; and _defense_ methods that try to correctly classify adversarial inputs. The latter is based on robust training methods, which customize the learning process, see for instance [11, 12, 13]. These are limited to certain types of adversarial lumes (_e.g._, misspelling), making them vulnerable to other types of attacks that already exist or may be designed in the future. In contrast, detection methods are more relevant to real-life scenarios where practitioners usually prefer to adopt a _discard-rather-than-correct_ strategy [13]. This has been highlighted in [13] which is, to the best of our knowledge, the single word that introduces a detection method that does not require training. On the contrary, the authors propose to measure the _regularity_ of a given input by computing the Mahalanobis distance [10] of its embedding in the last layer of a transformer with respect to the training distribution. Notice that the Mahalanobis distance has also been successfully used in a very similar framework of Out-Of-Distribution (OOD) detection methods (see [14, 15] and references therein).
In this paper, we build upon [13] and introduce a new attack detection framework, called LAROUSSE, which improves the current state-of-the-art. Our approach is based on the computation of the _halfspace-mass depth_ [2, 23] of the last-layer embedding of an input with respect to the training distribution. The halfspace-mass depth is a particular instance of _data depth_ [15], a family of functions that measure the proximity of a point to the core of a probability distribution. As a matter of fact, the Mahalanobis distance is also a data depth -- probably one of the most popular. Interestingly, in addition to improving the attack detection rate, the halfspace-mass depth remedies several limitations of the Mahalanobis depth: it does not make Gaussian assumptions on the data structure and is additionally non-differentiable, providing security guarantees regarding malicious adversaries that could rely on gradient-based methods.
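For intuition, a hedged Monte Carlo sketch of the halfspace-mass depth is given below: random halfspaces are drawn by sampling a direction and a splitting threshold, and the training mass of the halfspace containing the test point is averaged. This only illustrates the idea and is not the exact estimator used by LAROUSSE.

```python
# Monte Carlo approximation of the halfspace-mass depth (illustrative sketch).
import numpy as np

def halfspace_mass_depth(x: np.ndarray, train: np.ndarray,
                         n_halfspaces: int = 10_000, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    d = train.shape[1]
    depth = 0.0
    for _ in range(n_halfspaces):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)                 # random direction on the sphere
        proj_train = train @ u
        proj_x = x @ u
        threshold = rng.uniform(proj_train.min(), proj_train.max())
        if proj_x >= threshold:                # mass of the halfspace containing x
            depth += np.mean(proj_train >= threshold)
        else:
            depth += np.mean(proj_train < threshold)
    return depth / n_halfspaces
```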
The second contribution of our work consists in releasing STAKEOUT, a new NLP attack benchmark that enriches the one introduced in [20]. More precisely, we explore the same datasets and extend their four attacks by adding five new adversarial techniques. This ensures a wider variety of testing methods, leading to a robust evaluation framework that we believe will stimulate future research efforts. We conduct extensive numerical experiments on STAKEOUT and demonstrate the soundness of our LAROUSSE detector while studying the main sources of variability in its performance. Finally, we empirically observe the presence of information relevant for detecting attacks in layers _other_ than the last one. This could pave the way for future research by considering the possibility of building detectors that are not limited to the last embedding layers but rather exploit the full network information.
**Our contributions in a nutshell.** Our contributions are threefold:
1. We introduce LAROUSSE, a **new textual attack detector** based on the computation of a carefully chosen similarity function, the _halfspace-mass depth_, between a given input embedding and the training distribution. Contrary to the Mahalanobis distance, it does not rely on underlying Gaussian assumptions about the data and is non-differentiable, making it robust to gradient-based attacks.
2. We release STAKEOUT, a **new textual attacks benchmark**, which enriches previous ones by including additional attacks. It contains three datasets and nine attacks, covering a wide range of adversarial techniques, including word/character deletion, swapping, and substitution. This allows for a robust and reliable evaluation framework which will be released in DATASETS [14] to fuel future research efforts.
3. We conduct **extensive numerical experiments** to assess the soundness of our LAROUSSE detector, involving over 20k comparisons, following the method presented in STAKEOUT. Overall, our results prove that LAROUSSE improves the state-of-the-art while being less subject to variability. The code will be released on [https://github.com/PierreColombo/AdversarialAttacksNLP](https://github.com/PierreColombo/AdversarialAttacksNLP).
The rest of the paper is organized as follows. In Sec. 2, we briefly review the setting of textual attacks, provide main references on the subject, and formally introduce the problem of attack detection. In Sec. 3, we present our LAROSSE detector and provide some perspectives on data depth and connections to the Mahalanobis distance. In Sec. 4, we introduce our new benchmark STAKEOUT and give details on the evaluation framework of attack detection. Finally, we present our experimental results in Sec. 5.
## 2 Textual Attacks: Generation and Detection
Let us first introduce some notations. We will denote by \(\mathcal{D}=\{(\mathbf{x}_{i},y_{i})\}_{1\leq i\leq n}\) a textual dataset made of \(n\) pairs of textual input \(\mathbf{x}_{i}\in\mathcal{X}\) and associated attribute value \(y_{i}\in\mathcal{Y}\). We focus on classification tasks, meaning that \(\mathcal{Y}\) is of finite size: \(|\mathcal{Y}|<+\infty\). In this work, the inputs are first embedded through a multi-layer encoder with \(L\) layers and learnable parameters \(\psi\in\Psi\). We denote by \(f^{\ell}_{\psi}:\mathcal{X}\rightarrow\mathbb{R}^{d}\) the function that maps the input text to the \(\ell\)-th layer of the encoder. Note that, as we will work on transformer models, the latent space dimension--the dimension of the output of a layer--of all layers is the same and will be denoted by \(d\). The dimension of the logits, denoted as the \((L+1)-\)th layer of the encoder, is \(d^{\prime}\). The final classifier built on the pre-trained encoder produces a soft decision \(C_{\psi}\) over the classes, where \(\psi\) is a learned parameter. We will denote by \(C_{\psi}(c\,|\,\mathbf{x})\) the predicted probability that a given input \(\mathbf{x}\) belongs to class \(c\). Given an input \(\mathbf{x}\), the predicted label \(\hat{y}\) is then obtained as follows:
\[\hat{y}\triangleq\underset{c\in\mathcal{Y}}{\text{arg max}}\ C_{\psi}\left(c |\mathbf{x}\right)\ \text{with}\ C_{\psi}=\text{softmax}(f^{L+1}_{\psi}(\mathbf{x})).\]
### Review of textual attacks
The sensitivity of neural networks with respect to adversarial examples has been uncovered by [23] and popularized by [1], who introduced fast adversarial generation methods, in the context of computer vision. In computer vision, the meaning of an adversarial attack is clear: a given regular input is perturbed by a small noise which does not affect human perception but nonetheless changes the network prediction. However, due to the discrete nature of tokens in NLP, small textual perturbations are usually perceptible (_e.g._, a word substitution can change the meaning of a sentence). As a result, defining textual attacks is not straightforward and the methods used in the context of images in general do not directly apply to NLP tasks.
The goal of a textual attack is to modify an input while keeping its semantic meaning and luring a deep learning model. At a high level, one can formally define the problem of textual attack generation as follows. Given an input \(\mathbf{x}\), find a perturbation \(\mathbf{x}_{adv}\) that satisfies the following optimization problem:
\[\begin{array}{ll}\max&\mathrm{SIM}(\mathbf{x},\mathbf{x}_{adv})\\ \mathrm{s.t.}&\underset{c\in\mathcal{Y}}{\text{arg max }}C_{\psi}\left(c| \mathbf{x}_{adv}\right)\neq\underset{c\in\mathcal{Y}}{\text{arg max }}C_{\psi}\left(c|\mathbf{x}\right),\end{array} \tag{1}\]
where \(\mathrm{SIM}:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}_{+}\) denotes a function that measures the semantic proximity between two textual inputs. Finding a good similarity function is an active research area, and previous works [11] rely on embedding similarities such as Word2Vec [13] or USE [14], or on string-based distances [10] built on the Levenshtein distance [17], among others.
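To make the optimization in Eq. (1) concrete, the following minimal sketch performs greedy word substitution: it scans the input, swaps words for candidates from a small synonym table, enforces a crude similarity floor (rather than explicitly maximizing SIM), and stops as soon as the predicted label flips. The classifier, synonym table, and similarity proxy are illustrative stand-ins only, not the attacks used later in STAKEOUT.

```python
# Minimal sketch of greedy word-substitution attack generation (Eq. 1).
# `classify`, SYNONYMS and the similarity proxy are illustrative stand-ins.
SYNONYMS = {"good": ["fine", "decent"], "bad": ["poor", "awful"], "movie": ["film"]}

def classify(tokens):
    # Stand-in classifier: predicts 1 (positive) if "good" appears, else 0.
    return 1 if "good" in tokens else 0

def similarity(orig, adv):
    # Crude proxy for SIM: fraction of tokens left unchanged.
    return sum(o == a for o, a in zip(orig, adv)) / len(orig)

def greedy_attack(tokens, min_sim=0.5):
    y = classify(tokens)
    adv = list(tokens)
    for i, tok in enumerate(tokens):
        for cand in SYNONYMS.get(tok, []):
            trial = adv[:i] + [cand] + adv[i + 1:]
            if similarity(tokens, trial) < min_sim:
                continue
            if classify(trial) != y:          # label flipped: attack succeeded
                return trial
            adv = trial                       # keep the substitution and continue
    return None                               # no successful perturbation found

print(greedy_attack(["a", "good", "movie"]))
```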
The landscape of available adversarial textual attacks keeps growing, with numerous new attacks every year [11, 15, 16, 19, 18, 12, 13]. There exist different types of attacks according to the perturbation level, that is, the level of granularity at which the corruption is performed. For instance, character-level perturbations [15, 17] are usually based on basic operations such as substitution, deletion, swapping, or insertion. There also exist word-level corruption techniques [1, 16], which usually perform word substitution using synonyms or semantically equivalent words [18, 19]. Finally, we can also find sentence-level attacks [10] relying on text generation techniques. Standard toolkits such as OpenAttack [18] or TextAttack [15] gather them in a unified framework.
### Review of textual attack detection methods
The goal of an adversarial attack detector is to build a binary decision rule \(d:\mathcal{X}\rightarrow\{0,1\}\) that assigns \(1\) to _adversarial samples_ created by the malicious attacker and \(0\) to _clean samples_. Typically, this decision rule consists of a function \(s:\mathcal{X}\rightarrow\mathbb{R}\) that measures the similarity between an input sample and the training distribution, and a threshold \(\gamma\in\mathbb{R}\):
\[d(\mathbf{x})=\mathbb{I}\{s(\mathbf{x})\geq\gamma\}=\begin{cases}1&\text{if }s( \mathbf{x})\geq\gamma,\\ 0&\text{if }s(\mathbf{x})<\gamma.\end{cases} \tag{2}\]
As already mentioned in the previous section, although some works rely on robust training by adding regularization terms that use adversarial generation [13, 14, 15], at the risk of not being able to cover attacks developed in the future, adversarial detection techniques have received little attention from the NLP community [16]. Detection methods consist in adding an adversarial attack detector on top of a given trained model. The majority of developed techniques require adversarial examples either for validation or for training purposes. For instance, this is the case of [16], which computes sentence likelihood based on word frequencies, and of [11, 16], which focus on specific types of attacks. The only work that does not require access to adversarial examples is [15], which computes a similarity score between a given input embedding and the training distribution. This similarity function is the Mahalanobis distance, which has been widely used in the related literature on OOD detection methods [17, 18, 19].
## 3 LAROUSSE: A Novel Adversarial Attack Detector
We follow the notations introduced in Sec. 2. In particular, recall that \(f_{\psi}^{L}:\mathcal{X}\rightarrow\mathbb{R}^{d}\) is the mapping to the last layer embedding of the considered network.
### LAROUSSE in a nutshell
Our framework for adversarial attack detection relies on three consecutive steps:
1. **Feature Extraction.** As in [20], we rely on the last layer embedding \(f_{\psi}^{L}(\mathbf{x})\) of a given textual input \(\mathbf{x}\). We will use the following notation: \(\mathbf{z}\triangleq f_{\psi}^{L}\left(\mathbf{x}\right)\in\mathbb{R}^{d}\).
2. **Anomaly Score Computation.** In the second step, we compute a similarity score between the last layer embedding \(\mathbf{z}\) and the empirical distribution of training embeddings from the predicted class. To formally write this score, we need to introduce, for each \(y\in\mathcal{Y}\), the empirical distribution \(\widehat{P}_{Y}^{L}(y)=(1/|\mathcal{D}_{y}|)\sum_{i:\,y_{i}=y}\delta_{f_{\psi} ^{L}(\mathbf{x}_{i})}\) of the points \(\mathcal{D}_{y}\triangleq\{f_{\psi}^{L}(\mathbf{x}_{i}),\ \mathrm{s.t.}\,y_{i}=y\}\). With these notations in mind, our similarity score writes, for a given input \(\mathbf{x}\) with predicted class \(\hat{y}\):
\[s_{\textsc{LAROUSSE}}(\mathbf{x})\triangleq D_{\mathrm{HM}}\big{(}\mathbf{z},\widehat{ P}_{Y}^{L}(\hat{y})\big{)}, \tag{3}\]
where \(D_{\mathrm{HM}}\) denotes the halfspace-mass depth that we carefully present in Sec. 3.2. The higher the value of \(D_{\mathrm{HM}}\), the more regular \(\mathbf{x}\) is with respect to \(\widehat{P}_{Y}^{L}(\hat{y})\).
3. **Thresholding.** Similar to previous works, the final step consists in thresholding our similarity score: we detect \(\mathbf{x}\) as an adversarial attack if and only if \(s_{\textsc{LAROUSSE}}(\mathbf{x})\leq\gamma\), where \(\gamma\) is a hyperparameter of the detector.
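The three steps above can be summarized in a few lines of code. The sketch below assumes last-layer embeddings have already been extracted and takes the depth function as an argument; a toy placeholder depth is used for illustration, while a sketch of the halfspace-mass approximation itself is given after the computational-aspects paragraph in Sec. 3.2. All array and function names are illustrative.

```python
import numpy as np

def fit_class_banks(train_embeddings, train_labels):
    # Group training embeddings by class: the empirical distributions P^L_Y(y).
    return {y: train_embeddings[train_labels == y] for y in np.unique(train_labels)}

def detect(z, y_hat, banks, depth_fn, gamma):
    # Score the embedding z of an input against the bank of its predicted class
    # y_hat, and flag it as adversarial when the depth is at most gamma.
    score = depth_fn(z, banks[y_hat])
    return int(score <= gamma)

# Toy usage with a placeholder depth (negative distance to the class mean);
# the halfspace-mass depth of Sec. 3.2 would be plugged in here instead.
rng = np.random.default_rng(0)
emb = rng.normal(size=(200, 8))
lab = rng.integers(0, 2, size=200)
banks = fit_class_banks(emb, lab)
toy_depth = lambda z, bank: -np.linalg.norm(z - bank.mean(axis=0))
print(detect(rng.normal(size=8), 1, banks, toy_depth, gamma=-1.0))
```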
**Remark 1**: _In the experimental section, we will also consider the case where the depth function is computed based on the logits. It corresponds to replacing \(\mathbf{z}=f_{\psi}^{L}\left(\mathbf{x}\right)\in\mathbb{R}^{d}\) by \(\mathbf{z}=f_{\psi}^{L+1}\left(\mathbf{x}\right)\in\mathbb{R}^{|\mathcal{Y}|}\)._
### A brief review of data depths and the halfspace-mass depth
With the goal of extending the notions of order and rank to multivariate spaces, the statistical concept of depth was introduced by John Tukey in [17]. Data depths have found many applications in Statistics and Machine Learning (ML), such as classification [13], clustering [14], automatic text evaluation [15] and anomaly detection [16, 17]. A depth function \(D(\cdot,P):\mathbb{R}^{d}\rightarrow[0,1]\) provides a score that reflects the closeness of any element \(\mathbf{x}\in\mathbb{R}^{d}\) to a probability distribution \(P\) on \(\mathbb{R}^{d}\). The higher (respectively lower) the score of \(\mathbf{x}\) is, the deeper (respectively farther) it is in \(P\). Many proposals have been suggested in the literature, such as the projection depth [13], the zonoid depth [11] or the Monge-Kantorovich depth [1], differing in their properties and applications. To compare their benefits and drawbacks, standard properties that a data depth should satisfy have been developed in [16] (see also [17]). We refer the reader to [11] or to [16, Ch. 2] for an excellent account of data depths.
**The halfspace-mass depth.** Beyond the appealing properties satisfied by depth functions, such as affine-invariance [16], these statistical tools suffer in practice from a high computational burden, which limits their widespread use in ML applications [11]. However, efficient approximations have been provided, such as for the halfspace-mass depth [2] (see also [15, 16]). The halfspace-mass (HM) depth of \(\mathbf{x}\in\mathbb{R}^{d}\) w.r.t. a distribution \(P\) on \(\mathbb{R}^{d}\) is defined as the expectation, over the set \(\mathcal{H}(\mathbf{x})\) of all closed halfspaces containing \(\mathbf{x}\), of the probability mass of such halfspaces. More precisely, given a random variable \(\mathbf{X}\) following a distribution \(P\) and a probability measure \(Q\) on \(\mathcal{H}(\mathbf{x})\), the HM depth of \(\mathbf{x}\) w.r.t. \(P\) is defined as follows:
\[D_{\mathrm{HM}}(\mathbf{x},P)=\mathbb{E}_{H\sim Q}\left[P(H)\right]. \tag{4}\]
When a training set \(\{\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\}\) is given, expression (4) boils down to:
\[D_{\mathrm{HM}}(\mathbf{x},\hat{P}_{X})=\mathbb{E}_{Q}\left[\frac{1}{n}\sum_{ i=1}^{n}\mathbb{I}\{\mathbf{x}_{i}\in H\}\right], \tag{5}\]
where \(\hat{P}_{X}\) denotes the empirical measure defined by \(\frac{1}{n}\sum_{i=1}^{n}\delta_{\mathbf{x}_{i}}\). The halfspace-mass depth has been successfully used in anomaly detection (see [2] and [16]) making it a natural candidate for detecting adversarial attacks at the layers of a neural network.
**Computational aspects.** The expectation in (5) can be approximated by Monte-Carlo sampling, in contrast to several depth functions that are defined as the solution to optimization problems [17, 18], which become infeasible in high dimension. The aim is then to approximate (5) with a finite number of halfspaces containing \(\mathbf{x}\). To that end, the authors of [3] introduced an algorithm, divided into training and testing parts, that provides a computationally efficient approximation of (5). The three main parameters involved are \(K\), the number of directions sampled on the sphere; \(n_{s}\), the sub-sample size drawn at each projection step; and \(\lambda\), which controls the range from which the splitting hyperplane is chosen. Since the HM approximation has low sensitivity to its parameters, in the remainder of the paper we set \(K=10000\), \(n_{s}=32\) and \(\lambda=0.5\). The computational complexity of the training part is of order \(\mathcal{O}(Kn_{s}d)\) and that of the testing part \(\mathcal{O}(Kd)\), which makes the depth easy to compute. Further details are provided in Sec. 8.1.
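The following sketch illustrates the random-projection approximation of [3] described above: the training part samples \(K\) directions and stores, for each, a random split of a projected sub-sample together with the mass on each side; the testing part averages the stored mass of the halfspace containing the query point. The exact split rule and the (smaller) default parameters are our own assumptions for illustration.

```python
import numpy as np

def hm_fit(X, K=1000, n_s=32, lam=0.5, seed=0):
    # Monte-Carlo approximation of the halfspace-mass depth (training part):
    # sample K directions, project a subsample of size n_s on each, draw a
    # random split, and store the mass on each side of the split.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    dirs = rng.normal(size=(K, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    splits, mass_left, mass_right = np.empty(K), np.empty(K), np.empty(K)
    for k in range(K):
        sub = X[rng.choice(n, size=min(n_s, n), replace=False)]
        proj = sub @ dirs[k]
        lo, hi = proj.min(), proj.max()
        mid, span = (lo + hi) / 2, hi - lo
        splits[k] = rng.uniform(mid - lam * span, mid + lam * span)
        mass_left[k] = np.mean(proj < splits[k])
        mass_right[k] = 1.0 - mass_left[k]
    return dirs, splits, mass_left, mass_right

def hm_score(x, model):
    # Testing part: average, over directions, the stored mass of the halfspace
    # containing x.
    dirs, splits, mass_left, mass_right = model
    proj = dirs @ x
    return np.mean(np.where(proj < splits, mass_left, mass_right))

# Toy usage: deep (central) points get higher scores than outlying ones.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
model = hm_fit(X, K=2000)
print(hm_score(np.zeros(4), model), hm_score(5 * np.ones(4), model))
```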
**Remark 2** (_Advantages over the Mahalanobis distance_): _In contrast to approaches based on the Mahalanobis distance [3, 12], the halfspace-mass depth does not require estimating and inverting the covariance matrix of the training data, which can be challenging from both computational and statistical perspectives, especially in high dimension. In addition, the HM depth does not need any assumption on the distribution, while the Mahalanobis distance is restricted to distributions with finite first two moments._
## 4 STAKEOUT: A Novel Benchmark for Adversarial Attacks
Textual attack generation can be computationally expensive, as some attacks require hundreds of queries to corrupt a single sample2. A benchmark that gathers the results of diverse attacks on different datasets and encoders is therefore instrumental in accelerating future research efforts by reducing computational overhead. To build our benchmark, we relied on the models, the datasets, and the attacks available in TextAttack [11]. In the following, we describe the experimental choices we made when building STAKEOUT and discuss our baselines and evaluation pipeline.
Footnote 2: For STAKEOUT, the average number of tries per sample is 800.
### A novel benchmark: STAKEOUT
**Training Datasets.** We choose to work on sentiment analysis, using SST2 [13] and IMDB [14], and topic classification, relying on ag-news [11]. These datasets are used in [11] and allow for comparison with previously obtained results.
**Target Pretrained Classifiers.** We rely on the models available on the Transformers Hub [12]. In order to ensure that our conclusions are not model-specific, we work with classifiers based on two types of pre-trained encoders: BERT [13] and ROBERTA (ROB) [15]. Tab. 1 reports the accuracy of the different models on each considered dataset.
**Adversarial attacks.** Our benchmark is based on 9 different attacks that cover a broad range of techniques, including word/character insertion, deletion, swapping, and substitution. Of these 9 attacks, 8 are taken from the 16 methods available in TextAttack, namely Pruthi (PRU) [20], TextBugger (TB) [10], IGA (IG) [21], DeepWordBug (DWB) [11], Kuleshov (KUL) [12], BAE [1], PWWS [13] and TextFooler (TF) [14], and the last one is TF-adjusted (TF-ADJ) [11]. We tried additional attacks, but they were either too weak to fool the models [15, 12] or crashed. Further details on the attacks are gathered in Tab. 3. Fig. 1 displays the attack success rate and the number of queries for each considered attack. It is worth noting that IG fails on IMDB.
**Takeaways of** Fig. 1. Interestingly, attack efficiency only marginally depends on the pre-trained encoder type. In contrast, there is a strong dependency on the training set (variation of over 0.2 points). It is worth noting that TF and KUL are the most efficient attacks. From the average number of queries, we note that attacking a classifier trained on IMDB is harder than attacking one trained on SST2, despite both being binary classification tasks.
\begin{table}
\begin{tabular}{l c c} \hline \hline Model & Dataset & Acc (\%) \\ \hline BERT & SST2 & 92.43 \\ & ag-news & 94.20 \\ & IMDB & 91.90 \\ ROB & SST2 & 94.04 \\ & ag-news & 94.70 \\ & IMDB & 94.10 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Classifier accuracy for each considered dataset.

**Adversarial and clean sample selection.** For evaluation, we rely on test sets made of both clean and adversarial samples. In order to construct such sets while controlling the ratio between clean and adversarial samples, we rely on (Yoo et al., 2022, Scenario 1). From a given initial test set \(\mathcal{X}_{t}\), we sample two disjoint subsets \(\mathcal{X}_{1}\) and \(\mathcal{X}_{2}\). We then generate attacks on \(\mathcal{X}_{1}\) and keep the successful ones as adversarial test examples, while \(\mathcal{X}_{2}\) is used as the clean test set.
### Baseline detectors
We use two baseline detectors. The first one is based on a language model likelihood and the second one corresponds to the Mahalanobis detector introduced in (Yoo et al., 2022). Both follow the same three consecutive steps as LAROUSSE, but rely on a different similarity score.
**Language model score.** This method consists in computing the likelihood of an input with an external language model:
\[s_{\mathrm{LM}}(\mathbf{x})=-\sum_{i=1}^{|\mathbf{x}|}\log p_{\psi}(\omega_{i} |\omega_{i-1},\dots,\omega_{1}), \tag{6}\]
where \(\omega_{i}\) represents the \(i\)-th token of the input sentence \(\mathbf{x}\). We compute the log-probabilities with the output of a pretrained GPT2 (Brown et al., 2020). Notice that this baseline is also used in (Yoo et al., 2022).
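A minimal sketch of this baseline with a pretrained GPT-2 from the `transformers` library is given below; the total negative log-likelihood of Eq. (6) is recovered (approximately) by rescaling the mean per-token loss. The model choice and example sentences are illustrative.

```python
# Sketch of the language-model score s_LM (Eq. 6) with a pretrained GPT-2;
# higher scores (larger total NLL) indicate less likely, more suspicious inputs.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def lm_score(text):
    enc = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = lm(**enc, labels=enc["input_ids"])
    n_predicted = enc["input_ids"].shape[1] - 1   # labels are shifted internally
    return out.loss.item() * n_predicted          # approximate total negative log-likelihood

print(lm_score("a well acted and moving film"))
print(lm_score("a weII actedd adn movng fiIm"))   # character-level corruption
```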
**Mahalanobis-based detector.** We follow (Yoo et al., 2022), which relies on a class-conditioned Mahalanobis distance. Following our notations, it corresponds to evaluating:
\[s_{\mathrm{M}}(\mathbf{x})=\left(f_{\psi}^{L}(\mathbf{x})-\mu_{\hat{y}}\right) ^{T}\Sigma_{\hat{y}}^{-1}\left(f_{\psi}^{L}(\mathbf{x})-\mu_{\hat{y}}\right), \tag{7}\]
where \(\mu_{\hat{y}}\) is the empirical mean of the features for class \(\hat{y}\) and \(\Sigma_{\hat{y}}\) is the associated empirical covariance.
**Remark 3**: _Similarly to Remark 1, for a given textual input \(\mathbf{x}\), we will either rely on the penultimate layer \(L\) representation \(f_{\psi}^{L}(\mathbf{x})\) or on the logits predictions \(f_{\psi}^{L+1}(\mathbf{x})\) of the networks to compute \(s_{M}\)._
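A minimal sketch of the class-conditional Mahalanobis score of Eq. (7) is given below, assuming features (penultimate-layer embeddings or logits) have already been extracted; the small ridge term added for invertibility is our own assumption, and all array names are illustrative.

```python
import numpy as np

def fit_mahalanobis(feats, labels, eps=1e-6):
    # Class-conditional means and (regularized) inverse covariances.
    params = {}
    for y in np.unique(labels):
        Z = feats[labels == y]
        mu = Z.mean(axis=0)
        cov = np.cov(Z, rowvar=False) + eps * np.eye(Z.shape[1])
        params[y] = (mu, np.linalg.inv(cov))
    return params

def mahalanobis_score(z, y_hat, params):
    mu, prec = params[y_hat]
    diff = z - mu
    return float(diff @ prec @ diff)      # larger = farther from the class (Eq. 7)

# Toy usage with random stand-in features.
rng = np.random.default_rng(0)
feats, labels = rng.normal(size=(300, 16)), rng.integers(0, 2, 300)
params = fit_mahalanobis(feats, labels)
print(mahalanobis_score(rng.normal(size=16), 0, params))
```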
### Evaluation metrics
The adversarial attack detection problem can be seen as a classification problem. In our context, two quantities are of interest, namely _(i)_ the _false alarm rate_, _i.e._, the proportion of samples that are misclassified as _adversarial_ while actually being _clean_; and _(ii)_ the _true detection rate_, _i.e._, the proportion of samples that are rightfully predicted as _adversarial_. We focus on four different metrics that assess the quality of our method.
1. **Area Under the Receiver Operating Characteristic curve (AUROC; (Bradley, 1997)).** It is the area under the ROC curve, which plots the true detection rate against the false alarm rate. From elementary computations, the AUROC can be linked to the probability that a clean example has a higher score than an adversarial sample.
2. **Area Under the Precision-Recall curve (AUPR; (Davis and Goadrich, 2006))**. It is the area under the precision-recall curve, which is more relevant in imbalanced situations. It plots the recall (true detection rate) against the precision (the actual proportion of _adversarial samples_ among the predicted _adversarial samples_).
3. **False Positive Rate at 90% True Positive Rate (FPR (%))**. In a practical situation, one wishes to build an efficient detector. Thus, given a detection rate \(r\), one fixes a threshold \(\delta_{r}\) such that the corresponding TPR equals \(r\) and reports the FPR at that threshold. Following (Yoo et al., 2022), we set \(r=0.90\). For FPR, lower is better.
4. **Classification error (Err (%))**. This refers to the lowest classification error obtained by choosing the best fixed threshold.
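For reference, AUROC, AUPR-IN, AUPR-OUT, and the FPR at 90% TPR can be computed with standard `scikit-learn` utilities as sketched below (Err, the best-threshold classification error, is omitted for brevity). The convention here is that higher scores indicate more adversarial inputs; the scores and labels are synthetic.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, roc_curve

def detection_metrics(scores, is_adv, tpr_target=0.90):
    # `scores`: anomaly scores (higher = more adversarial); `is_adv`: 1 for attacks.
    auroc = roc_auc_score(is_adv, scores)
    aupr_in = average_precision_score(1 - is_adv, -scores)   # clean as positive class
    aupr_out = average_precision_score(is_adv, scores)       # adversarial as positive
    fpr, tpr, _ = roc_curve(is_adv, scores)
    fpr_at_tpr = fpr[np.searchsorted(tpr, tpr_target)]       # FPR at 90% TPR
    return {"AUROC": auroc, "AUPR-IN": aupr_in, "AUPR-OUT": aupr_out,
            "FPR@90TPR": fpr_at_tpr}

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0, 1, 500), rng.normal(2, 1, 500)])
labels = np.concatenate([np.zeros(500, dtype=int), np.ones(500, dtype=int)])
print(detection_metrics(scores, labels))
```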
## 5 Experimental results
### Overall Results
We report in Tab. 2 the aggregated performance over the different datasets, the various seeds, and the different attacks.
Figure 1: Efficiency of the chosen attacks. Both check-list and input reduction were tried but discarded due to low efficiency. Dashed lines report the average performance for each dataset.

\(D_{\mathrm{HM}}\) **achieves the best overall results.** It is worth noting that detection methods discriminate adversarial attacks better on ROB. than on BERT. Using the halfspace-mass score \(D_{\mathrm{HM}}\) instead of the Mahalanobis score \(D_{\mathrm{M}}\) also consistently improves performance, which experimentally validates our choice. This conclusion holds on both ROB. and BERT, corresponding to over \(540\) experimental configurations. Consistent with previous work [20], the detector built on GPT2 under-performs \(D_{\mathrm{M}}\). Among all methods, LAROUSSE achieves the best results both in terms of threshold-free (_e.g._, AUROC, AUPR-IN, and AUPR-OUT) and threshold-based (_e.g._, FPR) metrics, which validates our detector.
**Importance of feature selection for adversarial detectors.** Both \(D_{\mathrm{HM}}\) and \(D_{\mathrm{M}}\) are highly sensitive to the choice of layer. For \(D_{\mathrm{M}}\), using the logits works better than the penultimate layer, while for \(D_{\mathrm{HM}}\), the converse holds. Although the AUROC varies only slightly when using \(f_{\psi}^{L+1}\) instead of \(f_{\psi}^{L}\), this choice induces a variation of over 10 FPR points.
Overall, it is worth noting that LAROUSSE, although state-of-the-art on the tested configurations, achieves an FPR that remains moderate. The best averaged error of \(17.9\%\) is far from the error achieved on the main task (less than 10% on all datasets).
### Identifying key detection factors
To better understand the performance of our methods w.r.t. different attacks and datasets, we report in Fig. 4 the performance in terms of AUROC and FPR per attack.
**Detectors and models are not robust to dataset change.** The detection task is more challenging for SST-2 than for ag-news and IMDB, with a significant drop in performance (_e.g._ over 15 absolute points for BAE). On SST-2, \(D_{\mathrm{HM}}\) achieves a significant gain over \(D_{\mathrm{M}}\) both for the AUROC and FPR.
**Detectors do not detect the various attacks uniformly well.** This phenomenon is most pronounced on SST2, while also being present on both ag-news and IMDB. For example, on SST2, the FPR varies from less than 10 (strong detection performance) for TF-ADJ to over 70 (poor performance) for PRU.
**Attacks that are hard to detect for ROB. are not necessarily hard to detect for BERT.** This phenomenon is illustrated by Fig. 2. For example, KUL is hard to detect for BERT while being easier on ROB., on which LAROUSSE achieves over 96 AUROC points. If safety is a primary concern, it is thus crucial to carefully select the pre-trained encoder.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & & & AUROC & FPR & AUPR-IN & AUPR-OUT & Err \\ \hline BERT & \(GPT\) & softmax & 76.1 \(\pm\)0.1 & 58.4 \(\pm\)19.1 & 75.4 \(\pm\)8.7 & 75.3 \(\pm\)10.2 & 34.0 \(\pm\)9.5 \\ & \(D_{\mathrm{M}}\) & \(f_{\psi}^{L}\) & 88.8 \(\pm\)6.3 & 49.7 \(\pm\)25.4 & 90.9 \(\pm\)0.0 & 84.5 \(\pm\)8.2 & 28.4 \(\pm\)12.0 \\ & & \(f_{\psi}^{L+1}\) & 90.1 \(\pm\)7.8 & 32.3 \(\pm\)23.9 & 88.6 \(\pm\)10.4 & 88.5 \(\pm\)7.6 & 19.7 \(\pm\)11.3 \\ & \(D_{\mathrm{HM}}\) & \(f_{\psi}^{L}\) & **92.0** \(\pm\)5.0 & **32.1** \(\pm\)24.1 & **93.3** \(\pm\)4.8 & **89.4** \(\pm\)5.8 & **19.5** \(\pm\)11.2 \\ & \(f_{\psi}^{L+1}\) & 91.9 \(\pm\)5.1 & 35.8 \(\pm\)23.2 & 92.4 \(\pm\)5.7 & 90.0 \(\pm\)5.6 & 21.4 \(\pm\)10.9 \\ \hline ROB. & \(GPT\) & softmax & 77.7 \(\pm\)0.7 & 56.0 \(\pm\)20.4 & 77.2 \(\pm\)0.1 & 76.8 \(\pm\)10.7 & 32.6 \(\pm\)9.9 \\ & \(D_{\mathrm{M}}\) & \(f_{\psi}^{L}\) & 89.9 \(\pm\)5.5 & 44.1 \(\pm\)22.9 & 91.9 \(\pm\)5.1 & 86.1 \(\pm\)7.2 & 25.5 \(\pm\)10.9 \\ & & \(f_{\psi}^{L+1}\) & 90.0 \(\pm\)8.3 & 31.9 \(\pm\)23.6 & 88.5 \(\pm\)11.5 & 88.7 \(\pm\)7.8 & 19.5 \(\pm\)11.3 \\ & \(D_{\mathrm{HM}}\) & \(f_{\psi}^{L}\) & **93.4** \(\pm\)6.0 & **29.0** \(\pm\)21.7 & **93.9** \(\pm\)6.4 & **91.3** \(\pm\)5.3 & **17.9** \(\pm\)10.3 \\ & & \(f_{\psi}^{L+1}\) & 92.8 \(\pm\)5.1 & 32.1 \(\pm\)23.5 & 93.3 \(\pm\)5.9 & **90.9** \(\pm\)5.9 & **19.4** \(\pm\)11.3 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Aggregated performance over all datasets and attacks. Each average number aggregates 270 measurements (10 seeds \(\times\) 3 datasets \(\times\) 9 attacks). \(D_{M}\) (resp. \(D_{HM}\)) indicates a detector based on the Mahalanobis distance (resp. halfspace-mass depth) (see Eq. 7), and GPT refers to the perplexity score (see Eq. 6).
Figure 2: Performance per attack in for each pretrained encoder in terms of AUROC (left) and FPR (right) of \(D_{\mathrm{M}}\) and \(D_{\mathrm{HM}}\) on STAKEOUT.
**The choice of clean samples largely affects the measured detection performance.** Fig. 4 and Fig. 2 display several runs with different seeds. As mentioned in Sec. 4, different seeds correspond to different choices of clean samples. On all datasets, we observe that different negative samples lead to different measured performance (_e.g._, the FPR on IMDB varies by over 30 points on KUL and PRU across seeds).
### All the metrics matter
**Setting.** In this experiment, we study the relationship between the different metrics. From Tab. 2, we see that threshold-free metrics (_i.e._, AUROC, AUPR-IN) exhibit lower variance than threshold-based metrics such as FPR. The FPR measures the percentage of natural samples detected as adversarial when \(90\%\) of the attacked samples are detected. Therefore, the lower, the better.
**Takeaways.** From Fig. 3, we see that a large AUROC, AUPR-IN, or AUPR-OUT does not necessarily correspond to a low FPR. This suggests that the detectors also flag natural samples as adversarial when detecting at least 90\(\%\) of adversarial examples. Additionally, a small variation in AUROC, AUPR-IN, or AUPR-OUT can lead to a large change in FPR. It is therefore crucial to compare detectors using all metrics.
### Expected performance of LAROUSSE
**Setting.** Fig. 5 reports the error probability per attack for LAROUSSE and the considered baselines.
**Efficient attacks are easier to detect.** We observe that on the three most efficient attacks according to Fig. 1 (_i.e._, TF, PWWS and KUL), LAROUSSE is significantly more effective than \(D_{\mathrm{M}}\) and GPT2.
**Different detection methods, capturing different phenomena, are better suited to detecting different types of attack.** Although LAROUSSE achieves the best results overall, GPT2, which relies solely on perplexity, achieves results competitive with LAROUSSE and outperforms \(D_{\mathrm{M}}\) on several attacks (_i.e._, DWB and IG). This suggests that stronger detectors could be achieved by combining different types of scoring functions.
### Semantic vs syntactic attacks
In this section, we analyze the results of LAROUSSE on semantic (_i.e._, token-level) versus syntactic (_i.e._, character-level) attacks. Raw and processed results are reported in Sec. 10.5.
**Takeaways.** From Fig. 9, we observe that semantic attacks are harder to detect for both our method and \(D_{\mathrm{M}}\).
## 6 Concluding Remarks
We have proposed STAKEOUT, a large adversarial attack detection benchmark, and LAROUSSE, which leverages _a new anomaly score built on the halfspace-mass depth_ and offers a better alternative to the widely used Mahalanobis distance.

Figure 4: Performance in terms of AUROC (top) and FPR (bottom) of \(D_{\mathrm{M}}\) and \(D_{\mathrm{HM}}\) on STAKEOUT. Fig. 7 in the Supplementary Material reports the results of GPT2.

Figure 5: Detection error on STAKEOUT.

Figure 3: Empirical study of the relationship between metrics for the three considered detection methods.
## 7 Ethical impact of our work
Our work focuses on responsible NLP and aims to contribute to the protection of NLP systems. Our new benchmark STAKEOUT allows for a robust evaluation of new adversarial detection methods, and LAROUSSE outperforms previous methods, thus providing a better defense against attackers. Overall, we believe this paper offers a promising research direction toward safe and robust NLP systems and will benefit the community.
## Acknowledgements
This work was performed using HPC resources from GENCI-IDRIS (Grants 2022- AD01101838, 2023-103256 and 2023-101838).
Figure 6: FPR for semantic vs. syntactic attacks; further results can be found in Fig. 8(a). |
2307.10870 | Nonlinear Meta-Learning Can Guarantee Faster Rates | Many recent theoretical works on meta-learning aim to achieve guarantees in
leveraging similar representational structures from related tasks towards
simplifying a target task. Importantly, the main aim in theory works on the
subject is to understand the extent to which convergence rates -- in learning a
common representation -- may scale with the number $N$ of tasks (as well as the
number of samples per task). First steps in this setting demonstrate this
property when both the shared representation amongst tasks, and task-specific
regression functions, are linear. This linear setting readily reveals the
benefits of aggregating tasks, e.g., via averaging arguments. In practice,
however, the representation is often highly nonlinear, introducing nontrivial
biases in each task that cannot easily be averaged out as in the linear case.
In the present work, we derive theoretical guarantees for meta-learning with
nonlinear representations. In particular, assuming the shared nonlinearity maps
to an infinite-dimensional RKHS, we show that additional biases can be
mitigated with careful regularization that leverages the smoothness of
task-specific regression functions, yielding improved rates that scale with the number of tasks as desired. | Dimitri Meunier, Zhu Li, Arthur Gretton, Samory Kpotufe | 2023-07-20T13:42:13Z | http://arxiv.org/abs/2307.10870v4 | # Nonlinear Meta-Learning Can Guarantee Faster Rates
###### Abstract
Many recent theoretical works on _meta-learning_ aim to achieve guarantees in leveraging similar representational structures from related tasks towards simplifying a target task. Importantly, the main aim in theory works on the subject is to understand the extent to which convergence rates--in learning a common representation--_may scale with the number \(N\) of tasks_ (as well as the number of samples per task). First steps in this setting demonstrate this property when both the shared representation amongst tasks, and task-specific regression functions, are linear. This linear setting readily reveals the benefits of aggregating tasks, e.g., via averaging arguments. In practice, however, the representation is often highly nonlinear, introducing nontrivial biases in each task that cannot easily be averaged out as in the linear case.
In the present work, we derive theoretical guarantees for meta-learning with nonlinear representations. In particular, assuming the shared nonlinearity maps to an infinite-dimensional RKHS, we show that additional biases can be mitigated with careful regularization that leverages the smoothness of task-specific regression functions, yielding improved rates that scale with the number of tasks as desired.
## 1 Introduction
Meta-Learning refers colloquially to the problem of inferring a deeper internal structure--beyond a specific task at hand, e.g., a regression task--that may be leveraged towards speeding up other similar tasks. This arises for instance in practice with neural networks where, in pre-training, multiple apparently dissimilar tasks may be aggregated to learn a _representation_ that enables _faster_ training of target tasks (i.e., requiring relatively fewer target data).
Notwithstanding the popularity of Meta-Learning in practice, the theoretical understanding and proper formalism for this setting is still in its early stages. We consider a common approach in the context of regression, which posits an unknown target-task function of the form \(f(x)=g(\Gamma(x))\) and \(N\) unknown related task-functions of the form \(f_{i}(x)=g_{i}(\Gamma(x)),i\in[N]\), i.e., all sharing a common but unknown _representation_\(\Gamma(x)\); it is assumed that all _link functions_\(g,g_{i}\) are _simpler_, for instance linear or at least lower-dimensional than the corresponding regression functions \(f,f_{i}\). As all these objects are a priori unknown, recent research has aimed to establish how the target regression problem may benefit from the \(N\) related tasks. In particular, if \(\Gamma(x)\) may be approximated by some \(\hat{\Gamma}(x)\) at a rate that scales with \(N\) (and the number \(n\) of samples per task), then presumably, the target regression function \(f\) may be subsequently learned as \(\hat{g}(\hat{\Gamma}(x))\) at a faster rate commensurate with the _simplicity_ of \(g\).
Recent theoretical results (Tripuraneni et al., 2021; Kong et al., 2020; Du et al., 2021) have provided significant new insights in this area by considering an idealized linear setting where \(x\in\mathbb{R}^{d}\), \(g,g_{i}\)'s are linear functions
in \(\mathbb{R}^{k},k\ll d\), and \(\Gamma(x)\) denotes a linear projection to \(\mathbb{R}^{k}\). These results show that \(\Gamma\) can be learned at a rate of \(\tilde{O}(\sqrt{dk/nN})\)--under suitable subspace-distance measures, and where \(\tilde{O}\) omits log terms --which then allows for the target task to be learned at a rate of \(\tilde{O}(\sqrt{k/n})\ll\tilde{O}(\sqrt{d/n})\). Here, it is emphasized that the representation learning rate of \(\tilde{O}(\sqrt{dk/nN})\) scales with the number of tasks \(N\) rather than just with \(n\), establishing the benefit of related tasks in improving the target rate.
In practice, however, it often occurs that the representation \(\Gamma\) is a non-linear transformation of \(x\), as when _reproducing kernel Hilbert space_ (RKHS) or neural net representations are used. While it is well understood that this is an important next step to elucidate, fewer works have so far addressed this more challenging setting (Maurer et al., 2016; Du et al., 2021).
In the present work, we consider a setting where \(\Gamma\) maps \(x\), _nonlinearly_, into an RKHS \(\mathcal{H}\), possibly of infinite dimension; more precisely, \(\Gamma\)_projects_ the feature maps \(K(x,\cdot)\) into an \(s\)-dimensional subspace \(\mathcal{H}_{s}\) of \(\mathcal{H}\). The link functions \(g,g_{i}\)'s are assumed to be _simple_ in the sense that they are linear in \(\Gamma\), hence we also have that \(f,f_{i}\)'s belong to \(\mathcal{H}\). In other words, if we knew \(\Gamma\) (or \(\mathcal{H}_{s}=\mathcal{H}_{s}(\Gamma)\)), the target problem would reduce to linear regression in \(\mathbb{R}^{s}\), and therefore would admit \((L_{2})\) convergence rates of the form \(\tilde{O}(\sqrt{s/n})\), i.e., significantly faster than usual nonparametric rates for regression over infinite-dimensional \(\mathcal{H}\) (see discussion after Theorem 1 and Corollary 1). As in the case of linear \(\Gamma\) discussed above, this improved rate will turn out to require estimating \(\Gamma\) at a fast rate scaling in both \(N\) and \(n\).
When moving from linear to non-linear, nonparametric \(\Gamma\), a significant new challenge arises due to the bias inherent in the learning procedure. For high-level intuition, note that a main appeal of meta-learning is that the aggregate of \(N\) tasks should help reduce _variance_ over using a single task, by carefully combining task-specific statistics computed on each of the \(N\) samples; _crucially, such statistics ought to introduce little bias, since bias cannot be averaged out_. Task-specific biases are harder to avoid in nonparametric settings, however, if we wish to avoid overfitting task-specific statistics. This is in contrast to linear functionals, which often admit unbiased statistics with no overfitting (one may think e.g. of OLS).
Fortunately, as we show in this work, nonlinear meta-learning remains possible with rate guarantees improving in both \(N\) and \(n\), and crucially by allowing some amount of overfitting of task-specific statistics (for relatively small \(n\)), so as to reduce bias below the level of aggregate variance. More specifically, our approach relies on the following initial fact: if the links \(g_{i}\)'s are linear, it easily follows that the individual regression functions \(f_{i}\)'s all live in the span \(\mathcal{H}_{s}\subset\mathcal{H}\) of the shared representation \(\Gamma\) (see setup Section 3.1). Thus, under a _richness assumption_ where \(f_{i}\)'s span \(\mathcal{H}_{s}\) (extending usual assumptions in the linear case, e.g. of Du et al., 2021), we may estimate \(\mathcal{H}_{s}\) by estimating the span of regularized estimates \(\hat{f}_{i}\) of \(f_{i}\). In order to guarantee fast rates that scale with \(N\) and \(n\), we need to _under-regularize_, i.e., overfit task-specific estimates \(\hat{f}_{i}\)'s to suitably decrease bias, at the cost of increased task-specific (hence overall) variance. Such under-regularization necessarily implies suboptimal regression in each task, but improves estimation of the representation defined by \(\Gamma\). We demonstrate that such delicate tradeoffs may be satisfied, depending on the _smoothness_ level of regression functions \(f_{i}\), as captured by complementary regularity conditions on \(f_{i}\)'s and the interaction between the kernel and data distributions \(\mu_{i}\)'s (see Section 4.1). In the process, some interesting additional subtleties emerge: meta-learning benefits from _regularity beyond usual saturation points_ that were established in traditional RKHS regression. This further illustrates how the meta-learning goal of estimating \(\Gamma\) inherently differs from regression, even when relying on regression estimates. This is discussed in further detail in Section 4.
Fast rates scaling in \(N\) and \(n\) for estimating \(\mathcal{H}_{s}=\mathcal{H}_{s}(\Gamma)\) as \(\hat{\mathcal{H}}_{s}:=\operatorname{span}\{\hat{f}_{i}\}\) are established in Theorem 2. This requires, among other tools, a basic variation on Davis-Kahan for infinite-dimensional operators, which may be of independent interest (Theorem 1). As a consequence, we show that by operating in \(\hat{\mathcal{H}}_{s}\) for the target regression problem, we can achieve _parametric_ target \(L_{2}\) rates of \(\tilde{O}(\sqrt{s/n})\) (see Corollary 1), much faster than usual nonparametric rates for \(f\in\mathcal{H}\). This last step requires us to establish closeness of projections onto the estimated \(\hat{\mathcal{H}}_{s}\) vs \(\mathcal{H}_{s}\).
Finally, although much of the analysis and involved operations pertain to infinite-dimensional \(\mathcal{H}\) space, the entire approach can be instantiated in input data space via suitable representation theorems (see Section 5).
### Related work
Meta-Learning is an umbrella term for a rich variety of learning settings, where we are provided with a set of distributions pertaining to relevant training tasks, and obtain a functional to speed learning on a target task. In this work, we focus on the case where this functional defines _a representation \(\Gamma\) of the data_, and where the target regression function is of the form \(f(x)=g(\Gamma(x))\). We begin this section with the closest work to our setting (namely linear and nonlinear projections \(\Gamma\)), then briefly touch on alternative meta-learning definitions for completeness (although these will be outside the scope of the present study).
We start with works in the _linear setting_, which study generalization error where \(\Gamma\) is a learned linear projection \(\mathbb{R}^{d}\rightarrow\mathbb{R}^{k}\), obtained from \(N\) training tasks (Tripuraneni et al., 2021; Du et al., 2021; Kong et al., 2020). Tripuraneni et al. (2021) study the low-dimensional linear representation learning setting under the assumption of isotropic inputs for all tasks, and obtain the learning rate of \(\tilde{O}(\sqrt{dk^{2}/nN}+\sqrt{k/n})\). Du et al. (2021) achieve a similar rate while relaxing the isotropic assumption. In the linear representation case, they obtain an \(\tilde{O}(\sqrt{dk/nN}+\sqrt{k/n})\) rate. Kong et al. (2020) study a somewhat different scenario to the foregoing, where the number of samples per task may differ (and is smaller than the dimension \(d\) of the data); the aim is to determine how many tasks must be undertaken in order to achieve consistency.
Konobeev et al. (2021) consider a distribution dependent analysis of meta-learning in the setting of fixed design finite dimensional linear regression, with Gaussian noise and a Gaussian parameter distribution. In the case where the covariance matrix of the parameter is assumed to be known, the authors provide matching upper and lower bounds, which demonstrates a precise characterization of the benefit of meta-learning. While there is no theoretical analysis in the case where the covariance matrix is unknown, the authors provide a detailed description of how the EM algorithm can be employed to solve the meta-learning problem.
We next consider the case where the representation \(\Gamma\) is nonlinear. Maurer et al. (2016) evaluate the performance of a method for learning a nonlinear representation \(\Gamma\in\mathcal{F}\) which is \(s\)-dimensional, with special focus on the case of a projection onto a subspace of a reproducing kernel Hilbert space. They focus on a _learning to learn_ (LTL) scenario, where excess risk is evaluated _in expectation over a distribution of tasks_(Maurer et al., 2016, Section 2.2): we emphasize that this is a fundamentally different objective to the performance on a specific novel test task, as in our setting. The loss they propose to minimize (Maurer et al., 2016, Eq. 1) is an average over \(N\) training tasks, where each task involves a different linear weighting of the common subspace projection (the work does not propose an algorithm, but concerns itself solely with the statistical analysis). Maurer et al. (2016, Theorem 5) show that for an RKHS subspace projection, one can achieve an LTL excess risk for Lipschitz losses (in expectation over the task distribution) that decreases as \(\tilde{O}(s/\sqrt{N}+\sqrt{s/n})\). This requires \(N\geq n\) in order to approach the parametric rate. Maurer et al. (2016, note 2, p. 8) demonstrate that the factor \(1/\sqrt{N}\) is an unavoidable consequence of the LTL setting.
Du et al. (2021) consider the case of nonlinear representation learning, using the same training loss as Maurer et al. (Eq. 1 in their paper), but with performance evaluation on a single test task, as in our setting. Again defining \(\Gamma\in\mathcal{F}\), they obtain a learning rate of \(\tilde{O}(\mathcal{G}(\mathcal{F})/\sqrt{nN}+\sqrt{s/n})\) for the excess risk (Du et al., 2021, Theorem 5.1), where \(\mathcal{G}(\cdot)\) measures the Gaussian width of \(\mathcal{F}\) (a data-dependent complexity measure, and consequently a function of \(n,N\); see e.g., Maurer, 2014, for further details). The instantiation of \(\mathcal{G}(\mathcal{F})\) for specific instances of \(\mathcal{F}\) was not pursued further in this work, however Maurer (2014) shows that the Gaussian width is of order \(\sqrt{nN}\) in \(n\) and \(N\), in the case where \(\mathcal{F}\) is a projection onto a subspace of an RKHS with Lipschitz kernels.
The problem of learning a "meaningful" low-dimensional representation \(\Gamma\) has also been addressed in the
field of sufficient dimension reduction. Fukumizu et al. (2009); Yin et al. (2008); Li and Dong (2009) give different criteria for obtaining such \(\Gamma\) and establishing consistency, however they do not address the risk analysis of downstream learning algorithms that employ \(\Gamma\). Li et al. (2011) introduce the so-called principal support vector machine approach for learning both linear and non-linear \(\Gamma\). The idea is to learn a set of support vector regression functions, each mapping to different "features" of the outputs \(Y\) (e.g., restrictions to intervals, nonlinear transforms). The estimator \(\hat{\Gamma}\) of \(\Gamma\) is then constructed from the principal components of these solutions. In the linear setting, the authors provide the \(\sqrt{n}\)-consistency of \(\hat{\Gamma}\). Wu et al. (2007) provide a kernelization of sliced inverse regression, which yields a subspace \(\Gamma\) in an RKHS (the so-called effective dimension reduction space). Consistency of the projection by \(\hat{\Gamma}\) of an RKHS feature map \(\phi(x)\) is established; and an \(O(n^{-1/4})\) convergence rate is obtained, under the assumption that all \(\Gamma\) components can be expressed in terms of a finite number of covariance operator eigenfunctions. The learning risk of downstream estimators using \(\hat{\Gamma}\) remains to be established, however.
Outside of the regression setting, meta-learning has been studied for classification: Galanti et al. (2022) investigates the generalization error in this setting, with the representation \(\Gamma\) being a fully connected ReLU neural net of depth \(Q\), common to all tasks. Finally, there are analyses for other meta-learning schemes such as domain adaption (Ben-David et al., 2006; Mansour et al., 2009), domain generalization Blanchard et al. (2021) and covariate shift Ma et al. (2023), as well as alternative gradient-based approaches to refining algorithms on novel test domains (e.g. Denevi et al., 2019; Finn et al., 2017, 2019; Khodak et al., 2019; Meunier and Alquier, 2021).
**Paper organization:** We begin in Section 2 by providing the necessary background and notation for kernel methods and RKHS ridge regression. In Section 3, we give the learning setup, introducing the training and target tasks. Section 4 contains our modelling and smoothness assumptions on the training and target tasks, and our main consistency results: in particular, Theorem 1 demonstrates convergence up to the misalignment of the empirical and population projections onto the span of the \(\hat{f}_{i}\) and \(f_{i}\), respectively; and Corollary 1 gives the final result, incorporating the convergence of this projection error. Finally, Section 5 contains an instantiation of the approach from data samples.
## 2 Background & Notations
Function Spaces & Basic Operators. Let \(K:\mathcal{X}\times\mathcal{X}\to\mathbb{R}\) be a symmetric and positive definite kernel function and \(\mathcal{H}\) be a vector space of \(\mathcal{X}\to\mathbb{R}\) functions, endowed with a Hilbert space structure via an inner product \((\cdot,\cdot)_{\mathcal{H}}\). \(K\) is a reproducing kernel of \(\mathcal{H}\) if and only if: \(1\). \(\forall x\in\mathcal{X},\phi(x)\coloneqq K(\cdot,x)\in\mathcal{H}\); \(2\). \(\forall x\in\mathcal{X}\) and \(\forall f\in\mathcal{H},f(x)=\left(f,\phi(x)\right)_{\mathcal{H}}\). A space \(\mathcal{H}\) which possesses a reproducing kernel is called a reproducing kernel Hilbert space (RKHS), Berlinet and Thomas-Agnan (2011). For a probability measure \(\mu\) on \(\mathcal{X}\), \(L_{2}(\mathcal{X},\mu)\), abbreviated \(L_{2}(\mu)\), denotes the Hilbert space of real-valued measurable functions for which the integral of their square against \(\mu\) is finite. When applied to a bounded linear operator on \(\mathcal{H}\), \(\|\cdot\|\) denotes the operator norm, and \(\|\cdot\|_{HS}\) the Hilbert-Schmidt norm. For \((f,g)\in\mathcal{H}^{2}\), \(g\otimes f\coloneqq\left(f,\cdot\right)_{\mathcal{H}}g\) is the generalisation of the Euclidean outer product. For a set of vectors \(\{u_{1},\ldots,u_{n}\}\in\mathcal{H}\), \(U:=[u_{1},\ldots,u_{n}]\) denotes the operator with the vectors as "columns", formally \(U:\mathbb{R}^{n}\to\mathcal{H},\alpha\mapsto\sum_{i=1}^{n}u_{i}\alpha_{i}\). Its adjoint is \(U^{*}:\mathcal{H}\to\mathbb{R}^{n},u\mapsto\left(\left(u_{i},u\right)_{\mathcal{H}}\right)_{i=1}^{n}\).
We require some standard technical assumptions on the previously defined RKHS and kernel: \(1\). \(\mathcal{H}\) is separable, this is satisfied if \(\mathcal{X}\) is a Polish space and \(K\) is continuous, Steinwart and Christmann (2008); \(2\). \(\phi(x)\) is measurable for all \(x\in\mathcal{X}\); \(3\). \(\sup_{x,x^{\prime}\in\mathcal{X}}K(x,x^{\prime})\eqqcolon\kappa^{2}<\infty\). Note that those assumptions are not restrictive in practice, as well-known kernels such as the Gaussian, Laplacian and Matern kernels satisfy all of the above assumptions on \(\mathbb{R}^{d}\).
\(N\in\mathbb{N}^{*}\) is the number of source tasks, \(T\) is the index of the target task, there are \(2n\) available datapoints for each source task and \(n_{T}\) available datapoints for the target task. For \(n,m\in\mathbb{N}^{*},n\leq m,[n]:=\{1,\ldots,n\},[n,m]\coloneqq\{n,\ldots,m\}\).
Kernel Ridge Regression & Regularization. Let \(\mu\) be a probability distribution on \(\mathcal{X}\times\mathbb{R}\), where we view \(\mathcal{X}\) and \(\mathbb{R}\) as the input and output spaces, respectively. Let \(\mu_{\mathcal{X}}\) denote the marginal distribution of \(\mu\) on \(\mathcal{X}\) and \(\mu(\cdot|x)\) the conditional distribution on \(\mathbb{R}\) given \(x\in\mathcal{X}\). Given a data set \(D=\left\{\left(x_{i},y_{i}\right)\right\}_{i=1}^{n}\) independently sampled from the (unknown) distribution \(\mu\), the goal of nonparametric least-squares regression is to estimate the conditional mean function (a.k.a. the regression function) \(f_{\mu}:\mathcal{X}\rightarrow\mathbb{R}\) given by
\[f_{\mu}=\mathbb{E}_{(X,Y)\sim\mu}\left[Y\mid X\right]\]
In this work, we focus on kernel-based regularized least-squares, an estimation of the regression function \(\hat{f}_{\lambda}\) is obtained by solving the convex optimization problem
\[\hat{f}_{\lambda}=\operatorname*{argmin}_{f\in\mathcal{H}}\left\{\frac{1}{n} \sum_{i=1}^{n}\left(y_{i}-f\left(x_{i}\right)\right)^{2}+\lambda\|f\|_{ \mathcal{H}}^{2}\right\}, \tag{1}\]
where a reproducing kernel Hilbert space (RKHS) \(\mathcal{H}\) over \(\mathcal{X}\) is used as hypothesis space and \(\lambda>0\) is the regularization parameter.
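For concreteness, a minimal sketch of this estimator with a Gaussian kernel is given below; it computes the dual coefficients \((K+n\lambda I)^{-1}y\) of Eq. (1) and predicts on new points. The kernel choice, bandwidth, and toy data are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(A, B, bandwidth=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * bandwidth ** 2))

def krr_fit(X, y, lam, bandwidth=1.0):
    # Dual coefficients of Eq. (1): alpha = (K + n*lam*I)^{-1} y.
    n = X.shape[0]
    K = gaussian_kernel(X, X, bandwidth)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def krr_predict(alpha, X_train, X_test, bandwidth=1.0):
    return gaussian_kernel(X_test, X_train, bandwidth) @ alpha

# Toy usage on a 1-d regression task.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=100)
alpha = krr_fit(X, y, lam=1e-2)
print(krr_predict(alpha, X, np.array([[0.5]])))
```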
The squared expected risk is
\[\mathcal{R}_{\mu}(f):=\mathbb{E}_{(X,Y)\sim\mu}\left[\left(Y-f(X)\right)^{2}\right]\]
and the excess risk is given by
\[\mathcal{E}_{\mu}(f):=\sqrt{\mathcal{R}_{\mu}(f)-\mathcal{R}_{\mu}(f_{\mu})}= \mathbb{E}_{X\sim\mu_{\mathcal{X}}}\left[\left(f(X)-f_{\mu}(X)\right)^{2} \right]^{1/2}\]
## 3 Nonlinear Meta-Learning
### Population set-up
As discussed so far in the introduction, we consider a setting with \(N\) source distributions \(\{\mu_{i}\}_{i\in[N]}\) on \(\mathcal{X}\times\mathbb{R}\), with corresponding regression functions of the form \(f_{i}(x)=g_{i}(\Gamma(x))\), i.e., sharing the same _representation_ function \(\Gamma\). We are interested in minimizing risk for a target distribution \(\mu_{T}\) on \(\mathcal{X}\times\mathbb{R}\), with regression function \(f_{T}(x)=g_{T}(\Gamma(x))\). In the most common, linear case, it is assumed that \(\Gamma\)_projects_ into a subspace of \(\mathbb{R}^{d}=\mathcal{X}\). For the non-linear case, however, we will assume that \(\Gamma\) is a projection of nonlinear feature maps in an infinite-dimensional space.
**Assumption 1**.: _We let \(\Gamma:\mathcal{X}\mapsto\mathcal{H}\), namely, it maps \(x\in\mathcal{X}\) to a subspace \(\mathcal{H}_{s}\) of dimension \(s\geq 1\) of an RKHS \(\mathcal{H}\) as follows: given a projection operator \(P\) onto \(\mathcal{H}_{s}\), \(\Gamma(x)\doteq PK(x,\cdot)\). Furthermore, all link functions \(g_{T},g_{i}\)'s are assumed linear \(\mathcal{H}\mapsto\mathbb{R}\), i.e., \(\exists w_{T},w_{i}\in\mathcal{H}_{s}\) s.t. \(g_{T}(\Gamma(x))=\langle w_{T},\Gamma(x)\rangle_{\mathcal{H}}\), and \(g_{i}(\Gamma(x))=\langle w_{i},\Gamma(x)\rangle_{\mathcal{H}}\)._
We have the following alternate representation which yields complementary intuition as to the expressivity of \(\Gamma\).
**Remark 1**.: _Given an orthonormal basis (ONB) \(V=[v_{1},\ldots,v_{s}]\) of \(\mathcal{H}_{s}\), we may rewrite \(g_{T}(\Gamma(x))=\alpha_{T}^{\top}V^{*}K(x,\cdot)\), i.e., for some \(\alpha_{T}\in\mathbb{R}^{s}\), in terms of an \(s\)-dimensional (nonlinear) representation \(V^{*}\Gamma(x)=V^{*}K(x,\cdot)\) of \(x\). The same is true for \(g_{i}\)'s with respective \(\alpha_{i}\)'s. The representations are non-unique although \(\mathcal{H}_{s}\) is unique (see Remark 3 below), and so are the corresponding representations \(\alpha_{T},\alpha_{i}\) of the regression functions._
The next remark is also immediate, and establishes that, as a consequence of the above assumptions, all regression functions belong to \(\mathcal{H}_{s}\).
**Remark 2**.: _As a consequence of the above, since \(P\) is self-adjoint, we have \(f_{T}(x)\doteq\langle Pw_{T},K(x,\cdot)\rangle_{\mathcal{H}}\), hence by the reproducing property, \(f_{T}=Pw_{T}\in\mathcal{H}_{s}\). Similarly, we have that all \(f_{i}\)'s are in \(\mathcal{H}_{s}\)._
**Remark 3**.: _For any projection \(P\) into some complete subspace \(\mathcal{H}_{s}\), \(\langle\cdot,PK(x,\cdot)\rangle_{\mathcal{H}}\) evaluates every function in \(\mathcal{H}_{s}\) at \(x\), and in fact is well-known as the kernel of the sub-RKHS defined by \(\mathcal{H}_{s}\). The same fact implies uniqueness of \(\mathcal{H}_{s}\) and in particular that it equals \(\overline{\text{span}}\{\Gamma(x)\doteq PK(x,\cdot)\}\)._
The following _richness condition_ will henceforth be assumed, similar to previous works on meta-learning in the linear representation case Du et al. (2021), outside of which we cannot hope to learn \(\mathcal{H}_{s}\).
**Assumption 2** (Source Richness).: _We have that \(\text{span}\left(\{f_{i}\}_{i\in[N]}\right)=\mathcal{H}_{s}\)._
### Learning set-up
In this section we present the high level ideas of our Meta-Learning strategy. The first step is to learn a subspace approximation \(\hat{\mathcal{H}}_{s}\approx\mathcal{H}_{s}\) from source tasks. This process aims to find a suitable representation that facilitates the learning of the target task. We refer to this step as **pre-training**. The second step involves directly learning the target task within the subspace \(\hat{\mathcal{H}}_{s}\). We refer to this step as **inference**.
**Source Tasks - pre-training.** Our approach to approximate \(\mathcal{H}_{s}\) is inspired by Kong et al. (2020), which focused on finite-dimensional linear meta-learning. We extend this strategy to encompass (potentially infinite-dimensional) non-linear meta-learning. Under the source richness assumption (Assumption 2), \(\mathcal{H}_{s}\) is equal to the range of the rank-\(s\) operator
\[C_{N}:=\frac{1}{N}\sum_{i=1}^{N}f_{i}\otimes f_{i},\qquad\text{range}\,C_{N}= \mathcal{H}_{s}. \tag{2}\]
See Proposition 5 in the Appendix. Therefore, equipped with estimators \((\hat{f}_{i,\lambda},\hat{f}^{\prime}_{i,\lambda})\) for each source task \(i\in[N]\), where \(\hat{f}_{i,\lambda},\hat{f}^{\prime}_{i,\lambda}\) are i.i.d. copies of each other and \(\lambda\) is a regularization parameter, our strategy is to estimate \(\mathcal{H}_{s}=\operatorname{ran}C_{N}\) via the range of
\[\hat{C}_{N,n,\lambda}:=\frac{1}{N}\sum_{i=1}^{N}\hat{f}^{\prime}_{i,\lambda} \otimes\hat{f}_{i,\lambda}. \tag{3}\]
Data-splitting to build two i.i.d. estimators has the consequence that
\[\mathbb{E}[\hat{C}_{N,n,\lambda}]\coloneqq\frac{1}{N}\sum_{i=1}^{N}\mathbb{ E}[\hat{f}^{\prime}_{i,\lambda}]\otimes\mathbb{E}[\hat{f}_{i,\lambda}].\]
This property plays a crucial role in deriving approximation rates for \(\mathcal{H}_{s}\). Notably, data-splitting is employed in Kong et al. (2020), and it remains an open problem to determine whether we can proceed without it, even within the context of finite-dimensional linear meta-learning.
Each source task is learned from a dataset \(\mathcal{D}_{i}=\{(x_{i,j},y_{i,j})_{j=1}^{2n}\},i\in[N]\) of i.i.d observations sampled from \(\mu_{i}\), via regularized kernel regression as in Eq. (1),
\[\hat{f}_{i,\lambda}=\operatorname*{argmin}_{f\in\mathcal{H}}\sum_{j=1}^{n} \left(y_{i,j}-f(x_{i,j})\right)^{2}+n\lambda\|f\|_{\mathcal{H}}^{2},\ \hat{f}^{\prime}_{i,\lambda}=\operatorname*{argmin}_{f\in\mathcal{H}}\sum_{j=n +1}^{2n}\left(y_{i,j}-f(x_{i,j})\right)^{2}+n\lambda\|f\|_{\mathcal{H}}^{2} \tag{4}\]
For task \(i\in[N]\), let \(K_{i},L_{i}\in\mathbb{R}^{n\times n}\) be the Gram matrices such that \((K_{i})_{j,l}=K(x_{i,j},x_{i,l})\), \((j,l)\in[n]^{2}\) and \((L_{i})_{j,l}=K(x_{i,j},x_{i,l})\), \((j,l)\in[n+1:2n]^{2}\). Then for all \(x\in\mathcal{X}\),
\[\hat{f}_{i,\lambda}(x)=Y_{i}^{\top}\left(K_{i}+n\lambda I_{n}\right)^{-1}k_{i,x},\quad\hat{f}_{i,\lambda}^{\prime}(x)=(Y_{i}^{\prime})^{\top}\left(L_{i}+n \lambda I_{n}\right)^{-1}\ell_{i,x}, \tag{5}\]
where \(k_{i,x}=(K(x_{i,1},x),\ldots,K(x_{i,n},x))^{\top}\in\mathbb{R}^{n}\), \(\ell_{i,x}=(K(x_{i,n+1},x),\ldots,K(x_{i,2n},x))^{\top}\in\mathbb{R}^{n}\), \(Y_{i}=(y_{i,1},\ldots,y_{i,n})^{\top}\in\mathbb{R}^{n}\) and \(Y_{i}^{\prime}=(y_{i,n+1},\ldots,y_{i,2n})^{\top}\in\mathbb{R}^{n}\).
It is important to note that although \(\hat{C}_{N,n,\lambda}\) has rank at most \(N\), we cannot guarantee that the rank of \(\hat{C}_{N,n,\lambda}\) equals \(s\). Consequently, a direct comparison between \(\operatorname{ran}C_{N}\) and \(\operatorname{ran}\hat{C}_{N,n,\lambda}\) is not possible. Instead, we consider the singular value decomposition of \(\hat{C}_{N,n,\lambda}\):
\[\hat{C}_{N,n,\lambda}=\sum_{i=1}^{N}\hat{\gamma}_{i}\hat{u}_{i}\otimes\hat{v} _{i}=\hat{U}\hat{D}\hat{V}^{*},\]
where the singular values are arranged in descending order such that \(\hat{\gamma}_{1}\geq\dots\geq\hat{\gamma}_{N}\geq 0\), and stored in the diagonal matrix \(\hat{D}\in\mathbb{R}^{N\times N}\). The right singular vectors are stored as columns in \(\hat{V}=[\hat{v}_{1},\ldots,\hat{v}_{N}]\), and the left singular vectors are stored in \(\hat{U}=[\hat{u}_{1},\ldots,\hat{u}_{N}]\). To form an approximation of \(\mathcal{H}_{s}\), we retain only the right singular vectors \(\{\hat{v}_{1},\ldots,\hat{v}_{s}\}\) (a similar approach can be applied to the left singular vectors indifferently). This selection allows us to construct an approximation of \(\mathcal{H}_{s}\) as:
\[\hat{\mathcal{H}}_{s}:=\operatorname{span}\{\hat{v}_{1},\ldots,\hat{v}_{s}\}.\]
We define the orthogonal projection onto \(\hat{\mathcal{H}}_{s}\) as \(\hat{P}\).
**Remark 4**.: _In nonparametric regression, as employed in this approach, regularization becomes necessary and \(\lambda>0\). Due to regularization, a bias is introduced as \(\mathbb{E}[\hat{f}_{i,\lambda}]\neq f_{i}\) when \(\lambda>0\). For subspace approximation, it is crucial to effectively control this bias since it cannot be averaged out._
**Target task - inference.** Following the pre-training phase, during inference, we are given a target task dataset \(\mathcal{D}_{T}=\{(x_{T,j},y_{T,j})_{j=1}^{n_{T}}\}\in(\mathcal{X}\times\mathbb{R})^{n_{T}}\) sampled from \(\mu_{T}\) in order to approximate \(f_{T}\). As mentioned in Remark 3, \(\hat{\mathcal{H}}_{s}=\hat{P}(\mathcal{H})\subseteq\mathcal{H}\) forms a RKHS on \(\mathcal{X}\) having the same inner product as \(\mathcal{H}\) and with reproducing kernel \(\hat{K}(x,y)=\langle\hat{P}\phi(x),\phi(y)\rangle_{\mathcal{H}},(x,y)\in\mathcal{X}^{2}\). Consequently, we can estimate \(f_{T}\) via regularized kernel regression within \(\hat{\mathcal{H}}_{s}\), as shown in Eq. (1). For \(\lambda_{*}>0\),
\[\hat{f}_{T,\lambda_{*}}:=\operatorname*{arg\,min}_{f\in\hat{\mathcal{H}}_{s}} \sum_{j=1}^{n_{T}}\left(f(x_{T,j})-y_{T,j}\right)^{2}+n_{T}\lambda_{*}\|f\|_{ \mathcal{H}}^{2}. \tag{6}\]
Since \(\hat{\mathcal{H}}_{s}\) is \(s-\)dimensional, it can be treated as a standard regularized regression in \(\mathbb{R}^{s}\), allowing us to obtain a closed-form expression for \(\hat{f}_{T,\lambda_{*}}\). The specific procedures for computing \(\hat{\mathcal{H}}_{s}\) and \(\hat{f}_{T,\lambda_{*}}\) in closed-form from data are elaborated upon in Section 5.
## 4 Theoretical Analysis
### Regularity Assumptions
We now detail the list of required assumptions for each task. For \(i\in[N]\cup\{T\}\), we define the covariance operator for task \(i\) as \(\Sigma_{i}:=\mathbb{E}_{X\sim\mu_{i}}[\phi(X)\otimes\phi(X)]\). It is a self-adjoint semi-definite trace-class operator on \(\mathcal{H}\), thereby admitting an eigenvalue decomposition with non-negative real eigenvalues \(\lambda_{i,1}\geq\lambda_{i,2}\geq\ldots\geq 0\) and associated eigenvectors \(\sqrt{\lambda_{i,1}}e_{i,1},\sqrt{\lambda_{i,2}}e_{i,2}\ldots\in\mathcal{H}\).
**Assumption 3**.: _For \(i\in[N]\), the kernel \(K\) and the marginal distribution \(\mu_{i}\) are such that the eigenvalues of the covariance operator \(\Sigma_{i}\) satisfy a polynomial decay of order \(1/p\), i.e. for some constant \(c>0\) and \(0<p\leq 1\), and for all \(j\geq 1\), \(\lambda_{i,j}\leq cj^{-1/p}\)._
The assumption on the decay rate of the eigenvalues is typical in the risk analysis for kernel ridge regression, see e.g., (Fischer and Steinwart, 2020; Caponnetto and De Vito, 2007).
**Assumption 4**.: _There exist \(\alpha\in(0,1]\) and \(k_{\alpha,\infty}\geq 0\), such that, for any task \(i\in[N]\), for \(\mu_{i}-\)almost all \(x\in\mathcal{X}\),_
\[\sum_{j\geq 1}\lambda_{i,j}^{\alpha}e_{i,j}^{2}(x)\leq k_{\alpha,\infty}^{2}.\]
This assumption is known as an _embedding property_ (into \(L_{\infty}\), see Fischer and Steinwart (2020)), and is a regularity condition on the pair \(K,\mu_{i}\). In particular, let \(T_{K}\doteq\sum_{j}\lambda_{i,j}\ e_{i,j}\otimes_{L_{2}(\mu_{i})}e_{i,j}\) denote the _integral operator_\(L_{2}(\mu_{i})\mapsto L_{2}(\mu_{i})\) induced by \(K\), then the assumption characterizes the smallest \(\alpha\) such that \(\operatorname{range}T_{K}^{\alpha/2}\) may be continuously embedded into \(L_{\infty}(\mu_{i})\); since, as is well-known for continuous kernels, \(\operatorname{range}T_{K}^{1/2}\equiv\mathcal{H}\), the assumption holds for \(\alpha=1\) whenever \(K\) is bounded. Note that the _interpolation spaces_\(\operatorname{range}T_{K}^{\alpha/2}\) only get larger as \(\alpha\to 0\), eventually coinciding with the closure of \(\operatorname{span}\{e_{i,j}\}_{j\geq 1}\) in \(L_{2}(\mu_{i})\). Furthermore, Assumption 4 implies Assumption 3 with \(p=\alpha\)(Fischer and Steinwart, 2020, Lemma 10) and hence we assume \(p\leq\alpha\) in the following.
As alluded to in the introduction, we remark that, as it turns out, \(\alpha\) has no direct benefit for vanilla regression in our _well-specified_ setting with \(f_{i}\in\mathcal{H}\), but is beneficial in meta-learning as we will see (see Corollary 1 and Remark 8 thereafter).
**Assumption 5**.: _There exist \(r\geq 0\) and \(R\geq 0\), such that for \(i\in[N]\), the regression function \(f_{i}\) associated with \(\mu_{i}\) is an element of \(\mathcal{H}\) and satisfies \(\|\Sigma_{i}^{-r}f_{i}\|_{\mathcal{H}}\eqqcolon R<\infty\)._
This assumption is standard in the statistical analysis of regularized least-squares algorithms (Caponnetto and De Vito, 2007), which imposes a smoothness assumption on the regression function of each source task.
**Remark 5**.: _It is important to emphasize that the various regularity conditions Assumptions 3, 4, and 5 only concern the source tasks towards nonlinear meta-learning. We will soon see in Section 4.2 that they are complementary in ensuring enough **smoothness** of the source regression functions to allow for sufficient **under-regularization** to take advantage of the aggregate of \(N\) source samples. Thus, the main assumption on the target task is simply that it shares the same nonlinear representation as the source tasks._
Finally, to control the noise we assume the following.
**Assumption 6**.: _There exists a constant \(Y_{\infty}\geq 0\) such that for \(i\in[N]\cup\{T\}\): \(\mu_{i}(\mathcal{X}\times[-Y_{\infty},Y_{\infty}])=1\)._
### Main results
**Theorem 1**.: _Under Assumptions 1, 2 and 6 with \(s\geq 1\) and \(Y_{\infty}>0\), for \(\tau\geq 2.6\), \(0<\lambda_{*}\leq 1\) and_
\[n_{T}\geq 6\kappa^{2}\lambda_{*}^{-1}\left(\tau+\log(s)\right), \tag{7}\]
_with \(\mu_{T}^{n_{T}}\)-probability not less than \(1-4e^{-\tau}\) and conditionally on \(\mathcal{D}_{i}=\{(x_{i,j},y_{i,j})_{j=1}^{2n}\},i\in[N]\)_
\[\mathcal{E}_{\mu_{T}}(\hat{f}_{T,\lambda_{*}})\leq c_{1}\left\{\sqrt{\frac{\tau s}{n_{T}}}+\frac{\tau}{n_{T}\sqrt{\lambda_{*}}}+\sqrt{\lambda_{*}}+\sqrt{1+\lambda_{*}}\left\|\hat{P}_{\perp}P\right\|\right\},\]
_where \(\hat{P}_{\perp}:=I_{\mathcal{H}}-\hat{P}\) and \(c_{1}\) is a constant that only depends on \(Y_{\infty},\|f_{T}\|_{\mathcal{H}}\), and \(\kappa\). Hence, if we take \(\lambda_{*}\) of the order \(n_{T}^{-1}\), conditionally on \(\mathcal{D}_{i}=\{(x_{i,j},y_{i,j})_{j=1}^{2n}\},i\in[N]\),_
\[\mathcal{E}_{\mu_{T}}(\hat{f}_{T,\lambda_{*}})=O_{p}\left(\sqrt{\frac{s}{n_{T }}}+\big{\|}\hat{P}_{\perp}P\big{\|}\right). \tag{8}\]
Theorem 1, specifically Eq. (8), reveals that the excess risk for the target task consists of two components: the \(\sqrt{s/n_{T}}\) part and the part \(\big{\|}\hat{P}_{\perp}P\big{\|}\) arising from errors in the pre-training stage. In the upcoming Proposition 1 and Theorem 2, we will see that the pre-training error takes the form \(O(1/\sqrt{n^{ra}N^{rb}})\) for some constants \(a,b>0\) and \(r\) from Assumption 5. In other words, if either \(N\) (number of tasks) or \(n\) (number of data within each task) is sufficiently large, we can guarantee that the excess risk decays at the optimal parametric rate \(O\left(\sqrt{s/n_{T}}\right)\), a rate achieved only by performing linear regression in a space of dimension \(s\).
The quantity \(\big{\|}\hat{P}_{\perp}P\big{\|}\) can be shown to be equal to the \(\sin\)-\(\Theta\) distance (a distance on the Grassmannian \(\operatorname{Gr}(s,\mathcal{H})\), representing the space of all \(s-\)dimensional subspaces of \(\mathcal{H}\)) between \(\mathcal{H}_{s}\) and \(\hat{\mathcal{H}}_{s}\)(Stewart and Sun, 1990; Von Luxburg, 2007; Rohe et al., 2011). We can relate this discrepancy between \(\mathcal{H}_{s}\) and \(\hat{\mathcal{H}}_{s}\) to the difference between \(C_{N}\) and \(\hat{C}_{N,n,\lambda}\). This relationship is an instance of the Davis-Kahan theorem (Davis and Kahan, 1970) which we note was established generally for linear operators, including compact infinite-dimensional operators. However, a naive application of the earlier inequality of (Davis and Kahan, 1970) would involve eigenvalues of both operators \(C_{N}\) and \(\hat{C}_{N,n,\lambda}\), the latter being a random object; we therefore instead rely on a more recent version of Mollenhauer (2021) which only relies on eigenvalues of population operators (and extends earlier work of Yu et al. (2015) that treats the case of matrices). The following proposition instantiates such a result.
**Proposition 1** (Population Davis-Kahan).: _Given \(C_{N}\) and \(\hat{C}_{N,n,\lambda}\) defined in Eq. (2,3), with \(\gamma_{1}\) and \(\gamma_{s}\) respectively the largest and smallest nonzero eigenvalues of \(C_{N}\), \(P\) the orthogonal projection onto \(\mathcal{H}_{s}\) and \(\hat{P}\) the orthogonal projection onto \(\hat{\mathcal{H}}_{s}\), then_
\[\|\hat{P}_{\perp}P\|_{HS}\leq 2\gamma_{s}^{-2}(2\gamma_{1}+\big{\|}\hat{C}_{N,n,\lambda}-C_{N}\big{\|})\big{\|}\hat{C}_{N,n,\lambda}-C_{N}\big{\|}_{HS}. \tag{9}\]
Proof.: Apply Corollary A.4.5 of (Mollenhauer, 2021) where by Assumptions 1 and 2, \(C_{N}\) has rank \(s\).
Recall that the operator norm \(\|\cdot\|\) is dominated by the Hilbert-Schmidt norm \(\|\cdot\|_{HS}\); in particular, for sufficiently large \(N\) and \(n\), Eq. (9) is dominated by \(\|\hat{C}_{N,n,\lambda}-C_{N}\|_{HS}\). The following result is a bound on the term \(\|\hat{C}_{N,n,\lambda}-C_{N}\|_{HS}\), and is a main cornerstone of our analysis.
**Theorem 2**.: _Let Assumptions 1, 2, 3, 4, 5 and 6 hold with parameters \(s\geq 1\), \(c\geq 0\), \(p\in(0,1]\), \(\alpha\in[p,1],k_{\alpha,\infty}\geq 0\), \(R\geq 0,r\in(0,1]\) and \(Y_{\infty}>0\). For \(N\geq\tau\geq\ln(2)\), \(n\geq 1\), and \(0<\lambda\leq 1\), we have with probability at least \(1-e^{-\tau}\) over the randomness in the source samples that_
\[\|\hat{C}_{N,n,\lambda}-C_{N}\|_{HS}\leq c_{2}\Bigg{(}\underbrace{\frac{1}{\lambda\sqrt{nN}}\left(1+\frac{1}{\lambda\sqrt{n}}\right)\sqrt{\tau}}_{Variance}+\underbrace{\lambda^{r}\left(1+\frac{1}{\lambda\sqrt{n}}\min\left\{1,\frac{1}{\sqrt{n}\lambda^{\alpha+p}}+\frac{1}{n^{3/2}\lambda^{2\alpha}}\right\}\right)}_{Bias}\Bigg{)}, \tag{10}\]
_where \(c_{2}\) only depends on \(Y_{\infty}\), \(R\), \(\kappa\), \(p\), \(c\) and \(k_{\alpha,\infty}\)._
**Remark 6** (Further smoothness \(r>0\)).: _While in usual analyses, risk convergence in \(L_{2}\) norm is assured for \(r=0\) (implying that the regression function is in the hypothesis space \(\mathcal{H}\)), we require further smoothness on source regression functions (i.e., \(r>0\)) to guarantee the above rate. This is because the result relies on convergence of regression estimates in **the stronger RKHS norm** rather than in \(L_{2}\) norm, as the above \(\|\cdot\|_{HS}\) and projections are defined w.r.t. the RKHS itself._
The next corollary which is our main result, further clarifies the role of source tasks in speeding up target regression rates. The corollary follows by optimizing over \(\lambda\) in Theorem 2 (see Proposition 9 in Appendix A.4), and applying Eq. (8) from Theorem 1 along with Proposition 1.
**Corollary 1** (Main Result).: _Assume the conditions of Theorem 1 and Theorem 2, and further let for \(N\geq\tau\geq 2.6\), \(n\geq 1\) and \(\lambda_{*}=C\max\{\log(s),1\}n_{T}^{-1}\) where \(C\) is a constant depending on \(\kappa\) and \(\tau\) such that Eq. (7) is satisfied. With probability \(1-5e^{-\tau}\) over the randomness in both the source and target tasks, we have the following regimes of rates for a constant \(c_{3}\) that only depends on \(Y_{\infty}\), \(R\), \(\kappa\), \(\gamma_{1}\), \(\gamma_{s}\), \(p\), \(c\), \(\tau\), \(\|f_{T}\|_{\mathcal{H}}\) and \(k_{\alpha,\infty}\)._
_A. For_ **small number of tasks**_\(N\leq n^{r}\), for a choice of \(\lambda=(nN)^{-\frac{1}{2(1+r)}}\), we have_
\[\mathcal{E}_{\mu_{T}}(\hat{f}_{T,\lambda_{*}})\leq c_{3}\tau\left\{\sqrt{ \frac{s}{n_{T}}}+(nN)^{-\frac{r}{2(1+r)}}\right\};\]
_B. For_ **large number of tasks**_\(N\geq n^{r}\), gains further depend on the **smoothness** of the kernel itself, i.e., on the kernel source condition and eigenvalue decay parameters \(\alpha\) and \(p\):_
1. \(\alpha+p>1\)_: for a choice of_ \(\lambda=n^{-\frac{1}{2}}\)_, we have_ \[\mathcal{E}_{\mu_{T}}(\hat{f}_{T,\lambda_{*}})\leq c_{3}\tau\left\{\sqrt{ \frac{s}{n_{T}}}+n^{-\frac{r}{2}}\right\};\]
2. \(\alpha+p\leq 1\)_: we again have two cases, now depending on how large_ \(N\) _is:_
    a. \(N\leq n^{\frac{2(r+2)}{\alpha+p+1}-2}\)_: for a choice of_ \(\lambda=(n^{2}N)^{-\frac{1}{2(r+2)}}\) _we have_ \[\mathcal{E}_{\mu_{T}}(\hat{f}_{T,\lambda_{*}})\leq c_{3}\tau\left\{\sqrt{\frac{s}{n_{T}}}+(n^{2}N)^{-\frac{r}{2(r+2)}}\right\};\]
    b. \(N\geq n^{\frac{2(r+2)}{\alpha+p+1}-2}\)_: for a choice of_ \(\lambda=n^{-\frac{1}{\alpha+p+1}}\) _we have_ \[\mathcal{E}_{\mu_{T}}(\hat{f}_{T,\lambda_{*}})\leq c_{3}\tau\left\{\sqrt{\frac{s}{n_{T}}}+n^{-\frac{r}{\alpha+p+1}}\right\}.\]
Regimes of Gain.Of particular interest are regimes where the final target regression rate is of the fastest parametric form \(\sqrt{s/n_{T}}\): such regimes witness the benefit of source tasks since otherwise target rates would be of a much slower nonparametric form--e.g., \(O(n_{T}^{-1/4})\), see (Caponnetto and De Vito, 2007)--given that no further regularity condition is imposed on the target distribution outside of \(f_{T}\in\mathcal{H}\) sharing the same representation \(\mathcal{H}_{s}\) as source tasks. _Thus, in fact we already gain whenever the final rate is \(o(n_{T}^{-1/4})\)_.
However, we will be interested in _regimes where the gain is most considerable_ in that the source tasks permit a final meta-learning rate of \(\mathcal{E}_{\mu_{T}}(\hat{f}_{T,\lambda_{*}})\lesssim\sqrt{s/n_{T}}\); Corollary 1 displays various such regimes according to the number of source tasks \(N\), the number of samples per task \(n\), and the _niceness_ parameters \(r\), \(\alpha\) and \(p\). While it is clear that larger \(r\) indicates _smoother_ source regression functions \(f_{i}\) as viewed from within the RKHS \(\mathcal{H}\), smaller parameters \(\alpha+p\) can on the other hand be understood as a _smoothness level_ of the RKHS \(\mathcal{H}\) itself--e.g., consider a Sobolev space \(\mathcal{H}\) of \(m\)-smooth functions, then we may take \(\alpha+p\propto 1/m\) (see Example 3). Thus, the smoother the source tasks, viewed under \(r\) and \(\alpha+p\), the faster the rates we expect, since our approach aims at reducing the bias in each individual task, which is easiest under smoothness (see Remark 7 below).
\(\bullet\) To better illustrate such regimes of gain, let's first consider situations where the number of samples per task are roughly the same across source and target, i.e., \(n\propto n_{T}\). Then:
* Under high smoothness \(r=1\), we always gain, provided the number of tasks \(N\gtrsim n=n^{r}\).
* If furthermore, \(\alpha+p\) is sufficiently small (see B.2) then we gain under even less smoothness on \(f_{i}\)'s, i.e., for \(r<1\); for instance, for \(\alpha+p\approx 0\), we get from B.2.a. that it is sufficient that \(n^{(2-r)/r}\leq N\leq n^{2r+2}\), requiring only \(f_{i}\) smoothness \(1/2\leq r\leq 1\); for larger \(N\geq n^{\frac{2(r+2)}{\alpha+p+1}-2}\), then it is sufficient to have \(f_{i}\) smoothness \(r\geq\frac{\alpha+p+1}{2}\).
* Note that, from B.2.b. there is no further improvement from larger \(N\gg n^{\frac{2(r+2)}{\alpha+p+1}-2}\); this is due to _saturation_ effects discussed in Remark 9. The most gain, i.e., fastest learning rate \(O\left(\sqrt{s/n_{T}}+n^{-\frac{r}{\alpha+p+1}}\right)\) is observed in this regime; for instance, under maximal _smoothness_\(r=1\) together with \(\alpha+p\approx 0\), the source term is of the form \(n^{-1}\) allowing in fact fewer samples \(n=O(\sqrt{n_{T}})\) per source dataset.
Finally, notice that in the entire regime \(n\propto n_{T}\), we require \(r\geq 1/2\) to have considerable gain.
\(\bullet\) More generally, for large number of samples per source tasks, i.e., for \(n\gg n_{T}\), regimes of gain are less restricted as can be read off of Corollary 1; for example, in case B.1, for any \(0<r\leq 1\), meta-learning provides a gain whenever \(n\gtrsim n_{T}^{1/r}\); in particular, for \(r\geq 1/2\), we would just need \(n\gtrsim n_{T}^{2}\). This is because the samples in each task are then large enough to already provide sufficient variance reduction, provided proper _under_ regularization.
**Remark 7** (Under-regularization/Overfitting).: _We note that in the regime of small samples per task, i.e., \(n\propto n_{T}\), we have to **overfit** the regression estimates in each source task, i.e., set \(\lambda\) lower than would have been prescribed for optimal regression; as discussed earlier in the introduction, this is because in these small sample regimes, we incur high bias per task which cannot be averaged out; under-regularization purposefully reduces such bias at the cost of increased per-task variance; the final variance however may then be reduced by the aggregate of tasks._
_More precisely, under the above discussed regimes of gain, rather than an optimal regression choice of \(\lambda^{*}\asymp n^{-\frac{1}{2r+1+p}}\) (see e.g. (Fischer and Steinwart, 2020, Theorem 1)), we set \(\lambda=n^{-1/2}\) in cases A, and B.1, while in cases B.2 (where \(\alpha+p\leq 1\)), we set \(n^{-\frac{1}{\alpha+p+1}}\leq\lambda\leq n^{-1/2}\) (solving for the range of \(N\) in settings of \(\lambda\)). Since in all regimes of gain we have \(r\geq 1/2\) as remarked above, we see that all these choices fall below \(\lambda^{*}\)._
**Remark 8** (Regularity beyond regression).: _Notice that \(\lambda^{*}\asymp n^{-\frac{1}{2r+1+p}}\) has no direct dependence on \(\alpha\): lower values of \(0<\alpha\leq 1\) yield no further benefit in regression once we assume \(f_{i}\in\mathcal{H}\) as opposed to the misspecified setting where \(f_{i}\) lies outside \(\mathcal{H}\). However, since we always have \(p\leq\alpha\), there is an indirect benefit. Yet, in situations where \(p\neq\alpha\), especially as we see that the combined quantity \(\alpha+p\) is most relevant in our rates, we see that meta-learning can benefit from smoothness beyond usual saturation points in regression._
**Remark 9** (Saturation on \(N\)).: _Corollary 1 shows that, in some instances, the final learning rate is independent of \(N\) (e.g., cases B.1 & B.2.b): increasing the number of tasks \(N\) in these cases does not further improve the learning rate. This behaviour may be explained from Eq.(10): \(N\) exclusively affects the variance term2. As such, once the variance reaches the level of the bias, a further increase in \(N\) can no longer improve the rate._
Footnote 2: Refer to Eq.(36) in Appendix A.3 for the detailed variance-bias split.
Characterizing \(\alpha+p\).As discussed above, smaller parameters \(\alpha+p\) yield faster meta-learning rates. The next examples yield some insights on situations with small \(\alpha+p\). Throughout, recall that by (Fischer and Steinwart, 2020, Lemma 10) we have \(p\leq\alpha\), i.e., \(p=\alpha\) is always admissible.
**Example 1** (Finite-dimensional kernels).: _Suppose \(\mathcal{H}\) is finite dimensional, i.e., the covariance operators \(\Sigma_{i}\) each admit a finite number of eigenfunctions \(e_{i,j},j=1,2,\ldots,k\) for some \(k\). Then, if the eigenfunctions \(e_{i,j}(x)\) are bounded, Assumption 4 holds for any arbitrarily small \(\alpha>0\) (recall \(p\leq\alpha\)). This is the case for instance for polynomial kernels \(K(x,x^{\prime})\doteq(x^{\top}x^{\prime}+c)^{m}\), on compact domains \(\mathcal{X}\subset\mathbb{R}^{d}\) by continuity of
such kernels (implying continuity of all functions in \(\mathcal{H}\) including \(e_{i,j}\)). Note that, since polynomial regression converges at rate \(O(\sqrt{d^{m}/n_{T}})\) (see for example Ghorbani et al., 2021; Chen and Meka, 2020; Andoni et al., 2014; Zippel, 1979), a gain from meta-learning holds true once the nonlinear representation \(\mathcal{H}_{s}\) is of dimension \(s\ll d^{m}\)._
**Example 2** (Gaussian kernel).: _Let \(\mathcal{X}\subset\mathbb{R}^{d}\) be a bounded set with Lipschitz boundary2, \(\mu\) a distribution supported on \(\mathcal{X}\times\mathbb{R}\), with marginal distribution uniform on \(\mathcal{X}\) and let \(K\) be a Gaussian kernel. Then by (Kanagawa et al., 2018, Corollary 4.13), Assumption 4 is satisfied with any \(\alpha\in(0,1]\), implying that Assumption 3 is also satisfied with any \(p\in(0,1]\)._
Footnote 2: For the definition of Lipschitz boundary see (Kanagawa et al., 2020, Definition 3), as an example any bounded convex set has Lipschitz boundary.
**Example 3** (Sobolev spaces and Matern kernels).: _Let \(\mathcal{X}\subset\mathbb{R}^{d}\), be a non-empty, open, connected, and bounded set with a \(C_{\infty}-\)boundary. Let \(\mu\) be a distribution supported on \(\mathcal{X}\times\mathbb{R}\), with marginal equivalent to the Lebesgue measure on \(\mathcal{X}\). Choose a kernel which induces a Sobolev space \(\mathcal{H}_{m}\) of smoothness \(m\in\mathbb{N}\) with \(m>d/2\), such as the Matern kernel_
\[K\left(x^{\prime},x\right)=\frac{1}{2^{m-d/2-1}\Gamma(m-d/2)}\left(\sqrt{2(m-d /2)}\left\|x^{\prime}-x\right\|\right)^{m-d/2}\mathcal{K}_{m-d/2}\left(\sqrt{2 (m-d/2)}\left\|x^{\prime}-x\right\|\right),\quad x,x^{\prime}\in\mathcal{X},\]
_where \(\mathcal{K}_{m-d/2}\) is the modified Bessel function of the second kind of order \(m-d/2\) and \(\Gamma\) is the Gamma function (see e.g., Kanagawa et al. (2018) Examples 2.2 and 2.6). Then by (Fischer and Steinwart, 2020, Corollary 5), Assumption 3 is satisfied with \(p=\frac{d}{2m}\) and Assumption 4 is satisfied for every \(\alpha\in(\frac{d}{2m},1]\)._
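As a purely illustrative aside (our addition, not from the original text), the displayed Matern kernel can be evaluated numerically with SciPy's modified Bessel function; the sketch below assumes a unit length-scale and \(m>d/2\), and handles the \(\|x-x^{\prime}\|\to 0\) limit explicitly.

```python
import numpy as np
from scipy.special import gamma, kv

def matern_sobolev_kernel(x, y, m, d):
    """Matern kernel of Example 3 (unit length-scale), inducing a Sobolev space of order m > d/2."""
    nu = m - d / 2.0
    r = np.linalg.norm(np.asarray(x, float) - np.asarray(y, float))
    if r == 0.0:
        return 1.0                                  # limiting value of the expression as r -> 0
    z = np.sqrt(2.0 * nu) * r
    return 2.0 ** (1.0 - nu) / gamma(nu) * z ** nu * kv(nu, z)

# For example, d = 3 and m = 2 give p = d / (2 m) = 0.75 and any alpha in (0.75, 1].
value = matern_sobolev_kernel([0.1, 0.2, 0.0], [0.0, 0.0, 0.0], m=2, d=3)
```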
## 5 Instantiation in Data Space
In this section, we delve into the intricacies of the steps outlined in Section 3 to offer a comprehensive understanding of the process. Specifically, we elaborate on the computation of the right singular vectors of \(\hat{C}_{N,n,\lambda}\), which plays a crucial role in constructing \(\hat{\mathcal{H}}_{s}\). Additionally, we provide insights into the projection of new data points onto \(\hat{\mathcal{H}}_{s}\), which is essential during the inference phase. By elucidating these key aspects, we aim to enhance the clarity and grasp of the methodology involved in our meta-learning approach. We emphasize that such instantiations were not provided for kernel classes in the nonlinear settings addressed by Maurer et al. (2016); Du et al. (2021); given the nonconvexity of the loss (Eq. (1) in both papers), this task would be nontrivial.
**Singular Value Decomposition of \(\hat{C}_{N,n,\lambda}\)**. We start by explaining how we can compute the SVD of \(\hat{C}_{N,n,\lambda}\) in closed form from data. Let us denote by \(\hat{V}_{s}=[\hat{v}_{1},\ldots,\hat{v}_{s}]\) the right singular vectors as columns and by \(\hat{U}_{s}=[\hat{u}_{1},\ldots,\hat{u}_{s}]\) the left singular vectors as columns, both associated to the \(s-\)largest singular values. The next proposition shows that \((\hat{U}_{s},\hat{V}_{s})\) can be obtained through the solution of a generalized eigenvalue problem associated to the matrices \(J,Q\in\mathbb{R}^{N\times N}\) such that for \((i,j)\in[N]^{2}\),
\[J_{i,j} =(\hat{f}_{i},\hat{f}_{j})_{\mathcal{H}}=nY_{i}^{\top}\left(K_{i }+n\lambda I_{n}\right)^{-1}K_{ij}\left(K_{j}+n\lambda I_{n}\right)^{-1}Y_{j},\] \[Q_{i,j} =(\hat{f}_{i}^{\prime},\hat{f}_{j}^{\prime})_{\mathcal{H}}=n(Y_{i }^{\prime})^{\top}\left(L_{i}+n\lambda I_{n}\right)^{-1}L_{ij}\left(L_{j}+n \lambda I_{n}\right)^{-1}Y_{j}^{\prime},\]
where for tasks \((i,j)\in[N]^{2}\), \(K_{i,j},L_{i,j}\in\mathbb{R}^{n\times n}\) are the cross Gram matrices such that \((K_{i,j})_{l,m}=K(x_{i,l},x_{j,m})\), \((l,m)\in[n]\) and \((L_{i,j})_{l,m}=K(x_{i,l},x_{j,m})\), \((l,m)\in[n+1:2n]\).
**Proposition 2**.: _Consider the generalized eigenvalue problem which consists of finding generalized eigenvectors \((\alpha^{\top},\beta^{\top})^{\top}\in\mathbb{R}^{2N}\) and generalized eigenvalues \(\gamma\in\mathbb{R}\) such that_
\[\begin{bmatrix}0&QJ\\ JQ&0\end{bmatrix}\begin{bmatrix}\alpha\\ \beta\end{bmatrix}=\gamma\begin{bmatrix}Q&0\\ 0&J\end{bmatrix}\begin{bmatrix}\alpha\\ \beta\end{bmatrix}\]
_Define \(A:=[\hat{f}_{1}^{\prime},\dots,\hat{f}_{N}^{\prime}]\) and \(B:=[\hat{f}_{1},\dots,\hat{f}_{N}]\) and let \(\{(\hat{\alpha}_{i}^{\top},\hat{\beta}_{i}^{\top})^{\top}\}_{i=1}^{s}\) be the generalized eigenvectors associated to the \(s\)-largest generalized eigenvalues of the above problem and re-normalized such that \(\alpha_{i}^{\top}Q\alpha_{i}=\beta_{i}^{\top}J\beta_{i}=1,i\in[s]\). Then the top-s left and right singular vectors of \(\hat{C}_{N,n,\lambda}\) satisfy_
\[\hat{u}_{i}=A\hat{\alpha}_{i}=\sum_{j=1}^{N}(\alpha_{i})_{j}\hat{f}_{j}^{ \prime},\quad\hat{v}_{i}=B\hat{\beta}_{i}=\sum_{j=1}^{N}(\hat{\beta}_{i})_{j} \hat{f}_{j},\quad i\in[s].\]
_Hence,_
\[\hat{\mathcal{H}}_{s}=\operatorname{span}\{\hat{v}_{1},\dots,\hat{v}_{s}\}= \operatorname{span}\{B\hat{\beta}_{1},\dots,B\hat{\beta}_{s}\}.\]
See proof on page 33. Generalized eigenvalue problems can be solved straightforwardly in most scientific libraries such as Matlab or SciPy.
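For illustration, here is a minimal SciPy sketch of Proposition 2 (our addition, not from the original). It assumes the matrices \(J\) and \(Q\) have already been assembled from the fitted source regressors via the formulas above; the function name and the treatment of possible rank deficiency are illustrative choices.

```python
import numpy as np
from scipy.linalg import eig

def top_s_singular_coefficients(J, Q, s):
    """Generalized eigenproblem of Proposition 2; returns the coefficient pairs (alpha_i, beta_i).
    J[i, j] = <f_hat_i, f_hat_j>_H and Q[i, j] = <f_hat'_i, f_hat'_j>_H are assumed given."""
    N = J.shape[0]
    A = np.block([[np.zeros((N, N)), Q @ J],
                  [J @ Q, np.zeros((N, N))]])
    B = np.block([[Q, np.zeros((N, N))],
                  [np.zeros((N, N)), J]])          # a tiny ridge on B helps if J or Q is singular
    w, V = eig(A, B)                               # generalized eigenpairs A v = w B v
    order = np.argsort(-w.real)                    # sort by decreasing generalized eigenvalue
    coeffs = []
    for k in order[:s]:
        alpha, beta = V[:N, k].real, V[N:, k].real
        alpha /= np.sqrt(alpha @ Q @ alpha)        # re-normalize so alpha^T Q alpha = 1
        beta /= np.sqrt(beta @ J @ beta)           # and beta^T J beta = 1
        coeffs.append((alpha, beta))
    return coeffs
```

The returned \(\hat{\beta}_{i}\) coefficients define \(\hat{v}_{i}=B\hat{\beta}_{i}\) and hence \(\hat{\mathcal{H}}_{s}\).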
**Projection into \(\hat{\mathcal{H}}_{s}\) and inference.** Next we explain how we can project a new point into \(\hat{\mathcal{H}}_{s}\) and perform inference on such representations. The projection onto \(\hat{\mathcal{H}}_{s}\) satisfies \(\hat{P}=\hat{V}_{s}\hat{V}_{s}^{*}\). A new point \(x\in\mathcal{X}\) can be projected into \(\hat{\mathcal{H}}_{s}\) as \(\hat{P}\phi(x)\) and identified to \(\mathbb{R}^{s}\) via
\[\tilde{x}=\hat{V}_{s}^{*}\phi(x)=(\langle\hat{v}_{1},\phi(x)\rangle_{\mathcal{H}},\dots,\langle\hat{v}_{s},\phi(x)\rangle_{\mathcal{H}})^{\top}=(\hat{v}_{1}(x),\dots,\hat{v}_{s}(x))^{\top}\in\mathbb{R}^{s}. \tag{11}\]
By Proposition 2, \(\tilde{x}\) can be computed as
\[\tilde{x}_{i}=\hat{v}_{i}(x)=\langle\hat{v}_{i},\phi(x)\rangle_{\mathcal{H}}=\langle B\hat{\beta}_{i},\phi(x)\rangle_{\mathcal{H}}=\hat{\beta}_{i}^{\top}B^{*}\phi(x),\quad i\in[s],\]
where \(B^{*}\phi(x):=(\hat{f}_{1}(x),\dots,\hat{f}_{N}(x))^{\top}\in\mathbb{R}^{N}\). Recall that after pre-training, at inference, we receive a target task dataset \(\mathcal{D}_{T}=\{(x_{T,j},y_{T,j})_{j=1}^{n_{T}}\}\in(\mathcal{X}\times \mathbb{R})^{n_{T}}\). We denote by \((\tilde{x}_{T,j})_{j=1}^{n_{T}}\in(\mathbb{R}^{s})^{n_{T}}\) the embedding of the covariates into \(\hat{\mathcal{H}}_{s}\) according to Eq. (11) and by \(X_{T}:=[\tilde{x}_{T,1},\dots,\tilde{x}_{T,n_{T}}]\in\mathbb{R}^{s\times n_{T}}\) the data matrix that collects the embedded points as columns, \(K_{T}:=X_{T}^{\top}X_{T}\in\mathbb{R}^{n_{T}\times n_{T}}\) is the associated Gram matrix and \(n_{T}^{-1}X_{T}X_{T}^{\top}\in\mathbb{R}^{s\times s}\) the associated empirical covariance.
**Proposition 3**.: \(\hat{f}_{T,\lambda_{*}}=\hat{V}_{s}\beta_{T,\lambda_{*}}\)_, where_
\[\beta_{T,\lambda_{*}} \coloneqq\operatorname*{arg\,min}_{\beta\in\mathbb{R}^{s}}\sum_{j=1}^{n_{T}}\left(\beta^{\top}\tilde{x}_{T,j}-y_{T,j}\right)^{2}+n_{T}\lambda_{*}\|\beta\|_{2}^{2}=(X_{T}X_{T}^{\top}+n_{T}\lambda_{*}I_{s})^{-1}X_{T}Y_{T}\] \[=X_{T}(K_{T}+n_{T}\lambda_{*}I_{n_{T}})^{-1}Y_{T}.\]
\(Y_{T}:=(y_{T,1},\dots,y_{T,n_{T}})^{\top}\in\mathbb{R}^{n_{T}}\)_. For all \(x\in\mathcal{X}\), \(\hat{f}_{T,\lambda_{*}}(x)=\beta_{T,\lambda_{*}}^{\top}\tilde{x}\)._
See proof on page 34.
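Continuing the sketch above (again an illustrative addition), the target-task step of Proposition 3 amounts to embedding each target covariate via Eq. (11) and running ridge regression in \(\mathbb{R}^{s}\). Here `source_predictors` is a hypothetical list of the fitted maps \(x\mapsto\hat{f}_{j}(x)\) and `betas` the coefficient vectors \(\hat{\beta}_{i}\) from Proposition 2; both names are placeholders.

```python
import numpy as np

def embed(x, source_predictors, betas):
    """Eq. (11): x_tilde[i] = beta_i^T (f_hat_1(x), ..., f_hat_N(x)), for i = 1..s."""
    fx = np.array([f(x) for f in source_predictors])        # B* phi(x) in R^N
    return np.array([b @ fx for b in betas])                 # coordinates of P_hat phi(x) in R^s

def fit_target(X_target, Y_target, source_predictors, betas, lam_star):
    """Proposition 3: ridge regression on the embedded target covariates."""
    n_T = len(Y_target)
    Xt = np.column_stack([embed(x, source_predictors, betas) for x in X_target])  # s x n_T
    s = Xt.shape[0]
    beta_T = np.linalg.solve(Xt @ Xt.T + n_T * lam_star * np.eye(s), Xt @ Y_target)
    return lambda x: beta_T @ embed(x, source_predictors, betas)
```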
## 6 Conclusion
We have addressed the problem of meta-learning with non-linear representations, providing theoretical guarantees for its effectiveness. Our study focuses on the scenario where the shared representation maps inputs nonlinearly into an infinite-dimensional RKHS. By leveraging smoothness of task-specific regression functions and employing careful regularization techniques, the paper demonstrates that biases introduced by the nonlinear representation can be mitigated. Importantly, the derived guarantees show that the convergence rates in learning the common representation can scale with the number of tasks, in addition to the number of samples per task. The analysis extends previous results obtained in the linear setting, highlighting the challenges and subtleties specific to the nonlinear case. The findings presented in this work open up several avenues for future research. These include: exploration of different types of nonlinear representations beyond RKHS, alternative subspace estimation techniques, and further refinement of trade-offs between bias and variance. |
2303.13656 | Non-Linear Signal Processing methods for UAV detections from a
Multi-function X-band Radar | This article develops the applicability of non-linear processing techniques
such as Compressed Sensing (CS), Principal Component Analysis (PCA), Iterative
Adaptive Approach (IAA) and Multiple-input-multiple-output (MIMO) for the
purpose of enhanced UAV detections using portable radar systems. The combined
scheme has many advantages and the potential for better detection and
classification accuracy. Some of the benefits are discussed here with a phased
array platform in mind, the novel portable phased array Radar (PWR) by Agile RF
Systems (ARS), which offers quadrant outputs. CS and IAA both show promising
results when applied to micro-Doppler processing of radar returns owing to the
sparse nature of the target Doppler frequencies. This shows promise in reducing
the dwell time and increase the rate at which a volume can be interrogated.
Real-time processing of target information with iterative and non-linear
solutions is possible now with the advent of GPU-based graphics processing
hardware. Simulations show promising results. | Mohit Kumar, Keith Kelly | 2023-03-13T02:40:45Z | http://arxiv.org/abs/2303.13656v1 | # Non-Linear Signal Processing methods for UAV detections from a Multi-function X-band Radar.
###### Abstract
This article develops the applicability of non-linear processing techniques such as Compressed Sensing (CS), Principal Component Analysis (PCA), Iterative Adaptive Approach (IAA) and Multiple-input-multiple-output (MIMO) for the purpose of enhanced UAV detections using portable radar systems. The combined scheme has many advantages and the potential for better detection and classification accuracy. Some of the benefits are discussed here with a phased array platform in mind, the novel portable phased array Radar (PWR) by Agile RF Systems (ARS), which offers quadrant outputs. CS and IAA both show promising results when applied to micro-Doppler processing of radar returns owing to the sparse nature of the target Doppler frequencies. This shows promise in reducing the dwell time and increase the rate at which a volume can be interrogated. Real-time processing of target information with iterative and non-linear solutions is possible now with the advent of GPU-based graphics processing hardware. Simulations show promising results.
Compressed Sensing radar processing, Iterative Adaptive Algorithm, Principal Component Analysis, X-band phased array radars, UAV
## I Introduction
The main goal of CS is to use optimization methods to recover a sparse signal from a small number of non-adaptive measurements. The radar measurements can be viewed as sparse in both time and Doppler space and are possibly sampled at sub-Nyquist rates, which breaks the relationship between the number of samples acquired and the perfect recovery of radar parameters like delay, velocity, and target angle. The recovery of essential micro-Doppler signatures from the UAV target through the sparse representation of the signal in the frequency domain and following optimization of the sparse signal's \(l_{1}\) norm using CS can improve the classification accuracy of various UAV targets. Additionally, MIMO-based virtual aperture formation can impart a better spatial resolution for the small spatial footprint UAV targets. IAA is another Doppler resolution enhancement technique that is considered in this article and it shows a promising application for UAV detections with few pulses. Prior to CS, filtering is accomplished using PCA-based decomposition into eigen sub-spaces to get rid of clutter contamination principally due to sidelobes pointing towards the ground. We develop a unified theory for the applicability of these non-linear processing methods and show their enhancements for better UAV detection using simulations. A common theoretical framework is developed for ease of understanding and applicability of these techniques.
For the US Air Force, Agile RF Systems (ARS) has finished developing a portable weather radar (PWR) system built on phased arrays and a four-quadrant architecture. It can be mounted on a roof or tower. It has a sealed radome that provides wind, rain, snow, hail, and sand protection. The CS, IAA and PCA methods elaborated in this article are developed with reference to this phased array design. Figure 1 depicts a conceptual representation of the various sub-sections of this phased array radar. The data from the quadrant-based four phased-array centers can be processed by the signal processor and backend processing algorithms implemented in servers. This radar is based on quadrant-level processing to implement a 4-channel MIMO architecture. This article makes use of this hardware platform, whose quadrant-wise aperture also supports MIMO-related enhancements, to demonstrate non-linear processing.
Recent advances in computational methods and increased computing capacity for real-time radar operations have greatly increased the use of non-linear processing in radars and communication. For radar applications, the design complexity is typically higher, and a variety of computational techniques can be used to achieve the desired properties [1]. Today's phased array radars can switch beams rapidly and are based on inertia-less electronic phase programmability for observing different directions. Such rapid observation capability calls for a software framework that can extract
Fig. 1: A conceptual representation of a MIMO quadrant phased array for PWR Weather radar system.
information from the least number of acquired samples and pulses, aiding in reducing the dwell time of the radar in a specific direction. This ultimately results in an overall increase in the rate at which targets can be revisited or surveilled. This article combines the power of non-linear CS, IAA and PCA to make this advance toward the next generation of radar processing, and develops a theoretical understanding by modeling the signal, clutter and noise spaces for non-linear processing.
Taking a peek into CS, if **x** is a sparse vector, we can try recovering it from the knowledge of observation vector **y** by solving the following optimization problem:
\[\operatorname*{arg\,min}_{x}||\textbf{x}||_{0}\quad\text{subject to}\quad\textbf{y}=\Theta\textbf{x} \tag{1}\]
This search is, however, NP-hard, so the \(l_{0}\) norm is typically replaced by its closest convex relaxation, the \(l_{1}\) norm [9]. The equation above can thus be reformulated as:
\[\operatorname*{arg\,min}_{x}||\textbf{x}||_{1}\quad\text{subject to}\quad\textbf{y}=\Theta\textbf{x} \tag{2}\]
where \(\Theta\) is the reconstruction matrix. Successful recovery depends on the incoherence of the sensing matrix as well as the sparsity of the initial vector **x**[9]. The literature offers a number of solutions to this optimization problem. In CS, basis pursuit is used to locate the sparse approximation of the incoming signal **x** in a dictionary. The Dantzig selector, basis pursuit denoising (BPDN), total variation (TV) minimization-based denoising, etc. are additional commonly used formulations for reliable data recovery from noisy measurements [8]. In the case of BPDN, the squared \(l_{2}\)-norm of the residual between the measurements **y** and \(\Theta\hat{x}\) should be less than or equal to \(\epsilon\) for the obtained solution.
\[\operatorname*{arg\,min}_{x}||\textbf{x}||_{1}\quad\text{subject to}\quad||\textbf{y}-\Theta\textbf{x}||_{2}^{2}\leq\epsilon \tag{3}\]
We can also solve BPDN in its Lagrangian form, which is an unconstrained optimization problem and can be rewritten as:
\[\hat{x}=\operatorname*{arg\,min}_{x}\;\lambda||\textbf{x}||_{1}+||\textbf{y}-\Theta\textbf{x}||_{2}^{2}. \tag{4}\]
The primal-dual interior-point technique and fixed-point continuation are two well-known algorithms that have been applied to the aforementioned equation. Algorithms for linear programming, such as the simplex algorithm known as BP-simplex and the interior-point algorithm known as BP-interior, can also be used to solve the optimization problem in equation 3. These are solvers for convex problems.
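As one concrete, deliberately simple possibility among the solvers mentioned above, the Lagrangian form in equation 4 can be minimized with iterative shrinkage-thresholding (ISTA). The sketch below is our illustrative addition for the real-valued case; it is not the specific routine used in this work, and the test data are placeholders.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(Theta, y, lam, n_iter=500):
    """ISTA for min_x lam*||x||_1 + ||y - Theta x||_2^2 (the Lagrangian BPDN of equation 4)."""
    L = 2.0 * np.linalg.norm(Theta, 2) ** 2       # Lipschitz constant of the quadratic term's gradient
    x = np.zeros(Theta.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * Theta.T @ (Theta @ x - y)     # gradient of ||y - Theta x||_2^2
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Tiny demo with a random Gaussian measurement matrix and a 3-sparse vector.
rng = np.random.default_rng(0)
Theta = rng.normal(size=(40, 120)) / np.sqrt(40)
x_true = np.zeros(120)
x_true[[7, 45, 90]] = [1.0, -0.7, 0.5]
y = Theta @ x_true + 0.01 * rng.normal(size=40)
x_hat = ista(Theta, y, lam=0.05)
```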
In this article, we seek to develop formulations of the MIMO, CS and PCA operations with PWR as the platform for UAV detections, which is a novel combination of non-linear processing techniques not explored before in the literature. The data from the four sub-aperture channels are processed by the signal processor and backend processing algorithms implemented in servers kept in an enclosed, thermally controlled chamber as part of the PWR radar hardware. To enable MIMO and CS-related enhancements, the signal processor implemented in the radar server would need modifications to incorporate MIMO, CS and PCA-related data processing.
These non-linear processing methods would eventually lead to the development of low-cost, power-efficient, and small-size radar systems that can scan faster and acquire larger volumes than traditional systems. Here we briefly present the evolution of these methods. Many previous works on CS methods allow recovery of sparse, under-sampled signals from random linear measurements [4]. In [5], the authors present Xampling as a sub-Nyquist framework for signal acquisition and processing of signals in a union of subspaces. We are not utilizing Xampling for analog-to-digital conversion; all processing techniques are applied after Nyquist-rate ADC conversion in fast time. [6] has used CS to enhance micro-Doppler signatures of drones; however, what is lacking is a common framework for understanding and evaluating other non-linear methods such as IAA, which our article presents along with a comparison of how IAA performs against CS. In [15], an optimal dwell time is evaluated for its effectiveness in capturing at least one full rotation of the blades. They comment on the total dwell time required but do not discuss the sampling rate requirement over the dwell time. The article [4] serves as a good introduction to and survey of compressed sensing. In [16], the authors analyze the number of samples required for perfect recovery under noiseless conditions. They devised a good theoretical framework, which we have extended to PCA and IAA under clutter and noise conditions. In [8], the authors summarize a whole set of optimization routines that can be used to reconstruct a signal using CS. The authors in [9] developed the beginning of a mathematical theory of super-resolution. They illustrated that one can super-resolve point sources with infinite precision, i.e., recover the exact locations and amplitudes, by solving a simple convex optimization problem, which can essentially be reformulated as a semi-definite program. This holds provided the distance between the sources meets a certain criterion. The article [10] describes a method that exploits the difference in the statistics of the returns from sea clutter and the target to improve detection performance. We instead selected PCA as the dominant approach to remove clutter echoes by suppressing clutter eigenvectors and also removing a few noise eigenvectors to enhance SNR. In [11], the authors discuss subspace space-time adaptive processing (STAP) algorithms to eliminate clutter; however, estimation of the clutter sub-space is a severe limitation.
This article explores using CS and IAA-based reconstruction of micro-Doppler for small UAV targets from fewer pulses, such that we do not lose the micro-Doppler characteristics needed for the detection and classification of these targets. Traditionally, using the Fourier transform on these fewer pulses degrades the resolution to such an extent that nearby micro-Doppler features cannot be identified. This aspect is simulated by comparing CS and IAA performance against FFT-based reconstruction, and the benefits can be readily observed. The MIMO formulation is also presented, which aids in better spatial resolution, needed to support the accurate localization of these small targets.
Figure 2 gives the basic conceptual processing steps needed for building up this system. As will become evident later, for CS-based recovery from a minimum number of pulses, the pulses must be transmitted in randomly selected elevation states (in the case of the PWR radar); thus we need a random ray (direction) selector to send out each pulse. At the receiver, we would need to segregate all the pulses belonging to a ray and process along the slow-time (pulse) axis for Doppler super-resolution. IAA, however, doesn't have this requirement, which is one of its advantages compared to CS. Uniform sampling, in the case of IAA, also aids in PCA-based clutter suppression, which would not work for a non-uniformly sampled received echo. In that case, we need to wait for all the pulses to go out in all directions (rays) before PCA and CS can start. In the case of IAA, however, we can start as soon as the echoes of one ray (direction) have been received.
There is an urgent need for faster scanning in a drone-detection radar system because these small objects are highly agile and maneuvering. A very fast update rate is required to surveil and track the full space for drones and swarms of drones and to keep an eye on their ever-changing activities and strategies. We need counter-UAS systems like PWR equipped with very fast scan strategies that use very few pulses in each direction and are still able to recover high-resolution Doppler features from detected drones. The non-linear processing techniques described here would aid PWR in achieving this goal.
### _Portable Weather Radar (PWR)_
All the techniques discussed in this article are being developed with reference to ARS PWR weather radar sensor. PWR is a flexible and agile radar due to the phase spin architecture and central Radar System Controller (RSC) [12]. This radar is based on a phased array design and is inherently very different from parabolic dish antenna radars like D3R [19, 20]. The radar located at the Greeley radar test facility is shown in Figure 3(a). The phase gradient that the phased array controller uses is coordinated by the RSC, along with the motor control for azimuth positioning and rotation. The FPGA logic in Software defined radio (SDR) has a programmable register interface that enables the RSC to change a broad range of operational radar parameters. The RSC uses alternate horizontal and vertical polarization to allow transmit pulses with a Linear Frequency Modulated Chirp (LMFC) waveform in the SDR. The Host Processor in the local cabinet receives the filtered radar returns from the multichannel receive hardware and processes them there.
The beamforming network and phased array antenna for PWR underwent extensive testing. Array calibration and aperture beam pattern data were collected to confirm expected aperture performance. An internal Radar Target Simulator (RTS) was created to test the complete functionality of the radar system with the calibrated aperture. The RTS was positioned for this test in the far field of the aperture (Figure 3 (b)). The PWR waveform was received by the RTS, a digital time delay was applied, and the result was transmitted back to the PWR while it was still in receive mode. An accurate estimate of the combined beam pattern of all four quadrants was obtained by determining the peak value of the returned waveform in PWR at each elevation. The two-way combined H-pol antenna pattern, measured with the help of the RTS, is shown in Figure 3 (c). The two-way pattern sidelobes are approximately 25 dB below the mainlobe peak power and the measured 3 dB beamwidth is 1.4 deg, confirming good phase/time alignment of all quadrant channels in the combined pattern. With a similar RTS setup, using the MIMO coherent implementation, we expect to measure 0.9 to 1 deg of 3 dB beamwidth, with the improvements coming through the four-quadrant MIMO signal processing.
To remove any spatial ambiguity, PWR was co-located with the CSU-CHILL radar and concurrently gathered weather observations. This was done to determine the PWR data products' level of quality for SISO-based radar operations. It has been described here so that it can serve as a standard by which to compare the efficacy of MIMO. Data comparisons between these two radars were carried out while PWR rotated at a constant quarter RPM and CHILL transmitted in the eastern region for the same 14 elevation states. Let's examine one of the light rain cases that both radars recorded on May 31, 2022. Figure 4(a) and (d) show the reflectivity field for the CHILL and PWR radars respectively, while (b) and (e) show the comparison of differential reflectivity between the two. Figure 4(c) and (f) show the differential phase encountered going through the storm as seen by these radars. The top plots are from the CHILL radar and the bottom ones are from PWR. The CHILL radar was scanning only the eastern sector while PWR covered the whole 360 degrees. Both radars observed 14 elevation states, and the 2 deg elevation state is shown in the figures. Several of the bright thunderstorm features that both radars picked up in the southeast can be seen distinctly in these figures. All of the level 2 products were subjected to this comparison. With antenna dimensions nine times larger than PWR's, the CHILL radar's higher spatial resolution is readily apparent. With MIMO coherent processing, the spatial resolution of PWR is anticipated to improve without a physical size increase, in an effort to resolve the weather storm features better.
PWR can be very easily configured for sensing both weather
Fig. 2: Simple illustration of the processing system.
and drone targets. This is the hardware platform developed at ARS currently being used for weather sensing. Using the separate processing chain shown in Figure 2, we can easily expand its capabilities for drone target detection using non-linear processing. It is fully capable of MIMO aperture extension because of its four quadrant transmit and receive channels; similarly, because of its software-defined beam agility and waveform capabilities, it is an ideal platform for testing out non-linear techniques.
The spectrogram and the smoothed-pseudo Wigner-Ville distribution are two time-frequency representation techniques that have been widely used to analyze drone micro-Doppler signatures. Furthermore, a number of classification methods based on micro-Doppler signatures have been reported for classifying drones of various sizes, types, and loads, as well as for distinguishing drones from people, dogs, and birds. The radar antennas in real-world ground-based surveillance radar systems must scan rapidly to cover a large spatial area of up to 360 deg. This implies that the radar beam's dwell period on any given target is quite short (usually a few tens of milliseconds). Thus, when adopting the conventional fast Fourier transform (FFT) for Doppler processing, the radar Doppler resolution is very poor and the micro-Doppler signatures of drones are difficult to discriminate accurately.
## II Methods
### _PCA and CS Formulation for micro-Doppler enhancement_
Radar echoes from drones can be identified, categorized, and tracked using micro-Doppler. A spinning blade is a feature of the majority of drones, including single-rotor, quadrotor, six-rotor, and even hybrid vertical takeoff and landing (VTOL) drones. They are typically active in low-altitude airspace, are small, and fly slowly [13, 14]. The rotating blades modulate the incident radar wave and produce an additional micro-Doppler component on top of the body Doppler contributed by the flying motion of the drone body. Micro-Doppler signals are thought to be quite useful signatures for radar-based drone detection and classification [15].
The importance of micro-Doppler for drone detection cannot be overstated. Using lengthy FFT sizes in traditional signal processing, drone detections can be resolved more accurately at higher Doppler resolutions. In general, greater Doppler resolution is associated with longer radar dwell times (sending out more pulses for longer FFTs). However, a functional radar sensor imposes a maximum allowable dwell period. A practical radar system should be able to track targets quickly and look rapidly in all directions to search the entire volume. The key to observing such micro-Doppler is the radar dwell time: the dwell must contain enough pulse samples to provide adequate Doppler resolution in the spectral analysis. CS and IAA-based non-linear processing can break this linear dependence of Doppler resolution on the number of pulses required to observe micro-Doppler features of drones [16].
Prior to performing CS/IAA, we begin with PCA to get rid of clutter contamination of the drone echo. The cleaned-up signal can then go through spectral analysis.
Fig. 3: Radar and RTS Setup [12]
### _PCA decomposition of Clutter_
Consider a radar transceiver, similar to [16], that transmits a pulse train:
\[\textbf{x}_{T}(t)=\sum_{p=0}^{P-1}h(t-p\tau),\ \ 0\leq t\leq P\tau. \tag{5}\]
consisting of P equally spaced pulses \(h(t)\). The pulse-to-pulse delay \(\tau\) is referred to as the PRI, and its reciprocal \(1/\tau\) is the PRF. The entire span of the signal in equation 5 is called the coherent processing interval (CPI). Consider a scene containing L Doppler-producing drone targets. The pulses travel back to the transceiver after reflecting off the L targets. Three parameters are used to describe each target: a complex amplitude \(\alpha_{l}\) that is proportional to the target's radar cross-section (RCS), a Doppler radial frequency \(\upsilon_{l}\) that is proportional to the target-radar closing velocity, and a time delay \(\tau_{l}\) that is proportional to the target's distance from the radar. We can write the received signal as:
\[x(t)=\sum_{p=0}^{P-1}\sum_{l=0}^{L-1}\alpha_{l}h(t-\tau_{l}-p\tau)e^{-j\upsilon _{l}p\tau}. \tag{6}\]
It might be convenient to express the signal as a sum of single frames:
\[x(t)=\sum_{p=0}^{P-1}x_{p}(t) \tag{7}\]
where
\[x_{p}(t)=\sum_{l=0}^{L-1}\alpha_{l}h(t-\tau_{l}-p\tau)e^{-j\upsilon_{l}p\tau}. \tag{8}\]
This is the case when the target can be characterized by a single velocity \(\upsilon_{l}\); in the case of micro-Doppler, however, we have a band of frequencies around the main body Doppler component, comprising the micro-Doppler offsets \(\Delta\upsilon_{i}\), as:
\[x_{p}(t)=\sum_{l=0}^{L-1}\sum_{i=0}^{I-1}\alpha_{l}h(t-\tau_{l}-p\tau)e^{-j( \upsilon_{l}+\Delta\upsilon_{i})p\tau}. \tag{9}\]
where \(I\) is the number of micro-Doppler components.
Additionally, in practice, this signal is contaminated with noise and clutter:
\[x(t)=\sum_{p=0}^{P-1}[x_{p}(t)+\omega_{p}(t)+C_{p}(t)]. \tag{10}\]
where \(\omega(t)\) is a zero-mean wide-sense stationary random signal with auto-correlation \(r_{\omega}(s)=\sigma^{2}\delta(s)\) and \(C(t)\) is the clutter component. The equivalent discrete-time equation is:
\[x(n)=\sum_{p=0}^{P-1}[x_{p}(n)+\omega_{p}(n)+C_{p}(n)]. \tag{11}\]
Removing the clutter component is necessary to limit its effect on the micro-Doppler features. The signal in equation 11 can be thought of as being composed of signal, noise and clutter sub-spaces. Let the mean values of \(\textbf{x}_{p}(n)\) be \(\mu_{p}\). Then the mean-subtracted received signal can be written as:
\[x(n)=\sum_{p=0}^{P-1}(x_{p}(n)-\mu_{p}). \tag{12}\]
Forming the auto-correlation matrix \(\textbf{R}_{xx}\) of \(\textbf{x}(n)\) and performing SVD decomposition on it yields,
\[\textbf{R}_{xx}=\textbf{U}\textbf{S}\textbf{V}^{T}. \tag{13}\]
Sorting out the eigenbasis vectors in **U** in descending order, we get the largest principal components in the received signal.
Fig. 4: The different polar products being generated by CHILL and PWR radars [12].
If clutter is assumed to be the dominant component of the return signal, we can set the eigenvector corresponding to the largest eigenvalue in \(\mathbf{S}\) to zero. The received signal \(x_{p}(t)\) is then projected onto the remaining eigenvectors, and the projections are summed as follows to reconstruct the signal and noise:
\[x_{recons}(n)=\sum_{p=0}^{P-1}\sum_{d=0}^{D-1}u_{d}(n)x_{p}(n). \tag{14}\]
The \(u_{d}\)'s span the eigenvector space comprising the signal and noise sub-spaces minus the clutter sub-space, as that eigenvector is not part of this space. The signal and noise sub-spaces are orthogonal to each other. We can also reduce the noise power by retaining only a few noise eigenvectors in addition to the signal sub-space. This can improve SNR; a demonstration of this is part of the simulations section. A point worth noting is that we can estimate the clutter power by averaging the received signal, whose mean gives an estimate of the clutter power centered at DC. Based on this, we can determine which eigenvector should be removed to nullify the clutter sub-space if clutter is not the most dominant echo in the received signal.
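To make the PCA clutter-suppression step concrete, the following toy NumPy sketch (an illustration we add, with entirely synthetic numbers) estimates the pulse-domain covariance from several range gates, drops the dominant clutter eigenvector, and keeps only a few of the remaining components before spectral analysis; in practice the covariance estimate and the number of retained eigenvectors would be driven by the actual data.

```python
import numpy as np

rng = np.random.default_rng(1)
P, G = 64, 32                      # pulses per dwell and range gates used for statistics
prf = 2000.0
t = np.arange(P) / prf

def gate(has_drone):
    """One range gate of slow-time data: DC ground clutter plus noise, plus a drone in one gate."""
    clutter = 5.0 * (1.0 + 0.05 * rng.normal()) * np.ones(P)
    noise = 0.1 * (rng.normal(size=P) + 1j * rng.normal(size=P))
    sig = clutter + noise
    if has_drone:
        body = np.exp(2j * np.pi * 400.0 * t)                 # body Doppler line
        micro = 0.3 * np.exp(2j * np.pi * 550.0 * t) + 0.3 * np.exp(2j * np.pi * 250.0 * t)
        sig = sig + body + micro                               # blade micro-Doppler lines
    return sig

X = np.stack([gate(has_drone=(g == 5)) for g in range(G)])     # G x P slow-time data matrix

# PCA clutter suppression: eigendecompose the pulse-domain covariance, drop the dominant
# (clutter) eigenvector, and keep only a few of the next components to also trim noise power.
R = (X.conj().T @ X) / G                                       # P x P sample covariance
eigvals, eigvecs = np.linalg.eigh(R)                           # ascending eigenvalues
keep = eigvecs[:, -2::-1][:, :8]                               # discard the largest, keep next 8
X_clean = (X @ keep) @ keep.conj().T                           # project out the clutter subspace

spectrum = np.abs(np.fft.fftshift(np.fft.fft(X_clean[5])))     # drone gate: micro-Doppler remains
```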
If we compare PCA with an MTI clutter filter, we can observe that MTI removes the clutter component, but low-frequency micro-Doppler components can also get completely suppressed, which would decrease the distinction of micro-Doppler features and ultimately affect the classification accuracy for drone targets. After the clutter signal has been suppressed, we proceed to the CS step for enhancing micro-Doppler features. Micro-Doppler spectral lines can have better distinction when they are CS processed.
Apart from the primary signal Doppler, a few micro-Doppler lines, and clutter, which have high values, the majority of the entries in the spectral domain of drone targets are zeros or low values. Only the primary Doppler and a few high spectral lines caused by micro-Doppler may remain after removing clutter. In order to improve drone classification and identification using fewer pulse samples, CS may be able to provide high-resolution Doppler components for such a sparse signal. If we use fewer pulses to give the same resolution as with, say, 10 times the number of pulses, then we are effectively reducing the dwell time on the target and can potentially spin faster, as in the case of PWR. This faster-scanning radar can track and perform multiple functions at the same time, which may mean a portable system like PWR is able to accomplish weather surveillance while also tracking and surveilling UAVs.
### _CS-based enhancement of Doppler Space_
The clutter-suppressed received signal can now be processed by CS to better resolve the micro-Doppler frequencies with relatively few random measurements. The premise is that CS can provide a higher-resolution Doppler space from few samples, whereas conventional FFT processing would need a substantially larger number of measurements (pulses) to give the same resolution. The first stage of CS is multiplying the random measurement matrix \(\psi(n)\) with \(x(n)\):
\[y(n)=\sum_{p=0}^{P-1}\psi_{p}(n)[x_{p}(n)+\omega_{p}(n)+C_{p}(n)]. \tag{15}\]
where \(\psi_{p}(n)\in\mathbb{R}^{M\times N}\) or \(\mathbb{C}^{M\times N}\) and \(y(n)\in\mathbb{R}^{M}\) or \(\mathbb{C}^{M}\). The number of measurements taken is much smaller than the length of the input signal, i.e., \(M\ll N\). To further reduce the number of measurements necessary for perfect reconstruction, the measurement matrix must be incoherent with the basis in which the signal is sparse. The inputs to the reconstruction algorithm are the measurement vector \(y(n)\) and the reconstruction matrix \(\Theta\), where
\[\Theta=\psi\xi\in\mathbb{R}^{M\times N}\text{ or }\mathbb{C}^{M\times N}. \tag{16}\]
\(\xi\) is the basis (dictionary) in which \(x(n)\) is sparse. Thus \(x(n)\) can be written as:
\[x(n)=\sum_{p=0}^{P-1}s_{p}(n)\xi_{p}(n). \tag{17}\]
\(s\in\mathbb{R}^{N}\) is the sparse coefficient vector of length N. The reconstruction problem, expressed as an \(l_{1}\)-norm minimization, is:
\[\hat{s}=\operatorname*{arg\,min}_{s}||\mathbf{s}||_{1}\quad\text{subject to}\quad\Theta\mathbf{s}=\mathbf{y} \tag{18}\]
The estimate of \(x(n)\), i.e., \(\hat{x}\), can be obtained from \(\hat{s}\) by applying the inverse transform. Other variants of this optimization problem that include noise, as well as a Lagrangian form of the above equation, were discussed around equations 3 and 4.
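The measurement model of equations 15-18 can be exercised end-to-end on synthetic slow-time data. The sketch below is our illustrative addition: the frequencies and amplitudes are made up, and a generic complex ISTA solver stands in for whichever solver is used in practice. It selects a random subset of pulses, forms \(\Theta=\psi\xi\) with a Fourier dictionary, and recovers the sparse Doppler spectrum.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 256, 64                     # full slow-time grid vs. pulses actually used (M << N)
prf = 2000.0
t_full = np.arange(N) / prf
freqs = np.fft.fftfreq(N, d=1.0 / prf)

# Sparse Doppler scene: a body line plus two micro-Doppler lines (illustrative values).
s_true = np.zeros(N, complex)
for f0, amp in [(400.0, 1.0), (250.0, 0.3), (550.0, 0.3)]:
    s_true[np.argmin(np.abs(freqs - f0))] = amp

Xi = np.exp(2j * np.pi * t_full[:, None] * freqs[None, :])    # Fourier dictionary xi (N x N)
x_full = Xi @ s_true + 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))

rows = np.sort(rng.choice(N, size=M, replace=False))          # psi: random pulse selection
y, Theta = x_full[rows], Xi[rows, :]                          # measurements and Theta = psi @ xi

def ista_complex(Theta, y, lam, n_iter=800):
    """Complex ISTA for min_s lam*||s||_1 + ||y - Theta s||_2^2 (penalized form of Eq. 18)."""
    L = 2.0 * np.linalg.norm(Theta, 2) ** 2
    s = np.zeros(Theta.shape[1], complex)
    for _ in range(n_iter):
        z = s - 2.0 * Theta.conj().T @ (Theta @ s - y) / L
        mag = np.abs(z)
        s = z * np.maximum(mag - lam / L, 0.0) / np.maximum(mag, 1e-12)
    return s

s_hat = ista_complex(Theta, y, lam=1.0)
# An FFT using only the M kept pulses would, by comparison, show broader and weaker Doppler lines.
```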
It is shown in the literature [16] that, for the noise-free case, the parameters \((\alpha_{l},\tau_{l},\upsilon_{l})_{l=0}^{L-1}\) without micro-Doppler frequencies can be recovered from 3L samples using a Xampling framework under Finite Rate of Innovation (FRI) assumptions. With micro-Doppler and in the presence of noise, more samples are likely required for perfect recovery. Simulations confirm that the number of slow-time (pulse) measurements required for a higher-resolution Doppler reconstruction is small enough that the radar can either scan faster or take on additional functions such as weather detection and forecasting. The software-defined phased array architecture of PWR is ideally suited for combined drone detection and weather surveillance.
#### III-D1 SNR
The SNR is tied to the attenuation the signal experiences over the propagation path, which follows directly from the CS model. However, noise power can be reduced in the preceding PCA step by retaining fewer noise eigenvectors in the reconstruction of the received signal. This improves the SNR, as demonstrated in the simulations.
### _MIMO and CS Framework_
The multi-function PWR radar is capable of MIMO operation because of its four-quadrant array structure, so the quadrant-wise MIMO formulation can be used along with CS. This gives the benefit of virtual array formation without adding physical array elements. It is also cost-effective, since each element does not need its own RF and IF hardware chain: only four channels are sufficient to exploit quadrant MIMO, instead of the hundreds or thousands of channels a full element-wise MIMO implementation would require.
The quadrant MIMO system is equivalent to the spatial convolution of the transmit and receive quadrant phase centers, forming virtual array elements beyond the physical aperture size. The virtual array dimensions are 1.5 times the physical array (in both axes), as evident from Fig. 5; equivalently, the beamwidth is reduced by the same factor and the spatial resolution improves. The PWR system thus provides a very cost-effective MIMO radar using a quadrant phased array structure. One of the main challenges of an element-wise MIMO radar is coping with complicated systems in terms of cost, high computational load, and complex implementation; these trade-offs are handled well by using quadrant MIMO in the PWR radar hardware. To demonstrate quadrant MIMO processing, we assume the same LFM waveform is transmitted from all the quadrants, but the quadrants transmit sequentially in a time-multiplexed manner. The data cubes received by the quadrants then have to be processed to form the virtual array data cube. Let the data collected when quadrant 1 transmits (from all four receive quadrants) be given by:
\[\textbf{Ph}_{r}=\begin{bmatrix}\textbf{M}_{11}&\textbf{M}_{12}\\ \textbf{M}_{13}&\textbf{M}_{14}\end{bmatrix} \tag{19}\]
where the first subscript denotes the transmit quadrant and the second the receive quadrant. The coherent data matrix after all transmissions is given by:
\[\textbf{Vi}_{r}=\begin{bmatrix}\textbf{M}_{21}&\textbf{M}_{11}+\textbf{M}_{22}&\textbf{M}_{12}\\ \textbf{M}_{23}+\textbf{M}_{31}&\textbf{M}_{13}+\textbf{M}_{24}+\textbf{M}_{31}+\textbf{M}_{41}&\textbf{M}_{14}+\textbf{M}_{42}\\ \textbf{M}_{33}&\textbf{M}_{34}+\textbf{M}_{43}&\textbf{M}_{44}\end{bmatrix} \tag{20}\]
This matrix includes 5 additional virtual phase centers corresponding to five additional quadrants to make it a total of 9 quadrants. For PWR, each entry in the column is \(12\lambda\) wide and high, the third row and column would give it an extra \(12\lambda\) height and width to the receive aperture due to virtual quadrants [17].
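The following short sketch illustrates the spatial-convolution view of quadrant MIMO: it enumerates the transmit/receive quadrant pairs and shows how the 16 \(\textbf{M}_{tq,rq}\) terms collapse onto a 3\(\times\)3 grid of virtual phase centers. The quadrant labelling and coordinates are illustrative assumptions; only the grouping logic is the point.

```
import numpy as np

lam = 1.0                        # wavelength (arbitrary units)
d = 12 * lam                     # quadrant phase-center spacing quoted in the text

# Physical phase centers of the four quadrants (x, y); labelling is illustrative.
quadrants = {1: (-d / 2,  d / 2), 2: (d / 2,  d / 2),
             3: (-d / 2, -d / 2), 4: (d / 2, -d / 2)}

# A virtual element sits at the midpoint of each transmit/receive pair,
# i.e. the spatial convolution of the transmit and receive apertures.
virtual = {}
for tq, (xt, yt) in quadrants.items():
    for rq, (xr, yr) in quadrants.items():
        pos = ((xt + xr) / 2, (yt + yr) / 2)
        virtual.setdefault(pos, []).append(f"M{tq}{rq}")

# The 16 transmit/receive pairs collapse onto a 3x3 grid of virtual quadrant
# phase centers, i.e. an effective aperture 1.5x the physical array per axis.
for pos in sorted(virtual):
    print(pos, "+".join(sorted(virtual[pos])))
print("distinct virtual phase centers:", len(virtual))   # 9
```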
Extending our discussion further about MIMO and CS, let's revisit equation 8 for a sparse scene with L drone targets. The received signal at the \(q_{th}\) quadrant after demodulation to baseband for a single frame is in turn given by:
\[\textbf{x}_{q}(t)=\sum_{l=0}^{L-1}\sum_{m=0}^{M-1}\alpha_{l}h(t-\tau_{l}-p\tau)e^{-j\nu_{l}p\tau}e^{-j\beta_{m,q}\varsigma}. \tag{21}\]
where \(\varsigma=\sin(\theta_{l})\) and \(\theta_{l}\) is the azimuth angle of the \(l^{th}\) drone target relative to the quadrant. Also note that \(\beta_{m,q}=(\zeta_{q}\xi_{m})(f_{c}\lambda/c+1)\), where \(f_{c}\) is the carrier frequency radiated from the quadrant and \(\zeta_{q},\xi_{m}\in\textbf{Vi}_{r}\). Again, \(y(n)\) can be written as:
\[y(n)=\sum_{p=0}^{P-1}\psi_{p}(n)[x_{p,q}(n)+\omega_{p}(n)+C_{p}(n)]. \tag{22}\]
and then forming **s**, a sparse coefficient matrix, using the basis \(\nu\) as:
\[x(n)=\sum_{p=0}^{P-1}\sum_{q=0}^{M-1}s_{p,q}(n)\nu_{p,q}(n). \tag{23}\]
The optimization problem expressed as \(l_{1}\) norm (for reconstruction) can now be expressed as:
\[\hat{s}=\arg\min_{s}\;||\textbf{s}||_{1}\quad\text{subject to}\;\Theta\textbf{s}=\textbf{y} \tag{24}\]
**Theorem 1**: _The minimal number of transmit times the number of receive channels required for perfect recovery of L targets in noiseless settings is \(\geq 2L\) with a minimal number of \(\geq 2L\) samples per receiver and \(\geq 2L\) pulses per transmitter [4]._
This holds for Xampling and an FRI framework used in conjunction with CS. In PWR, quadrant MIMO offers four transmit and nine receive channels (four physical and five virtual quadrants), so up to \(L=18\) drone targets can be resolved in a CPI or dwell time \(\tau\). For this recovery, 36 samples per pulse and 36 pulses per transmit quadrant are needed for perfect Doppler recovery of these L targets. The arithmetic changes for a noisy link, but in practice there are rarely that many drone targets; only in the case of drone swarms might more targets need to be detected, and it would be quite a coincidence to encounter so many within a single CPI or dwell (one direction). This result also implies that many more targets can be perfectly resolved in the DoA sense by using MIMO virtual elements, and this framework allows CS theory to be applied to calculate the number of pulses required for perfect Doppler recovery of all these targets as well.
### _An Iterative Approach to solve the dwell time limitation for a fast scanning drone radar_
The Doppler resolution of the temporal signal can be increased by using super-resolution algorithms that are frequently used in array processing, such as minimum variance distortionless response (MVDR) and multiple signal classification (MUSIC). To estimate the covariance matrix or carry out eigen-analysis, these algorithms typically need a number of signal snapshots. Some algorithms, like MUSIC, also require knowing the number of sources up front. However, in surveillance radar, the Doppler processing is carried out over
Fig. 5: An 8\(\times\)8 element phased array with multiple transmit phase centers based on the quadrants. The whole array is divided into 4 quadrants [17].
the slow-time samples (over pulses) at each range increment. As a result, there is only one available temporal snapshot. It is also unclear how many target Doppler and micro-Doppler sources there will be. Consequently, the traditional super-resolution methods cannot be applied directly. Unlike the conventional MVDR and MUSIC algorithms, in which many snapshots are required to estimate the covariance matrix, the iterative adaptive approach (IAA) can work well with only a few or even one snapshot to achieve super-resolution.
```
Initialize: \(\hat{P}_{k}=\frac{1}{\left(\textbf{a}^{H}(f_{k})\textbf{a}(f_{k})\right)^{2}}\sum_{n=0}^{N-1}\left|\textbf{a}^{H}(f_{k})y(n)\right|^{2}\),  \(k=1,\ldots,K\)
while not converged do
    \(\textbf{R}=\textbf{A}(f)\textbf{P}\textbf{A}^{H}(f)\)
    for \(k=1,\ldots,K\) do
        \(\hat{s}_{k}(n)=\frac{\textbf{a}^{H}(f_{k})\textbf{R}^{-1}y(n)}{\textbf{a}^{H}(f_{k})\textbf{R}^{-1}\textbf{a}(f_{k})}\),  \(n=1,2,\ldots,N\)
        \(\hat{P}_{k}=\frac{1}{N}\sum_{n=0}^{N-1}|\hat{s}_{k}(n)|^{2}\)
    end for
end while
```
**Algorithm 1** An iterative algorithm [18]
The formulation of this method, similar to the one highlighted in [18], is elaborated next. The output space of the Doppler processor is spanned by basis steering vectors defined on grid points that may or may not contain a frequency component. We can therefore write the outcome of the Doppler process as:
\[\textbf{y}=\textbf{A}(f)\textbf{s}+\omega+\textbf{C}. \tag{25}\]
where \(\omega(n)\) is a zero-mean wide-sense stationary random signal with auto-correlation \(r_{\omega}(s)=\sigma^{2}\delta(s)\) and \(C(n)\) is the clutter component. \(A(f)=[\textbf{a}(f_{1})\ \textbf{a}(f_{2})\ ...\ \textbf{a}(f_{K})]\) has dimension \(P\times K\), where P is the number of pulses and K is the number of finite points in the Doppler grid. **s** is the vector of amplitudes of the frequencies at the grid locations \(k=1,2,...,K\). The clutter-and-noise matrix is defined as:
\[\textbf{Q}(f_{k})=\textbf{R}-P_{k}\textbf{a}(f_{k})\textbf{a}^{H}(f_{k}). \tag{26}\]
\(\textbf{R}=\textbf{A}(f)\textbf{P}\textbf{A}^{H}(f)\) is the auto-correlation matrix of the input and P is a \(K\times K\) diagonal matrix whose diagonal entries \(P_{k}=|s_{k}|^{2},k=1,2,...,K\), contain the powers at each Doppler frequency on the Doppler grid. The cost function is given by:
\[\Xi=(\textbf{y}-s_{k}\textbf{a}(f_{k}))^{H}\textbf{Q}^{-1}(\textbf{y}-s_{k} \textbf{a}(f_{k})). \tag{27}\]
Minimizing the cost function with respect to \(s_{k}\) gives [18]:
\[\hat{s}_{k}=\frac{\textbf{a}^{H}(f_{k})\textbf{R}^{-1}\textbf{y}}{\textbf{a}^{ H}(f_{k})\textbf{R}^{-1}\textbf{a}(f_{k})}. \tag{28}\]
Since the iteration requires **R**, which depends on the unknown powers, the method must be implemented iteratively. The initialization can be done by setting **R** equal to the identity matrix \(\textbf{I}_{P}\). The steps are shown in Algorithm 1. Both IAA and CS can enhance the Doppler resolution with fewer pulses; however, CS relies on random, non-uniform pulse transmission within the dwell time. The pulse slots left vacant in CS can be used for transmitting pulses in other directions, although this can make radar operation more complex. IAA, on the other hand, works with uniform sampling of the dwell time.
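The following is a compact single-snapshot sketch of Algorithm 1 / equation (28). The Doppler grid size, number of iterations, test signal, and the small diagonal loading term are illustrative choices, not the exact PWR implementation.

```
import numpy as np

def iaa_doppler(y, K=512, n_iter=15):
    """Single-snapshot iterative adaptive approach on a slow-time pulse train y."""
    P = len(y)
    f = np.arange(K) / K                                   # normalized Doppler grid
    A = np.exp(2j * np.pi * np.outer(np.arange(P), f))     # P x K steering matrix
    Pk = np.abs(A.conj().T @ y) ** 2 / P ** 2              # matched-filter initialization
    for _ in range(n_iter):
        R = (A * Pk) @ A.conj().T                          # R = A diag(P) A^H   (P x P)
        R += 1e-9 * np.trace(R).real / P * np.eye(P)       # tiny diagonal loading (illustrative)
        Rinv_y = np.linalg.solve(R, y)
        Rinv_A = np.linalg.solve(R, A)
        s = (A.conj() * Rinv_y[:, None]).sum(axis=0) / \
            (A.conj() * Rinv_A).sum(axis=0)                # eq. (28) for every grid point
        Pk = np.abs(s) ** 2
    return f, Pk

# 32 pulses: one primary Doppler line plus two micro-Doppler modulation lines.
n = np.arange(32)
y = np.exp(2j*np.pi*0.1875*n) + 0.3*np.exp(2j*np.pi*0.25*n) + 0.3*np.exp(2j*np.pi*0.125*n)
f, Pk = iaa_doppler(y)
print(np.sort(f[np.argsort(Pk)[-3:]]))   # should recover 0.125, 0.1875 and 0.25
```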
### _Simulations and Discussions_
In this section, we show the feasibility and practicality of the non-linear methods discussed in earlier sections through simulations using PWR radar parameters and features. We begin by simulating a few micro-Doppler frequencies to form an echo of 512 samples. The echo comprises a main Doppler return from the base motion of the UAV and micro-Doppler components from the rotary motion of the blades modulating the primary echo. The signal is corrupted by clutter from the elevation sidelobes of PWR, simulated as a zero-Doppler component added to the received echo. The PCA formulation described in an earlier section is then used to remove the clutter sub-space and reconstruct the time-domain signal for further processing. Current methods, including MTI, CLEAN, etc., cannot achieve real-time removal of ground clutter without suppressing nearby micro-Doppler components, which is why PCA is adopted to remove the clutter components in the echo signal. Figure 6 (a) depicts the clutter centered at DC together with the micro-Doppler components and the main Doppler signal. If clutter is always the dominant signal in the received echo, it is straightforward to remove the highest-valued eigenvector from the SVD decomposition; if it is not, we estimate the DC power by averaging the samples and examining the average power, which indicates which eigenvector has a power level similar to the mean power. Once this is established, that eigenvector is excluded from the reconstruction to remove the clutter. Next, this signal is processed using CS. The time domain and the frequency domain of the echo for CS processing are shown in Fig. 7 (a) and (b). Random 32 samples (out of 512) are picked from \(x(t)\) (Fig. 8), and the sparse frequency domain is recovered from these random samples using \(l_{1}\) minimization.
The recovered high-resolution frequency and time domain samples from the lower dimensional signal are shown in Figure 9 (a) and (b).
Exact recovery of the higher-dimensional signal is possible due to sparsity in the frequency domain. The frequency analysis of the lower-dimensional signal is shown in Fig. 10, which is the Fourier transform of the first 32 samples of the original 512-sample sequence. The loss of resolution is evident, and the modulations due to micro-Doppler cannot be observed. This would lead to faulty classification of the UAV type and degrade UAV detection based on micro-Doppler features.
It is to be noted that CS needs \(K=32\) random samples from a set of \(N=512\) echo samples. Here \(N\) can be considered the number of pulses, which is reduced to \(K\ll N\). Thus only \(K\) pulses are sufficient to reconstruct the micro-Doppler features of the UAV echo, which can substantially reduce the dwell time and overall scan time of the radar. These pulses would need to be transmitted at random instants within the larger dwell time of \(N\) pulses, scanning other elevation states to
Fig. 8: The samples that are picked randomly and CS based reconstruction is applied.
Fig. 10: The Fourier transform of the lower-dimensional signal.
Fig. 6: Clutter removal using PCA. (a) is with clutter centered at DC and (b) clutter removed with no harm to nearby micro-Doppler components.
Fig. 7: The original signal characteristics for a CS-based micro-Doppler reconstruction.
Fig. 9: The reconstructed frequency domain and time domain.
cover the total volume, in the case of PWR for example. With this, the \(N\) pulses would be transmitted at random across different elevation states and then combined in the signal processor. This is somewhat complex, as it requires beam switching every pulse instead of every dwell as in uniform pulse sampling. IAA, however, does not rely on non-uniform random sampling and is simulated next.
Figure 11 shows the same setup of micro-Doppler frequencies, but with IAA used for recovery instead of CS-based reconstruction. It shows that super-resolution can be achieved using a single snapshot of data samples. The IAA iteration was set up with three peaks, simulating the micro-Doppler base echo and the modulations from the rotation of the quadcopter blades, similar to the CS case, and used \(K=32\) pulses, far fewer than the \(N=512\) pulses that conventional processing would need to reconstruct a frequency response of the same resolution. We can clearly see that the sidelobes are very low for the IAA-based reconstruction; to achieve a similar sidelobe level, FFT-based recovery would need a very aggressive taper, leading to a considerable SNR loss. IAA can achieve higher Doppler resolution, with even higher SNR, than the conventional FFT with more pulses for one snapshot. With this, the micro-Doppler signatures of drones can be clearly discriminated, which greatly benefits the subsequent classification of drones. The higher Doppler resolution also helps to separate slow-moving targets from the ground clutter at zero Doppler, and therefore improves the detectability of multi-rotor drones, which usually move very slowly. Using IAA avoids the taper loss of FFT-based Doppler processing, and the overall radar detection performance for all targets is also improved. Compared with the CS technique, CS works with random non-uniform sampling, which is unconventional; as discussed earlier for its applicability to PWR, this leads to a scheme in which different elevation states are selected at random for the transmission of each pulse, which is more complicated than uniformly sampled IAA. Having said this, it is worth noting that CS can be extended to fast-time sampling using Xampling and FRI principles, so that lower-rate ADCs suffice for below-Nyquist sampling of fast-time signals. Hence both schemes have their own pros and cons and should be used judiciously.
## III Conclusion
We investigated and simulated PCA for clutter mitigation, explored CS and IAA for micro-Doppler and spectral retrievals, and MIMO for spatial estimation of drone UAV targets. A unified theoretical framework was developed that stitches together these non-linear areas for drone micro-Doppler enhancement and detection with the phased array PWR multi-function radar sensor. Both IAA and CS were found to be very useful for recovering micro-Doppler drone features, so that such targets can be efficiently detected and classified using fewer pulses than conventional FFT processing requires. The drawbacks and applicability of each of these techniques were discussed.
|
2304.09871 | A Theory on Adam Instability in Large-Scale Machine Learning | We present a theory for the previously unexplained divergent behavior noticed
in the training of large language models. We argue that the phenomenon is an
artifact of the dominant optimization algorithm used for training, called Adam.
We observe that Adam can enter a state in which the parameter update vector has
a relatively large norm and is essentially uncorrelated with the direction of
descent on the training loss landscape, leading to divergence. This artifact is
more likely to be observed in the training of a deep model with a large batch
size, which is the typical setting of large-scale language model training. To
argue the theory, we present observations from the training runs of the
language models of different scales: 7 billion, 30 billion, 65 billion, and 546
billion parameters. | Igor Molybog, Peter Albert, Moya Chen, Zachary DeVito, David Esiobu, Naman Goyal, Punit Singh Koura, Sharan Narang, Andrew Poulton, Ruan Silva, Binh Tang, Diana Liskovich, Puxin Xu, Yuchen Zhang, Melanie Kambadur, Stephen Roller, Susan Zhang | 2023-04-19T06:15:11Z | http://arxiv.org/abs/2304.09871v2 | # A Theory on Adam Instability in Large-Scale Machine Learning
###### Abstract
We present a theory for the previously unexplained divergent behavior noticed in the training of large language models. We argue that the phenomenon is an artifact of the dominant optimization algorithm used for training, called Adam. We observe that Adam can enter a state in which the parameter update vector has a relatively large norm and is essentially uncorrelated with the direction of descent on the training loss landscape, leading to divergence. This artifact is more likely to be observed in the training of a deep model with a large batch size, which is the typical setting of large-scale language model training. To argue the theory, we present observations from the training runs of the language models of different scales: 7 billion, 30 billion, 65 billion, and 546 billion parameters.
## 1 Introduction
Training instability reported by Chowdhery et al. (2022) is an interesting phenomenon that has only been reported for the large language models trained on an order of a trillion tokens, posing a threat to further scaling of the AI systems. Chowdhery et al. (2022) have observed dozens of spikes in the loss curve throughout training. To mitigate the issue, they re-started training from a checkpoint roughly 100 steps before the spike started, and skipped roughly 200-500 data batches, in order to exclude batches that were seen right before and during the spike. In that case, the spike of the loss value did not repeat. The spikes were also not observed when the skipped data was fed through the model again _after_ the aforementioned mitigation, which implies that the data itself did not cause the spike, but rather an interference of the data batch with the state of the model training run. The purpose of this work is to rigorously reproduce the experiment with a different hardware and software setup, come up with an explanation for the observed behavior supported by empirical evidence and theoretical arguments, and propose alternative ways of mitigating the issue.
Loss spikes are difficult to study because any reproduction of these spikes at a smaller scale is not necessarily caused by or remediated by the same factors as in larger scales. We therefore analyze large-scale language modeling experiments, training four models between 7 billion and 546 billion parameters. The models are decoder-only transformers (Brown et al., 2020; Smith et al., 2022) with different depth and embedding dimensions and trained using the AdamW (Loshchilov and Hutter, 2017) algorithm with a linear learning rate schedule. Compared to the modified Adafactor (Shazeer and Stern, 2018) used by Chowdhery et al. (2022), we did not use the "parameter scaling", \(\beta_{2}\) build-up or the dynamic weight decay. This did not critically change the observed training instabilities. We also made modifications in the architecture relative to the setup of Chowdhery et al. (2022) so that the phenomena we reproduce are robust to some changes in the specifics of model architectures. For example, we used the ReLU activation function like Zhang et al. (2022) instead of SwiGLU Shazeer (2020), and absolute learned positional embeddings instead of RoPE Su et al. (2021). The settings of each training run that are important in the context of our analysis are displayed in Table 1. We cross-checked our results with the models trained using a different codebase and a different dataset,
similar to those that were used for LLaMa (Touvron et al., 2023), and did not see significant differences in our results at the scale of the 65b model.
Training on a cluster of GPUs (while Chowdhery et al. (2022) were working on TPUs), using a completely different codebase and datasets, we replicated the unstable behavior of the loss curve at the largest scale, as demonstrated in Figure 1. Although the batch-skipping trick described earlier was also implemented for the reported training run, the curve recovered from most of the loss spikes without a batch-skipping intervention, within on the order of a dozen training iterations. The spikes that required the batch-skipping trick are not displayed in Figure 1. Our theory also addresses the observation of loss curve recovery, in addition to the relationship between the training state and the data batch during the loss explosion. The training run for the 65b model experienced moderate instabilities compared to the 546b model, while training of the smaller models (30b and 7b) did not show explosive divergence behavior at all. Although we observe that the severity of instability depends on the data and architecture choice, we attribute its nature to the Adam algorithm itself, as the only common element of all of the large-scale training experiments conducted to date. To confirm or refute this point, however, additional empirical observations of a different learning algorithm, for example Stochastic Gradient Descent (SGD) (Kiefer and Wolfowitz, 1952), should be made in a similar setup.
The rest of the paper is structured as follows. In Section 2 we give the introduction on the notions considered and the key notation throughout the paper. In Section 3 we elaborate on the specific properties of the Adam algorithm that we found relevant to explaining the loss spike behavior. Section 4 contains theoretical prediction and empirical confirmation of statistical properties of the update rule of the Adam optimization algorithm, while Section 5 argues for the malicious nature of these properties. Section 6 lays out the step-by-step explanation of what is happening to the training run state during the loss spike. We further discuss the implications of our theory in Section 7. Section 8 describes the most relevant background on the research devoted to the divergent behavior of Adam. Section 9 provides a conclusion for our work.
\begin{table}
\begin{tabular}{||c|c c c c c c c||} \hline model & depth & embedding dimension & \(b\) (batch size) & \(\eta_{t}\) (learning rate) & \(\varepsilon\) & \(\beta_{1}\) & \(\beta_{2}\) \\ \hline \hline
7b & 32 & 4096 & 2048 & \(\approx 10^{-4}\) & \(10^{-8}\) & 0.9 & 0.95 \\ \hline
30b & 36 & 8192 & 8192 & \(\approx 10^{-4}\) & \(10^{-8}\) & 0.9 & 0.95 \\ \hline
65b & 80 & 8192 & 8192 & \(\approx 6\times 10^{-5}\) & \(10^{-8}\) & 0.9 & 0.95 \\ \hline
546b & 108 & 20480 & 65536 & \(\approx 2\times 10^{-5}\) & \(10^{-8}\) & 0.9 & 0.95 \\ \hline \end{tabular}
\end{table}
Table 1: Training run settings
Figure 1: Training perplexity curve of 546b model with prominent spikes
## 2 Prerequisites
### Notations
We interchangeably use the notation for matrices and vectors by referring to their components. An entry \(i\) of a vector \(x\in\mathbb{R}^{n}\) is denoted with \(x[i]\in\mathbb{R}.\) For a subset of indices \(G\subset\{1,\ldots,n\}\) of size \(|G|,\) the vector obtained by combining together the entries of \(x\) corresponding to \(i\in G\) is denoted with \(x[G]\in\mathbb{R}^{|G|}.\) For a matrix \(x\in\mathbb{R}^{n\times T},\) the entry of row \(i\in\{1,\ldots,n\}\) and column \(t\in\{1,\ldots,T\}\) is denoted with \(x[i,t]\in\mathbb{R},\) while the \(t\)-th column is denoted with \(x[:,t]\in\mathbb{R}^{n}\) or \(x_{t}\in\mathbb{R}^{n}.\) Similarly, \(x[G,t]\in\mathbb{R}^{|G|}\) denotes the sub-vector of \(x[:,t]\) that corresponds to the subset \(G\subset\{1,\ldots,n\}.\) For two tensors \(x\) and \(y,\) let \(x\otimes y\) denote the outer product between \(x\) and \(y\) and \(\langle x,y\rangle\) denote the standard inner product between them. The arithmetic operations on vectors, such as logarithm, square, and square root are implied to be coordinate-wise operations. We refer to the random eigenvalue of a constant matrix \(x\) as \(\lambda[x]\) (subject to uniform distribution over the eigenvalues). The notation \(x\sim D\) where \(x\) is a random variable and \(D\) is a distribution or a random variable means that the distributions of the two coincide almost surely. If \(x_{t}\) is an infinite sequence of random variables that is indexed with \(t,\) then \(x_{t}\stackrel{{ d}}{{\rightarrow}}D\) denotes the convergence of \(x_{t}\) to \(D\) in distribution. The operators of the mean and variance of a random variable, mapping it to a scalar value, are denoted with \(\mathbb{E}\) and \(\mathbb{D},\) respectively. \(\mathcal{N}(0,1)\) denotes the standard Normal distribution. Bernoulli\((p)\) is the Bernoulli distribution that assigns the probability \(p\) to the value of \(1\) and \((1-p)\) to the value of \(-1.\)
### Adam algorithm
Let us introduce the training procedure under investigation. It consists of a practical optimization algorithm that aims to minimize the empirical risk \(f(\theta)=\mathbb{E}_{X\sim D}\ell_{X}(\theta)\) of a model \(\theta\) subject to a dataset \(D\) and a loss function \(\ell\). The model parameters here are denoted with a vector of parameters \(\theta\in\mathbb{R}^{n}\). In the natural language processing context, the cross entropy loss function ties the empirical risk to the metric of perplexity, referred to in Figure 1. In large language modeling practice, the optimization is performed using a variation of Adam (Kingma and Ba, 2014) optimization algorithm over a parametric family (architecture) of deep learning models called transformers (Vaswani et al., 2017), following the choices made in the generative pre-trained transformer (GPT) line of research (Radford et al., 2019; Brown et al., 2020). The state-of-the-art models have the number of parameters of order of 100 billion (\(n\approx 10^{11}\)) and the number of layers (depth) on the order of a hundred.
Adam is an iterative algorithm that uses first-order (gradient) information to update the model parameters \(\theta_{t}\) on every iteration \(t.\) It relies on the stochastic approximation \(g_{t}\) of the gradient of the objective function \(\nabla f(\theta_{t}),\) which is calculated as a gradient of the empirical risk subject to a portion of data samples called a batch. The number of data samples (size) \(b\) of the batch is an important hyper-parameter of the algorithm that controls the variance of the gradient estimation \(g_{t}.\) The Adam update rule can be written as follows:
\[m_{t} =\frac{\beta_{1}}{1-\beta_{1}^{t}}m_{t-1}+\frac{1-\beta_{1}}{1-\beta_{1}^{t}}g_{t}\] \[v_{t} =\frac{\beta_{2}}{1-\beta_{2}^{t}}v_{t-1}+\frac{1-\beta_{2}}{1-\beta_{2}^{t}}g_{t}^{\ 2}\] \[u_{t} =\frac{m_{t}}{\sqrt{v_{t}}+\varepsilon}\] \[\theta_{t+1} =\theta_{t}-\eta_{t}u_{t}\]
where \(\beta_{1},\beta_{2}\) are averaging hyper-parameters, \(\varepsilon\) is a stability hyper-parameter and \(\eta_{t}\) is a step size (or a learning rate). The vectors \(m_{t}\) and \(v_{t}\) make up the optimizer state at the iteration \(t\) and \(u_{t}\) is called the (unscaled) update vector. The original Adam algorithm is the focus of our paper because it is simple and illustrative. Our analysis can be carried over to include the many modifications of the algorithm that are used in practice, including weight decay (Loshchilov and Hutter, 2017), model norm regularization and gradient clipping (Pascanu et al., 2012).
The values \(m_{t}\) and \(v_{t}\) are weighted average vectors of \(g_{\tau}\) which could be calculated explicitly from the values of \(g_{\tau}\) for \(\tau\in\{1,\ldots,t\}.\) Let us define \(w_{t}^{(1)}[\tau]:=\frac{\beta_{1}^{t-\tau}(1-\beta_{1})}{\prod_{T=\tau}^{t}(1-\beta_{1}^{T})}\) and \(w_{t}^{(2)}[\tau]:=\frac{\beta_{2}^{t-\tau}(1-\beta_{2})}{\prod_{T=\tau}^{t}(1-\beta_{2}^{T})}\) which
are the coefficients of \(g[i,\tau]\) in the expressions for \(m[i,t]\) and \(v[i,t]\) for any component \(i\in\{1,\ldots,n\}:\)
\[m[i,t]=\sum_{\tau=1}^{t}w_{t}^{(1)}[\tau]g[i,\tau]\]
\[v[i,t]=\sum_{\tau=1}^{t}w_{t}^{(2)}[\tau]g[i,\tau]^{2}.\]
It can be shown that \(\sum_{\tau=1}^{t}w_{t}^{(1)}[\tau]=\beta_{1}\), \(\sum_{\tau=1}^{t}w_{t}^{(2)}[\tau]=\beta_{2}\) and the sums \(\sum_{\tau=1}^{t}(w_{t}^{(1)}[\tau])^{2}\) and \(\sum_{\tau=1}^{t}(w_{t}^{(2)}[\tau])^{2}\) converge to finite values as \(t\to\infty\).
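As a reference, the following is a small sketch of this update in the standard bias-corrected form, which the recursion above folds into a single step; the hyper-parameter defaults follow Table 1, while the toy quadratic objective, noise level, and learning rate used in the loop are illustrative assumptions only.

```
import numpy as np

def adam_step(theta, g, state, t, lr=1e-4, beta1=0.9, beta2=0.95, eps=1e-8):
    """One Adam step in the standard bias-corrected form; the recursion in the
    text folds the 1/(1 - beta^t) corrections directly into m_t and v_t."""
    a, b = state                                   # running (uncorrected) averages
    a = beta1 * a + (1 - beta1) * g
    b = beta2 * b + (1 - beta2) * g ** 2
    m = a / (1 - beta1 ** t)                       # m_t in the notation above
    v = b / (1 - beta2 ** t)                       # v_t in the notation above
    u = m / (np.sqrt(v) + eps)                     # unscaled update vector u_t
    return theta - lr * u, (a, b), u

# Toy run on f(theta) = 0.5 * ||theta||^2 with noisy gradient estimates.
rng = np.random.default_rng(0)
theta, state = rng.normal(size=10), (np.zeros(10), np.zeros(10))
for t in range(1, 201):
    g = theta + 0.01 * rng.normal(size=10)         # g_t ~ grad f(theta_t) + noise
    theta, state, u = adam_step(theta, g, state, t, lr=0.01)   # toy learning rate
print(np.linalg.norm(theta))                       # well below the initial norm
```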
## 3 Assumptions underlying Adam efficacy
In this section, we explicitly go through the assumptions on the training setup that we believe give Adam the advantage over other first-order stochastic methods, such as SGD. We point out the assumption of time-domain independence between the gradient estimations as the crucial assumption that has a high chance of being subtly violated along the optimization procedure.
Let us take a look at Taylor's expansion of the gradient of the loss in the proximity of a point \(\theta^{\star}\):
\[\nabla f(\theta)=\nabla f(\theta^{\star})+\nabla^{2}f(\theta^{\star})[\theta- \theta^{\star}]+o(\|\theta-\theta^{\star}\|).\]
After introducing the notation \(\Delta=\theta-\theta^{\star}\), the outer product of the gradient with itself takes the form
\[\nabla f(\theta)\nabla f(\theta)^{\top}= \nabla f(\theta^{\star})\nabla f(\theta^{\star})^{\top}+\nabla^{2}f(\theta^{\star})\Delta\nabla f(\theta^{\star})^{\top}+\nabla f(\theta^{\star})\Delta^{\top}\nabla^{2}f(\theta^{\star})^{\top}+\nabla^{2}f(\theta^{\star})\Delta\Delta^{\top}\nabla^{2}f(\theta^{\star})^{\top}\] \[+o(\|\Delta\|^{2}).\]
If \(\theta^{\star}\) is a first-order stationary point (\(\nabla f(\theta^{\star})=0\)):
\[\nabla f(\theta)\nabla f(\theta)^{\top}=\nabla^{2}f(\theta^{\star})\Delta \Delta^{\top}\nabla^{2}f(\theta^{\star})^{\top}+o(\|\Delta\|^{2}).\]
Assuming \(\theta\) is a random vector and taking the expectation with respect to the randomness in \(\theta\) yields
\[\mathbb{E}_{\Delta}\nabla f(\theta)\nabla f(\theta)^{\top}=\mathbb{E}_{\Delta }\nabla^{2}f(\theta^{\star})\Delta\Delta^{\top}\nabla^{2}f(\theta^{\star})^{ \top}+\mathbb{E}_{\Delta}o(\|\Delta\|^{2}).\]
Due to the linearity of mathematical expectation,
\[\mathbb{E}_{\Delta}\nabla f(\theta)\nabla f(\theta)^{\top}=\nabla^{2}f( \theta^{\star})\mathbb{E}_{\Delta}\left[\Delta\Delta^{\top}\right]\nabla^{2} f(\theta^{\star})^{\top}+\mathbb{E}_{\Delta}o(\|\Delta\|^{2}).\]
Consider an expansion of the covariance matrix of \(\Delta\) into a scaled identity and a residual: \(E_{\Delta}\left[\Delta\Delta^{\top}\right]=\sigma^{2}\mathbb{I}+\Sigma\), then
\[\mathbb{E}_{\theta}\nabla f(\theta)\nabla f(\theta)^{\top}=\sigma^{2}\nabla^ {2}f(\theta^{\star})\nabla^{2}f(\theta^{\star})^{\top}+\nabla^{2}f(\theta^{ \star})\Sigma\nabla^{2}f(\theta^{\star})^{\top}+\mathbb{E}_{\Delta}o(\|\Delta \|^{2}). \tag{1}\]
Due to the definition of \(g_{t}\) in the Adam algorithm, \(v_{t}\) can be viewed as an approximation of the diagonal of the matrix \(\mathbb{E}_{\theta\sim\theta_{\tau}}\nabla f(\theta)\nabla f(\theta)^{\top}\) where \(\theta_{\tau}\) is the distribution of the model weights over the latest several steps, the number of which is controlled by the value of \(\beta_{2}.\) If the covariance matrix for this distribution can be decomposed into \(\mathbb{E}_{\theta\sim\theta_{\tau}}\left[(\theta-\theta^{\star})(\theta- \theta^{\star})^{\top}\right]=\sigma^{2}\mathbb{I}+\Sigma\) such that \(\sigma^{2}\gg\|\Sigma\|,\) and \(\sigma\) sufficiently small for the residual term \(\mathbb{E}_{\Delta}o(\|\Delta\|^{2})\) in Taylor's expansion (1) to be negligible, then
\[v_{t}\approx\text{diag}\left(\mathbb{E}_{\theta\sim\theta_{\tau}}\nabla f( \theta)\nabla f(\theta)^{\top}\right)\approx\sigma^{2}\text{diag}\left(\nabla ^{2}f(\theta^{\star})\nabla^{2}f(\theta^{\star})^{\top}\right).\]
Assuming that \(\nabla^{2}f(\theta^{\star})\) is primarily diagonal, meaning that approximations like \(\text{diag}(\nabla^{2}f(\theta^{\star})\nabla^{2}f(\theta^{\star})^{\top}) \approx\text{diag}(\nabla^{2}f(\theta^{\star}))^{2}\) are valid, one could speculatively write that
\[\frac{1}{\sqrt{v_{t}}}\approx\sigma^{-1}\text{diag}(\nabla^{2}f(\theta^{ \star}))^{-1}\approx\sigma^{-1}\text{diag}(\nabla^{2}f(\theta^{\star})^{-1}).\]
Supposing additionally that the Hessian of the loss function is approximately constant over the support of \(\theta_{\tau},\) one could also make the approximation \(\nabla^{2}f(\theta_{t})\approx\nabla^{2}f(\theta^{\star}).\) This, together with the assumptions
that \(m_{t}\approx\nabla f(\theta_{t})\), and \(v_{t}\gg\varepsilon\), would imply that the update of the Adam algorithm is an approximation of the update of the pure form of Newton's method, up to a constant multiplier:
\[u_{t}=\frac{m_{t}}{\sqrt{v_{t}}+\varepsilon}\approx\frac{m_{t}}{\sqrt{v_{t}}} \stackrel{{\propto}}{{\approx}}\left(\nabla^{2}f(\theta_{t}) \right)^{-1}\nabla f(\theta_{t}).\]
The quality of such an approximation would impact the convergence of the Adam dynamics and could explain the empirical success and popularity of the Adam algorithm.
Note that even if \(\nabla f(\theta^{\star})\neq 0\) but \(\|\nabla f(\theta^{\star})\nabla f(\theta^{\star})^{\top}\|\ll\sigma^{2}\), the reasoning above goes through under the regularity assumption that \(\mathbb{E}_{\Delta}\Delta=0\) and slight modifications.
## 4 The assumption of time-domain independence between gradient estimations
We further focus on the above assumption \(\sigma^{2}\gg\|\Sigma\|\) on the covariance matrix of the distribution of the model weights over the latest several steps in the training \(\mathbb{E}_{\theta\sim\theta_{\tau}}\left[(\theta-\theta^{\star})(\theta- \theta^{\star})^{\top}\right]=\sigma^{2}\mathbb{I}+\Sigma\). We focus on this assumption because it can be violated on a relatively small time scale, which is in line with the short-term explosion phenomenon we study. One important implication of \(\sigma^{2}\gg\|\Sigma\|\) is that the non-diagonal components of the covariance matrix \(\mathbb{E}_{\theta\sim\theta_{\tau}}\left[(\theta-\theta^{\star})(\theta- \theta^{\star})^{\top}\right]\) are of a small magnitude.
One scenario in which \(\mathbb{E}_{\theta\sim\theta_{\tau}}\left[(\theta-\theta^{\star})(\theta- \theta^{\star})^{\top}\right]\) has small in magnitude off-diagonal entries is when the components of the update vector \(u_{t}=\frac{m_{t}}{\sqrt{v_{t}}+\varepsilon}\) are independent over both time-domain and space-domain (over the space of model parameters). Projecting this requirement onto the gradient estimations, for any \(i,j\in\{1,\ldots,n\}\), spanning the domain of model parameters (space-domain) and \(t\), \(s\), spanning the domain of algorithm iterations (time domain) such that \((i,t)\neq(j,s)\), we demand from the pair of scalar random variables \(g[i,t]\), \(g[j,s]\) (stochasticity comes from the data distribution) to be approximately independent (the joint distribution function is point-wise close to the product of individual distribution functions). We call this requirement the time-domain independence of gradient estimation components. In this case, \((\theta_{t}-\theta^{\star})\) becomes a vector of (almost) independent Markov chains (due to the property of independent increments), and the condition on its covariance matrix is guaranteed to hold.
### Manifestation of independent gradient estimates
In this section, we show how the independence of \(g[i,t]\) can be detected from the model dynamics. For example, consider \(g[i,\tau]\sim\text{Bernoulli}(\frac{1}{2})\) - i.i.d. for all \(i\in\{1,\ldots,n\}\), \(\tau\in\{1,\ldots,\infty\}\) and let \(\varepsilon=0\).
\[u[i,t]=\frac{\sum_{\tau=1}^{t}w_{t}^{(1)}[\tau]g[i,\tau]}{\sqrt{\sum_{\tau=1} ^{t}w_{t}^{(2)}[\tau]g[i,\tau]^{2}}}.\]
Since \(g[i,\tau]^{2}=1\), and \(\sum_{\tau=1}^{t}w_{t}^{(2)}[\tau]=\beta_{2}\), it holds that \(u[i,t]=\beta_{2}^{-\frac{1}{2}}\sum_{\tau=1}^{t}w_{t}^{(1)}[\tau]g[i,\tau].\) The random variables \(w_{t}^{(1)}[\tau]g[i,\tau]\) are independent, with \(\mathbb{E}w_{t}^{(1)}[\tau]g[i,\tau]=0\) and \(\mathbb{E}(w_{t}^{(1)}[\tau]g[i,\tau])^{2}=(w_{t}^{(1)}[\tau])^{2}=\mathbb{D} w_{t}^{(1)}[\tau]g[i,\tau].\) The sequence of partial sums \(\Gamma_{t}(\beta_{1})=\sum_{\tau=1}^{t}(w_{t}^{(1)}[\tau])^{2}\) converges, and we denote its finite limit by \(\lim_{t\to\infty}\Gamma_{t}(\beta_{1})=\Gamma(\beta_{1})=\Gamma_{1}.\) By the Central Limit Theorem (Lyapunov Theorem, [10]), as \(t\to\infty\)
\[\frac{1}{\Gamma_{t}(\beta_{1})}\sum_{\tau=1}^{t}w_{t}^{(1)}[\tau]g[i,\tau] \stackrel{{ d}}{{\to}}\mathcal{N}(0,1).\]
Thus, for a large enough fixed value of \(t\), the distributions of the update vector components \(u[:,t]\) under the assumption of independent Bernoulli \(g[i,t]\), approach the distribution of \(\frac{\Gamma(\beta_{1})}{\sqrt{\beta_{2}}}G\) where \(G\) is a standard Gaussian random variable.
Note that the application of the Central Limit Theorem only requires \(g[i,\tau]\) to be independent, and thus \(g[i,\tau]^{2}=1\) is the only part of the reasoning that uses Bernoulli distribution of \(g[i,\tau]\). Thus, the same line of argument could be used to argue that any distribution of \(g[:,:]\) which is close to being
composed of independent distributions of \(g[i,\tau]\), would result in a bell-shaped distribution of \(u[:,t]\) for a large enough fixed value of \(t\).
To illustrate this point, we present examples of the distribution of the update \(u[:,t]\) for time steps \(t\) where the training loss is showing a healthy converging dynamic. Figure 2 contains the probability density functions of \(u[i,t]\) over \(i\in G\) where \(G\) is a group of model parameters that make up a layer of the network. It is remarkable how similar the distributions are for different models and different steps of the model training \(t\).
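The following short simulation illustrates the argument above: with i.i.d. sign-valued gradient estimates, the spatial distribution of the update components is uni-modal and bell-shaped. It uses the standard bias-corrected Adam recursion as a stand-in for the formulas above, and the number of coordinates and steps are arbitrary illustrative choices.

```
import numpy as np

rng = np.random.default_rng(0)
n, T = 100_000, 200                       # coordinates ("parameters") and steps, illustrative
beta1, beta2 = 0.9, 0.95

a = np.zeros(n)
b = np.zeros(n)
for t in range(1, T + 1):
    g = rng.choice([-1.0, 1.0], size=n)   # i.i.d. Bernoulli(1/2) gradient estimates
    a = beta1 * a + (1 - beta1) * g
    b = beta2 * b + (1 - beta2) * g ** 2
m = a / (1 - beta1 ** T)
v = b / (1 - beta2 ** T)
u = m / np.sqrt(v)                        # epsilon = 0, as in the derivation

# The histogram of u over the coordinates is uni-modal and bell-shaped; note that
# u is unchanged if g is rescaled by a constant, so the shape is scale-free.
hist, edges = np.histogram(u, bins=80, density=True)
print(u.mean(), u.std())
```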
### Manifestation of time-domain correlated gradient estimates
Now, let us see what the distribution of the update value across the model looks like if there is a time-domain correlation in the distribution of \(g[i,t]\). Assume that \(g[i,t]=g^{(1)}[i]+g^{(2)}[i,t]\) where the scaled Bernoulli variables \(g^{(1)}[i]\sim\rho\times\text{Bernoulli}(\frac{1}{2})\) and \(g^{(2)}[i,t]\sim\text{Bernoulli}(\frac{1}{2})\) are independent random variables for all \(i\in\{1,\ldots,n\}\) and \(t\in\{1,\ldots,\infty\}\). Note that \(\mathbb{E}g[i,t]=0\) and
\[\mathbb{E}g[i,t]^{2}=\underbrace{\mathbb{E}g^{(1)}[i]^{2}}_{=\rho^{2}}+\underbrace{\mathbb{E}g^{(2)}[i,t]^{2}}_{=1}+2\underbrace{\mathbb{E}g^{(1)}[i]g^{(2)}[i,t]}_{=\mathbb{E}g^{(1)}[i]\mathbb{E}g^{(2)}[i,t]=0}=1+\rho^{2}=\mathbb{D}g[i,t].\]
Under the assumption \(\varepsilon=0\) and using that \(g[i,t]^{2}=g^{(1)}[i]^{2}+g^{(2)}[i,t]^{2}+2g^{(1)}[i]g^{(2)}[i,t]=1+\rho^{2} +2g^{(1)}[i]g^{(2)}[i,t]\), we write the expression for a component of an Adam update vector:
\[u[i,t]=\frac{\sum_{\tau=1}^{t}w_{t}^{(1)}[\tau]g[i,\tau]}{\sqrt{\sum_{\tau=1}^{t}w_{t}^{(2)}[\tau]g[i,\tau]^{2}}}=\frac{\sum_{\tau=1}^{t}w_{t}^{(1)}[\tau]g^{(2)}[i,\tau]+g^{(1)}[i]\underbrace{\sum_{\tau=1}^{t}w_{t}^{(1)}[\tau]}_{=\beta_{1}}}{\sqrt{(1+\rho^{2})\underbrace{\sum_{\tau=1}^{t}w_{t}^{(2)}[\tau]}_{=\beta_{2}}+2g^{(1)}[i]\sum_{\tau=1}^{t}w_{t}^{(2)}[\tau]g^{(2)}[i,\tau]}}.\]
Again, by the Central Limit Theorem (Lyapunov Theorem, [11]),
\[\sum_{\tau=1}^{t}w_{t}^{(1)}[\tau]g^{(2)}[i,\tau] \overset{d}{\rightarrow}\Gamma(\beta_{1})G[i]=\Gamma_{1}G[i]\] \[\sum_{\tau=1}^{t}w_{t}^{(2)}[\tau]g^{(2)}[i,\tau] \overset{d}{\rightarrow}\Gamma(\beta_{2})G[i]=\Gamma_{2}G[i]\]
where \(G[i]\) are independent standard Gaussian random variables. Thus, for a large value of \(t\), the components of the update vector can be considered to come from the distribution
\[u[i,t]\sim\frac{\Gamma_{1}G[i]+\beta_{1}g^{(1)}[i]}{\sqrt{\beta_{2}(1+\rho^{ 2})+2\Gamma_{2}g^{(1)}[i]G[i]}}.\]
Figure 2: The distribution of \(u[i,t]\) over the domain of model parameters \(i\in G\). Each plot corresponds to a layer \(G\) of a model at the training iteration \(t\) taken during the normal operation of the Adam algorithm, showing healthy converging dynamic.
We can distinguish two regimes for this distribution, depending on the value of \(\rho\). If \(\rho\ll 1\), which corresponds to low time-domain correlation in \(g[i,t]\), then the distribution takes a bell-shape similar to \(\frac{\Gamma_{1}G}{\sqrt{\beta_{2}}}\).
In the case of a very strong time-domain correlation (\(\rho\gg 1\)), the distribution of \(G\) can be considered supported in the proximity of \(0\), and thus the distribution of \(u[i,t]\) takes the form close to \(\frac{\beta_{1}g^{(1)}[i]}{\rho\sqrt{\beta_{2}}}\), which is a symmetrical bimodal distribution with the modes centered around \(\pm\frac{\beta_{1}}{\sqrt{\beta_{2}}}.\) It is remarkable that neither of these distributions depends on the scale of the gradient values themselves. Again, due to the versatile Central Limit Theorem used here, the reasoning above could be applied to the distributions beyond Bernoulli. Thus, a similar bimodal picture of the distribution of the Adam update should be expected in the general case of correlated gradient estimations.
Intuitively, this picture can be explained as follows: if for any fixed \(i\) the values of \(g[i,t]\) are the same for all \(t\) (\(g[i,t]=g[i]\) or \(g_{t}=g\)), then the vector of updates consists of the values \(\pm\frac{\beta_{1}}{\sqrt{\beta_{2}}}\) as \(u_{t}{\left|\right._{\varepsilon=0}}=\mathrm{sign}(g)\frac{\beta_{1}}{\sqrt{ \beta_{2}}}\) for all \(t\).
Anywhere between these two regimes (\(\rho\approx 1\)), one would observe a bimodal symmetrical distribution with a significant overlap between the modes. For example, we draw distributions of the update \(u[:,t]\) for time steps \(t\) when the training loss was stalling or showing divergent behavior. Figure 3 contains the probability density functions of \(u[i,t]\) over \(i\in G\) where \(G\) is a group of variables that make up a layer of the network. It is important to note that the majority of the layers in the model weight snapshots used to draw these images were still distributed close to the bell-shaped distribution depicted in Figure 2. Thus, a severe time-domain correlation between gradient estimations in a single layer should be associated with the convergence properties of the entire model. The number of the layers that have changed their update distribution from the bell-shaped distributions to the bimodal distributions has been increasing for our larger models as the training unrolled. Another important remark is that we were not able to detect layers with a bi-modal distribution of the updates in the smaller models that did not experience any perplexity spikes, namely 7b and 30b.
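The following sketch reproduces the two regimes discussed above by simulating \(g[i,t]=g^{(1)}[i]+g^{(2)}[i,t]\) directly: for \(\rho\ll 1\) the update histogram stays uni-modal, while for \(\rho\gg 1\) it becomes symmetric and bimodal. The standard bias-corrected Adam recursion is used as a stand-in, and the exact mode locations depend on \(\rho\) and the recursion details; only the qualitative shapes are the point.

```
import numpy as np

rng = np.random.default_rng(1)
n, T = 100_000, 200
beta1, beta2 = 0.9, 0.95

def update_distribution(rho):
    """Spatial distribution of u[:, T] when g[i, t] = g1[i] + g2[i, t], with
    g1 ~ rho * Bernoulli(1/2) fixed in time and g2 ~ Bernoulli(1/2) i.i.d."""
    g1 = rho * rng.choice([-1.0, 1.0], size=n)    # time-correlated component
    a = np.zeros(n)
    b = np.zeros(n)
    for t in range(1, T + 1):
        g = g1 + rng.choice([-1.0, 1.0], size=n)  # g2 redrawn at every step
        a = beta1 * a + (1 - beta1) * g
        b = beta2 * b + (1 - beta2) * g ** 2
    return (a / (1 - beta1 ** T)) / np.sqrt(b / (1 - beta2 ** T))

for rho in (0.1, 5.0):
    u = update_distribution(rho)
    # Fraction of update components far from zero: small when rho << 1
    # (uni-modal, bell-shaped), close to 1 when rho >> 1 (symmetric bimodal).
    print(rho, float(np.mean(np.abs(u) > 0.5)))
```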
## 5 Time-domain correlation leads to divergence of Adam
In Section 3 we have discussed why time-domain independence between gradient estimation components is important for fast convergence. Here, we continue by showing how its lack leads not only to slow convergence but to divergence of Adam iterations, which allows us to link it to the loss instability phenomenon. In this Section, we assume that the gradient estimates are exact but correlated and show that the learning rate required for convergence falls dramatically as \(\frac{1}{n}.\) We do it by relying on the property that, in a high-dimensional space, a random vector is almost orthogonal to its sign.
Let us look at Taylor's expansion of
\[f(\theta_{t+1})=f(\theta_{t}-\eta_{t}u_{t})=f(\theta_{t})-\eta_{t}\langle \nabla f(\theta_{t}),u_{t}\rangle+\frac{\eta_{t}^{2}}{2}\langle\nabla^{2}f( \theta_{t}),u_{t}\otimes u_{t}\rangle+\ldots.\]
Figure 3: The distribution of \(u[i,t]\) over the domain of model parameters \(i\in G.\) Each plot corresponds to a layer \(G\) of a model at the training iteration \(t\) taken during the period of a brief loss stall in the training dynamic.
The loss difference can be written as
\[f(\theta_{t+1})-f(\theta_{t})=-\eta_{t}\langle\nabla f(\theta_{t}),u_{t}\rangle+ \frac{\eta_{t}^{2}}{2}\langle\nabla^{2}f(\theta_{t}),u_{t}\otimes u_{t}\rangle+o (\eta_{t}^{2}).\]
Considering small values of the step size \(\eta_{t}\), the Adam step leads to a decreased loss value in case the following inequality holds:
\[\langle\nabla f(\theta_{t}),u_{t}\rangle>\frac{\eta_{t}}{2}\langle\nabla^{2}f( \theta_{t}),u_{t}\otimes u_{t}\rangle. \tag{2}\]
Let us consider the ideal case when the gradient estimations are exact (\(\nabla f(\theta_{t})=g_{t}\) for all \(t\)), but there is a strong time-domain correlation between the gradient estimations (\(\nabla f(\theta_{t})=g_{t}=g\) for all \(t\)), which implies that \(m_{t}=\beta_{1}\nabla f(\theta_{t})\) and \(v_{t}=\beta_{2}\nabla f(\theta_{t})^{2}\).
* Assume \(\varepsilon\gg\max_{i}\left|\frac{\partial f(\theta_{t})}{\partial\theta_{t} [i]}\right|\) : We can conclude that in this scenario \(u_{t}=\frac{m_{t}}{\sqrt{v_{t}+\varepsilon}}=0.\) Both the left-hand side and right-hand side of the inequality (2) are equal \(0\), the inequality is not satisfied.
* Assume \(\varepsilon\ll\min_{i}\left|\frac{\partial f(\theta_{t})}{\partial\theta_{t} [i]}\right|\) (e.g. \(\varepsilon=0\)): \[u_{t}=\frac{\beta_{1}}{\sqrt{\beta_{2}}}\text{sign}(\nabla f(\theta_{t}))\] Divide the inequality (2) by \(\|u_{t}\|_{2}\) to obtain \[\frac{\langle\nabla f(\theta_{t}),u_{t}\rangle}{\langle u_{t},u_{t}\rangle}> \frac{\eta_{t}}{2}\langle\nabla^{2}f(\theta_{t}),\frac{u_{t}}{\|u_{t}\|} \otimes\frac{u_{t}}{\|u_{t}\|}\rangle.\] The left-hand side of this inequality is equal to \(\frac{\beta_{1}}{\sqrt{\beta_{2}}}\frac{\|\nabla f(\theta_{t})\|_{1}}{n}\) and the right-hand side is a random eigenvalue of the hessian of \(f\) at \(\theta_{t}\) scaled by \(\eta_{t}/2.\) We should assume that it is an eigenvalue of a random order because there is no relationship between the direction of \(u_{t}\) and the spectrum of the matrix \(\nabla^{2}f(\theta_{t}).\) The spectrum of the hessian of a neural network loss function has been the focus of various studies (Sankar et al., 2021; Liao and Mahoney, 2021; Yao et al., 2020). None of them were considering the question of how the spectrum evolves as the model is being scaled up, and there is a signal that a rigorous study would be extremely difficult to conduct as Liao and Mahoney (2021) pointed out that the spectrum can not be exactly described as a semicircular or even Marchenko-Pastur distribution already for simple generalized generalized linear models (GGLM). Thus, we will reason about the scaling of the average eigenvalue of a randomly generated matrix to provide some perspective on how the eigenvalues of the hessian could scale. According to Wigner's Semicircle law (see e.g. (Liu, 2000)), a symmetric matrix with the upper-triangle part filled with i.i.d. random variables of zero mean, unit variance, and finite higher-order moments, has the distribution of normalized and shifted eigenvalue (\(\frac{\lambda}{2\sqrt{n}}+\frac{1}{2}\)) that converges to \(\text{Beta}(\frac{3}{2},\frac{3}{2})\) distribution as \(n\to\infty\). The density of a normalized eigenvalue (\(\frac{\lambda}{2\sqrt{n}}\)) appears as a semicircle and has the explicit formula \(\mathbb{P}(x)=\frac{2}{\pi}\sqrt{1-x^{2}}\). This distribution is symmetric with the center at \(0\) and thus the expected value of the eigenvalue is equal to zero as well: \(\mathbb{E}\lambda=0\). Throughout optimization, however, for \(t\) large enough, one would expect \(\theta_{t}\) to be in proximity of a locally optimal point, which, by the second-order necessary conditions of local optimality, means that the Hessian \(\nabla^{2}f(\theta_{t})\) ought to be close to being a positive semi-definite matrix. Thus, modeling the Hessian as a matrix of random i.i.d. entries is not entirely correct. We can model it as a square of a random symmetric matrix that is guaranteed to be a positive semi-definite matrix. Since the spectrum of a matrix squared is the square of the spectrum of the matrix, we are interested in the mean value of the square \(\lambda^{2}\) of the eigenvalues of a symmetric random matrix with i.i.d. values. The second moment of the \(\text{Beta}(\frac{3}{2},\frac{3}{2})\) distribution is equal to \(\frac{5}{16}\), which is also equal to the mean value of \((\frac{\lambda}{2\sqrt{n}}+\frac{1}{2})^{2}=\frac{\lambda^{2}}{4n}+\frac{ \lambda}{\sqrt{n}}+\frac{1}{4}.\) Since \(\mathbb{E}\lambda=0\), we conclude that
\(\mathbb{E}\lambda^{2}=\frac{n}{4}\propto n\) for a symmetric positive semi-definite matrix of size \(n\times n\) with the entries of order \(1\), generated as the square of a symmetric matrix filled with i.i.d. entries. With this in mind, we can rewrite the condition on the learning rate stated by the inequality (2) in the form \[\frac{2\beta_{1}}{n\sqrt{\beta_{2}}}>\eta_{t}\frac{\mathbb{E}\lambda\left[ \nabla^{2}f(\theta_{t})\right]}{\|\nabla f(\theta_{t})\|_{1}}.\] If we assume that the entries of \(\nabla f(\theta_{t})\) and \(\nabla^{2}f(\theta_{t})\) are of the same order of magnitude, then both the numerator and denominator of the right-hand side of this inequality should scale linearly with \(n\), thus requiring the learning rate \(\eta_{t}\) to scale as \(\frac{1}{n}\) to avoid divergence. This is not a realistic scale for the learning rate in the context of large-scale machine learning.
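As a quick numerical sanity check of the scaling used in the bullet above, the following snippet verifies that the mean squared eigenvalue of a symmetric random matrix with order-one i.i.d. entries grows linearly with \(n\); the constant in front depends on the normalization convention, and only the linear growth is the point.

```
import numpy as np

rng = np.random.default_rng(0)
for n in (100, 200, 400, 800):
    W = rng.normal(size=(n, n))
    W = (W + W.T) / np.sqrt(2)          # symmetric matrix with order-one i.i.d. entries
    H = W @ W                           # positive semi-definite "Hessian-like" matrix
    lam2 = np.linalg.eigvalsh(H).mean() # mean eigenvalue of W^2 = mean of lambda^2 of W
    print(n, lam2 / n)                  # roughly constant: E[lambda^2] grows linearly in n
```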
Thus, we conclude that the requirement of time-domain incoherence between the gradient estimations is not only one of the necessary conditions for \(\frac{1}{\sqrt{v_{t}}}\) to be an efficient estimation of the diagonal of the hessian inverse, as shown in Section 3, but also a necessary condition for convergence of the Adam algorithm in general.
## 6 Theory of the loss instability
In the previous sections, we have shown that the time-domain independence of gradient estimation components is an important property for the convergence of the Adam algorithm. We have seen that it holds for the training runs that are showing stable behavior and can be violated in the periods of training instability. Further, we investigate the reasons why the time-domain independence of gradient estimates can be violated for some layers.
Let us turn to Figure 4. There, we plot the behavior of the gradient norm and the perplexity in the proximity of two perplexity spikes observed in the model 546b. In the earlier layers of the network, the gradient estimations get the norm that is orders of magnitude smaller than the layers further off in the model. Moreover, there are four layers that get the gradient estimates of a norm significantly below the value of \(\varepsilon=10^{-8}.\) Right before the explosion starts, the norm of the gradient estimation increases for the earlier layers, and decreases for the rest. In the next iteration, the layers that were persistently getting the gradient estimation of a norm below \(\varepsilon,\) experience an increased magnitude of the gradient estimation, and the perplexity value climb up. The gradient estimation norms of the earlier layers keep the high values throughout the loss explosion period and only scale back to the low values after the loss returns to its pre-explosion regime. Although not depicted on the Figure, the norm of the gradient estimation of the entire model significantly increases during the period of the loss spike, despite the majority of the layers getting gradient estimation of a smaller norm.
Since the gradient estimation norm for the earlier layers of the model is comparable to \(\varepsilon\) during normal operation, the approximation \(\varepsilon\approx 0\) is not valid for those layers any longer. Thus, the space-domain distribution of the update components \(u[G,t]\) over a layer \(G\) (\(i\in G\)) with small gradient estimation norm must be different from the normal distribution predicted in Section 4.1 and from the bimodal distribution predicted in Section 4.2. We should expect the distribution of \(u[G,t]\) to concentrate around \(0\) and take the form of a spike. The examples of the spiky distributions of the update components \(u[G,t]\) for the layers of the models of different sizes can be seen in Figure 5. Due to the gradient estimation components taking smaller values in larger models, the concentration effect is much more severe for the larger models than it is for the smaller ones.
Let us separately consider the update \(u[i,t]=\frac{m[i,t]}{\sqrt{v[i,t]}+\varepsilon}\), which depends on \(\varepsilon\), and the ratio of the exponential moving average of the gradient to the exponential moving average of the square of the gradient (further just "ratio" for short):
\[r[i,t]=\frac{m[i,t]}{\sqrt{v[i,t]}}=u[i,t]|_{\varepsilon=0}.\]
In Figure 6, we plot the distributions of the ratio \(r[G,t]\) of the same layers and at the same time steps as were used for Figure 5. The distribution of \(r[G,t]\) takes the bi-modal form discussed in Section 4.2, and, unlike \(u[G,t],\) is supported on an interval of order \(1.\) The distributions of \(u[G,t]\) and \(r[G,t]\) will coincide as soon as the gradient estimation components scale back to the order above \(\varepsilon.\)
Figure 4: The pixels of the heat map to the right represent the log norm of the gradient estimation of a layer (x axis) at an iteration (y axis). The pixels of the strip to the left represent the values of the log perplexity at each iteration (y axis).
We conclude that, due to the large batch size and the vanishing values of the update components, the gradient estimation \(g[i,t]\) for \(i\in G\) have a strong time-domain correlation, thus the distribution of \(r[i,t]\) is shifting to become bimodal. It is important to mention that in the early steps of training the distribution of \(r[G,t]\) was noticed to have a bell shape, as discussed in Section 4.1, even when the distribution of \(u[G,t]\) is already spiked, although it is quickly degrading to the bimodal form further into training. This empirical evidence suggests that the distribution of the ratio remains the same even in the case of vanishing gradient estimation components.
**Theory.** Based on the experimental and theoretical results, we propose the following explanation for the origins and the behavior of the training instabilities observed in large-scale machine learning. It takes the form of a multi-stage process spanning the short period of a single model perplexity spike:
1. Healthy training: both \(r[i,T]\) and \(u[i,T]\) have uni-modal distributions, close to normal, with a standard deviation of the order of \(1\). This implies a low correlation between gradient estimates at consecutive steps and a high value of the gradient components relative to \(\varepsilon\).
2. The gradients of a group of parameters of the model (e.g. a layer, let us denote it with \(G\)) are vanishing (\(\ll\varepsilon\)) over the course of training. From our observations, this is most likely to happen in the earlier layers of the model, where a good feature representation has been learned, while the rest of the parameters further through the model keep gradients of high magnitude.
3. The optimizer state values \(m[i,t]\) and \(v[i,t]\) for \(i\in G\) are vanishing (\(\ll\varepsilon\)). * The update values \(u[i,t]\) for \(i\in G\) are vanishing due to \(m[i,T]\) and \(v[i,t]\) values dropping. The spatial distribution of \(u[i,t]\) over \(i\in G\) spikes at \(0\), dropping its variance. The distribution of \(r[i,T]\) over \(i\in G\) remains uni-modal for now, close to normal, variance of the order of \(1\).
Figure 5: Examples of the update \(u[i,t]\) distribution over a layer \(i\in G\) being concentrated around \(0\) on various training steps \(t\). These distributions form spikes with the variance converging to zero with the growing size \(n\) of the model.
Figure 6: Examples of the ratio \(r[G,t]=u[G,t]\big{|}_{\varepsilon=0}=\frac{m_{t}}{\sqrt{v_{t}}}\) distribution over a layer \(G\) (\(i\in G\)) at training steps \(t\) which coincide with the layers and the training steps described in Figure 5. The distributions take a bimodal form, with the modes spiking at \(\pm\frac{\beta_{1}}{\sqrt{\beta_{2}}}\approx\pm 0.92\) as the size \(n\) of the model grows.
4. The distribution of \(g[i,t]\) for \(i\in G\) becomes highly correlated across the time domain for two reasons: * The model parameters remain unchanged (\(\theta_{t+1}\approx\theta_{t}\)) over the time steps as the update magnitude becomes close to zero. * The batch size for a large model is usually also very large, thus the gradient evaluations have small time-domain variance.
5. The distribution of \(r[i,t]\) over \(i\in G\) changes from uni-modal to bi-modal. The distribution of \(u[i,t]\) over \(i\in G\) remains spiked at 0.
6. The model parameters \(i\) outside the group \(G\) slowly change their values because the values \(u[i,t]\) corresponding to \(i\not\in G\) are still of order \(1\). As the model changes, the probability increases that a batch arrives that requires a "reconsideration" of the feature maps learned in the earlier layers of the model. Thus, the probability increases of a rare event (say it happens at time step \(t^{\star}\)) in which the components \(g[i,t^{\star}]\) for \(i\in G\) become larger than \(\varepsilon\). After this event, the distribution of \(u[i,t]\) departs from the spike form, approaching the bimodal distribution of \(r[i,t]\). According to our analytical study described in Section 5, this implies divergence for the standard learning rate values, which means that the entries of the next gradient estimation \(g[G,t^{\star}+1]\) must attain an even larger magnitude, bringing the distribution of \(u[i,t^{\star}+1]\) over \(G\) even closer to the distribution of \(r[i,t^{\star}+1]\). The process described here resembles a chain reaction, which we would expect to observe during a loss explosion. In Figure 4 it can be observed that the explosion of the gradient norm in the earlier layers of the model starts one training step earlier than the spike of the perplexity metric. * Note that for a function of the form \(\phi(x,\varepsilon)=\frac{x}{|x|+\varepsilon}\), the derivative \(\frac{\partial\phi}{\partial x}|_{x=0}=1/\varepsilon\) is large for small values of \(\varepsilon\), which means that small changes in the gradient estimation lead to a disproportionately large change in the Adam update magnitude (a toy numerical sketch of this single-step jump is given after this list).
7. The spatial distribution of \(u[i,t]\) over \(i\in G\) approaches the bimodal distribution of \(r[i,t]\), leading to divergent behavior as discussed earlier.
8. The gradient estimation components \(g[i,t]\) for \(i\in G\) start to vary widely over large magnitudes, losing correlation in the time domain. Thus, the spatial distribution of \(r[i,t]\) becomes uni-modal. Since \(g[i,t]\gg\varepsilon\), the distribution of \(u[i,t]\) coincides with the one of \(r[i,t]\) becoming uni-modal as well.
9. The training becomes healthy again. If the size of the group \(G\) is relatively small, the loss drops back to its pre-explosion value very quickly, learning the good features again. However, it has been observed that the loss does not always come back to the pre-explosion values, leading to divergence.
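The jump described in step 6 can be reproduced in a scalar toy setting. The snippet below is only an illustration of that single-step jump, not of the full chain reaction, and is not taken from our experiments; the gradient magnitudes and the constants \(\beta_1=0.9\), \(\beta_2=0.95\), \(\varepsilon=10^{-8}\) are illustrative choices.

```python
# Toy scalar illustration: after many steps with |g| << eps, a single "large"
# gradient moves u = m / (sqrt(v) + eps) from ~0 to order 1 in one step, while
# the ratio r = m / sqrt(v) stays of order 1 throughout.
beta1, beta2, eps = 0.9, 0.95, 1e-8
m = v = 0.0

def step(g, m, v):
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    u = m / (v ** 0.5 + eps)
    r = m / v ** 0.5 if v > 0 else 0.0
    return m, v, u, r

for _ in range(1000):                 # vanishing gradients
    m, v, u, r = step(1e-12, m, v)
print(u, r)                           # u ~ 1e-4 (spiked near 0), r ~ 1

m, v, u, r = step(1e-3, m, v)         # one rare large gradient
print(u, r)                           # u jumps to ~0.45, coinciding with r
```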
The description of the process will inform our discussion of the possible remedies for the training instabilities provided in Section 7.
### Snapshots of the large model training run in the unstable regime
In this section, we discuss the snapshots of the 546b model taken at the final stages of the largest training run we report, where the instabilities are the most prominent feature of the training. In Figure 7 we plot the learning curve throughout approximately 500 training iterations. The three distinct training iterations to which we refer as \(t_{1}=150000\), \(t_{2}=150250\), and \(t_{3}=150500\) are marked with red lines. For all three of the iterations, we demonstrate the behavior of the training perplexity value in the proximity of the iteration as well as the distributions of the update \(u[G,t]\) and the ratio \(r[G,t]=u[G,t]\big{|}_{\varepsilon=0}\) components over the embedding layer \(G_{0}\) of the network. Although the majority of layers did not experience a special state at the time steps under consideration, there were other layers in the model that showed a multi-modal behavior of their update distributions at the same training steps. We show the first layer as a good representative of them. All of the examples are given in Figures 8, 9, and 10. Each of the iterations considered here represents a stage of the loss explosion process: \(t_{2}\) is an iteration where the loss explosion has started, \(t_{1}\) is an iteration at which the loss has returned to an approximately pre-explosion value, and \(t_{3}\) is an iteration at which the loss explosion has ended but went flat instead of dropping back down.
It is important to note that the distributions shown in Figure 8 resemble those in Figure 10, even though the two snapshots are associated with two different phases of a perplexity spike. This highlights the question of when the loss drops back down and when it does not, which remains outside of the scope of the proposed theory. We can hypothesize here that the feature representation of the earlier layers changes more drastically at times when the drop of the loss does not occur than in the cases where the drop happens. As expected at \(t_{2}\), the distribution of \(u[G,t_{2}]\) closely resembles the distribution of \(r[G,t_{2}]\), although both of these distributions have come a long way to meet from the state depicted in Figure 8.
## 7 Discussion
In this section, we summarize the observations that we have considered in attempts to falsify our theory or that we generally found relevant to the phenomenon under study. We found it interesting that the distribution of the components of the ratio \(r[:,t]\) can turn from bimodal to uni-modal and back without a prominent spike in the model perplexity; this happens when the magnitude of the gradient estimation components increases slightly but does not surpass the value of \(\varepsilon.\) Thus, a jump in the norm of the gradient estimation does not always imply a loss spike; in particular, no loss spike occurs if the distributions of \(r[G,t]\) are mostly uni-modal for all of the layers \(G\) at the time of the jump.
Figure 8: Distributions of \(u[i,t_{1}]\) and \(r[i,t_{1}]\) over the parameters \(i\) belonging to the token embedding layer \(G_{0}\) of the 546b model at the time step \(t_{1}=150000\), and the training perplexity curve for the iterations around \(t_{1}\).
Figure 7: Sudden spiky behavior of the training perplexity curve of a large-scale training run. The training iterations \(t_{1}=150000\), \(t_{2}=150250\) and \(t_{3}=150500\) are marked with red vertical stripes.
Figure 10: Distributions of \(u[i,t_{3}]\) and \(r[i,t_{3}]\) over the parameters \(i\) belonging to the token embedding layer \(G_{0}\) of the 546b model at the time step \(t_{3}=150500\), and the training perplexity curve for the iterations around \(t_{3}\).
Figure 9: Distributions of \(u[i,t_{2}]\) and \(r[i,t_{2}]\) over the parameters \(i\) belonging to the token embedding layer \(G_{0}\) of the 546b model at the time step \(t_{2}=150250\), and the training perplexity curve for the iterations around \(t_{2}\).
The theory already explains quite well how the model loss can return to the pre-explosion values shortly after the instability begins. However, it is also critically important for the proposed theory to explain why the trick of skipping batches helps to stabilize the training loss curve. We can see that step 6 above requires a rare event to happen, in which the order of magnitude of the gradient estimation components for a particular part of the model parameters experiences a sudden jump above the value of \(\varepsilon.\) Thus, skipping the batch that brought a large gradient value means skipping this rare event that would otherwise start a chain reaction because the state of the optimizer has a severely bimodal distribution of \(r[:,t].\) Taking a gradient step through the model while the optimizer is in a state with \(r[:,t]\) having a bell-shaped distribution does not lead to the chain reaction, even if it provokes a spike in the norm of the gradient estimation.
While in the smaller or shallower models the distribution of the update components is close to normal with variance of order one, the deeper and larger models develop layers with the update component distribution spiked at zero rather quickly. It is much more common for decoder-only transformers to develop a small magnitude of the gradient estimates for the layers situated earlier in the network. This feature is likely related to the classical problem of vanishing gradients (Hochreiter et al., 2001), although as can be seen in Figure 4, the vanishing effect is not monotone in the layer number. Our theory draws a connection from the loss instability to the problem of diminishing gradients and to the batch size used for training. Thus, the size of the model, which is tightly bound to the model depth and the batch size, should indeed be the crucial hyper-parameter that strongly correlates with how prone the training is to the observed instabilities.
### How can the loss spikes be mitigated?
We point out several possible ways to fight the training instabilities caused by the time-domain correlation of gradient estimations. Each of them has its limitations and we discuss the trade-offs in detail to inform the future experiment design.
As proposed by Chowdhery et al. (2022), skipping batches is a possible strategy for recovering the loss curve, although it only works well earlier in training. Skipping batches is challenging to implement and operate, as it usually requires manual intervention to keep track of the training loss. It also consumes resources, which are spent on rolling back the model and on saving checkpoints more frequently. Most importantly, the frequency of the spikes increases later in training, when the order of magnitude of the gradient estimation entries for the majority of layers drops below \(\varepsilon.\) The situation when the instability of the training loss becomes unbearable is illustrated in Figure 7.
Alternative options include lowering the learning rate, which helps both theoretically and in practice but extends the overall training time. Tuning down the \(\varepsilon\) value should be considered, but it does not go well together with training in low-precision arithmetic, which is popular in large-scale training due to its efficient use of inter-GPU communication bandwidth. Changing the approach to treating the division by zero might be necessary. For example, in the process of studying the ratio values \(r[i,t]=u[i,t]\big{|}_{\varepsilon=0}\) we noticed that the order of magnitude of the ratio components never exceeds the order of 1, and the only cases when "Not a Number" values are observed are when both \(m_{t}[i]\) and \(v_{t}[i]\) are equal to 0. No cases with \(v_{t}[i]=0\) and \(m_{t}[i]\neq 0\) have been observed. Thus, setting \(\varepsilon=0\) and mapping \(u[i,t]\) to 0 whenever \(v_{t}[i]=0\) may be one option to avoid numerical issues and prevent the bi-modal distribution of the ratio from forming. We expect a reduction of the batch size to help reduce the frequency of loss spikes due to the increased variance in the gradient estimation, but it would also slow down large-scale training that exploits the data-parallel distribution paradigm over a large number of machines, as the batch size per GPU might become too small to make use of the GPU acceleration of tensor operations.
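A minimal sketch of the last-mentioned variant is given below, assuming a plain numpy re-implementation of a single Adam-style step (bias correction omitted; the hyper-parameter values are placeholders): the stability constant is dropped entirely and the \(0/0\) case is mapped to a zero update.

```python
import numpy as np

def adam_step_no_eps(theta, grad, m, v, lr=1e-4, beta1=0.9, beta2=0.95):
    """One Adam-style step with eps = 0 and the 0/0 case mapped to a zero update."""
    m = beta1 * m + (1.0 - beta1) * grad
    v = beta2 * v + (1.0 - beta2) * grad ** 2
    # u = m / sqrt(v) where v > 0, and exactly 0 where v == 0 (instead of m / (sqrt(v) + eps)).
    u = np.divide(m, np.sqrt(v), out=np.zeros_like(m), where=v > 0)
    theta = theta - lr * u
    return theta, m, v
```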
Increasing the averaging constants \(\beta_{1}\) and \(\beta_{2}\) would lead to averaging gradients over a larger number of time steps and would increase the time frame over which the correlation between the gradient estimations would need to persist for the severe bi-modality to appear in the distribution of the update components. On the downside, this would make the update stale, with not enough up-to-date information about the gradient during the normal phase of training.
It was observed in our experiments at a smaller scale that the composition of the training dataset can reduce the number of layers with diminishing gradient values down to zero. For the natural language processing tasks, this happened when the text corpus consisted of high-quality data with a lower variety of text modalities.
A conceptually different way to take care of training instabilities would be to keep track of a statistic that measures the uni-modality of the distribution of the ratio \(r_{t}=\frac{m_{t}}{\sqrt{v_{t}}}\), and to tune down the \(\varepsilon\) value, or even completely reinitialize the optimizer state, whenever the distribution changes its shape. One example of such a statistic is the dip statistic proposed by Hartigan and Hartigan (1985). Initial experiments in high-precision training have shown that this strategy makes it possible to prevent the bi-modal distribution of the updates from forming.
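The dip statistic itself requires a dedicated implementation, but any uni-modality proxy can serve the same monitoring purpose. The sketch below uses Sarle's bimodality coefficient, which needs only the skewness and (non-excess) kurtosis of the ratio components; the threshold of \(5/9\) and the decision to flag a layer are illustrative choices rather than the procedure used in our experiments.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def bimodality_coefficient(x):
    """Sarle's coefficient: ~1/3 for a Gaussian, 5/9 for a uniform, larger when bimodal."""
    g1 = skew(x)
    k = kurtosis(x, fisher=False)    # non-excess kurtosis (3 for a Gaussian)
    return (g1 ** 2 + 1.0) / k

def layer_needs_intervention(m, v, threshold=5.0 / 9.0):
    """Flag a layer whose ratio r = m / sqrt(v) no longer looks uni-modal."""
    r = np.divide(m, np.sqrt(v), out=np.zeros_like(m), where=v > 0)
    return bimodality_coefficient(r.ravel()) > threshold
```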
## 8 Related Work
In this section, we briefly discuss a (non-exhaustive) list of prior works on instabilities and divergence of the Adam algorithm. We focus on the questions raised in these papers and their main contributions to highlight why none of the results of the prior studies could explain the phenomenon of instability in large-scale training runs.
There is a series of works refuting the initial claims of Adam convergence and setting limits on its applicability. Reddi et al. (2019) pointed out an issue with the theoretical convergence of Adam even in a low-dimensional setting. They considered the Adam update to be the component-wise product of the average gradient and a vector of "learning rates". They notice that each component of the vector of learning rates does not monotonically decrease throughout the optimization steps. Using this observation, they come up with examples of the divergent behavior of Adam and propose alternative algorithms called AMSGrad and AdamNC. Wang and Klabjan (2022) propose another simple one-dimensional unconstrained problem of population loss minimization that illustrates the divergent behavior of Adam. The example shows the model drifting away from the optimal solution over the course of the iterations, despite the strong convexity of the objective and the absence of constraints. They identify the variance of the gradient estimates as the key concern and propose a variance-reduced Adam algorithm that manages to converge under certain assumptions. This line of work is aimed at a different phenomenon, namely the drifting divergence, rather than the rapid spiky divergence we are concerned with.
Chen et al. (2018) conducted an analysis that involved comparing the direction of the gradient and the direction of the update at each step of Adam dynamics, which is an idea we exploit as well. Their study resulted in conditions on certain characteristic dynamic quantities, sufficient for the convergence of the Adam algorithm. The quantities could theoretically be monitored but the conditions can not be easily imposed on a training problem in practice. We did not follow the suggested quantities throughout the training run in our experiments, although it may give additional insights into the cause of the spike. Another sufficient condition on the convergence of Adam-like algorithms was presented by Zou et al. (2019). Their conditions are set on the hyper-parameters of the algorithms, rather than dynamic quantities. They establish convergence rate guarantees for Adam and conclude that these rates do not hold for the commonly used version of the Adam algorithm due to hyper-parameter values. The issues discussed by the authors are also more relevant to explaining the drifting divergence phenomenon. Another work aimed at drifting divergence was done by Zaheer et al. (2018) who proposed a new method, Yogi, motivated by the concern that the dependence of Adam updates on the past gradient estimations decay exponentially with the number of steps.
Zhou et al. (2018) argue that the fundamental reason behind Adam divergence is the negative correlation between the scale of gradient estimation and the vector of learning rates, which results in a small step size for a large gradient, and a large step size for a small gradient. This reasoning does not agree with our observations, as the distribution of the Adam update was repeatedly seen to have the same shape independently of the scale of the gradient estimation (given that the gradient estimation is much larger than the stability constant \(\varepsilon\)).
## 9 Conclusion
In this work, we argue that the training loss instabilities observed in large-scale training should be associated with the time-domain correlation between the gradient estimates of earlier layers in the deep-learning models. Based on the identified connection, we propose several ways to mitigate the instabilities, along with the heuristic method that was known in the literature. We conclude that at this point, there is no silver bullet to solve the problem, and the appropriate remedy depends on the specific setup of the large-scale training run.
## Acknowledgement
The authors are grateful to Sho Yaida for his time spent carefully reading and commenting on the paper.
|
2303.02983 | Emergent competition shapes the ecological properties of multi-trophic
ecosystems | Ecosystems are commonly organized into trophic levels -- organisms that
occupy the same level in a food chain (e.g., plants, herbivores, carnivores). A
fundamental question in theoretical ecology is how the interplay between
trophic structure, diversity, and competition shapes the properties of
ecosystems. To address this problem, we analyze a generalized Consumer Resource
Model with three trophic levels using the zero-temperature cavity method and
numerical simulations. We find that intra-trophic diversity gives rise to
``emergent competition'' between species within a trophic level due to
feedbacks mediated by other trophic levels. This emergent competition gives
rise to a crossover from a regime of top-down control (populations are limited
by predators) to a regime of bottom-up control (populations are limited by
primary producers) and is captured by a simple order parameter related to the
ratio of surviving species in different trophic levels. We show that our
theoretical results agree with empirical observations, suggesting that the
theoretical approach outlined here can be used to understand complex ecosystems
with multiple trophic levels. | Zhijie Feng, Robert Marsland III, Jason W. Rocks, Pankaj Mehta | 2023-03-06T09:20:40Z | http://arxiv.org/abs/2303.02983v1 | # Emergent competition shapes the ecological properties of multi-trophic ecosystems
###### Abstract
Ecosystems are commonly organized into trophic levels - organisms that occupy the same level in a food chain (e.g., plants, herbivores, carnivores). A fundamental question in theoretical ecology is how the interplay between trophic structure, diversity, and competition shapes the properties of ecosystems. To address this problem, we analyze a generalized Consumer Resource Model with three trophic levels using the zero-temperature cavity method and numerical simulations. We find that intra-trophic diversity gives rise to "emergent competition" between species within a trophic level due to feedbacks mediated by other trophic levels. This emergent competition gives rise to a crossover from a regime of top-down control (populations are limited by predators) to a regime of bottom-up control (populations are limited by primary producers) and is captured by a simple order parameter related to the ratio of surviving species in different trophic levels. We show that our theoretical results agree with empirical observations, suggesting that the theoretical approach outlined here can be used to understand complex ecosystems with multiple trophic levels.
## I Introduction
A defining feature of natural ecosystems is their immense complexity. This complexity is especially prominent in diverse ecosystems with many different types of interacting species and resources. It is common to think about ecosystems in terms of energy flows: energy is harvested from the environment by primary producers (e.g., photosynthetic organisms) and then flows through the ecosystem via the food chain [1]. Energy flows in ecosystems can be understood by organizing species into trophic levels: sets of organisms that occupy the same level in a food chain [2; 3]. A classic example is a food pyramid consisting of three trophic levels: primary producers (organisms that can directly harvest energy from the environment, e.g., plants), primary consumers (organisms that derive energy by consuming the primary producers, e.g., herbivores), and secondary consumers (organisms that derive energy from predation of the primary consumers, e.g., carnivores).
Understanding the ecological consequences of such trophic structures remains an open problem in modern ecology [4]. To simplify the complexity of such systems, previous theoretical studies have often ignored the effects of intra-trophic level diversity, focusing entirely on coarse-grained energy flows between trophic levels. This approach has yielded numerous insights, including the incorporation of top-down and bottom-up control, the role of vertical diversity, and scaling laws for organism size and metabolism under different regimes [5; 6; 7; 8; 9]. However, the use of coarse-grained trophic levels makes it difficult to understand the effects of species diversity and competition on ecosystem structure and function. Given the importance of biodiversity and competition as ecological drivers [10; 11; 12], there is a need for theoretical approaches that allow for the simultaneous study of trophic structure, diversity, and competition.
Here, we address this shortcoming by building upon a series of recent works that utilize ideas from statistical physics to understand the effects of competition and diversity in large ecosystems with many species [13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23]. In particular, we focus on a three trophic level generalization of the MacArthur Consumer Resource Model (MCRM), a prominent ecological model for competition. First introduced by Levins and MacArthur, the MCRM considers an ecosystem with two trophic levels corresponding to primary producers (resources) and primary consumers [24; 25; 26]. In the MCRM, consumers are defined by a set of consumer preferences that encode how likely each consumer is to consume each resource. Competition occurs when species have similar consumer preferences and hence occupy similar niches.
Our model generalizes the MCRM in two ways. First, we introduce an additional trophic level into the system. In addition to the primary producers, or resources, of the bottom level and consumers of the top level, we introduce a middle level where species play the role of both consumers and resources. Second, inspired by the success of "random ecosystems" in capturing the properties of real ecosystems [27; 28; 20; 21; 23], we consider a large ecosystem with many species at each trophic level where all consumer preferences and ecological parameters are drawn from random distributions. The use of random parameters has a long history in theoretical ecology and allows us to model typical behaviors we expect to encounter [29].
To study this model, we make use of analytic calculations based on the zero-temperature cavity method and numerical simulations. In particular, we derive analytic expressions for steady-state distributions of species at all three trophic levels, allowing us to explore the interplay
between trophic structure, diversity, and competition and construct ecological phase diagrams for ecosystem behaviors.
## II Multi-Trophic Consumer Resource Model
### Theoretical setup
We begin by presenting a generalization of the MCRM to multi-trophic systems. We consider an ecosystem consisting of three trophic levels: a bottom trophic level consisting of \(M_{R}\) species of primary producers (e.g., plants) whose abundances we denote by \(R_{P}\) (\(P=1,\ldots,M_{R}\)), a middle trophic level consisting of \(M_{N}\) species of primary consumers (e.g., herbivores) with abundances \(N_{i}\) (\(i=1,\ldots,M_{N}\)), and a top level consisting of \(M_{X}\) secondary consumers (e.g., carnivores) with abundances \(X_{\alpha}\) (\(\alpha=1,\ldots,M_{X}\)). We note that while we present results for three levels, this model and the corresponding mean-field cavity solutions presented in the next section can easily be generalized to an arbitrary number of trophic levels (see Appendix).
The dynamics of the ecosystem are described by a set of non-linear differential equations of the form
\[\frac{\mathrm{d}X_{\alpha}}{\mathrm{d}t} =X_{\alpha}\left[\sum_{j}d_{\alpha j}N_{j}-u_{\alpha}\right] \tag{1}\] \[\frac{\mathrm{d}N_{i}}{\mathrm{d}t} =N_{i}\left[\sum_{Q}c_{iQ}R_{Q}-m_{i}-\sum_{\beta}d_{\beta i}X_{ \beta}\right]\] \[\frac{\mathrm{d}R_{P}}{\mathrm{d}t} =R_{P}\left[K_{P}-R_{P}-\sum_{j}c_{jP}N_{j}\right],\]
where \(c_{jP}\) is an \(M_{N}\times M_{R}\) matrix of consumer preferences for the \(M_{N}\) primary consumers and \(d_{\alpha j}\) is an \(M_{X}\times M_{N}\) matrix of consumer preferences for the \(M_{X}\) secondary consumers. We also define the carrying capacity \(K_{P}\) for each primary producer \(P\), along with the death rates \(m_{i}\) for each primary consumer \(i\) and \(u_{\alpha}\) for each secondary consumer. These dynamics share key assumptions with the original MCRM on how energy flows
Figure 1: **(a)** Schematic of food web interaction with three-level trophic structure, colored corresponding to the model equation in (e). **(b)** Simulated dynamics of a system with \(M_{X}=50\) species of carnivores, \(M_{N}=56\) herbivores and \(M_{R}=62\) plants with \(k=4\), \(m=1,u=1\), \(\sigma_{c}=\sigma_{d}=0.5\), \(\mu_{c}=\mu_{d}=1,\sigma_{k}=\sigma_{m}=\sigma_{u}=0.1\). **(c)** Histograms of the steady-state distributions reached by the simulated dynamics in (b) and the distributions predicted by our cavity solutions. **(d)** Schematic of the coarse-grained view of the three-level trophic structure, colored corresponding to equations in (f). **(e)** Equations of the three-level trophic structure model corresponding to (a). **(f)** Effective mean-field (TAP) equations for steady-states have additional emergent competition and random variation terms proportional to \(D_{eff}^{\mathcal{A}}\) (\(\mathcal{A}=X,N,R\)) and \(\sigma_{\mathcal{B}}\) (\(\mathcal{B}=u_{eff},m_{eff},K_{eff}\)), respectively.
from the environment to different species and how species interact with each other. The major difference between the two models is the addition of the intermediate trophic level, \(N_{i}\), where species act as both "resources" to the secondary consumers above and "consumers" of the primary producers below. To provide intuition, we will use the terms "carnivores", "herbivores" and "plants" in later text to refer to "secondary consumers", "primary consumers" and "primary producers," respectively.
In Fig. 1(a), we depict an example of this model graphically with species organized into three distinct trophic levels composed of carnivores, herbivores, and plants. At the bottom, there is a constant flux of energy into the system from the environment. In the absence of herbivores, plants in the bottom level grow logistically to their carrying-capacity \(K_{P}\). Predation reduces the resource abundances at the bottom, resulting in an upward flow of energy. Energy returns to the environment through death, represented by death rates \(u_{\alpha}\) and \(m_{i}\).
In addition to energy flows, the ecosystem is structured by competition between species through the consumer preference matrices \(d_{\alpha j}\) and \(c_{iP}\). As in the original MCRM, species within a trophic level with similar consumer preferences compete more and consequently, can competitively exclude each other [30]. One qualitatively new feature of the multi-trophic MCRM is that niches in the herbivore level are defined by both the consumer preferences \(c_{iP}\) for the species in the bottom level and the ability to avoid predation by carnivores through their consumer preferences \(d_{\alpha j}\). The consumer preferences \(c_{iP}\) and \(d_{\alpha j}\) control both energy flows between trophic levels and competition between species within a trophic level.
To proceed, we specify the free parameters \(c_{jP}\), \(d_{\alpha j}\), \(K_{P}\), \(m_{i}\), and \(u_{\alpha}\). Because we are interested in the _typical_ behaviors of large multi-trophic ecosystems (the thermodynamic limit, \(M_{R}\), \(M_{N}\), \(M_{X}\gg 1\)), we follow a rich tradition in theoretical ecology and statistical physics of drawing parameters randomly from distributions [29; 31]. We consider the case where the consumer preferences \(d_{\alpha i}\) are drawn independently and identically with mean \(\mu_{d}/M_{N}\) and standard deviation \(\sigma_{d}/\sqrt{M_{N}}\). We parameterize the variation in \(d_{\alpha i}\) in terms of the random variables \(\gamma_{\alpha i}\) so that
\[\begin{split} d_{\alpha i}=\frac{\mu_{d}}{M_{N}}+\sigma_{d} \gamma_{\alpha i}\\ \langle\gamma_{\alpha i}\rangle=0,\qquad\langle\gamma_{\alpha i }\gamma_{\beta j}\rangle=\frac{\delta_{\alpha\beta}\delta_{ij}}{M_{N}}.\end{split} \tag{2}\]
Similarly, we draw the consumer preferences \(c_{iP}\) independently and identically with mean \(\mu_{c}/M_{R}\) and standard deviation \(\sigma_{c}/\sqrt{M_{R}}\), parameterized in terms of the random variables \(\epsilon_{iP}\),
\[\begin{split} c_{iP}=\frac{\mu_{c}}{M_{R}}+\sigma_{c}\epsilon_{ iP}\\ \langle\epsilon_{iP}\rangle=0,\qquad\langle\epsilon_{iP}\epsilon _{jQ}\rangle=\frac{\delta_{ij}\delta_{PQ}}{M_{R}}.\end{split} \tag{3}\]
For convenience, we choose to scale the means and variances of the consumer preferences with the number of species, \(1/M_{N}\) or \(1/M_{R}\). We note that this does not affect the generality of our results, but greatly simplifies the mathematical treatment in the thermodynamic limit.
With the knowledge that the niche overlaps of consumers depend on the ratio of the mean to the standard deviation of the consumer preferences [26], we fix \(\mu_{c}=1\) and \(\mu_{d}=1\). In most simulations we also choose to draw the consumer preferences from Gaussian distributions. However, we note that our results also generalize to other distributions that obey the above statistical properties, such as the uniform distribution where coefficients are strictly positive (see Fig. 7).
Finally, we choose the parameters \(u_{\alpha}\), \(m_{i}\), and \(K_{P}\) to be independent Gaussian random variables with means \(u\), \(m\), and \(k\) and standard deviations \(\sigma_{u}\), \(\sigma_{m}\), and \(\sigma_{K}\), respectively. We also fix \(\sigma_{K}=0.1\), \(\sigma_{u}=0.1\), and \(\sigma_{m}=0.1\).
In Fig. 1(b), we depict the typical dynamical evolution of such a system, where the biomass of each species fluctuates for a finite time before reaching equilibrium. While the dynamics of consumer-resource models can display rich behavior, we choose to focus on the steady-state behavior of this model. In the physical regime where the mean values of each parameter and the initial biomass of each species are positive, there always exists a unique and stable steady state.
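To make the setup concrete, the following sketch integrates Eq. (1) with parameters drawn according to Eqs. (2) and (3) using a crude forward-Euler scheme. The pool sizes and parameter means mirror the values quoted in Fig. 1(b), but the integration step, time horizon, and extinction threshold are illustrative choices rather than our exact simulation protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
M_X, M_N, M_R = 50, 56, 62                      # regional pool sizes, as in Fig. 1(b)
mu_c = mu_d = 1.0
sigma_c = sigma_d = 0.5
k, m, u = 4.0, 1.0, 1.0

c = mu_c / M_R + sigma_c / np.sqrt(M_R) * rng.standard_normal((M_N, M_R))
d = mu_d / M_N + sigma_d / np.sqrt(M_N) * rng.standard_normal((M_X, M_N))
K = k + 0.1 * rng.standard_normal(M_R)
m_i = m + 0.1 * rng.standard_normal(M_N)
u_a = u + 0.1 * rng.standard_normal(M_X)

X, N, R = np.ones(M_X), np.ones(M_N), np.ones(M_R)
dt = 1e-3
for _ in range(200_000):
    dX = X * (d @ N - u_a)                       # secondary consumers (carnivores)
    dN = N * (c @ R - m_i - d.T @ X)             # primary consumers (herbivores)
    dR = R * (K - R - c.T @ N)                   # primary producers (plants)
    X = np.maximum(X + dt * dX, 0.0)
    N = np.maximum(N + dt * dN, 0.0)
    R = np.maximum(R + dt * dR, 0.0)

print((X > 1e-6).sum(), (N > 1e-6).sum(), (R > 1e-6).sum())  # surviving species per level
```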
### Derivation of cavity solutions
In a very large ecosystem, understanding the detailed behaviors of each species is not possible. For this reason, we focus on developing a statistical description of the ecological dynamics in steady state. This is made possible by the observation that each species interacts with many other species in the ecosystem, allowing us to characterize the effects of interactions using a mean-field theory. This philosophy originates from the statistical physics of spin glasses and has more recently been imported into the study of ecological systems [15; 32; 33; 34]. In the thermodynamic limit, the zero-temperature cavity method predicts that the steady-state species abundances at each trophic level follow truncated Gaussian distributions, given by
\[\begin{split} X&=\max\left(0,\frac{g_{eff}^{X}+\sigma_{g_{eff}^{X}}z_{X}}{D_{eff}^{X}}\right)\\ N&=\max\left(0,\frac{g_{eff}^{N}+\sigma_{g_{eff}^{N}}z_{N}}{D_{eff}^{N}}\right)\\ R&=\max\left(0,\frac{g_{eff}^{R}+\sigma_{g_{eff}^{R}}z_{R}}{D_{eff}^{R}}\right),\end{split} \tag{4}\]
where \(z_{X},z_{N},z_{R}\) are independent Gaussian random variables with zero mean and unit variance and the effective parameters are given by the expressions
\[\begin{split} g_{eff}^{X}&=-u+\mu_{d}\left\langle N\right\rangle\\ g_{eff}^{N}&=-m-r_{1}\mu_{d}\left\langle X\right\rangle+\mu_{c}\left\langle R\right\rangle\\ g_{eff}^{R}&=K-\mu_{c}r_{2}\left\langle N\right\rangle\\ \sigma_{g_{eff}^{X}}^{2}&=\left\langle N^{2}\right\rangle\sigma_{d}^{2}+\sigma_{u}^{2}\\ \sigma_{g_{eff}^{N}}^{2}&=\sigma_{c}^{2}\left\langle R^{2}\right\rangle+\sigma_{d}^{2}r_{1}\left\langle X^{2}\right\rangle+\sigma_{m}^{2}\\ \sigma_{g_{eff}^{R}}^{2}&=\sigma_{k}^{2}+\sigma_{c}^{2}r_{2}\left\langle N^{2}\right\rangle\\ D_{eff}^{X}&=-\sigma_{d}^{2}\nu\\ D_{eff}^{N}&=\sigma_{c}^{2}\kappa-r_{1}\sigma_{d}^{2}\chi\\ D_{eff}^{R}&=1-r_{2}\sigma_{c}^{2}\nu.\end{split} \tag{5}\]
We use the notation \(\left\langle.\right\rangle\) to denote averages over the distributions in Eq. (4). With this notation, we define the the mean abundance of species at each trophic level, \(\left\langle R\right\rangle\), \(\left\langle N\right\rangle\), and \(\left\langle X\right\rangle\), the second moments of the species abundances, \(\left\langle R^{2}\right\rangle\), \(\left\langle N^{2}\right\rangle\), and \(\left\langle X^{2}\right\rangle\), and the mean susceptibility of each trophic level biomass with respect to the change of direct energy flow in or out from the environment at that level, \(\chi=\left\langle\frac{\partial X}{\partial u}\right\rangle\), \(\nu=\left\langle\frac{\partial N}{\partial m}\right\rangle\), and \(\kappa=\left\langle\frac{\partial R}{\partial K}\right\rangle\).
In the Appendix, we provide a detailed explanation of how Eqs. (4) and (5) can be used to derive a set of self-consistent cavity equations to solve for the means and second moments of the abundances, the susceptibilities, and the fraction of surviving species at each trophic level. Fig. 1(c) shows a comparison between the predictions of the steady-state distributions of \(R\), \(N\), and \(X\) and direct numerical simulation of Eq. (1). We can see that there is remarkable agreement with the simulation results. This suggests that the cavity method accurately captures the large-scale properties of multi-trophic ecosystems.
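The elementary ingredient of this self-consistent calculation is the set of moments of the truncated Gaussians in Eq. (4). A sketch of that ingredient is given below; the full cavity solution iterates these moments together with the susceptibilities until all quantities in Eq. (5) are mutually consistent (that iteration loop is not shown here).

```python
import numpy as np
from scipy.stats import norm

def truncated_gaussian_moments(g, sigma, D):
    """Survival fraction and first two moments of max(0, (g + sigma*z)/D), z ~ N(0, 1)."""
    delta = g / sigma
    w0 = norm.cdf(delta)                                                  # fraction of survivors
    w1 = delta * norm.cdf(delta) + norm.pdf(delta)                        # E[(z + delta)_+]
    w2 = (1.0 + delta ** 2) * norm.cdf(delta) + delta * norm.pdf(delta)   # E[(z + delta)_+^2]
    return w0, (sigma / D) * w1, (sigma / D) ** 2 * w2
```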
## III Emergent competition
### Effective coarse-grained picture
The cavity solutions from the previous section allow us to calculate the biomass of species in each trophic level. A key feature of these equations is that the effect of species competition is summarized by self-consistent Thouless-Anderson-Palmer (TAP) corrections proportional to the parameters \(D_{eff}^{X}\), \(D_{eff}^{R}\), and \(D_{eff}^{N}\) [see Eq. (4)]. We now show that these three parameters have a natural interpretation as encoding the "emergent competition" between species within a trophic level mediated by interactions with other trophic levels.
To see this, we note that Eq. (4) can also be rearranged to give effective steady-state equations for a typical species at each level,
\[\begin{split} 0=&\frac{\mathrm{d}X}{\mathrm{d}t}=X \Big{[}g_{eff}^{X}+\sigma_{g_{eff}^{X}}z_{x}-D_{eff}^{X}X\Big{]}\\ 0=&\frac{\mathrm{d}N}{\mathrm{d}t}=N\Big{[}g_{eff}^{ N}+\sigma_{g_{eff}^{N}}z_{N}-D_{eff}^{N}N\Big{]}\\ 0=&\frac{\mathrm{d}R}{\mathrm{d}t}=R\Big{[}g_{eff}^{ R}+\sigma_{g_{eff}^{R}}z_{R}-D_{eff}^{R}R\Big{]}.\end{split} \tag{6}\]
Rewriting the steady-state solutions in this form clarifies the meaning of \(D_{eff}^{X}\), \(D_{eff}^{N}\), and \(D_{eff}^{R}\). Species at each trophic level have an effective description in terms of a logistic growth equation, with the parameters \(D_{eff}^{X}\), \(D_{eff}^{N}\), and \(D_{eff}^{R}\) controlling how much individuals within each trophic level compete with each other. In addition, Eq. (6) demonstrates that the species within each trophic level can be thought of as having effective carrying capacities drawn from Gaussian distributions with means \(g_{eff}^{X}\), \(g_{eff}^{N}\), and \(g_{eff}^{R}\), and standard deviations \(\sigma_{g_{eff}^{X}}\), \(\sigma_{g_{eff}^{N}}\), and \(\sigma_{g_{eff}^{R}}\), respectively. This coarse-grained view of the resulting ecological dynamics is illustrated in Fig. 1(d) with the correspondence between terms in the original and coarse-grained equations depicted in Figs. 1(e) and (f).
### Relation to species packing
To better understand the origins of this emergent competition, we relate \(D_{eff}^{X}\), \(D_{eff}^{N}\), and \(D_{eff}^{R}\) to the number of surviving species and the species packing fractions. One of the key results of niche theory is the competitive exclusion principle, which states that the number of species that can be packed into an ecosystem is bounded by the number of realized (available) niches [30; 35]. In Consumer Resource Models (CRMs), the number of realized niches is set by the number of surviving species at each trophic level. For the top trophic level, the competitive exclusion principle states that the number of surviving carnivores \(M_{X}^{*}\) must be smaller than the number of surviving herbivores \(M_{N}^{*}\),
\[M_{X}^{*}\leq M_{N}^{*}. \tag{7}\]
For herbivores which reside in the middle trophic levels, niches are defined by both the ability to consume plants in the bottom trophic level and the ability to avoid predation by carnivores in the top trophic level. For this reason, competitive exclusion on herbivores takes the form
\[M_{N}^{*}\leq M_{R}^{*}+M_{X}^{*}, \tag{8}\]
where \(M_{R}^{*}\) is the number of plants that survive at steady-states. In other words, for herbivores there are \(M_{R}^{*}+M_{X}^{*}\) potential realized niches of which \(M_{N}^{*}\) are filled.
The cavity equations derived from Eq. (4) naturally relate species packing fractions to the effective competition coefficients \(D_{eff}^{X}\), \(D_{eff}^{N}\), and \(D_{eff}^{R}\) in Eq. (6). Before proceeding, it is helpful to define the ratio
\[f=\frac{M_{R}^{*}+M_{X}^{*}-M_{N}^{*}}{M_{N}^{*}-M_{X}^{*}}\] \[=\frac{\#\text{ of unfilled realized niches in middle trophic level}}{\#\text{ of unfilled realized niches in top trophic level}} \tag{9}\]
and the ratio \(\phi_{N}=M_{N}^{*}/M_{N}\), the fraction of species in the regional species pool that survive in the middle level. Using these ratios, in the Appendix we show that the effective competition coefficients can be written as
\[D_{eff}^{X}= \frac{\sigma_{d}^{2}}{\sigma_{c}^{2}r_{2}}\frac{1}{f} \tag{10}\] \[D_{eff}^{N}= \phi_{N}\sigma_{c}^{2}r_{2}f\] \[D_{eff}^{R}= 1+\frac{1}{f}.\]
These expressions show that there is a direct relationship between the amount of emergent competition at each trophic level and the number of occupied niches (species packing properties). The effective competition coefficient for herbivores, \(D_{eff}^{N}\), decreases with the number of unoccupied niches in the top trophic level, and shows a non-monotonic dependence on the number of species in the middle level. Moreover, direct examination of the expressions in Eq. (10) shows that the amount of competition in the top and bottom levels is positively correlated, in agreement with the well-established ecological intuition for trophic levels separated by an odd number of levels [36, 37, 38].
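As a small worked example of Eqs. (9) and (10), with illustrative species counts rather than simulation output, the snippet below computes \(f\) and the three emergent competition coefficients directly from the regional pool sizes and the numbers of surviving species.

```python
M_R, M_N, M_X = 62, 56, 50          # regional pool sizes
MR_s, MN_s, MX_s = 40, 30, 15       # surviving species at steady state (illustrative)
sigma_c, sigma_d = 0.5, 0.5

r2 = M_N / M_R
phi_N = MN_s / M_N
f = (MR_s + MX_s - MN_s) / (MN_s - MX_s)   # unfilled middle niches / unfilled top niches

D_X = sigma_d ** 2 / (sigma_c ** 2 * r2 * f)
D_N = phi_N * sigma_c ** 2 * r2 * f
D_R = 1.0 + 1.0 / f
print(f, D_X, D_N, D_R)             # f = 25/15 ~ 1.67 for these counts
```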
To better understand these expressions, we used the cavity equations to numerically explore how the emergent competition parameters at each trophic level depend on the diversity of the regional species pool (as measured by \(\sigma_{c}^{2}\) and \(\sigma_{d}^{2}\)) and environmental parameters (\(k\), \(u\), \(r_{1}\), and \(r_{2}\)). We summarize these results in Table 1 and Figs. 5 and 6 in the Appendix. One consistent prediction of our model is that the effective competition in each level always decreases with the size of the regional species pool of that level. This effect has been previously discussed in the ecological literature under the names "sampling effect" and "variance in edibility" [39; 40; 36; 41]. We also find that in almost all cases, the effective competition coefficients change monotonically as model parameters are varied. One notable exception to this is the effect of changing the amount of energy supplied to the ecosystem, as measured by the average carrying capacity \(k\) of plants (resources) in the bottom level. We find that the amount of emergent competition in the bottom level, \(D_{eff}^{R}\), often first increases with \(k\) and then decreases, and this non-monotonic behavior propagates to \(D_{eff}^{N}\) and \(D_{eff}^{X}\). Finally, we observe that \(D_{eff}^{X}\), \(D_{eff}^{N}\), and \(D_{eff}^{R}\) generally increase with \(\sigma_{c}\) and \(\sigma_{d}\).
ecosystem that exhibits top-down control. Increasing the death rate of predators results in increased populations of herbivores (middle level) but decreased populations of predators (top level) and plants (bottom levels). This alternating behavior across trophic levels is characteristic of ecosystems with top-down control. In contrast, the biomass in the middle is largely insensitive to changes in the carrying capacity \(k\) of plants in the bottom level.
### Measuring top-down versus bottom-up control
Historically, it was assumed that ecosystems could not simultaneously exhibit both top-down and bottom-up control [37, 43, 44]. However, recent evidence - such as the impact of overfishing on aquatic ecosystems - has overturned this view, leading to a consensus that most ecosystems are impacted by both types of control and that their relative importance can shift over time [45, 46, 47, 48]. Building on these ideas, recent theoretical works suggest that ecosystems can shift between bottom-up and top-down control dominated regimes as one varies model parameters [49, 50, 51, 9]. Here, we revisit and extend these works using CRMs and our cavity solution to investigate the effects of species diversity and other environmental factors on top-down versus bottom-up control.
One important challenge we must overcome is the lack of a consensus in the ecology literature on how to quantify bottom-up versus top-down control in an ecosystem. Empirical studies often use the structure of correlations in time series of species abundances across trophic levels [45, 46, 48]. An alternative experimental approach is based on the ability to create small ecosystems with slightly different environments and/or compositions of predators in the top trophic level [52, 50, 40]. Unfortunately, conclusions between these two frameworks often do not agree with each other [53]. For this reason, it is necessary to revisit the problem of quantifying bottom-up and top-down control.
One common proposal for characterizing the response of ecosystems to perturbations in both empirical and theoretical studies is looking at the biomass distribution of different trophic levels. It has been argued that in a system with bottom-up control, we should expect the total biomass of the bottom trophic level to be larger than the total biomass of the top trophic level, \(M_{R}\langle R\rangle>M_{X}\langle X\rangle\). In contrast, in a system with top-down control, we expect the opposite, \(M_{X}\langle X\rangle>M_{R}\langle R\rangle\). Other existing theoretical works make use of derivatives to measure the results of various perturbations [8]. The most direct quantities we can look at are the derivatives \(\frac{d\langle N\rangle}{dk}\) and \(\frac{d\langle N\rangle}{du}\) that capture the change in the average biomass \(\langle N\rangle\) of species
Figure 2: Schematic and simulations of bottom-up control and top-down control. **(a)** Bottom-up control. Increasing the total energy influx \(k\) to primary producers in the bottom trophic level increases the average biomass \(\langle N\rangle\) of herbivores in the middle trophic level. **(b)** Top-down control. Increasing the death rate \(u\) of predators in the top trophic level increases the biomass of the middle trophic level. **(c)** Average biomass at each trophic level obtained from cavity solutions as a function of \(k\) with \(u=3,r_{1}=0.2,r_{2}=1.2,\sigma_{c}=0.5,\sigma_{d}=0.5\). **(d)** Same as (c) except \(r_{1}=1.3,r_{2}=0.3\).
in the middle trophic level in response to changes in the average carrying capacity \(k\) of plants (bottom trophic level) and changes in the average death rate \(u\) of carnivores (top trophic level).
### Cavity-inspired order parameters
Here, we use our cavity solution to the multi-trophic MCRM to propose two informative and intuitive order parameters to assess whether an ecosystem has top-down or bottom-up control. We then show that they qualitatively agree with each other and with the definition based on the derivatives \(\frac{d\langle N\rangle}{dk}/\frac{d\langle N\rangle}{du}\) discussed above (see Fig. 3).
#### iii.2.1 Biomass-based order parameter
To create our first order parameter, we rewrite the form of the effective growth rate for the biomass in the middle trophic level [Eq. (5)] as
\[\begin{split} g_{eff}^{N}=-m+g_{eff}^{N,top}+g_{eff}^{N,bottom} \\ g_{eff}^{N,top}=-r_{1}\mu_{d}\left\langle X\right\rangle,\qquad g_{ eff}^{N,bottom}=\mu_{c}\left\langle R\right\rangle\end{split} \tag{11}\]
Each of the three terms in \(g_{eff}^{N}\) captures distinct ecological processes of herbivores in the middle level: (i) the first term proportional to \(m\) is the intrinsic death rate, (ii) the middle term, \(g_{eff}^{N,top}\), captures the effect of predation due to carnivores in the top trophic level, and (iii) the third term, \(g_{eff}^{N,bottom}\), measures the consumption of plants in the bottom trophic level. Based on this interpretation, we propose the following ratio as a natural measure of top-down versus bottom-up control:
\[\left|\frac{g_{eff}^{N,top}}{g_{eff}^{N,bottom}}\right|=r_{1}\frac{\mu_{d} \left\langle X\right\rangle}{\mu_{c}\left\langle R\right\rangle}. \tag{12}\]
This ratio measures the relative contributions of the top and bottom trophic levels on the growth rate of species in the middle level. Notice that in addition to the biomass, this definition also accounts for the strength of competition between species via \(\mu_{c}\) and \(\mu_{d}\), along with differences in the regional species pool sizes via the extra factor \(r_{1}=M_{X}/M_{N}\).
#### iii.2.2 Species packing-based order parameter
We also construct an order parameter for top-down versus bottom-up control based on the relative contributions of the top and bottom trophic levels to the emergent competition coefficient of the middle level, \(D_{eff}^{N}\). Using the definition in Eq. (5), we rewrite this coefficient as
\[\begin{split} D_{eff}^{N}=D_{eff}^{N,top}+D_{eff}^{N,bottom}\\ D_{eff}^{N,top}=r_{1}\sigma_{d}^{2}\chi,\qquad D_{eff}^{N,bottom} =\sigma_{c}^{2}\kappa\end{split} \tag{13}\]
where \(D_{eff}^{N,top}\) and \(D_{eff}^{N,bottom}\) capture feedbacks from the top and bottom trophic levels, respectively, onto the middle level. Based on this, we define the corresponding order parameter as
\[\frac{D_{eff}^{N,top}}{D_{eff}^{N,bottom}}=-\frac{r_{1}\sigma_{d}^{2}\chi}{ \sigma_{c}^{2}\kappa}=\frac{M_{X}^{*}}{M_{N}^{*}-M_{X}^{*}}, \tag{14}\]
where in the second line we have used the cavity solutions to relate the susceptibilities to species packing fractions (see Appendix). Since \(M_{X}^{*}/M_{N}^{*}\) is the fraction of realized niches that are filled in the top level, this order parameter corresponds to
\[\frac{D_{eff}^{N,top}}{D_{eff}^{N,bottom}}=\frac{\text{\# occupied niches in top level}}{\text{\# of unfilled niches in top level}}.\]
Note that \(D_{top}^{N}/D_{bottom}^{N}\) is always positive because competitive exclusion ensures \(M_{X}^{*}<M_{N}^{*}\). By construction, if \(D_{eff}^{N,top}/D_{eff}^{N,bottom}>1\), then an ecosystem exhibits more top-down control than bottom-up control, while \(D_{eff}^{N,top}/D_{eff}^{N,bottom}<1\) indicates the opposite is true.
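In practice, this order parameter can be read off directly from the surviving-species counts of a simulated or observed community. A minimal sketch follows; the extinction threshold is an illustrative choice.

```python
import numpy as np

def top_down_order_parameter(X, N, threshold=1e-6):
    """D_top^N / D_bottom^N = (# occupied niches in top level) / (# unfilled niches in top level)."""
    MX_s = int((np.asarray(X) > threshold).sum())
    MN_s = int((np.asarray(N) > threshold).sum())
    return MX_s / (MN_s - MX_s)

# Example: 20 surviving carnivores and 45 surviving herbivores give
# 20 / (45 - 20) = 0.8 < 1, i.e. the community leans toward bottom-up control.
```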
Figure 3: **(a)** The emergent competition coefficient for the middle level, \(D_{eff}^{N}\), can be written as the sum of two terms resulting from feedbacks from the top trophic level, \(D_{top}^{N}\), and the bottom trophic level, \(D_{bottom}^{N}\). The order parameter \(D_{top}^{N}/D_{bottom}^{N}\) quantifies the sensitivity to top-down versus bottom-up control. **(b)** Comparison of the three order parameters discussed in the main text for measuring top-down versus bottom-up control: \(\frac{d\langle N\rangle}{dk}/\frac{d\langle N\rangle}{du}\), \(g_{eff}^{N,top}/g_{eff}^{N,bottom}\), and \(D_{eff}^{N,top}/D_{eff}^{N,bottom}\). Each point corresponds to an ecosystem with different choices of \(k,u,r_{1}\), and \(r_{2}\).
### Order parameters are consistent with ecological intuitions
To better understand whether these species-packing order parameters capture traditional intuitions about top-down versus bottom-up control, we compare \(D_{eff}^{N,top}/D_{eff}^{N,bottom}\), \(\left|g_{eff}^{N,top}/g_{eff}^{N,bottom}\right|\), and \(\frac{d\langle N\rangle}{dk}/\frac{d\langle N\rangle}{du}\) to each other for ecosystems where we varied the model parameters \(k\), \(u\), \(r_{1}\), and \(r_{2}\). The results are shown in Fig. 3. Notice that all three quantities are highly correlated, especially at the two extreme ends. This suggests that the order parameter \(D_{eff}^{N,top}/D_{eff}^{N,bottom}\) is an especially useful tool to infer whether an ecosystem is more susceptible to bottom-up or top-down control, as it requires us to simply count the number of surviving species in the top and middle trophic levels. If we have more occupied niches in the top level than unoccupied niches (\(D_{eff}^{N,top}/D_{eff}^{N,bottom}>1\) or equivalently, \(M_{X}^{*}/M_{N}^{*}>0.5\)), the ecosystem is more susceptible to top-down control. If the opposite is true (\(D_{eff}^{N,top}/D_{eff}^{N,bottom}<1\) or equivalently, \(M_{X}^{*}/M_{N}^{*}<0.5\)), then the ecosystem is more susceptible to bottom-up control.
## V Phase diagram changes with diversity
Having established that \(D_{top}^{N}/D_{bottom}^{N}\) is a good order parameter for assessing the relative importance of bottom-up and top-down control, we now use this quantity to construct phase diagrams. One important ecological parameter of interest is the total energy entering the ecosystem. In our model, this is controlled by the average carrying capacity \(k\) of plants at the bottom trophic level. Another ecologically important parameter is the predator death rate \(u\) which controls the biomass in the top trophic level. The number and diversity of species in the ecosystem is set by \(r_{1}=M_{X}/M_{N}\) and \(r_{2}=M_{N}/M_{R}\), which determine the relative sizes of the regional species pools at each trophic level, and \(\sigma_{c}\) and \(\sigma_{d}\), which control the trait diversity via the standard deviation of consumer preferences. Fig. 4(a) shows the dependence of \(D_{top}^{N}/D_{bottom}^{N}\) on \(k\), \(u\), \(r_{1}\), and \(r_{2}\), while the phase diagrams in Fig. 4(b) explore the dependence of \(D_{top}^{N}/D_{bottom}^{N}\) on \(\sigma_{c}\) and \(\sigma_{d}\).
Notice that \(D_{top}^{N}/D_{bottom}^{N}\) always increases with \(k\) and decreases with \(u\). These trends agree with our expectation that ecosystems are more likely to exhibit top-down (bottom-up) control when they are limited by the top (bottom) trophic level. A larger \(k\) reduces the survival stress on species in the middle level from food limitations, decreasing the importance of bottom-up control. Analogously, a larger \(u\) reduces the stress from predators, decreasing the importance of top-down control.
The amount of top-down control \(D_{top}^{N}/D_{bottom}^{N}\) also increases with \(r_{1}\) and decreases with \(r_{2}\). This observation is consistent with what is known in the ecological literature as the "sampling effect", where larger regional species pool size leads to a higher fitness of surviving species [39; 41]. A smaller \(r_{1}\) and larger \(r_{2}\) correspond to increasing the size of the regional species pool of the middle trophic level relative to the top level or bottom level, respectively. This increases the odds that herbivores can cope with the survival stress from predators and/or more efficiently consume plants.
In Fig. 4(b), we also show how \(D_{top}^{N}/D_{bottom}^{N}\) depends on the trait diversity via \(\sigma_{c}\) and \(\sigma_{d}\). Notice that the amount of top-down control decreases as the diversity of the herbivores increases via \(\sigma_{c}\), while it increases as predators in the top trophic level become more diverse via \(\sigma_{d}\). One notable exception is a small region in phase space with large \(k\), small \(u\), small \(r_{1}\), and small \(\sigma_{c}\), where \(D_{top}^{N}/D_{bottom}^{N}\) decreases with \(\sigma_{d}\). A similar dependence on \(\sigma_{d}\) is observed for \(D_{eff}^{R}\), suggesting that this idiosyn
Figure 4: Phase diagrams of the order parameter \(D_{top}^{N}/D_{bottom}^{N}\)**(a)** as a function of energy influx to primary producers \(k\) and death rate of carnivores \(u\) for four different ratios of regional species pool, \(r_{1},r_{2}\in\{0.4,1.2\}\), indicated by the pie chart, with \(\sigma_{c}=0.5\) and \(\sigma_{d}=0.5\), and **(b)** as a function of the species trait diversity, \(\sigma_{d}\) and \(\sigma_{c}\), for four different ratios of regional species pools with environmental parameters \(k=4.2,u=2.6\) and \(k=4.2,u=1\).
cratic behavior may be mediated by a complex feedback involving both carnivores and plants.
## VI Predictions, proposed experiments, and comparison to ecological literature
Tables 1 and 2 compare the predictions of our model for emergent ecosystem properties to the ecological literature. We primarily focus on predictions concerning how the effective competition strength at each trophic level (\(D_{eff}^{X},D_{eff}^{N},D_{eff}^{R}\)) and the relative strength of top-down versus bottom-up control (\(D_{top}^{N}/D_{bottom}^{N}\)) vary with the number of species (\(r_{1},r_{2}\)), species diversity (\(\sigma_{c},\sigma_{d}\)) and environmental parameters (\(k,u\)). In particular, Table 2 summarizes the predictions of our model in simple terms and presents observations/hypotheses from the ecological literature consistent with our model predictions. Overall, it is quite striking how many different qualitative observations/hypotheses are reproduced by our generalized MCRM with three trophic levels.
The predictions of our models can also be directly tested using current experimental techniques. One prediction of our theory is that whether a three trophic level ecosystem exhibits top-down or bottom-up control can be determined by counting the number of species in the middle and top trophic levels. In principle, this can be done using perturbative experiments on synthetic microcosms under different conditions [51]. Another interesting direction for testing our predictions is to use existing food web data, focusing on the number of coexisting species and biomass at each trophic level. One potential setting for doing this is to compare properties of aquatic and terrestrial food webs since aquatic ecosystems are generically thought to be more susceptible to top-down control than terrestrial ecosystems [65; 66].
## VII Conclusion
In this paper, we proposed a new model for three-level trophic ecosystems based on generalized Consumer Resource Models. Using the zero-temperature cavity method from spin glass physics, we derived analytic expressions for the behavior of this model that are valid for large ecosystems with many species at each trophic level. We found that intra-trophic diversity gives rise to "emergent competition" between species within a trophic level arising from feedbacks mediated by other trophic levels. The strength of this competition depends on both environmental parameters (energy influxes, death rates) and the diversity of the regional species pool. Using analytic solutions, we defined new order parameters for assessing whether an ecosystem is more susceptible to top-down or bottom-up control. Surprisingly, we found that one of these order parameters depends on ecosystem properties only through the fraction of occupied niches. Our analysis suggests that the relative importance of top-down control compared to bottom-up control increases with: (1) higher energy influx into the ecosystem, (2) lower death rate of predators (top level), (3) a larger fraction of species residing in the middle trophic level of the regional species pool, and (4) a lower fraction of carnivores and plants (species in the top and bottom trophic levels) in the regional species pool. We also found that the amount of top-down control increases as predators in the top trophic level increase their trait diversity, and decreases as herbivores increase their trait diversity.
Our theoretical work can be generalized to accommodate more realistic structures. For instance, our analysis can be generalized to any number of levels, which would allow for investigations into how perturbations propagate through the entire food chain with damping and amplification across levels. Moreover, adding other more complex ecological interactions such as omnivory, cross-feeding, and decomposition could lead to a more realistic and specific understanding of different types of ecosystems [67, 68, 52]. Practically, our theoretical predictions also suggest that a simple way to determine whether a three-level system exhibits top-down or bottom-up control is to count the number of carnivores and herbivores. These predictions, summarized in Tables 1 and 2, also provide simple, qualitative rules of thumb for understanding how ecosystem properties change with the shifting species composition of regional species pools and environmental variables.

\begin{table}
\begin{tabular}{c|l|l} Model behavior & Observation/hypothesis & References \\ \hline \multirow{2}{*}{1} & Increased species richness in a trophic level leads to higher biomass and resource consumption in its level & [39; 54; 55] \\ \cline{2-3} & Herbivore diversity may increase bottom-up control and decrease top-down control & [42; 53] \\ \hline 2, 5 & Increasing prey richness increases the chance of resistance to predators (variance in ability hypothesis) & [40] \\ \hline \end{tabular}
\end{table}
Table 2: Model predictions and consistent observations/hypotheses from the ecological literature.
## VIII Acknowledgement
This work was supported by NIH NIGMS grant 1R35GM119461 and a Simons Investigator in the Mathematical Modeling of Living Systems (MMLS) award to PM. We thank Maria Yampolskaya for useful discussions. The authors also acknowledge support from the Shared Computing Cluster administered by Boston University Research Computing Services.
|
2302.04003 | Dynamics of Molecular Gas in the Central Region of the Quasar
I$\,$Zwicky$\,$1 | We present a study of the molecular gas distribution and kinematics in the
circumnuclear region (radii $\lesssim 2\,$kpc) of the $z\approx0.061$ quasar
I$\,$Zwicky$\,$1 using a collection of available Atacama Large
Millimeter/submillimeter Array (ALMA) observations of the carbon monoxide (CO)
emission. With an angular resolution of $\sim0.36''$ (corresponding to
$\sim\,400\,\rm pc$), the host galaxy sub-structures including the nuclear
molecular gas disk, spiral arms, and a compact bar-like component are resolved.
We analyzed the gas kinematics based on the CO image cube and obtained the
rotation curve and radial distribution of velocity dispersion. The velocity
dispersion is about $30\,\rm km\,s^{-1}$ in the outer CO disk region and rises
up to $\gtrsim 100\,\rm km\,s^{-1}$ at radius $\lesssim 1\,$kpc, suggesting
that the central region of disk is dynamically hot. We constrain the CO-to-$\rm
H_2$ conversion factor, $\alpha_{\rm CO}$, by modeling the cold gas disk
dynamics. We find that, with prior knowledge about the stellar and dark matter
components, the $\alpha_{\rm CO}$ value in the circumnuclear region of this
quasar host galaxy is $1.55_{-0.49}^{+0.47}\,M_\odot\,\left(\rm
K\,km\,s^{-1}\,pc^2\right)^{-1}$, which is between the value reported in
ultra-luminous infrared galaxies and in the Milky-Way. The central 1$\,$kpc
region of this quasar host galaxy has significant star formation activity,
which can be identified as a nuclear starburst. We further investigate the high
velocity dispersion in the central region. We find that the ISM turbulent
pressure derived from the gas velocity dispersion is in equilibrium with the
weight of the ISM. This argues against extra power from AGN feedback that
significantly affects the kinematics of the cold molecular gas. | Qinyue Fei, Ran Wang, Juan Molina, Jinyi Shangguan, Luis C. Ho, Franz E. Bauer, Ezequiel Treister | 2023-02-08T11:36:56Z | http://arxiv.org/abs/2302.04003v2 | # Dynamics of Molecular Gas in the Central Region of the Quasar I Zwicky 1
###### Abstract
We present a study of the molecular gas distribution and kinematics in the circumnuclear region (radii \(\lesssim 2\) kpc) of the \(z\approx 0.061\) quasar I Zwicky 1 using a collection of available Atacama Large Millimeter/submillimeter Array (ALMA) observations of the carbon monoxide (CO) emission. With an angular resolution of \(\sim 0.36\arcsec\) (corresponding to \(\sim\) 400 pc), the host galaxy sub-structures including the nuclear molecular gas disk, spiral arms, and a compact bar-like component are resolved. We analyzed the gas kinematics based on the CO image cube and obtained the rotation curve and radial distribution of velocity dispersion. The velocity dispersion is about 30 km s\({}^{-1}\) in the outer CO disk region and rises up to \(\gtrsim 100\) km s\({}^{-1}\) at radius \(\lesssim 1\) kpc, suggesting that the central region of the disk is dynamically hot. We constrain the CO-to-H\({}_{2}\) conversion factor, \(\alpha_{\rm CO}\), by modeling the cold gas disk dynamics. We find that, with prior knowledge about the stellar and dark matter components, the \(\alpha_{\rm CO}\) value in the circumnuclear region of this quasar host galaxy is \(1.55^{+0.47}_{-0.49}\,M_{\odot}\left({\rm K\,km\,s^{-1}\,pc^{2}}\right)^{-1}\), which is between the value reported in ultra-luminous infrared galaxies and in the Milky-Way. The central 1 kpc region of this quasar host galaxy has significant star formation activity, which can be identified as a nuclear starburst. We further investigate the high velocity dispersion in the central region. We find that the ISM turbulent pressure derived from the gas velocity dispersion is in equilibrium with the weight of the ISM. This argues against extra power from AGN feedback that significantly affects the kinematics of the cold molecular gas.
AGN host galaxies (2017); Quasars (1319); Galaxy kinematics (602); Galaxy dynamics (591); Molecular gas (1073)

Qinyue Fei, Ran Wang, Juan Molina, Jinyi Shangguan, Luis C. Ho, Franz E. Bauer, Ezequiel Treister
## 1 Introduction
The scaling relationships between supermassive black holes (SMBHs) and their host galaxies suggest that their evolution is tightly coupled (e.g., Magorrian et al., 1998; Ferrarese & Merritt, 2000; Gebhardt et al., 2000; Kormendy & Ho, 2013). Active galactic nuclei (AGNs) represent the most active phase of the SMBH-galaxy co-evolution (Schawinski et al., 2007; King, 2010; Feruglio et al., 2010; Rupke & Veilleux, 2011; Fabian, 2012; Cicone et al., 2014; Fiore et al., 2017; Fluetsch et al., 2019). Cold molecular gas provides fuel for both star formation and SMBH growth (Carilli & Walter, 2013; Vito et al., 2014). Studying the distribution and kinematics of the molecular gas is therefore crucial for understanding the physical processes involved in the coevolution between SMBHs and their host galaxies (Sanders et al., 1991; Feruglio et al., 2010; Sturm et al., 2011).
The low-order rotational transitions of carbon monoxide (CO) are the most commonly used tracers of molecular gas in such studies (e.g., Barvainis et al., 1989; Carilli & Walter, 2013; Bolatto et al., 2017; Alonso-Herrero et al., 2018; Tan et al., 2019; Molina et al., 2021; Yajima et al., 2021). Massive molecular outflows reported in previous CO observations of AGN host galaxies are considered as evidence of negative AGN feedback, which expels gas and dust from the host galaxy (Haan et al., 2009; Feruglio et al., 2010; Cicone et al., 2014; Morganti et al., 2015). However, recent studies with large samples of optically selected quasars suggest that their host galaxies fall on, and even above, the main sequence of star-forming galaxies, with the host-galaxy star formation rate (SFR) and SMBH accretion rate being tightly correlated (e.g., Mullaney et al., 2012; Chen et al., 2013; Lanzuisi et al., 2017; Zhuang et al., 2021). From an observational point of view, the impact of AGN feedback on host galaxy evolution is still under debate, and it is necessary to study the physical processes embedded in AGN host galaxies that govern the coevolution between SMBHs and host galaxies.
Quasars, as the most luminous population of AGNs, are ideal targets for studying the impact of AGN feedback. Shangguan et al. (2020) observed the CO (2-1) line emission from a sample of 23 \(z<0.1\) Palomar-Green (Schmidt and Green, 1983) quasars using the Atacama Compact (Morita) Array (ACA). Molina et al. (2021) provided follow-up ALMA observations of six PG quasars at \(\sim\) kpc-scale resolution to study the distribution and kinematics of molecular gas in their host galaxies. Their results suggest that quasar hosts and inactive star-forming galaxies have similar gas fractions (Shangguan et al., 2020), but that the gas in quasar hosts is more centrally concentrated (Molina et al., 2021); luminous quasars do not appear to efficiently remove cold gas from their host galaxies.
Accurate measurements of the cold gas mass are key to understanding the ISM evolution in quasar host galaxies. The molecular gas masses are mainly measured by using the line luminosities of the low-\(J\) CO transitions based on assumptions of the CO (1-0) luminosity-to-mass conversion factor \(\alpha_{\rm CO}\), i.e., \(M_{\rm H_{2}}=\alpha_{\rm CO}L^{\prime}_{\rm CO\,(1-0)}\). A Milky-Way-like value of \(3.1\,M_{\odot}\,\rm(K\,km\,s^{-1}\,pc^{2})^{-1}\) (Sandstrom et al., 2013) is usually assumed for local Seyferts and quasars that are hosted in spiral galaxies (Evans et al., 2006; Shangguan et al., 2020; Koss et al., 2021), while the ULIRG-like value of \(\alpha_{\rm CO}=0.8\,M_{\odot}\,\rm(K\,km\,s^{-1}\,pc^{2})^{-1}\) (Downes and Solomon, 1998) is also considered for AGNs that are hosted in starburst systems (Xia et al., 2012). Studies of star-forming galaxies from local to high-\(z\) suggest that the \(\alpha_{\rm CO}\) factor varies over a wide range (Solomon et al., 1987; Lombardi et al., 2006; Narayanan et al., 2011; Papadopoulos et al., 2012; Sandstrom et al., 2013) and depends on the metallicity of the ISM (Israel, 1997; Wolfire et al., 2010; Leroy et al., 2011). However, there are still few direct measurements of \(\alpha_{\rm CO}\) in quasar host galaxies (Shangguan et al., 2020).
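For reference, the bookkeeping behind \(M_{\rm H_{2}}=\alpha_{\rm CO}L^{\prime}_{\rm CO\,(1-0)}\) can be sketched as follows; the flux-to-luminosity conversion is the standard relation of Solomon & Vanden Bout (2005), which is not quoted in the text, and the input flux value is purely illustrative.

```python
# Minimal sketch of the CO-luminosity-to-gas-mass bookkeeping (illustrative inputs).
def co_line_luminosity(S_dv_Jy_kms, nu_obs_GHz, D_L_Mpc, z):
    """L'_CO in K km/s pc^2 from a velocity-integrated line flux in Jy km/s."""
    return 3.25e7 * S_dv_Jy_kms * nu_obs_GHz**-2 * D_L_Mpc**2 * (1 + z)**-3

def molecular_gas_mass(L_co10, alpha_co):
    """M_H2 = alpha_CO * L'_CO(1-0), in solar masses."""
    return alpha_co * L_co10

# Placeholder flux; CO(1-0) observed at ~108.6 GHz for z = 0.061, D_L = 283 Mpc
L_co10 = co_line_luminosity(S_dv_Jy_kms=6.0, nu_obs_GHz=108.6, D_L_Mpc=283.0, z=0.061)
print(molecular_gas_mass(L_co10, alpha_co=4.3))   # Milky-Way-like conversion factor
print(molecular_gas_mass(L_co10, alpha_co=0.8))   # ULIRG-like conversion factor
```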
High resolution molecular CO line imaging with ALMA opens a unique opportunity to measure the gas dynamics in nearby quasar host galaxies (e.g., Tan et al., 2019). The rotation curve traced by the CO line velocity field constrains the dynamical mass of the host galaxy, allowing a detailed study of the mass budget from the gas and stellar content and providing an independent way to measure the \(\alpha_{\rm CO}\) factor. In this work, we present a case study of the quasar I Zwicky 1 (hereafter I Zw 1). The CO (2-1) emission from its host galaxy was observed by ALMA at \(\sim 400\) pc scale, the highest spatial resolution (by a factor of \(\sim 2-3\)) among the six objects presented in Molina et al. (2021), which allows us to resolve the gas content in the central few kpc region. The high-resolution data enable a dynamical analysis, which is widely used to investigate accurate mass-to-light ratios in galaxies (e.g., de Blok et al., 2008).
I Zw 1 possesses one of the most complete sets of multi-wavelength spectral energy distribution (SED) data coverage (Phillips, 1976; Barvainis and Antonucci, 1989; Gallo et al., 2004; Bruhweiler and Verner, 2008; Silva et al., 2018; Lyu et al., 2019). Spectroscopic observations indicate that it is a narrow-line Seyfert 1 system with FWHM\({}_{\rm H\beta}=1400\) km s\({}^{-1}\) (Osterbrock, 1977), and a BH mass of \(9.30^{+1.26}_{-1.38}\times 10^{6}M_{\odot}\) given by reverberation mapping (Huang et al., 2019). With a bolometric luminosity of \(L_{\rm bol}=3\times 10^{45}\) erg s\({}^{-1}\), I Zw 1 is classified as a super-Eddington source with \(\lambda_{\rm Edd}=2.58\). Long-term X-ray monitoring indicates the existence of an ultra-fast outflow in the nucleus of I Zw 1 (Ding et al., 2022). Detailed morphological analysis based on a Hubble Space Telescope (HST) \(0.^{\prime\prime}1\) resolution image showed a prominent pseudo-bulge (Sersic index \(n\approx 1.69\), effective radius \(r_{e}\approx 1.6\) kpc), and a relatively faint and extended disk (Zhao et al., 2021). The pseudo-bulge implies a black hole-to-bulge mass ratio of \(\sim 10^{-4}\), smaller than that of classical bulges and elliptical galaxies by a factor of 50 (Huang et al., 2019). The SED decomposition analysis yields a far-infrared (FIR) luminosity of \(\log L_{\rm FIR}/L_{\odot}=11.94\pm 0.30\) (Shangguan et al., 2018), in the range of Luminous Infrared Galaxies (LIRGs). The star formation activity distribution was confirmed with integral field unit (IFU) observations (Perna et al., 2021; Molina et al., 2022; Lamperti et al., 2022). The significant star formation activity is also confirmed by the combination of optical and sub-mm observations (Molina et al., 2022). Previous IRAM and ALMA observations already suggested that I Zw 1 has a rich molecular gas reservoir mainly concentrated
in its circumnuclear zone (Barvainis et al., 1989; Eckart et al., 1994; Schinnerer et al., 1998; Tan et al., 2019).
This paper is organized as follows: In Section 2 we present the available ALMA archival data of CO (1-0) and CO (2-1) observations and describe the data reduction. In section 3 we model the molecular gas distribution and kinematics. In Section 4, we model the gas dynamics and estimate the mass of each component, with prior knowledge of stellar distribution and dark matter halo properties. In section 5 we discuss the CO emission line ratios and the surface density of SFRs, and investigate whether we detect significant AGN feedback. We summarize in Section 6. For standard cosmological parameters of \(\Omega_{m}=0.308\), \(\Omega_{\Lambda}=0.692\), and \(H_{0}=67.8\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\)(Planck Collaboration et al., 2016), the redshift of \(z=0.06115\) corresponds to a luminosity distance of 283 Mpc.
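As a quick cross-check of the adopted scale, the luminosity distance can be reproduced with astropy.cosmology (a sketch; the paper does not state which tool was used for this conversion):

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Planck-like parameters quoted in the text
cosmo = FlatLambdaCDM(H0=67.8 * u.km / u.s / u.Mpc, Om0=0.308)

z = 0.06115
print(cosmo.luminosity_distance(z))                          # ~283 Mpc
print(cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec))   # ~1.2 kpc per arcsec
```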
## 2 CO data of I Zw 1
We collect available observations of the CO (2-1) line emission of I Zw 1 from the ALMA archive. The final data are combined from three ALMA programs, 2017.1.00297.S, 2018.1.00006.S (PI: Franz Bauer) and 2018.1.00699.S (PI: Pereira Santaella, Miguel) (Shangguan et al., 2020; Molina et al., 2021; Lamperti et al., 2022). The first observation is our Atacama Compact (Morita) Array (ACA) survey, with 2.5 hours on-source integration time and an angular resolution of 7\({}^{\prime\prime}\)(Shangguan et al., 2020). The second observation is our follow-up high-angular resolution observation, with about 11 minutes on-source time and an angular resolution of 0.4\({}^{\prime\prime}\)(Molina et al., 2021). The third observation is a part of the "Physics of ULIRGs with MUSE and ALMA" (PUMA; Lamperti et al., 2022) project, with 50 minutes on-source time and an angular resolution of 0.3\({}^{\prime\prime}\). We list the details of these observations in Table 1.
We use the Common Astronomy Software Application (CASA) version 5.6.1 (McMullin et al., 2007) to reduce the ALMA data. All of these observations are concatenated with the CASA task concat. The continuum is fitted and subtracted with the CASA task uvcontsub. We then imaged and cleaned the line data cube and continuum data with Briggs weighting (robust = 0.5) and a stop threshold of 2.5 times the root mean square (rms) noise of the off-source channels. For the CO (2-1) emission line we set a channel resolution of 7.812 MHz, which corresponds to \(\sim\)11 km s\({}^{-1}\) at \(z=0.061\). We set gridder = mosaic during tclean, and employ the auto-multithresh masking procedure (McMullin et al., 2007). We set the noise-, sidelobe-, and lownoise-thresholds to 4.25, 2.0, and 1.5, as recommended by the CASA guideline.1 The other parameters were not modified. Finally, we obtain a CO (2-1) data cube with a synthesized beam size of \(0.36^{\prime\prime}\times 0.32^{\prime\prime}\) and a typical channel rms noise of 0.28 mJy beam\({}^{-1}\). We derive the velocity-integrated flux map, intensity-weighted velocity, and velocity dispersion maps using the CASA task immoments. The beam size of the 1.3 mm continuum is \(0.31^{\prime\prime}\times 0.28^{\prime\prime}\), and the rms of the continuum map is 0.012 mJy beam\({}^{-1}\).
Footnote 1: [https://casaguides.nrao.edu/index.php/Automasking_Guide](https://casaguides.nrao.edu/index.php/Automasking_Guide)
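The reduction described above can be summarized schematically using the modular casatasks interface of CASA 6 (the paper used CASA 5.6.1, where the same tasks are called inside the CASA shell). Measurement-set names, the deconvolver, niter, and the absolute clean threshold below are placeholders or assumptions, while the channel width, weighting, and auto-multithresh thresholds follow the values quoted in the text.

```python
# Schematic CASA reduction script; file names are placeholders.
from casatasks import concat, uvcontsub, tclean, immoments

concat(vis=['aca.ms', 'c43_5_a.ms', 'c43_5_b.ms'], concatvis='izw1_co21.ms')
uvcontsub(vis='izw1_co21.ms', fitorder=0)   # uv-plane continuum subtraction (interface differs between CASA versions)

tclean(vis='izw1_co21.ms.contsub', imagename='izw1_co21_cube',
       specmode='cube', width='7.812MHz',          # ~11 km/s channels at z = 0.061
       weighting='briggs', robust=0.5,
       gridder='mosaic', deconvolver='hogbom',     # deconvolver/niter/threshold are assumptions
       usemask='auto-multithresh',
       noisethreshold=4.25, sidelobethreshold=2.0, lownoisethreshold=1.5,
       niter=100000, threshold='0.7mJy')           # ~2.5 x the off-source channel rms

# Moment 0/1/2 maps: integrated intensity, velocity field, velocity dispersion
immoments(imagename='izw1_co21_cube.image', moments=[0, 1, 2], outfile='izw1_co21.mom')
```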
We also investigate the surface density distribution of CO (1-0), in order to image the CO (2-1)/CO (1-0) emission line ratio of this target. We reduce the ALMA CO (1-0) data, which are adapted from the ALMA program 2015.1.01147.S (Tan et al., 2019), following the procedure described above. The beam size of the CO (1-0) data is \(0.62^{\prime\prime}\times 0.57^{\prime\prime}\).
## 3 Results and analysis
### Distribution of the molecular gas
We present the velocity-integrated intensity map in Figure 1 (a). The CO (2-1) line emission in I Zw 1 traces a disk with a diameter of \(\sim 5\) kpc, which is consistent with the source size of the CO (1-0) line emission (Tan et al., 2019). In Figure 1 (a), we note that the central contours (above 24\(\sigma\)) are elongated along the northeast-southwest direction, while the outer, lower surface brightness region has a different major-axis position angle. This indicates that the molecular gas disk can be described morphologically by two components: an extremely compact bar-like structure, which extends up to \(\sim\)1 kpc at a position angle of \(\sim 30^{\circ}\), and an extended circumnuclear disk (CND). Such an elongated structure could also be a massive bipolar gas outflow; however, further kinematic analysis does not show evidence of any non-circular motions (Sec. 3.2). The large intensity gradient in the nucleus also implies that the CO emission may exhibit a central compact core component unresolved by ALMA.
The complex molecular gas distribution described above can be well-described by fitting the CO (2-1) line intensity map with three components: two Sersic (Sersic, 1963) components for the extended emission (equation 1) and one Gaussian component for the unresolved core (equation 2):
\[I_{s}(r) =I_{e}\exp\left\{-b_{n}\left[\left(\frac{r}{r_{e}}\right)^{1/n}-1 \right]\right\}, \tag{1}\] \[I_{g}(r) =I_{G}\exp\left\{-\frac{r^{2}}{2\sigma^{2}}\right\}, \tag{2}\]
where \(I_{e}\) is the surface brightness measured at the effective radius \(r_{e}\), \(n\) is the Sersic index, and \(b_{n}\) is the numerical coefficient that ensures \(r_{e}\) corresponds to the half-light radius (Sersic, 1963). We use these two Sersic profiles to describe the bar-like structure and disk component, respectively, and use the Gaussian profile to describe the central compact core. We built this three-component model with Astropy (Astropy Collaboration et al., 2013), which is then convolved with the observation synthesized beam to produce the model of the observed line intensity map. The three-component model contains sixteen free parameters, including \(I_{e},\,r_{e},\,n\), minor-to-major axis ratio (\(b/a\)), position angle (\(\phi_{\rm s}\)) for each of the two Sersic components, Gaussian amplitude (\(I_{G}\)), full-width at half-maximum along the major and minor axes (FWHM\({}_{\rm x},\,\)FWHM\({}_{\rm y}\)), position angle of the major axis (\(\phi_{\rm G}\)) for the Gaussian component, and the center location (\(x_{0},\,y_{0}\)) that is shared with all the three components.
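A minimal sketch of this three-component model with astropy.modeling is given below; the parameter values are rough initial guesses rather than the best-fit numbers, and the conversion between the astronomical position angle (measured from north) and the model rotation angle theta (measured from the x-axis) is glossed over.

```python
import numpy as np
from astropy.modeling.models import Sersic2D, Gaussian2D
from astropy.convolution import Gaussian2DKernel, convolve

# Two Sersic components (CND and bar) plus a Gaussian core; shared center at (0, 0).
cnd  = Sersic2D(amplitude=0.4, r_eff=1.3, n=0.5, x_0=0, y_0=0, ellip=0.2, theta=np.deg2rad(142))
bar  = Sersic2D(amplitude=2.4, r_eff=0.5, n=0.3, x_0=0, y_0=0, ellip=0.7, theta=np.deg2rad(34))
core = Gaussian2D(amplitude=30.0, x_mean=0, y_mean=0,
                  x_stddev=0.22 / 2.355, y_stddev=0.12 / 2.355, theta=np.deg2rad(17))
model = cnd + bar + core

# Evaluate on a pixel grid (0.05 arcsec pixels) and convolve with the synthesized beam.
y, x = np.mgrid[-4:4:0.05, -4:4:0.05]
beam = Gaussian2DKernel(x_stddev=0.36 / 2.355 / 0.05, y_stddev=0.32 / 2.355 / 0.05)
model_image = convolve(model(x, y), beam)
```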
To find the best-fitting model we use the Python package emcee (Foreman-Mackey et al., 2013). The emcee package implements an affine-invariant ensemble sampler for Markov chain Monte Carlo (MCMC) sampling of the posterior probability distribution function (PDF). We optimize the log-likelihood function:
\[\log\mathcal{L}\equiv-\frac{1}{2}\sum_{i}^{N}\left[\frac{(z_{i}-z_{i}^{m})^{2} }{\sigma_{i}^{2}}+\ln(2\pi\sigma_{i}^{2})\right], \tag{3}\]
where \(z_{i}\) denotes the surface brightness at each pixel, \(\sigma_{i}\) is the \(1\sigma\) noise, and \(z_{i}^{m}\) corresponds to the model value at the same pixel. The best-fitting model along with the residuals is shown in Figure 1, and the best-fitting parameters are presented in Table 2.
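For concreteness, Equation (3) translates into a short Python function of the kind passed to emcee; here the model image would be the beam-convolved three-component model evaluated for a given parameter vector.

```python
import numpy as np

def log_likelihood(model_image, data, noise):
    """Gaussian log-likelihood of Eq. (3), summed over all pixels."""
    resid2 = (data - model_image) ** 2 / noise ** 2
    return -0.5 * np.sum(resid2 + np.log(2 * np.pi * noise ** 2))
```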
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multicolumn{1}{c}{ Project} & Date of & Observational & \multicolumn{1}{c}{Configuration} & Antenna & On-target & \multicolumn{1}{c}{References} \\ Code & Observation & band & & number & time & \\ & & & & & (s) & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) \\ \hline
2017.1.00297.S & Nov. 2017 & Band 6 & ACA & 11 & 8991 & Shangguan et al. (2020) \\
2018.1.00006.S & Nov. 2018 & Band 6 & C43–5 & 44 & 699 & Molina et al. (2021) \\
2018.1.00699.S & Oct. 2018 & Band 6 & C43–5 & 45 & 2992 & Lamperti et al. (2022) \\ \hline \end{tabular} Note — (1) The project code of ALMA observations. (2) The date of ALMA observations. (3) The ALMA band used during the observation. (4) The configuration of ALMA during the observation. (5) The number of antennas that are used during the observation. (6) The total on-target time of observation. (7) Papers that first report the observation.
\end{table}
Table 1: ALMA CO (2–1) observations of I Zw 1
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multicolumn{1}{c}{} & \(I_{e}\) & \(R_{e}\) & \(n\) & \(b/a\) & \(\phi_{\rm s}\) \\ & (Jy beam\({}^{-1}\) km s\({}^{-1}\)) & (\({}^{\prime\prime}\)) & & & (\({}^{\circ}\)) \\ & (1) & (2) & (3) & (4) & (5) \\ \hline CND & \(0.44^{+0.01}_{-0.01}\) & \(1.30^{+0.01}_{-0.01}\) & \(0.48^{+0.01}_{-0.01}\) & \(0.80^{+0.01}_{-0.01}\) & \(141.90^{+0.40}_{-0.40}\) \\ Bar & \(2.37^{+0.05}_{-0.05}\) & \(0.50^{+0.01}_{-0.01}\) & \(0.30^{+0.20}_{-0.20}\) & \(0.32^{+0.01}_{-0.01}\) & \(33.65^{+0.11}_{-0.11}\) \\ \hline \hline & \(I_{G}\) & FWHM\({}_{\rm maj}\) & FWHM\({}_{\rm min}\) & \(\phi_{\rm G}\) & \\ & (Jy beam\({}^{-1}\) km s\({}^{-1}\)) & (mas) & (mas) & (\({}^{\circ}\)) & \\ & (6) & (7) & (8) & (9) & \\ \hline Core & \(29.90^{+0.73}_{-0.70}\) & \(221.35^{+0.01}_{-0.01}\) & \(122.45^{+0.01}_{-0.01}\) & \(16.59^{+1.36}_{-1.56}\) & \\ \hline \end{tabular} Note — (1) Intensity at effective radius. (2) Effective radius. (3) Sérsic index. (4) The minor-to-major axis ratio. (5) Position angle of the major axis. North = \(0^{\circ}\), East = \(90^{\circ}\). (6) Amplitude of Gaussian function. (7) and (8) FWHM of the major and minor axis. (9) The position angle of the major axis of the Gaussian component.
\end{table}
Table 2: Fitting parameters of CO (2–1) intensity map
With the assumption that the CND and stellar disk are coplanar, we estimate the inclination angle of the disk following the formula in Hubble (1926),
\[\cos^{2}i=\frac{(b/a)^{2}-q_{0}^{2}}{1-q_{0}^{2}}, \tag{4}\]
where \(q_{0}\) is the intrinsic galaxy thickness and \(b/a\) is the minor-to-major axis ratio of the CND. We assume \(q_{0}=0.14\) for the molecular gas disk, which is similar to that reported for edge-on galaxies at low redshifts (Mosenkov et al., 2015). We obtain a host galaxy inclination of \(i=38\,^{\circ}\).
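A worked example of Equation (4), assuming \(q_{0}=0.14\) as above:

```python
import numpy as np

def inclination(b_over_a, q0=0.14):
    """Disk inclination (degrees) from the projected axis ratio, Eq. (4)."""
    cos2_i = (b_over_a**2 - q0**2) / (1.0 - q0**2)
    return np.degrees(np.arccos(np.sqrt(cos2_i)))

print(inclination(0.80))   # ~38 degrees for the CND axis ratio listed in Table 2
```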
From Figure 1, we can see that our model provides a reasonable description of the gas distribution on the circumnuclear scale. The residuals are likely produced by partially resolved-out spiral arms, as can be seen at the southwest edge.
### \({}^{3\rm D}\)Barolo fitting
The intensity-weighted velocity and velocity dispersion maps of CO (2-1) line emission are shown in panel (a) and panel (d) in Figure 2. As discussed in Molina et al. (2021), the gas velocity field is dominated by circular rotation. The velocity dispersion in the outer region is almost constant (\(\sim 30\,{\rm km\,s^{-1}}\)) with small variations along the radius, while in the inner 1 kpc region, the velocity dispersion rises up to \(100\,{\rm km\,s^{-1}}\).
Assuming that the non-circular motions are negligible, we fit the velocity field with a tilted ring model (Rogstad et al., 1974). The rotating disk is decomposed into a series of thin rings, and the kinematic properties of each ring can be described by seven parameters:
1. \((x_{0},y_{0})\): the sky coordinates of the ring center;
2. \(V_{\rm sys}\): the systematic velocity of the center of the ring related to the observer;
3. \(V_{\rm rot}(R)\): the rotation velocity of the ring;
4. \(\sigma\): the velocity dispersion of the ring;
5. \(\phi(R)\): the position angle of the kinematic major axis on the receding half of the galaxy, with respect to the north direction;
6. \(i(R)\): the inclination angle between the normal to the ring and the line-of-sight, \({\rm Inc.}=0^{\circ}\) represents a face-on disk;
7. \(z_{0}\): the scale height of the gas layer.
The line-of-sight velocity field [\(V_{\rm los}(x,y)\)] that we observed is related to the above parameters:
\[V_{\rm los}(x,y)=V_{\rm sys}+V_{\rm rot}(R)\sin i(R)\cos\theta\] \[\cos\theta=\frac{-(x-x_{0})\sin\phi+(y-y_{0})\cos\phi}{R},\]
where \(R\) is the radius of each ring.
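For illustration, the projected velocity of a single ring can be written as a short function (a sketch; \({}^{3\rm D}\)Barolo itself builds full mock data cubes rather than 2D velocity fields):

```python
import numpy as np

def v_los(x, y, x0, y0, v_sys, v_rot, inc_deg, pa_deg, R):
    """Line-of-sight velocity of one tilted ring, following the expressions above."""
    phi, inc = np.radians(pa_deg), np.radians(inc_deg)
    cos_theta = (-(x - x0) * np.sin(phi) + (y - y0) * np.cos(phi)) / R
    return v_sys + v_rot * np.sin(inc) * cos_theta
```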
In order to obtain the intrinsic kinematics of molecular gas in this galaxy, we model the ALMA datacube using the 3D-Based Analysis of Rotating Objects from Line Observations (\({}^{3\rm D}\)Barolo, version 1.6; Di Teodoro & Fraternali 2015). \({}^{3\rm D}\)Barolo fits the three dimensions of data cubes with a tilted-ring model. By directly modeling the data cube instead of the 2D velocity map, it fully accounts for the beam smearing effect, providing a reasonable model of the intrinsic circular velocity and velocity dispersion field for circular rotating systems (see Di Teodoro & Fraternali 2015, for more details).

Figure 1: The comparison between observed and modeled intensity maps of the CO (2–1) line emission. Contour levels in each panel correspond to \([-1,1,2,4,8,16,32]\times 3\sigma\), where \(\sigma\) is the rms, with the value of \(0.043\,{\rm Jy\,beam^{-1}\,km\,s^{-1}}\). Panel (a) and (b) represent the velocity-integrated map of the data and the model, respectively. Panel (c) represents the residuals between the data and the model. The North and East direction is shown as arrow at the lower right corner in panel (a). The synthesized beam (\(0.36^{\prime\prime}\times 0.32^{\prime\prime}\)) is plotted at the bottom left corner of each panel.
We fit the gas kinematics with \({}^{3\rm D}\)Barolo in two steps following the procedure described in Alonso-Herrero et al. (2018), but with some revisions. In the first step, we set the galaxy center (\(x_{0},y_{0}\)), systematic velocity \(V_{\rm sys}\), rotation velocity \(V_{\rm rot}\), velocity dispersion \(\sigma\), position angle \(\phi\), inclination \(i\), and disk height \(z_{0}\) parameters to be free. We adopt a ring width of 0.1 arcsec in the fitting, roughly one-third of the beam size. Initial guesses for the position angle and inclination are adopted from the morphological model results (\(\phi=142\,^{\circ}\) and \(i=38\,^{\circ}\); Section 3.1). Initial guesses for the kinematic center are set to be the same as the morphological center. We find that the output kinematic centers of each ring from this initial center are almost constant along the CND, at radii between 0.8 and 2.1 kpc. However, the fitted kinematic centers of the inner and outer rings show large scatter and uncertainties, which is possibly due to the limited resolution, poor sampling, and complex dynamics caused by the compact central bar-like structure and a possible companion interaction in the outer region (Shangguan et al., 2020). Other fitting parameters, such as \(V_{\rm sys}\), are almost constant within the CND. During the second fitting step, we fix the kinematic center and systematic velocity to the mean values over the CND scale obtained from the first fitting step, and then fit the rotation velocities, velocity dispersions, position angles, and inclination angles for each ring. The \({}^{3\rm D}\)Barolo best-fitting results are shown in Figure 3.

Figure 2: The results of \({}^{3\rm D}\)Barolo fitting. Panel (a) and (b) represent the line-of-sight velocity map of the CO (2–1) data and the best-fitting model given by \({}^{3\rm D}\)Barolo. Panel (c) represents the residual between the observation and the model. Panel (d) and (e) represent the velocity dispersion of the data and the \({}^{3\rm D}\)Barolo model. Panel (f) represents the residual of the velocity dispersion map. Contours in panels (a) and (b) start from -200 km s\({}^{-1}\) and in steps of 40 km s\({}^{-1}\). Contours in panels (d) and (e) are from 5 km s\({}^{-1}\) and in steps of 40 km s\({}^{-1}\). Contours in residual maps range from -50 km s\({}^{-1}\) to 50 km s\({}^{-1}\) and in steps of 20 km s\({}^{-1}\). The white star in each sub-panel indicates the kinematic center. The dash-dotted and dotted lines represent the major- and minor-kinematic axes.
The 3-D model successfully describes the CND cold gas kinematics, with rms model residuals \(\approx 20\) km s\({}^{-1}\) and \(\approx 10\) km s\({}^{-1}\) for the rotation velocity and velocity dispersion fields, respectively. The latter is comparable to the velocity resolution of the observation.
### Global kinematics
In panels (a) and (b) of Figure 3 we show the radial profiles of the velocities and angles derived by \({}^{3\rm D}\)Barolo. The rotation velocity rises to the flattened part at \(\sim 0.8\) kpc, with a value of \(\sim 270\) km s\({}^{-1}\), and slightly increases toward larger radii in the spiral arm region (\(r>2.1\) kpc). In this region, the velocity dispersion is \(\sim 30\) km s\({}^{-1}\), which indicates a cold gas disk with \(V/\sigma\approx 9\).
The velocity dispersion profile increases from \(30\) km s\({}^{-1}\) at \(\sim 0.8\) kpc to \(100\) km s\({}^{-1}\) at \(\sim 0.3\) kpc. A similarly high central velocity dispersion was also reported in Molina et al. (2021). This enhanced velocity dispersion in the central region is unlikely to be a spurious result of the beam smearing effect, as \({}^{3\rm D}\)Barolo is designed to take this into account (Di Teodoro and Fraternali, 2015). To further check this, we build a mock disk model adopting the intrinsic rotation curve of I Zw 1 and a constant velocity dispersion of \(\sigma=30\) km s\({}^{-1}\) at all radii. We set the inclination angle and position angle equal to \(41^{\circ}\) and \(130^{\circ}\), the same as those in I Zw 1. We then simulate the ALMA observational data cube in CASA using the simobserve task and fit the mock data cube with \({}^{3\rm D}\)Barolo. From our simulated data, we find that the beam-smearing effect can only increase the velocity dispersion value by a factor of \(\sim 1.3\), insufficient to account for the observed increase by a factor of \(\sim 4\). Thus we conclude that the velocity dispersion is intrinsically high in the central region (see the detailed discussion in Appendix A). We investigate the origin of such high velocity dispersion in Section 5.3.
Naturally, \({}^{3\rm D}\)Barolo poorly fits the datacube in the zones where the inner spiral arms are present, but those regions display significant non-circular motions reflecting the local perturbation of the gravitational potential field. Additional kinematic components may be included for a more accurate model for those regions.
Figure 3: The rotation velocity, velocity dispersion, inclination angle, and kinematic position angle derived from \({}^{3\rm D}\)Barolo fitting to the CO (2–1) data. Panel (a) represents the rotation velocities (blue points) and velocity dispersions (green points) as a function of radius. Red points represent the rotation velocities extracted from the CO (1-0) data at a resolution of \(\sim 600\) pc for comparison. Panel (b) represents the position angle of the kinematic major axis (purple points) and the inclination angle (orange points) as a function of radius. The uncertainties of parameters are shown as colored shades. The blue and red vertical shaded region represents the beam size of the observation for CO(2–1) and CO(1–0) observations.
Molina et al. (2021) investigated the non-circular motion of the CO (2-1) line velocity field of this object with KINEMETRY (Krajnovic et al., 2006), finding that non-circular motions are negligible. However, the compact bar-like structure presented in our morphology analysis could still introduce non-circular components in the velocity field of the very central region (\(\lesssim 1\) kpc), which requires higher resolution observations to fully resolve the bar-like structure kinematics.
### Continuum
Figure 4 shows the 1.3 mm continuum map of this galaxy. Although the continuum emission is more compact than the CO (2-1) line-emitting region, we can still see a structure that is elongated in a northeast-to-southwest direction. The continuum source has a position angle similar to that of the molecular bar-like structure. The total flux density within the 3\(\sigma\) contour region is \(1.1\pm 0.1\) mJy, which recovers 73% of the continuum flux density obtained from the previous ACA observation, and 34% of the continuum flux density predicted from the global far-IR SED fitting (Shangguan et al., 2018, 2020).
We fit the size of the continuum using the imfit task in CASA, which performs synthesized beam deconvolution and two-dimensional (2D) Gaussian fitting to the images. The resulting deconvolved full width at half maximum (FWHM) of major axis and minor axis sizes are \(0.174\pm 0.014\,\arcsec\) and \(0.100\pm 0.017\,\arcsec\), corresponding to \((0.21\pm 0.02)\times(0.02\pm 0.01)\) kpc\({}^{2}\), with a position angle of \(23.5\pm 9.9\,\arcdeg\).
## 4 Dynamical Modeling and the CO-to-H\({}_{2}\) Conversion Factor
The rotation curve modeled with \({}^{3\rm D}\)Barolo provides an independent constraint on the mass distribution within the CO line-emitting region. In this section, we fit the rotation curve with a multi-component dynamical model to investigate the mass budget of the stellar, molecular gas, and dark matter halo. In particular, with knowledge of the dynamical mass measured with the rotation curve and the stellar mass from the HST images (Zhao et al., 2021), we can constrain the mass of molecular gas and estimate the CO-to-H\({}_{2}\) conversion factor, \(\alpha_{\rm CO}\).
Here we model the gas dynamics and fit it to the rotation curve derived from \({}^{3\rm D}\)Barolo within the \(0.4\sim 2.1\) kpc radial zone (Figure 3). The inner and outer regions are not considered in the fitting due to the large uncertainties and possible effects from the asymmetric structure/spiral arms discussed in Section 3.3. During the fitting procedure, we assume that the rotation velocity is mainly contributed by four components: stellar bulge, stellar disk, molecular gas disk, and dark matter (DM) halo. We neglect the HI gas component as it is usually much more extended than the stars and molecular gas, and thus it only dominates the gas mass on larger scales (Walter et al., 2008; Wang et al., 2016). We also do not consider the contribution from the SMBH, which has a mass of \(9.30^{+1.26}_{-1.38}\times 10^{6}\,M_{\odot}\) (Huang et al., 2019) and a negligible contribution to the rotation velocity on kpc scales. Thus, the total rotation velocity is calculated as follows:
\[V^{2}_{\rm circ,tot}=V^{2}_{\rm bulge}+V^{2}_{\rm disk}+V^{2}_{\rm DM}+V^{2}_ {\rm gas},\]
where \(V_{\rm bulge}\), \(V_{\rm disk}\), \(V_{\rm DM}\) and \(V_{\rm gas}\) are circular velocities contributed by stellar bulge, stellar disk, dark matter halo and molecular gas, respectively.
### Circular velocities
For the spherical stellar bulge component, we adopt the deprojected symmetric three-dimensional model from Prugniel and Simien (1997),
\[\rho(r)=\rho r^{-\alpha}\exp\left(-b_{n}r^{1/n}\right) \tag{5}\]
where \(\alpha\) can be estimated as \(\alpha=1-1.188/2n+0.22/4n^{2}\)(see Equation B7 in Prugniel and Simien, 1997). A traditional 2D-Sersic profile can be well reproduced by integrating the spatial densities along the line of sight.
Figure 4: The 1.3 mm continuum map of I Zw 1 host galaxy. The contour levels correspond to \([-1,\,1,\,2,\,4,\,8,\,16,\,32]\times 3\,\sigma\), where \(\sigma=0.012\)mJy beam\({}^{-1}\). The synthesized beam (\(0.31\arcsec\times 0.28\arcsec\)) is plotted at the lower left corner.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline & \(\log M_{\rm b}\) & \(r_{\rm e,b}\) & \(n\) & \(\log M_{\rm d}\) & \(r_{\rm e,d}\) & \(\log f_{\star}\) & \(\log c\,^{a}\) & \(\alpha_{\rm CO}\) \\ & \(M_{\odot}\) & kpc & & \(M_{\odot}\) & kpc & & & \(M_{\odot}\left({\rm K\,km\,s^{-1}pc^{2}}\right)^{-1}\) \\ & (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\ \hline
**Prior** & (9.5, 12.5) & \(1.62\pm 0.05\) & \(1.69\pm 0.05\) & (9.1, 12.1) & \(10.97\pm 0.50\) & (-2.25, -1.30) & (0.52, 1.12) & (0, 20) \\
**Posterior** & \(10.70^{+0.08}_{-0.10}\) & \(1.61^{+0.05}_{-0.05}\) & \(1.70^{+0.05}_{-0.05}\) & \(10.46^{+0.60}_{-0.84}\) & \(11.00^{+0.49}_{-0.51}\) & \(-1.74^{+0.32}_{-0.33}\) & \(0.83^{+0.34}_{-0.33}\) & \(1.55^{+0.47}_{-0.49}\) \\ \hline \end{tabular} Note — (1) The stellar bulge mass. (2) The effective radius of the stellar bulge. (3) The Sérsic index of the stellar bulge. (4) The mass of the stellar disk. (5) The effective radius of the stellar disk. (6) The stellar-to-DM mass ratio \(f_{\star}=M_{\star}/M_{h}\), where \(M_{\star}\) is the total stellar mass and \(M_{h}\) is the DM halo mass. (7) The concentration of the DM halo. (8) The CO-to-H\({}_{2}\) conversion factor. The uniform prior limits of parameters are denoted as “(lower, upper)”. The Gaussian priors of parameters are denoted as \(\mu\pm\sigma\). In our MCMC fitting, we set Gaussian priors for \(r_{\rm e,b}\), \(n\) and \(r_{\rm e,d}\) from Zhao et al. (2021). We set upper and lower limits for \(M_{\rm b}\), \(M_{\rm d}\), \(f_{\star}\) and \(c\) from literature (Zhao et al., 2021; Behroozi et al., 2010; Dutton & Maccio, 2014).
\({}^{a}\) Dutton & Maccio (2014) suggested a relationship between dark matter halo concentration and dark matter halo mass from numerical simulations, with an uncertainty of \(\sim 0.1\,\)dex. Here we adopt this simulation-driven concentration value \(c_{\rm th}\) as the prior knowledge in our dynamical analysis.
\end{table}
Table 3: Constraints and results of dynamical parameters
Figure 5: **Left panel:** Derived rotation curve from \({}^{3\rm D}\)Barolo and from the best-fitting result. The blue line represents the rotation curve derived from \({}^{3\rm D}\)Barolo and its surrounding blue shaded region represents the uncertainties. The black solid line shows the rotation curve of the stellar bulge and the dash-dotted line shows the rotation curve of the molecular gas. The dashed line shows the rotation curve of the stellar disk. The dotted line shows the rotation curve of dark matter. The thick red solid line represents the result of the best-fit model rotation velocity. The vertical gray shaded region represents the region within the central synthesized beam area, in which the data points are not used in the fitting. **Right panel:** The posterior distribution of the stellar bulge mass, stellar disk mass, and the CO-to-H\({}_{2}\) conversion factor. The vertical and horizontal dashed lines represent the mean value of each parameter, which are adopted as the best-fitting values and are listed in Table 3. The stellar disk mass is poorly constrained, as their contribution is minor in the nuclear region and therefore is heavily degenerate with the stellar bulge component.
The circular velocity contributed by the stellar bulge can be written as:
\[V_{\rm bulge}(r)^{2} =\frac{GM(r)}{r},\] \[M(r) =M_{0}\frac{\gamma\left[n(3-p),bx^{1/n}\right]}{\Gamma\left[n(3-p) \right]},\]
where \(r\) is the spatial radius and \(M_{0}\) is the total stellar mass of the bulge. \(\Gamma\) and \(\gamma\) are the gamma and lower incomplete gamma functions, and \(x\equiv r/r_{e}\) is the reduced radius. When the Sersic index and radius satisfy \(0.6<n<10\) and \(10^{-2}\leq r/r_{e}\leq 10^{3}\), the value of \(p\) can be computed as \(p=1.0-0.6097/n+0.05563/n^{2}\) (Prugniel and Simien, 1997). Three parameters are used to describe the bulge mass distribution: the total mass \(M_{b}\), the effective radius \(r_{e,b}\), and the Sersic index \(n\).
For the disk component, we use the traditional exponential thin disk model adopted from Binney and Tremaine (2008):
\[V_{\rm disk}(r)^{2}=4\pi G\Sigma_{0}R_{d}y^{2}\left[I_{0}(y)K_{0}(y)-I_{1}(y)K _{1}(y)\right],\]
where \(\Sigma_{0}=M_{d}/2\pi r_{e,d}^{2}\) is the surface density, \(R_{d}=r_{e,d}/1.68\) and \(y\equiv r/2R_{d}\). Here, \(M_{d}\) and \(r_{e,d}\) are the stellar disk mass and the disk effective radius, and \(I_{i}\) and \(K_{i}\) are Bessel functions. Considering that the main purpose of our study is to constrain the mass decomposition, we constrain \(r_{e,b}\) and \(r_{e,d}\) using the results from Zhao et al. (2021). Both parameters have Gaussian priors, with a typical standard deviation of 0.05. The Sersic index of the stellar disk is fixed to 1.
For the dark matter component, we adopt the simulation-motivated NFW model (Navarro et al., 1996), and the circular velocity can be calculated by:
\[\left[\frac{V_{\rm DM}(r)}{V_{\rm vir}}\right]^{2}=\frac{1}{x}\frac{\ln(1+cx) -(cx)/(1+cx)}{\ln(1+c)-c/(1+c)},\]
where \(x=r/r_{\rm vir}\) is the radius in units of the virial radius, \(V_{\rm vir}\) is the virial velocity, and \(c\) is the halo concentration (see Navarro et al., 1996, for more details). Considering that our rotation curve only traces the nuclear region, where the contribution from DM is minor and cannot be well constrained, we require the DM parameters to satisfy empirical correlations from numerical simulations. The stellar mass fraction satisfies \(-2.3\leq\log(M_{*}/M_{h})\leq-1.3\) for halo masses between \(10^{11}M_{\odot}\) and \(10^{13}M_{\odot}\) (Behroozi et al., 2010). The concentration follows \(\log c=a+b\log(M/10^{12}h^{-1}M_{\odot})\), and we assume an intrinsic standard deviation of 0.1 dex (Dutton and Maccio, 2014). Other DM profiles (e.g., Burkert, 1995) are not considered here as the fitting is not sensitive to different assumptions about the DM profile.
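The three analytic circular-velocity terms above can be sketched as follows. Note that in this sketch the exponential-disk central surface density is normalized with the scale length \(R_{d}\), the standard convention for a razor-thin exponential disk, and the NFW virial quantities are taken as inputs rather than derived from the halo mass.

```python
import numpy as np
from scipy.special import gammainc, i0, i1, k0, k1

G = 4.30091e-6  # kpc (km/s)^2 / Msun

def v_bulge(r, M_b, r_e, n, b_n):
    """Prugniel & Simien (1997) spherical bulge; gammainc is the regularized
    lower incomplete gamma function, i.e. gamma(a, x) / Gamma(a)."""
    p = 1.0 - 0.6097 / n + 0.05563 / n**2
    x = r / r_e
    M_r = M_b * gammainc(n * (3 - p), b_n * x**(1.0 / n))
    return np.sqrt(G * M_r / r)

def v_disk(r, M_d, r_e):
    """Razor-thin exponential disk (Binney & Tremaine 2008)."""
    R_d = r_e / 1.68
    sigma0 = M_d / (2 * np.pi * R_d**2)
    y = r / (2 * R_d)
    return np.sqrt(4 * np.pi * G * sigma0 * R_d * y**2 *
                   (i0(y) * k0(y) - i1(y) * k1(y)))

def v_dm(r, c, r_vir, v_vir):
    """NFW halo (Navarro et al. 1996)."""
    x = r / r_vir
    num = np.log(1 + c * x) - (c * x) / (1 + c * x)
    den = np.log(1 + c) - c / (1 + c)
    return v_vir * np.sqrt(num / (x * den))
```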
We calculate \(V_{\rm gas}\) for the molecular gas component following Equation (10) in Noordermeer (2008), which derives the rotation curve for an axisymmetric bulge with arbitrary flattening. In this model, the mass density can be written as \(\rho=\rho(m)\), with \(m=\sqrt{x^{2}+y^{2}+(z/q)^{2}}\), where \(q\) is the intrinsic axis ratio of the bulge isodensity surfaces (Noordermeer, 2008). Models in Noordermeer (2008) have four parameters: the surface density of gas \(\Sigma_{\rm g}\), the effective radius \(r_{e,g}\), the Sersic index \(n_{g}\), and the intrinsic axis ratio \(q\). Since we know the surface brightness of the CO emission from our ALMA observation, we can directly convert the CO line surface brightness to molecular gas surface density with \(\alpha_{\rm CO}\). We adopt an intrinsic axis ratio of \(q=H/r_{e}=0.15\) during the fitting, by assuming a scale height of the molecular gas disk of \(H\sim 150\) pc and \(r_{e}\sim 1\) kpc from previous studies of gas-rich systems (Wilson et al., 2019; Molina et al., 2021). Based on the line ratio distribution from Figure 6, we adopt \(R_{21}=0.9\) for the central region, and \(R_{21}=0.6\) for the outer region (\(R>0.8\) kpc) to convert the CO (2-1) line intensity to the CO (1-0) line.
### Asymmetric drift correction
The ISM pressure gradients can also provide support to the gas against galaxy self-gravity. This effect needs to be considered and corrected for the rotation curve (asymmetric drift correction; Burkert et al., 2010; Lang et al., 2017) as:
\[V_{\rm rot}^{2}=V_{\rm circ,tot}^{2}+\frac{1}{\rho}\frac{d(\rho\sigma^{2})}{d \ln r}\]
where \(V_{\rm circ}\) is the circular velocity derived from the mass model, \(V_{\rm rot}\) is the observed rotation velocity, and the rightmost term models the effect of asymmetric drift, making \(V_{\rm rot}<V_{\rm circ}\). In this term, \(\sigma\) is the isotropic velocity dispersion, \(\rho\) is the gas density, and \(r\) is the galactocentric radius. Traditionally, \(\sigma\) is assumed to be constant when applying this asymmetric drift correction (e.g., Burkert et al., 2010); however, we find that in this galaxy \(\sigma\) is not constant with radius (Sec. 3.2), which means that we cannot directly use this formula.
Considering this, we re-calculate the asymmetric drift correction with our assumption of vertical hydrostatic equilibrium of the molecular gas:
\[P_{\rm ISM}=\mathcal{W},\]
where \(P_{\rm ISM}\propto\rho\sigma^{2}\) is the ISM turbulent pressure. This assumption is further confirmed in Section 5.3. We find that the asymmetric drift correction can be written as:
\[V_{\rm rot}(r)^{2}=V_{\rm circ,tot}(r)^{2}+\sigma^{2}\frac{d\ln\Sigma_{\rm g}} {d\ln r}+\sigma^{2}\frac{d\ln\Sigma_{\rm tot}}{d\ln r}, \tag{6}\]
where \(\Sigma_{\rm g}\) is the gas surface density and \(\Sigma_{\rm tot}\) is the surface density of the total disk, including the gas and stellar components. Since \(\sigma\) is provided by our \({}^{\rm 3D}\)Barolo fitting, and \(\Sigma_{\rm g}\) and \(\Sigma_{\rm tot}\) can be derived from \(\alpha_{\rm CO}\) and the stellar mass during the fitting, we can constrain these parameters with our dynamical model. Although the gas scale height \(h_{\rm g}\) remains uncertain, we find that variations of this parameter do not affect the asymmetric drift correction significantly, so we assume a constant value of 150 pc. We also note that when gravity is dominated by gas or stars, Equation 6 reduces to its traditional form in Burkert et al. (2010).
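A minimal numerical sketch of Equation (6), evaluating the logarithmic derivatives on a radial grid with finite differences:

```python
import numpy as np

def observed_from_circular(r, v_circ, sigma, surf_gas, surf_tot):
    """Eq. (6): rotation velocity predicted from the mass-model circular velocity.
    r: radii; v_circ, sigma: km/s; surf_gas, surf_tot: surface-density profiles."""
    lnr = np.log(r)
    drift = sigma**2 * (np.gradient(np.log(surf_gas), lnr) +
                        np.gradient(np.log(surf_tot), lnr))
    return np.sqrt(np.clip(v_circ**2 + drift, 0.0, None))
```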
In general, we have 8 free parameters in our fitting: the mass of stellar bulge and stellar disk, the Sersic index of the stellar bulge, the effective radius of the stellar bulge and the stellar disk, the dark matter halo mass and concentration, and the CO-to-H\({}_{2}\) conversion factor.
We use emcee to determine the best-fit values. In order to minimize the number of free parameters and avoid degeneracies, we set prior constraints following the available stellar morphology models for the host galaxy, as well as the relation between the DM halo and the stellar mass content. We set Gaussian priors for the effective radii of the stellar bulge and stellar disk, and for the Sersic index of the bulge, following the fitting results and uncertainties from the \(B\) and \(I\) band HST image modeling (Zhao et al., 2021). The prior constraints and posterior values are listed in Table 3.
### Best-fit dynamical modeling and results
We start the MCMC sampling using 400 walkers, with 1000 steps after a burn-in of 400 steps. We then adopt the 50th percentile of the samples as the best-fit values, and estimate uncertainties using the 16th and 84th percentiles of the samples. The best-fit results and the posterior distributions of the bulge mass, disk mass, and \(\alpha_{\rm CO}\) are shown in Figure 5.
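Schematically, this sampling setup corresponds to the following emcee calls; the posterior function and initial walker positions below are placeholders, not the actual model.

```python
import numpy as np
import emcee

ndim, nwalkers = 8, 400                     # eight free parameters, as in the text

def log_probability(theta):
    # Placeholder posterior; the real one combines the priors of Table 3 with
    # the likelihood of the asymmetric-drift-corrected rotation curve.
    return -0.5 * np.sum(theta**2)

p0 = np.random.randn(nwalkers, ndim)        # placeholder initial walker positions

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_probability)
sampler.run_mcmc(p0, 1400, progress=True)   # 400 burn-in + 1000 production steps

samples = sampler.get_chain(discard=400, flat=True)
best = np.percentile(samples, 50, axis=0)               # adopted best-fit values
lo, hi = np.percentile(samples, [16, 84], axis=0)       # 1-sigma credible interval
```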
We find that the geometry parameters (effective radius and Sersic index) are the same as their prior constraints, which suggests that the rotation curve cannot provide enough information to determine all of the parameters. We still use these prior constraints rather than fix them in order to take into account the uncertainties of these parameters from Zhao et al. (2021). We find that the masses of the stellar bulge and stellar disk are consistent with that in Zhao et al. (2021), while the stellar disk mass has quite large uncertainty. Such large uncertainties are also found in the posterior distribution of the stellar fraction and concentration parameters of dark matter halo. The large uncertainties imply that they are hardly constrained in our dynamical model; the gravitational potential of the region traced by CO (2-1) is dominated by stellar bulge and molecular gas. Other components only have minor contributions, and the fitting results are heavily affected by the uncertainties of the stellar bulge and molecular gas masses. We also test the fitting results with different initial setups, and in most cases, the posterior probability distributions are consistent with each other (see more details in Appendix B).
We derive \(\alpha_{\rm CO}=1.55^{+0.47}_{-0.49}\,M_{\odot}\left(\rm K\,km\,s^{-1}\,pc^{2} \right)^{-1}\) from our dynamical method. This value lies between the MW-like value [\(\alpha_{\rm CO}\approx 4.3\,M_{\odot}\left(\rm K\,km\,s^{-1}\,pc^{2}\right)^{-1}\)] and the ULIRG-like value [\(\alpha_{\rm CO}\approx 0.8\,M_{\odot}\left(\rm K\,km\,s^{-1}\,pc^{2}\right)^{-1}\); Downes and Solomon, 1998; Bolatto et al., 2013]. This value is \(\sim 2\) times smaller than that in nearby star-forming galaxies (Sandstrom et al., 2013). The \(\alpha_{\rm CO}\) value we derived here is only valid within the \(\sim 2\) kpc region of the quasar host galaxy, where the high gas surface density and star formation rate surface density suggest a nuclear starburst (see Section 5.2). The current CO (2-1) data cannot trace the molecular gas in the extended galactic disk and spiral arm region, where a higher \(\alpha_{\rm CO}\) value may be more appropriate for estimating the molecular gas mass. We also check whether this \(\alpha_{\rm CO}\) value is reasonable given some theoretical prescriptions (e.g., Bolatto et al., 2013). Bolatto et al. (2013) indicated that \(\alpha_{\rm CO}\) could have large variations depending on metallicity and gas surface density. However, we note that I Zw 1 presents a metallicity of \(\log\left(\rm O/H\right)=8.77\) by adopting the \(M_{*}-Z\) relationship obtained for SDSS galaxies with the Pettini and Pagel (2004) calibration (Kewley and Ellison, 2008), which is close to solar metallicity. Therefore, we do not expect any significant \(\alpha_{\rm CO}\) variation due to metallicity. We estimate an \(\alpha_{\rm CO}\) value of \(\sim 1.9\,M_{\odot}\left(\rm K\,km\,s^{-1}\,pc^{2}\right)^{-1}\) by solving the \(\alpha_{\rm CO}\)-\(\Sigma_{\rm mol}\) relation of Bolatto et al. (2013). This value is consistent with our dynamical \(\alpha_{\rm CO}\) value considering the uncertainties.
By adopting this new \(\alpha_{\rm CO}\), we estimate a total cold molecular gas mass at a value of \(\log M_{\rm H_{2}}/M_{\odot}=9.94^{+0.18}_{-0.31}\), and the gas fraction is \(f_{\rm gas}=0.10^{+0.12}_{-0.08}\). The value of the gas fraction is similar to that in inactive star-forming galaxies and hard X-ray selected AGN host galaxies (Shangguan et al., 2020; Koss et al., 2021), and is smaller than that in local LIRGs by a factor of \(\sim\)2 (Larson et al., 2016).
Using our dynamical method, we investigate for the first time the \(\alpha_{\rm CO}\) value in this quasar host galaxy, and find that it lies between the values in ULIRGs and in the MW (Bolatto et al., 2013; Molina et al., 2020). In the rest of this work, we estimate the molecular gas mass by adopting the median CO-to-H\({}_{2}\) conversion factor value derived from our best dynamical model.
## 5 Discussion
### Distribution of the CO (2-1)-to-CO (1-0) line ratio
A CO (2-1) to CO (1-0) line luminosity ratio of \(R_{21}=0.63\pm 0.02\) was reported in Shangguan et al. (2020) based on previous ACA measurements of the total gas content, which is within the typical range for subthermal CO-excited molecular gas in galactic disks (\(R_{21}<0.8\); Leroy et al., 2013; Rosolowsky et al., 2015; Saintonge et al., 2017). Here we report the surface brightness ratio distribution estimated from resolved ALMA images of CO (2-1) and CO (1-0) that are shown in Section 2. We smooth the CO (2-1) line image with CASA task imsmooth to match the angular resolution of the CO (1-0) data. We estimate the surface brightness ratio with immath within the region where both signal-to-noise ratios are larger than 5. The emission line ratio map is shown in Figure 6.
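Schematically, the ratio map is produced with two CASA tasks; file names and the masking step are placeholders, and the conversion from Jy beam\({}^{-1}\) units to brightness-temperature units (the \(\nu^{2}\) factor) is glossed over here.

```python
from casatasks import imsmooth, immath

# Match the CO(2-1) resolution to the CO(1-0) beam quoted in Section 2
imsmooth(imagename='izw1_co21.mom0', targetres=True,
         major='0.62arcsec', minor='0.57arcsec', pa='0deg',
         outfile='izw1_co21.mom0.smo')

# Divide the moment-0 images (pixels below 5 sigma should be masked beforehand)
immath(imagename=['izw1_co21.mom0.smo', 'izw1_co10.mom0'],
       expr='IM0/IM1', outfile='izw1_R21')
```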
The line intensity ratio is close to 1 within a radius of \(\sim\)1 kpc in the quasar host galaxy, suggesting that the molecular gas in the central region is optically thick and thermalized. The high \(R_{21}\) value in the nuclear region is consistent with the previous result reported by Staguhn et al. (2004) based on Berkeley-Illinois-Maryland Association (BIMA) and Plateau de Bure Interferometer (PdBI) observations at a lower angular resolution of \(\sim\)0.7\({}^{\prime\prime}\) for the CO(1-0) and \(\sim 0.9^{\prime\prime}\) for the CO(2-1) data. A similar emission line ratio distribution with a higher value toward the center is commonly found in nearby spiral galaxies (Braine & Combes, 1992; den Brok et al., 2021; Yajima et al., 2021), local IR luminous galaxies (Papadopoulos et al., 2012) and high redshift galaxies (Carilli & Walter, 2013; Daddi et al., 2015).
In the outer disk region, the emission line ratio is relatively low (\(\lesssim 0.6\)). It is likely that at larger radii, the molecular gas becomes subthermally excited (Husemann et al., 2017) or has a lower temperature (Braine & Combes, 1992).
### Star formation law of the nuclear region
We check the Kennicutt-Schmidt relation (Kennicutt, 1998) in the nuclear region of I Zw 1 based on the CO (2-1) line and continuum maps and the new \(\alpha_{\rm CO}\) value of \(\sim 1.5\,M_{\odot}\left({\rm K\,km\,s^{-1}\,pc^{2}}\right)^{-1}\) derived from our dynamical modeling.

Figure 6: Surface brightness ratio between CO (2–1) and CO (1–0) in I Zw 1. The contour levels correspond to \([0.4,0.6,0.8,0.9,1.0]\). The ellipse in the bottom left represents the beam size of \(0.61^{\prime\prime}\times 0.52^{\prime\prime}\) for the CO (1–0) observation.

Figure 7: The surface density of molecular gas vs. surface density of SFR in the nuclear region of I Zw 1. The open markers represent star-forming (circles; de los Reyes & Kennicutt, 2019) and starburst galaxies (diamonds; Kennicutt & De Los Reyes, 2021) in the local universe. The orange and purple solid lines represent the KS-law for local star-forming and starburst galaxies (de los Reyes & Kennicutt, 2019; Kennicutt & De Los Reyes, 2021), whose surrounding shaded regions represent the scatter on the order of \(\sim 0.3\) dex. The filled red point shows the mean surface density measured within the 3 \(\sigma\) contour region of the continuum map (Figure 4) while the filled blue point gives the peak values of the molecular gas and SFR surface densities measured within the central beam. The filled stars represent the gas surface density if a MW-like \(\alpha_{\rm CO}\) is adopted. The dotted lines show the trends with gas depletion timescales \(\tau_{\rm dep}=10\) Myr, 100 Myr, and 1 Gyr.
The ALMA continuum image reveals a 1.3 mm continuum flux density of \(\sim 1.1\) mJy from the central 3 \(\sigma\) contour region. Molina et al. (2022) decomposed the SED of I Zw 1. They estimated that the AGN contribution, including the non-thermal synchrotron emission extrapolated from the radio bands and the thermal free-free emission, contributes about 28% (\(\sim\)0.30 mJy) of the ALMA continuum, and the remaining 72% is likely to be from thermal dust heated by nuclear star formation. Based on this decomposition of the ALMA continuum, they calculated a nuclear star formation rate of \(5.43\,M_{\odot}\) yr\({}^{-1}\).
The AGN contribution to the millimeter dust continuum emission can also be estimated and removed based on empirical luminosity relations. Kawamuro et al. (2022) presented a relationship between the rest-frame 1.3 mm-wave (\(\nu L_{\nu,\,\rm mm}\)) and 2-10 keV X-ray luminosities (\(L_{2-10\,\rm keV}\)) for AGNs (Table 1 in Kawamuro et al., 2022). Based on this relation and adopting the 2-10 keV luminosity of I Zw 1 from Piconcelli et al. (2005), we estimate an AGN contribution to the 1.3 mm continuum flux density of \(0.35^{+0.65}_{-0.23}\) mJy. This flux density has been corrected to the observing frame assuming a mm-wave spectral index of 0.5 (\(S_{1.3\,\rm mm}\propto\nu^{-0.5}\)). This value is consistent with that derived from the synchrotron and free-free components in the SED decomposition, considering the large uncertainty of 0.45 dex of the \(\nu L_{\nu,\,\rm mm}\) -- \(L_{2-10\,\rm keV}\) relation. Therefore, we adopt the nuclear star formation rate of \(5.43\,M_{\odot}\) yr\({}^{-1}\) from Molina et al. (2022) in the analysis here.
The face-on size of the star-forming region is estimated by \(A=S/\cos i\), where \(S\) is the area within the 3\(\sigma\) contour region of the continuum map, and \(i\) is the inclination angle. Thus the mean surface density of the star formation rate in the nuclear region can be estimated by \(\Sigma_{\rm SFR}={\rm SFR}/A\).
We then estimate the mean molecular gas surface density using the CO (2-1) flux in the aforementioned region. Assuming \(R_{21}\) follows the CO emission line ratio map presented in Figure 6, we estimate the CO (1-0) emission line flux in this region. Finally, we estimate the gas surface density with \(\Sigma_{\rm mol}=\alpha_{\rm CO}\times L^{\prime}_{\rm CO\,(1-0)}/A\), where \(\alpha_{\rm CO}=1.5\,M_{\odot}\left({\rm K\,km\,s^{-1}\,pc^{2}}\right)^{-1}\) is the CO-to-H\({}_{2}\) conversion factor and \(L^{\prime}_{\rm CO\,(1-0)}\) is the CO (1-0) luminosity in the nuclear region. We also present the estimation of the molecular gas surface density by adopting an \(\alpha_{\rm CO}\) value of \(4.3\,M_{\odot}\left({\rm K\,km\,s^{-1}\,pc^{2}}\right)^{-1}\), which is the typical value for a MW-like galaxy, for comparison (Bolatto et al., 2013).
We compare the derived surface densities of SFR and of molecular gas mass in the plot of the KS-relation in Figure 7. The mean SFR and molecular gas surface densities in the nuclear region (filled blue circle) of I Zw 1 are comparable to the typical values of starburst galaxies (open diamonds in Figure 7; Garcia-Burillo et al., 2012; Kennicutt and De Los Reyes, 2021), and larger than those of normal star-forming galaxies (open circles in Figure 7; Leroy et al., 2013; de los Reyes and Kennicutt, 2019). We also investigate the surface densities of SFR and molecular gas for the central peak (filled red circle), and find that this data point lies well within the starburst source region. The gas depletion timescale is derived using \(\tau_{\rm dep}=\Sigma_{\rm mol}/\Sigma_{\rm SFR}\). We find \(\tau_{\rm dep}\sim 300\) Myr, which is close to that of local starburst systems. We also present the gas surface densities adopting \(\alpha_{\rm CO}=4.3\,M_{\odot}\left({\rm K\,km\,s^{-1}\,pc^{2}}\right)^{-1}\) from Bolatto et al. (2013), which are shown as filled red and blue stars in Figure 7 and lie below the KS relation for starburst systems.
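The arithmetic above can be collected into a short helper. The following is a minimal sketch: the area and CO (1-0) luminosity passed in are placeholders to be measured from the maps, while \(\alpha_{\rm CO}\) and the SFR follow the values quoted in the text.

```python
# Minimal sketch of the nuclear surface-density estimates described above.
# The area and CO(1-0) luminosity are placeholders; alpha_CO and the SFR
# follow the values quoted in the text.
import numpy as np

def nuclear_surface_densities(sfr, l_co10, area_kpc2, incl_deg, alpha_co=1.5):
    """Return (Sigma_SFR, Sigma_mol, tau_dep_Myr) for the nuclear region.

    sfr       : star formation rate [Msun/yr]
    l_co10    : CO(1-0) line luminosity [K km/s pc^2]; if only CO(2-1) is
                measured, divide its luminosity by R21 first
    area_kpc2 : projected area of the 3-sigma continuum region [kpc^2]
    incl_deg  : disk inclination [deg]; the face-on area is A = S / cos(i)
    """
    area_faceon = area_kpc2 / np.cos(np.radians(incl_deg))   # kpc^2
    sigma_sfr = sfr / area_faceon                            # Msun/yr/kpc^2
    m_mol = alpha_co * l_co10                                # Msun
    sigma_mol = m_mol / (area_faceon * 1e6)                  # Msun/pc^2
    tau_dep_myr = (m_mol / sfr) / 1e6                        # Myr, = Sigma_mol/Sigma_SFR
    return sigma_sfr, sigma_mol, tau_dep_myr
```

For reference, with the nuclear SFR of \(5.43\,M_{\odot}\) yr\({}^{-1}\) quoted above, a depletion time of \(\sim 300\) Myr corresponds to a molecular gas mass of \(\sim 1.6\times 10^{9}\,M_{\odot}\) within the same aperture.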
We find that this quasar host galaxy has enhanced star-forming activity in its central \(\lesssim 500\) pc region. The starburst activity suggests that AGN feedback plays a minor role in stopping ongoing star formation, and a positive influence is also plausible. Assuming a MW-like \(\alpha_{\rm CO}\) increases the molecular gas surface density significantly, but the data points remain close to the ULIRG-like KS relation, and are still well above the KS relation for local star-forming galaxies (Figure 7). Nuclear starbursts were also found in other low-\(z\) quasars (Cresci et al., 2004; Schweitzer et al., 2006; Molina et al., 2022).
### Does AGN perturb the cold molecular gas?
We measure an intrinsically large velocity dispersion in the galactic nucleus from our kinematic analysis in Section 3.2, which is \(3\sim 4\) times higher than the values measured in the outer region with radii \(>0.8\) kpc. Such a large velocity dispersion indicates that the molecular gas in this central 1 kpc region has a large turbulent energy. There are several possible mechanisms that may contribute to the turbulent ISM. This large velocity dispersion might be related to the high gas surface density, since the ISM turbulent pressure (\(P_{\rm ISM}\)) should be in equilibrium with the weight (\(\mathcal{W}\)) of the ISM (Sun et al., 2020; Ostriker and Kim, 2022). The enormous energy released by the central AGN could also perturb the ISM. The central starburst may also enhance the gas velocity dispersion by means of stellar feedback. To check whether such a large velocity dispersion originates from AGN and stellar feedback, we try to identify whether there is an excess in the ISM turbulent pressure (\(P_{\rm ISM}\)), compared to the weight (\(\mathcal{W}\)) of the ISM. Any excess ISM turbulent pressure should represent the energy released by the central AGN or the starburst.
The weight of the ISM can be expressed as follows (Ostriker and Kim, 2022):
\[\mathcal{W}=\pi G\Sigma_{\rm g}^{2}/2+4\pi\zeta_{\rm d}G\Sigma_{\rm g}\rho_{\rm sd }h_{\rm g}. \tag{7}\]
The first term is the weight due to the self-gravity of the ISM disk (Spitzer, 1942; Elmegreen, 1989). Here the surface density of the ISM should have at least two components in principle, i.e., \(\Sigma_{\rm g}=\Sigma_{\rm H_{2}}+\Sigma_{\rm HI}\). However, we note that the atomic gas is negligible in the galaxy center. Therefore we replace \(\Sigma_{\rm g}\) with \(\Sigma_{\rm H_{2}}\) in the remainder of this paper. The second term is the weight of the ISM due to external gravity including the stellar component and the dark matter halo. The numerical value of \(\zeta_{\rm d}\) depends on, though not sensitively, the geometric distribution of the gas disk, and can thus be taken as a constant of \(\sim 1/3\) (see Equation 6 of Ostriker et al., 2010), and \(\rho_{\rm sd}\) is the external density. In the galaxy center where the external gravitational potential is dominated by the stellar bulge, \(\rho_{\rm sd}\) could be estimated using the bulge mass density \(\rho_{\rm b}\). This term also accounts for the half-thickness of the gas disk, \(h_{\rm g}\), whose typical value is about \(100-200\) pc (Wilson et al., 2019).
The ISM turbulent pressure at the midplane is defined by the difference in the total vertical momentum flux across the gas layer and thus can be expressed as:
\[P_{\rm ISM}=\rho_{\rm mid}\sigma_{\rm g}^{2}(1+\alpha+\beta)=\frac{\Sigma_{ \rm g}}{2h_{\rm g}}\sigma_{\rm g}^{2}(1+\alpha+\beta), \tag{8}\]
where \(\sigma_{\rm g}\) is the velocity dispersion of the molecular gas, which can be measured from the CO emission line. The parameters \(\alpha\approx 0.3\) and \(\beta\approx 0.0\) are factors accounting for the vertical magnetic and cosmic-ray pressure contributions (Kim and Ostriker, 2015; Wilson et al., 2019).
As was pointed out in Ostriker and Kim (2022), the vertical hydrostatic equilibrium requires that the ISM weight \(\mathcal{W}\) must be equal to the ISM turbulent pressure \(P_{\rm ISM}\),
\[\mathcal{W}=P_{\rm ISM}. \tag{9}\]
We examine the relationship between the ISM turbulent pressure and the weight of the ISM in this quasar host galaxy, pixel-by-pixel. We estimate the pixel-wise velocity dispersion as follows. In the first step, we generate the moment 2 map by applying a blanking mask using the python package maskmoment. The mask was created by starting at \(5\sigma\) peaks in the cube, and expanding down to the surrounding \(2\sigma\) contour. The final moment maps are referred to as 'dilated-mask' moment maps in Bolatto et al. (2017). This dilated moment 2 map can simultaneously capture low-level signal and avoid noise, and can well reproduce the observed velocity dispersion. In the second step, we build a rotating disk model using \({}^{3\rm D}\)Barolo to simulate the velocity dispersion that is solely caused by the beam-smearing effect. The model has the same CO (2-1) line intensity and rotation velocity as that from the I Zw 1 CO (2-1) data (Section 3.2), but the velocity dispersion of the model is set to be zero. Then the model is convolved with the synthesized beam of the I Zw 1 CO (2-1) data. We generate the simulated velocity dispersion from this model with a \(2\sigma\) cutoff threshold. This simulated velocity dispersion is then removed from the dilated moment 2 map in quadrature to generate the intrinsic \(\sigma_{\rm g}\) map (Levy et al., 2018).
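The quadrature subtraction in the last step amounts to the following operation on the two maps. This is a minimal sketch assuming both maps are on the same pixel grid; the blanking of unphysical pixels is our own assumption.

```python
# Minimal sketch of removing the model (beam-smearing only) velocity dispersion
# from the observed ("dilated-mask") moment-2 map in quadrature.
import numpy as np

def intrinsic_dispersion(sigma_obs, sigma_beam_model):
    """Pixel-wise intrinsic velocity dispersion [km/s].

    sigma_obs        : observed moment-2 map (dilated mask) [km/s]
    sigma_beam_model : dispersion of the zero-dispersion rotating-disk model
                       convolved with the beam, i.e. pure beam smearing [km/s]
    Pixels where the model exceeds the observation are blanked (NaN).
    """
    diff = sigma_obs**2 - sigma_beam_model**2
    return np.where(diff > 0.0, np.sqrt(np.clip(diff, 0.0, None)), np.nan)
```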
Figure 8: The ISM turbulent pressure (\(P_{\rm ISM}\)) as a function of ISM weight (\(\mathcal{W}\)). Data points represent pixels in the CO (2–1) map and are color-coded by the distance between each pixel and the center of the galaxy. The open squares and circles represent the mean ISM turbulent pressure of each ISM weight bin, and denote data points in group (2) and group (3) as described in Section 5.3. The gray shaded region represents the lower limits of both pressures, considering the CO (2–1) detection limits. The dashed line denotes equality. The blue dash-dotted lines represent the best-fitting power-law results. The typical uncertainty is plotted in the lower right corner. This uncertainty is dominated by \(\alpha_{\rm CO}\), which is \(\sim 0.2\) dex.
Based on the line ratio map discussed in Section 5.1, we estimate the CO (1-0) line surface brightness assuming \(R_{21}=0.9\) at \(R<0.8\,\mathrm{kpc}\), and \(R_{21}=0.62\) at \(R>0.8\,\mathrm{kpc}\) (Shangguan et al., 2020). We then estimate the molecular gas mass by adopting \(\alpha_{\mathrm{CO}}=1.55\,M_{\odot}\left(\mathrm{K\,km\,s^{-1}\,pc^{2}}\right)^{-1}\) from the dynamical modeling as shown in Section 4.3. The \(\rho_{\mathrm{sd}}\) is estimated from the dynamical modeling and follows Equation (5). We estimate these two pressures by assuming a constant gas disk scale height of \(h_{\mathrm{g}}=150\,\mathrm{pc}\), which is a typical value for nearby ULIRG and starburst systems (Wilson et al., 2019; Molina et al., 2020). The relation between \(P_{\mathrm{ISM}}\) and \(\mathcal{W}\) is plotted in Figure 8. We manually separate pixels into four groups depending on their radii: (1) \(R<0.4\,\mathrm{kpc}\), (2) \(0.4\,\mathrm{kpc}<R<0.8\,\mathrm{kpc}\), (3) \(0.8\,\mathrm{kpc}<R<2.1\,\mathrm{kpc}\), and (4) \(R>2.1\,\mathrm{kpc}\). We do not include data from group (4) as the S/N of the CO (2-1) line in this region is too low. Also, we avoid presenting the data points from group (1) due to the severe beam-smearing effect.
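For reference, the two quantities compared in Figure 8 follow directly from Equations (7) and (8). Below is a minimal sketch of the pixel-wise evaluation; the input surface-density, stellar-density and dispersion maps are placeholders, while the constants and default parameter values follow the text.

```python
# Minimal sketch of Eqs. (7) and (8): ISM weight W and turbulent pressure P_ISM,
# both returned as P/k_B in K cm^-3. Input maps are placeholders (same grid).
import numpy as np

G = 6.674e-11          # m^3 kg^-1 s^-2
K_B = 1.381e-23        # J/K
MSUN = 1.989e30        # kg
PC = 3.086e16          # m

def ism_weight(sigma_gas, rho_sd, h_g_pc=150.0, zeta_d=1.0 / 3.0):
    """Eq. (7). sigma_gas in Msun/pc^2, rho_sd in Msun/pc^3."""
    sig = sigma_gas * MSUN / PC**2          # kg/m^2
    rho = rho_sd * MSUN / PC**3             # kg/m^3
    h = h_g_pc * PC                         # m
    w = np.pi * G * sig**2 / 2.0 + 4.0 * np.pi * zeta_d * G * sig * rho * h
    return w / K_B * 1e-6                   # K cm^-3

def ism_pressure(sigma_gas, sigma_v, h_g_pc=150.0, alpha=0.3, beta=0.0):
    """Eq. (8). sigma_gas in Msun/pc^2, sigma_v in km/s."""
    sig = sigma_gas * MSUN / PC**2          # kg/m^2
    h = h_g_pc * PC                         # m
    p = sig / (2.0 * h) * (sigma_v * 1e3) ** 2 * (1.0 + alpha + beta)
    return p / K_B * 1e-6                   # K cm^-3
```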
Figure 8 shows the relationship between \(P_{\mathrm{ISM}}\) and \(\mathcal{W}\), spanning three orders of magnitude. The color of each data point represents the distance between each pixel and the galaxy center. In order to highlight the difference in group (2) and (3), we calculate the mean and scatter trends for each group in 0.2 dex wide bins of fixed ISM weight.
We fit the relation between \(P_{\mathrm{ISM}}\) and \(\mathcal{W}\) in logarithmic space using the python package linmix(Kelly, 2007). This yields the best fitting power-law relations (blue dash-dotted line in Figure 8):
\[\log\left(\frac{P_{\mathrm{ISM}}}{k_{\mathrm{B}}\,\mathrm{K\,cm^{-3}}}\right)=\left(-0.38^{+0.58}_{-0.56}\right)+\left(1.05^{+0.08}_{-0.07}\right)\log\left(\frac{\mathcal{W}}{k_{\mathrm{B}}\,\mathrm{K\,cm^{-3}}}\right).\]
The best fitting result is consistent with the equality relation (black dashed line in Figure 8) considering the uncertainty. This result suggests that the origin of the high turbulent energy of the cold molecular gas (with \(\sigma\sim 100\,\mathrm{km\,s^{-1}}\)) can be explained by the self-gravity of the galaxy alone.
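The fit quoted above was performed with linmix; the following is a minimal sketch, assuming the interface of the publicly available Python implementation of the Kelly (2007) method, and with placeholder measurement uncertainties on both axes.

```python
# Minimal sketch of the power-law fit in log-log space with linmix (Kelly 2007).
# Assumes the interface of the public Python port; the 0.2 dex uncertainties
# are placeholders dominated by alpha_CO, as stated in the Figure 8 caption.
import numpy as np
import linmix

def fit_pressure_weight(logW, logP, logW_err=0.2, logP_err=0.2):
    """Fit logP = alpha + beta * logW with intrinsic scatter."""
    xsig = np.full_like(logW, logW_err)
    ysig = np.full_like(logP, logP_err)
    lm = linmix.LinMix(logW, logP, xsig=xsig, ysig=ysig)
    lm.run_mcmc(silent=True)
    alpha = np.percentile(lm.chain['alpha'], [16, 50, 84])  # intercept
    beta = np.percentile(lm.chain['beta'], [16, 50, 84])    # slope
    return alpha, beta
```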
### The lack of negative AGN feedback
The ALMA CO (2-1) image reveals that the molecular gas in the host galaxy of I Zw 1 is centrally concentrated with a high surface density in the central kpc region where intense star formation is taking place. This result contradicts the scenario in which AGN feedback can efficiently blow out the star-forming gas from the nuclear region, and results in the depletion of cold gas in the galaxy center (Rupke and Veilleux, 2011; Ellison et al., 2021). In addition, there is no evidence of an AGN-driven outflow in the nuclear region, as seen in some other AGN host galaxies (e.g., Feruglio et al., 2020). As an alternative, we find an enhancement of the gas velocity dispersion in the nuclear region, which indicates that the nuclear gas is dynamically hot compared with gas in the circumnuclear disk. However, we find that the ISM turbulent pressure is in equilibrium with the weight of the ISM, suggesting that the kinematics of the molecular gas could be regulated by the host galaxy's self-gravity. The large velocity dispersion is naturally required to satisfy the hydrostatic equilibrium. There is no indication of an external energy budget/pressure, e.g., from AGN feedback, expelling the cold gas from the galaxy center.
So far, Shangguan et al. (2020) reported that I Zw 1 is a CO luminous system and there is no evidence of galactic scale molecular gas outflows. Lamperti et al. (2022) also reported a non-detection of a molecular gas outflow from this object based on the high-resolution ALMA CO (2-1) data. Molina et al. (2021) found that the molecular gas in this galaxy is centrally concentrated, and rotating in a disk with negligible non-circular motions. Moreover, the continuum map reveals a centrally enhanced star formation (see also Molina et al., 2022) which also argues against the suppression of star formation from AGN feedback. From our kinematic and dynamical analysis, we find no evidence of an AGN-driven outflow or an external gas energy budget. In addition, ionized gas components with high-velocity dispersions were detected in some nearby quasar host galaxies from recent optical IFU data (Husemann et al., 2019; Singha et al., 2022). However, Molina et al. (2022) reported that the kinetic energy of these gas components with high-velocity dispersions is only \(\lesssim 0.1\%\) of the AGN bolometric luminosities. This suggests that only a negligible percentage of the AGN power is coupled to the ISM. All these results suggest a lack of negative AGN feedback on the cold molecular gas and star formation in the quasar host galaxy.
## 6 Conclusions
We present a study of CO (2-1) line emission in the nuclear region of I Zw 1 based on ALMA observations. A combination of all available data from the ALMA archive resolves the CO source on a \(0.36\arcsec\) scale with a spectral sensitivity of \(0.28\,\mathrm{mJy\,beam^{-1}}\) per channel. In the central \(1\,\mathrm{kpc}\) region, the molecular gas forms a high-density bar-like structure, which has a different position angle compared to that of the main disk.
* With \({}^{3\mathrm{D}}\)Barolo fitting, we obtain the intrinsic rotation velocity and velocity dispersion as a function of radius. This galaxy is a rotation-dominated system, similar to other star-forming galaxies in the local universe. The mean rotation velocity to dispersion ratio is about nine, which suggests that the molecular gas forms a cold disk. Meanwhile, the fitting results from the \({}^{3\rm D}\)Barolo model suggest an enhancement of the velocity dispersion in the central sub-kpc scale region. We check the velocity field carefully and find that the pure beam-smearing effect cannot lead to such a large velocity dispersion. The velocity dispersion of the molecular gas in the central region of the nuclear disk is intrinsically \(\sim 3\) times higher than that in the disk region.
* The map of the emission line ratio between the two CO emission lines shows a clear radial gradient of \(R_{21}\). The central value is close to the theoretical prediction under the assumption of thermalized, optically thick ISM conditions. In contrast, the main circumnuclear disk has relatively lower values.
* We fit the rotation curve of the molecular gas disk and constrain the mass budget of the quasar host galaxy using a dynamical model. We take into account the constraints on gas distribution from the ALMA CO data and stellar morphology/mass from the HST image, and fit the CO-to-H\({}_{2}\) conversion factor. We find a best-fit \(\alpha_{\rm CO}=1.55^{+0.47}_{-0.49}\,M_{\odot}\,({\rm K\,km\,s^{-1}\,pc^{2}})^ {-1}\), which is between the ULIRG-like and MW-like \(\alpha_{\rm CO}\) value [\(\alpha_{\rm CO,ULIRG}\approx 0.8\,M_{\odot}\,\big{(}{\rm K\,km\,s^{-1}\,pc^{2}} \big{)}^{-1}\), \(\alpha_{\rm CO,MW}\approx 4.3\,M_{\odot}\,\big{(}{\rm K\,km\,s^{-1}\,pc^{2}}\big{)}^ {-1}\)].
* We check the star formation rate and molecular gas surface densities in the central region, finding that the star formation activity follows the Kennicutt-Schmidt relation of local starburst galaxies, which suggests a nuclear starburst activity.
* By comparing the ISM turbulent pressure (\(P_{\rm ISM}\)) and the weight of the ISM (\(\mathcal{W}\)), we find that these two parameters are almost equal to each other. The ISM turbulent pressure is in equilibrium with the galaxy's gravity, which suggests that the molecular gas in this galaxy is regulated by its self-gravity, and there is no external energy budget expelling the cold gas. This result indicates that the central AGN, though luminous in the optical, is unlikely to introduce extra pressure to the molecular gas in the nuclear region.
We acknowledge support from the National Science Foundation of China (11991052, 11721303, 12173002, 12011540375) and the China Manned Space Project (CMS-CSST-2021-A04, CMS-CSST-2021-A06); ANID grants PIA ACT172033 (E.T.), Basal-CATA PFB-062007 and AFB170002 grants (E.T., F.E.B.), FONDECYT Regular 1160999, 1190818 (E.T., F.E.B.), and 1200495 (E.T., F.E.B.), and Millennium Science Initiative ICN12_009 (F.E.B.). This paper makes use of the following ALMA data: ADS/JAO.ALMA#2015.1.01147.S, #2017.1.00297.S, #2018.1.00006.S, #2018.1.00699.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. ALMA
## Appendix A Testing the gas velocity dispersion with simulated data
We build a mock observational data cube to test whether \({}^{3\rm D}\)Barolo is able to reproduce the intrinsic gas velocity dispersion after the reduction of the beam-smearing effect. We build the rotating disk model with the \({}^{3\rm D}\)Barolo galmod task. The model has the same CO (2-1) line intensity and rotation velocity as that from the I Zw 1 CO (2-1) data, but the model has a constant velocity dispersion (\(\sigma=30\,{\rm km\,s^{-1}}\)) along all radii. We then simulate the visibility data with the CASA task simobserve, and image and clean this simulated visibility using the same procedure mentioned in Section 2. We adjust the total integration time and ALMA configuration to obtain a similar signal-to-noise ratio and angular resolution (the angular resolution of the mock observation is \(0.33^{\prime\prime}\times 0.30^{\prime\prime}\)). We then use \({}^{3\rm D}\)Barolo to fit the simulated data cube and the result is shown in Figure 9. We can see that, if a gaseous rotating disk has a constant
velocity dispersion with a value of \(30\,\mathrm{km\,s^{-1}}\) along all radii, the beam-smearing effect can boost the velocity dispersion up to \(\sim 60\,\mathrm{km\,s^{-1}}\) in its center. With the \({}^{3\mathrm{D}}\)Barolo analysis, although we cannot completely reduce the beam-smearing effect, the velocity dispersion has an error of \(\lesssim 30\%\) in the central region (\(R\lesssim 1\,\mathrm{kpc}\)). This result indicates that the centrally enhanced gas velocity dispersion (\(\sigma\gtrsim 100\,\mathrm{km\,s^{-1}}\)) that is found in the host galaxy of I Zw 1 may not be solely produced by the beam-smearing effect. Molecular gas in the center of this galaxy should have a very large velocity dispersion (\(\sigma\gtrsim 100\,\mathrm{km\,s^{-1}}\)) intrinsically. We also find that the velocity dispersion decreases at large radii. This result is caused by the decreasing S/N of the simulated CO emission at the disk edge.
## Appendix B Dynamical models with different prior constraints
In this section, we try to fit the masses of the stellar bulge and stellar disk, and the CO-to-H\({}_{2}\) conversion factor, by adopting different prior constraints. In order to explore the degeneracy between different components and the dependence on the initial guess of the parameters, we consider four main situations with a total of nine cases:
1. We set \(\alpha_{\mathrm{CO}}\) as a free parameter in the fitting and limit the stellar mass within the lower and upper limits of the stellar mass estimate of Zhao et al. (2021).
2. We set stellar mass as a free parameter and constrain the fitting range of \(\alpha_{\mathrm{CO}}\).
3. We try to fit \(M_{b}\), \(M_{d}\), and \(\alpha_{\mathrm{CO}}\) simultaneously with larger parameter spaces, thus those parameters are free.
4. We fit \(M_{b}\), \(M_{d}\), and \(\alpha_{\mathrm{CO}}\) without applying the asymmetric drift correction, to evaluate how significant the pressure-gradient support against self-gravity is in this object.
Figure 10: The posterior distribution function of \(\alpha_{\mathrm{CO}}\) of eight cases, which have different prior constraints. The vertical lines represent the median value of \(\alpha_{\mathrm{CO}}\) distribution, which are also shown in Table 4.
Figure 9: Panels (a), (b), and (c) show the velocity-integrated intensity map, the flux-weighted line-of-sight velocity map, and the velocity dispersion map of the simulated data. The synthesized beam is shown as a gray ellipse in the bottom left corner of each panel. The scale bar is shown in the bottom right corner of each panel. Contours are the same as those in Figure 1. Panel (d) represents the velocity dispersion extracted from the mock observation through \({}^{3\mathrm{D}}\)Barolo. The red curve represents the best-fitting velocity dispersion from \({}^{3\mathrm{D}}\)Barolo, and the red shaded region represents the uncertainties. The horizontal line represents the input velocity dispersion of the simulated data. The synthesized beam (\(0.33^{\prime\prime}\times 0.30^{\prime\prime}\)) is plotted at the lower left corner of the first three panels.
In each case, \(r_{\rm e,b}\), \(n\), \(r_{\rm e,d}\), \(f_{*}\) and \(c\) share similar prior constraints (see Table 3). In case A.1, we assume Gaussian priors for \(\log(M_{b}/M_{\odot})\) with a central value adopted from Zhao et al. (2021) and a standard deviation of 0.5. In case A.2, the adopted Gaussian prior is similar to that in case A.1, while the standard deviation is three times larger. In case A.3, we fix the mass of each stellar component and study the \(\alpha_{\rm CO}\) value. In case B.1, we assume a Gaussian prior for \(\alpha_{\rm CO}\) with a standard deviation of 1 that is centered on the MW-like \(\alpha_{\rm CO}\) value, and we bound each stellar component mass within \(\log(M_{*}/M_{\odot})\in[8,15]\). In case B.2, we fix the \(\alpha_{\rm CO}\) value to that of the MW and fit the stellar mass. In case C, we only bound the stellar mass and leave \(\alpha_{\rm CO}\) free without any further prior assumption, e.g., a Gaussian distribution. In case D, we bound the stellar mass and \(\alpha_{\rm CO}\), but fit the rotation velocities without the asymmetric drift correction. All nine case conditions and their fitting results are listed in Table 4.
We find that the \(\alpha_{\rm CO}\) value in case D is smaller than that in cases A and C by a factor of \(\sim 0.75\), which indicates the effect of the asymmetric drift correction. We also find that in case A.3, when we fix the stellar mass, \(\alpha_{\rm CO}\) has an extremely low, unreasonable value. In case B, if we adopt a MW-like value, the rotation velocity is dominated by the molecular gas components. This case leaves very little room for the stellar bulge in the central region. The stellar bulge mass is less than 10 percent of the value derived from the stellar continuum image. This also results in a large stellar disk mass of \(10^{11}\) solar masses to account for the rotation velocity in the outer part. This result requires a mass-to-light ratio that is different from the values adopted in Zhao et al. (2021), based on the B and I band color. Thus, the MW \(\alpha_{\rm CO}\) value in case B is unlikely to be a good assumption. Cases A.1 and A.2, with a much lower \(\alpha_{\rm CO}\) value, present a more reasonable fit for both the gas and stellar masses. As a consequence, we find that the ULIRG-like \(\alpha_{\rm CO}\) value is reasonable in this quasar host galaxy.
|
2310.04212 | Platform and environment requirements of a satellite quantum test of the Weak Equivalence Principle at the $10^{-17}$ level | The Space Time Explorer and QUantum Equivalence principle Space Test (STE-QUEST) recently proposed, aims at performing a precision test of the weak equivalence principle (WEP), a fundamental cornerstone of General Relativity. Taking advantage of the ideal operation conditions for high-precision quantum sensing on board of a satellite, it aims to detect possible violations of WEP down to the $10^{-17}$ level. This level of performance leads to stringent environmental requirements on the control of the spacecraft. We assume an operation of a dual-species atom interferometer of rubidium and potassium isotopes in a double-diffraction configuration and derive the constraints to achieve an E\"otv\"os parameter $\eta=10^{-17}$ in statistical and systematic uncertainties. We show that technical heritage of previous satellite missions, such as MICROSCOPE, satisfies the platform requirements to achieve the proposed objectives underlying the technical readiness of the STE-QUEST mission proposal. | Christian Struckmann, Robin Corgier, Sina Loriani, Gina Kleinsteinberg, Nina Gox, Enno Giese, Gilles Métris, Naceur Gaaloul, Peter Wolf | 2023-10-06T12:59:01Z | http://arxiv.org/abs/2310.04212v1 |

# Platform and environment requirements of a satellite quantum test of the Weak Equivalence Principle at the \(10^{-17}\) level
###### Abstract
The recently proposed Space Time Explorer and QUantum Equivalence principle Space Test (STE-QUEST) aims at performing a precision test of the weak equivalence principle (WEP), a fundamental cornerstone of General Relativity. Taking advantage of the ideal operation conditions for high-precision quantum sensing on board a satellite, it aims to detect possible violations of the WEP down to the \(10^{-17}\) level. This level of performance leads to stringent environmental requirements on the control of the spacecraft. We assume an operation of a dual-species atom interferometer of rubidium and potassium isotopes in a double-diffraction configuration and derive the constraints to achieve an Eotvos parameter \(\eta=10^{-17}\) in statistical and systematic uncertainties. We show that the technical heritage of previous satellite missions, such as MICROSCOPE, satisfies the platform requirements to achieve the proposed objectives, underlining the technical readiness of the STE-QUEST mission proposal.
Footnote †: These authors contributed equally.
## I Introduction
The fundamental physics of nature is described by General Relativity (GR) and the Standard Model of particle physics (SM) [1; 2]. Both theories have been separately extensively tested without showing any discrepancy but their unification remains an unresolved problem. The validity of GR at the quantum level is still unknown and the discovery of new forces beyond the SM is not excluded. Moreover, the SM accounts only for the visible matter in the Universe, while the dominant component of matter is dark and its quantum nature is still unclear. On the other hand, the SM and quantum mechanics are very successful at explaining the microscopic phenomena, but also pose fundamental questions such as the measurement problem and the quantum-classical transition. The ultimate theoretical challenge may be to construct a theory of quantum gravity that reconciles SM and GR, which may require modifying or extending one or both of these frameworks. Several quantum gravity models, unifying all non-gravitational interactions with gravity predict a violation of the Einstein Equivalence Principle (EEP), a cornerstone of GR, yet not a fundamental symmetry of Nature. It is consequently of fundamental importance to search for possible violations of the EEP, which has three facets [3]: Local Lorentz Invariance, Local Position Invariance and Universality of Free Fall, also referred to as Weak Equivalence Principle (WEP). Schiff's conjecture speculates that a violation of one implies the violation of the two others [2]. If the WEP holds, the trajectory of a freely falling, uncharged test-body only depends on the initial position and velocity of the test-body but is independent of its mass, composition, form or spin [2]. A convenient figure of merit for all WEP tests is the Eotvos ratio \(\eta\). It quantifies the differential free-fall acceleration of two test masses of different composition, thereby measuring a possible violation of the WEP. Although it is a useful tool for comparing different experiments, it cannot account for the diversity of possible underlying theories, e.g., different types of couplings depending on the source and test objects, or couplings to space-time-varying background fields other than local gravity. Thus, not only the best performance in the Eotvos ratio is required, but also a large diversity of test objects and source masses of different nature.
At what performance of a WEP test do we expect to see a violation? There is no firm and widely accepted value, but a number of models predict violations in the \(10^{-10}-10^{-22}\) region based on unification scenarios [4], supersymmetry and dark matter [5; 6; 7], or Lorentz symmetry breaking at the Planck scale [8]. If one also takes into account cosmological inflation scenarios the possible region of WEP breaking is reduced to \(10^{-10}-10^{-19}\)[9; 7; 10; 11].
Much of this region (down to \(10^{-15}\)) is already excluded by experiments. A major discovery may thus be "just around the corner". Today the best result with classical test masses has been obtained with the MICROSCOPE space mission at the level of \(\eta=[-1.5\pm 2.3(stat)\pm 1.5(syst)]\times 10^{-15}\)[12] while equivalent ground tests are ultimately limited by the Earth's gravitational environment to \(\eta\approx 10^{-13}\)[13; 14].
Quantum tests of the WEP (Q-WEP) can be performed through matter-wave interferometry, where precision measurements are obtained by mapping the physical quantity of interest (the acceleration) to a phase shift determined using interferometric techniques. Matter-wave interferometers played a key role in the development of quantum theory [15; 16] and have been widely used to accurately determine the fine structure
constant [17; 18] or the gravitational constant [19; 20]. Besides testing fundamental laws of nature, quantum sensors have been developed to measure inertial forces and are used as gravimeters, gradiometers, and gyroscopes [21]. Furthermore, in order to exploit the enhanced sensitivity of long interrogation times, large-scale setups are currently being planned or constructed for the detection of gravitational waves and dark matter [22; 23; 24; 25; 26; 27]. Indeed, long free fall interrogation times already enable a Q-WEP test on the ground at the level of \(\eta=[1.6\pm 1.8(stat)\pm 3.4(syst)]\times 10^{-12}\) with different atomic isotopes [28; 29; 30; 31; 32], made possible by the extremely low expansion energies accessible with ultracold ensembles [33; 34; 35; 36; 37]. Longer interrogation times of some tens of seconds, accessible in space, unlock the potential of Q-WEP tests at the level of \(\eta\) in the range of \(10^{-15}\) to \(10^{-17}\), as explored in this paper. This outlook is supported by the significant progress made in the last decade on the technological readiness level of cold and ultra-cold atomic inertial sensors, as demonstrated in micro-gravity experiments in 0-g flights [38; 39; 40], drop-towers [41; 36; 42], sounding rockets [43; 44; 45; 46] and on-board the International Space Station [47; 48; 37].
In this article, we investigate the realistic requirements on the atom interferometer and the spacecraft platform to perform a space-borne Q-WEP test with a dual-species atom interferometer of \({}^{87}\)Rb and \({}^{41}\)K isotopes. In Sec. II we describe the working principle of the inertial sensor. In Sec. III we investigate the constraints on the atom interferometer environment, and in Sec. IV those on the satellite control. The feasibility of a satellite-borne Q-WEP test is discussed in Sec. V before we conclude in Sec. VI.
## II Atom interferometry
### Principle
Dual-species atom interferometers are powerful tools for measuring differential acceleration by exploiting quantum-mechanical effects. In this study, we focus on a Mach-Zehnder double-diffraction configuration [49; 50; 51; 29]. The interferometric sequence of each species, denoted by \(A\) and \(B\), consists of three atom-light interaction pulses as illustrated in Fig. 1. A first \(\pi/2\)-pulse creates a coherent quantum superposition of momentum states and leads to a spatial separation of the interferometer arms. The trajectories are then reflected by a \(\pi\)-pulse and finally recombined at a final \(\pi/2\)-pulse. In between the pulses, each superposition freely evolves for a duration \(T_{i}\) and accumulates a phase leading to a final phase difference \(\Phi_{i}\), \(i\) being \(A\) or \(B\). That phase difference is evaluated by measuring the relative atom numbers at the output ports of each interferometer, which differ only in momentum in the case of Bragg atom-light diffraction or in momentum and internal states in the case of Raman diffraction [51]. We write the general linear phase combination, \(\Phi_{\text{gen}}\), as
\[\Phi_{\text{gen}}=\mathcal{A}\Phi_{\text{A}}+\mathcal{B}\Phi_{\text{B}}. \tag{1}\]
Here \(\mathcal{A},\mathcal{B}\) are freely selectable constants in the data analysis and \(\Phi_{\text{i}}\) are the unmodified raw data of the interferometer instrument. Note that the interrogation time of each species can differ, \(T_{A}\neq T_{B}\).
The atom interferometer phase is related to the acceleration in the local inertial frame through the relation,
\[\Phi_{i}=2k_{i}a_{i}T_{i}^{2}, \tag{2}\]
where \(k_{i}=4\pi/\lambda_{i}\) is the effective wave number and \(a_{i}\) is the total acceleration experienced by each species. We further decompose the acceleration \(a_{i}\) as the sum of the universal gravitational acceleration at the location of the experiment, \(g_{0}\) (see Tab. 1), and a species-dependent deviation \(\gamma_{i}\), i.e., \(a_{i}=g_{0}+\gamma_{i}\). This deviation encompasses the hypothetical Q-WEP violation we want to extract plus all spurious bias phase shift terms \(\Phi_{b,i}\) that can be interpreted as an acceleration of the form \(a_{b,i}=\Phi_{b,i}/(2k_{i}T_{i}^{2})\). The sensitivity on the Eotvos parameter, \(\eta\),
\[\eta=\frac{\gamma_{A}-\gamma_{B}}{g_{0}}, \tag{3}\]
is bound either by the uncertainty on the bias acceleration \(\Delta\delta a_{b}\), i.e., the uncertainty of \(\delta a_{b}=a_{b,A}-a_{b,B}\)1, or by the uncertainty on the measurement itself, limited by the standard quantum limit for classically correlated particles. This paper aims to study the different contributions in \(a_{b,i}\) coming from the interferometer environment and/or the satellite platform to highlight the requirements to test the Q-WEP at different values of the Eotvos parameter.
Footnote 1: Throughout this paper we will use \(\delta\) to represent the difference of two quantities, and \(\Delta\) to indicate the uncertainty of a quantity.
Figure 1: Dual-species atom interferometer sequence. The space-time diagram shows the trajectories of the two species \(A\equiv\text{rubidium}\) (**—**) and \(B\equiv\text{potassium}\) (**- -**) atoms. The atomic ensembles of each species are split into a superposition of momentum states, redirected and recombined using double-diffraction \(\pi/2\), \(\pi\) and \(\pi/2\) pulses at times \([-T,0,T]\) (**- -**). Note that the pulse separation time \(T\) can differ between both species, but we choose \(T_{A}=T_{B}=T\) (see Sec. IV.1). \(T_{d}\) denotes the dead time and is, as an illustration, split equally between the state preparation and the detection time. The output ports of each species' interferometer can be distinguished by the different external states \(\pm\hbar k_{i}\) and \(0\hbar k_{i}\). The presence of an external potential is highlighted by the dotted line in the satellite tube along the sensitive axis of the interferometer (**-**).
### Signal demodulation in satellite setups
Space-borne platforms allow for a long interrogation time, \(T_{i}\gg 1\) s, and therefore high sensitivities (see Eq. (2)). Additionally, for space-borne setups, the projection of the gravitational acceleration onto the sensitive measurement axis depends on the position and attitude of the satellite. As a consequence, the differential phase shift \(2\eta g_{0}kT^{2}\) is naturally modulated at certain frequencies. For example, in the case of a circular orbit with inertial attitude, \(g(t)\) is modulated at the orbital frequency, \(f_{\rm orb}=\omega_{\rm orb}/2\pi\). Systematic effects modulated at different frequencies can therefore be reduced by at least \(2/(\Omega_{c}T_{c}\omega_{\rm orb})\) where \(\Omega_{c}\) denotes the total number of measurements and \(T_{c}\) the cycle time of the measurement sequence [52, 53]. Thus, the most stringent requirements are only on systematic effects modulated at \(f_{\rm orb}\).
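As an illustration of this decorrelation, the component of the differential acceleration that is synchronous with the known \(g(t)\) modulation can be extracted by least squares. The following is a minimal sketch, not the mission data analysis; the noise level, number of cycles and injected violation are placeholders.

```python
# Minimal sketch of extracting a signal synchronous with the known g(t)
# modulation at f_orb by least squares; perturbations at other frequencies
# average down over many measurement cycles. All numbers are placeholders.
import numpy as np

def demodulate_eta(t, delta_a, f_orb, g0):
    """In-phase amplitude of delta_a at f_orb, divided by the local gravity g0."""
    w = 2.0 * np.pi * f_orb
    design = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(design, delta_a, rcond=None)
    return coef[0] / g0

# Example: 1e5 cycles of 15 s, white per-shot acceleration noise of 5e-12 m/s^2,
# and an injected violation of eta = 5e-14 (chosen large enough to be visible).
rng = np.random.default_rng(1)
t = np.arange(100_000) * 15.0
f_orb, g0, eta_true = 1.46e-4, 6.6, 5e-14
delta_a = eta_true * g0 * np.cos(2 * np.pi * f_orb * t) + rng.normal(0.0, 5e-12, t.size)
print(demodulate_eta(t, delta_a, f_orb, g0))   # recovers ~5e-14
```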
In more detail, several strategies, listed below, can be employed to drastically reduce the impact of a parasitic systematic effect on the desired signal. It is worth noting that these strategies can potentially be combined. Here, we consider a parasitic effect at frequency \(f_{p}\) and amplitude \(A_{p}\) while searching for a signal at frequency \(f_{sig}\):
(i) If \(f_{p}\) differs from \(f_{sig}\), the perturbation can be decorrelated from our science signal provided that \(|f_{p}-f_{sig}|>1/T_{sc}\), where \(T_{sc}\) denotes the science time. Consequently, for any parasitic periodic effect of amplitude \(A_{p}\) at frequency \(f_{p}\), one only needs to consider its amplitude \(A_{sys}\) at \(f_{sig}\). For example, as observed in the MICROSCOPE mission [54, 12], the ratio between the DC self-gravity perturbation and its residual effect at \(f_{sig}\), based on a typical thermal expansion coefficient of \(10^{-5}\)/K for the satellite and a typical peak-to-peak temperature variation of about 1 K at orbital frequency, corresponds to a reduction factor of about \(10^{5}\). A more precise evaluation would require a detailed design and thermoelastic model of the satellite, and is beyond the scope of this paper.
(ii) The effect of \(A_{sys}\) can further be reduced by a likely phase mismatch and controlled phase jumps, which will be present in the searched signal but unlikely to be fully present in the systematic effect. An interesting strategy, which consists of rotating the satellite by a fixed angle \(\theta_{R}\) every given \(N_{\rm orb}\) orbits, is discussed in more detail in Sec. IV.4. Such a strategy would relax the constraints on systematic effects modulated at \(\omega_{\rm orb}\) provided they are not at all, or only partly, affected by these controlled rotations. In Ref. [53] the authors estimate that this procedure could lead to a further reduction factor of about \(10^{3}\), mainly limited by the imperfect knowledge of the parameters (angles, timing,...) and correlations of the perturbation and the induced angular steps.
(iii) Finally, if the systematic effect can be modelled, possibly with unknown parameters, its impact at \(f_{sig}\) can be efficiently corrected provided this effect has also significant components at other frequencies different from \(f_{sig}\), allowing the fitting of the model parameters to the data. A prime example of this is the effect of gravity gradients in MICROSCOPE whose amplitude \(A_{sys}\) at \(f_{sig}\) could be reduced by more than \(10^{7}\) with respect to \(A_{p}\) (at \(f_{p}=2f_{sig}\)) by fitting the model parameters to the data [54], although the actual (unfitted) component \(A_{sys}\) was only \(\sim 10^{3}\) times smaller than \(A_{p}\).
To conclude, the stringent requirements that are derived in the rest of the paper will in practice be relaxed by large amounts. The order of magnitude of those reduction factors is indicated in the individual sections. However, an exact evaluation is beyond the scope of this paper as it requires a detailed and specific satellite design.
### Parameters of the interferometer sequence
Throughout this paper we consider a Q-WEP test at the level of \(\eta=10^{-15}\) and \(\eta=10^{-17}\). The typical mission parameters, as envisioned for the STE-QUEST space mission scenarios [52, 53, 55], are explicitly given in Tab. 1 and divided into three categories. The first refers to the satellite platform. The second refers to the interferometer sequence and includes details on the atomic species. The last highlights the constraints on the quantum-state engineering of the two test masses.
## III Constraints on the interferometer environment
In this section we now focus on the constraints specific to the interferometer environment in micro-gravity, even though our treatment can be generalized to ground-based environments. In the following, we choose the coordinate system such that the sensitive axis of the interferometer is along the
\begin{table}
\begin{tabular}{l|c|c}
\hline \hline
Parameters & \(\eta=10^{-15}\) & \(\eta=10^{-17}\) \\
\hline \hline
\multicolumn{3}{c}{**Mission**} \\
\hline
Orbit, altitude (km) & \multicolumn{2}{c}{Circular, 1400} \\
Attitude & \multicolumn{2}{c}{Inertial + modulation} \\
Local gravity \(g_{0}\) (m.s\({}^{-2}\)) & \multicolumn{2}{c}{6.6} \\
Gravity gradient \(\partial g_{0}/(2\partial r)\) (s\({}^{-2}\)) & \multicolumn{2}{c}{\(8.5\times 10^{-7}\)} \\
Orbital frequency \(f_{\rm orb}\) (Hz) & \multicolumn{2}{c}{\(1.46\times 10^{-4}\)} \\
Mission duration \(T_{H}\) (months) & \multicolumn{2}{c}{36} \\
Science time \(T_{sc}\) (months) & \multicolumn{2}{c}{24} \\
\hline
\multicolumn{3}{c}{**Interferometer**} \\
\hline
Atom number \(N\) & \(1\times 10^{5}\) & \(2.5\times 10^{6}\) \\
Wave number \(k_{A}\) for Rb (nm\({}^{-1}\)) & \multicolumn{2}{c}{\(2\times 2\pi/780\)} \\
Wave number \(k_{B}\) for K (nm\({}^{-1}\)) & \multicolumn{2}{c}{\(2\times 2\pi/767\)} \\
Interrogation time \(2T\) (s) & 9 & 50 \\
Max. separation Rb (m) & 0.11 & 0.59 \\
Max. separation K (m) & 0.23 & 1.27 \\
Cycle time \(T_{c}\) (s) & 15 & 60 \\
Total number of measurements \(\Omega_{c}\) & \(2.5\times 10^{6}\) & \(7.9\times 10^{5}\) \\
Contrast \(C\) & \multicolumn{2}{c}{1} \\
\hline
\multicolumn{3}{c}{**Atomic source**} \\
\hline
Diff. init. c.m. pos. \(\delta x_{0}\) (\(\mu\)m) & 1 & \\
Diff. init. c.m. vel. \(\delta v_{0}\) (\(\mu\)m.s\({}^{-1}\)) & 1 & 0.1 \\
Expansion energy (pK) & 50 & 10 \\
Expansion velocity \(\sigma_{v,\rm Rb}\) (\(\mu\)m.s\({}^{-1}\)) & 70 & 31 \\
Expansion velocity \(\sigma_{v,\rm K}\) (\(\mu\)m.s\({}^{-1}\)) & 101 & 45 \\
Init. pos. spread \(\sigma_{x,0}\) (\(\mu\)m) & 100 & 500 \\
\hline
\end{tabular}
\end{table}
Table 1: Operational parameters of the atom interferometer to test the Q-WEP at the level of \(\eta=10^{-15}\) and \(10^{-17}\).
\(x\)-axis whereas the origin coincides with the initial center of mass (c.m.) positions of the atoms.
### Statistical error
In the case of a classically correlated atomic ensemble, the phase sensitivity is ultimately limited by the quantum projection noise, where the statistical uncertainty per shot is defined as \(\Delta\Phi_{i,\mathrm{SN}}=1/(C_{i}\sqrt{N_{i}})\) with \(C_{i}\) being the contrast and \(N_{i}\) the atom number of interferometer \(i\). For a dual-species atom interferometer, the standard quantum noise per measurement cycle is given by \((\Delta\delta\Phi_{\mathrm{SN}})^{2}=\mathcal{A}^{2}(\Delta\Phi_{A,\mathrm{SN}})^{2}+\mathcal{B}^{2}(\Delta\Phi_{B,\mathrm{SN}})^{2}\), following the notation of Eq. (1). In terms of a differential acceleration \(\delta a\) this leads to the uncertainty
\[(\Delta\delta a_{\mathrm{SN}})^{2}=\left(\frac{\mathcal{A}\Delta\Phi_{A,\mathrm{SN}}}{2k_{A}T_{A}^{2}}\right)^{2}+\left(\frac{\mathcal{B}\Delta\Phi_{B,\mathrm{SN}}}{2k_{B}T_{B}^{2}}\right)^{2}. \tag{4}\]
Integrating the measurement over \(\Omega_{c}=T_{\mathrm{sc}}/T_{c}\) repetitions, where \(T_{\mathrm{sc}}\) is the total measurement time, \(T_{c}=2T+T_{\mathrm{d}}\) is the cycle time and \(T_{\mathrm{d}}\) is the dead time, leads to
\[(\Delta\eta)^{2}=2\frac{(\Delta\delta a_{\mathrm{SN}})^{2}}{g_{0}^{2}\,\Omega_{c}}. \tag{5}\]
The extra coefficient 2 accounts for the sinusoidally varying local value of the gravitational acceleration due to a circular orbit [52]. Evaluating Eq. (5) with the parameters of Tab. 1 shows that shot noise is below the goal for \(\Delta\eta\) with some margin. In continuous operation the goal is reached in \(\sim 12\) months for the \(\eta=10^{-15}\) case and \(\sim 20\) months for the \(\eta=10^{-17}\) one, well below the assumed 24 months of science time.
Footnote 3: We choose the _acceleration free combination_\(\mathcal{A}=2k_{B}/(k_{A}+k_{B})\) and \(\mathcal{B}=-2k_{A}/(k_{A}+k_{B})\), see Sec. IV.1, thus \(\mathcal{A}\approx-\mathcal{B}\approx 1\).
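A minimal numerical cross-check of Eqs. (4) and (5) with the parameters of Tab. 1 reproduces these integration times; this is a sketch in which months are counted as 30 days and the combination weights follow the footnote above.

```python
# Minimal sketch evaluating Eqs. (4)-(5): integration time needed for the
# shot-noise-limited uncertainty on eta to reach the target.
import numpy as np

def months_to_target(N, T, T_c, target_eta, g0=6.6, C=1.0,
                     lam_A=780e-9, lam_B=767e-9):
    k_A, k_B = 4 * np.pi / lam_A, 4 * np.pi / lam_B
    A, B = 2 * k_B / (k_A + k_B), -2 * k_A / (k_A + k_B)   # acceleration-free combination
    dphi = 1.0 / (C * np.sqrt(N))                          # shot noise per species, per shot
    da = np.hypot(A * dphi / (2 * k_A * T**2),
                  B * dphi / (2 * k_B * T**2))             # Eq. (4)
    n_c = 2 * da**2 / (g0 * target_eta) ** 2               # cycles needed, from Eq. (5)
    return n_c * T_c / (30 * 86400)                        # 30-day months

print(months_to_target(N=1e5,   T=4.5, T_c=15, target_eta=1e-15))   # ~12 months
print(months_to_target(N=2.5e6, T=25,  T_c=60, target_eta=1e-17))   # ~20 months
```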
### Systematic effects
The presence of any kind of potential contributes to the interferometer phase and can lead to bias acceleration terms, ultimately limiting the sensitivity to the Eotvos coefficient of Eq. (3). Contributions to bias phase terms are of two kinds. On the one hand, there are effects coming from the contribution of potential gradients, acting as forces. On the other hand, there are effects coming from the presence of potential energy differences inducing Aharonov-Bohm-like phase shifts [56]. Effects of the first kind directly act on the mean trajectories of the matter-wave while the ones of the second type do not. In this section, we analyse the bias phase terms induced by an arbitrary potential and derive constraints for specific effects.
#### iii.2.1 Model of the phase accumulation
It should be noted that different approaches have been proposed to calculate the phase shift caused by a non-trivial potential whose scaling is more than quadratic in position [57; 58]. Here, we use the perturbative methods developed in Ref. [57] and summarized in Appendix A. For species \(i\), the total accumulated phase \(\Phi_{i}\) can be decomposed as
\[\Phi_{i}=\Phi_{i,0}+\Phi_{i,\mathrm{pert}} \tag{6}\]
where \(\Phi_{i,0}\) and \(\Phi_{i,\mathrm{pert}}\) denote respectively the _unperturbed phase_ induced by a quadratic potential and the momentum transfer, as well as a _perturbative_ phase contribution. In the following, we consider a polynomial potential of order \(N\) of the form
\[V_{i}(x)=\sum_{n=1}^{N}c_{i,n}x^{n}, \tag{7}\]
which can be seen as an expansion of an arbitrary potential around the initial c.m. position of the atoms \(x_{0}=0\). In this study we only consider the contributions up to order \(N=4\). The coefficients \(c_{i,n}\) also carry a species index to account for species-dependent potentials, as discussed below. Appendix A features the derivation of \(\Phi_{i,\mathrm{pert}}\) for a perturbative potential for a single atomic species. The contribution of terms beyond \(N=4\) in lowest order can easily be obtained using Eq. (10). When working with different expansion coefficients, one of course has to check the required order of the perturbative expansion, as shown in the example given in Ref. [57].
Since the species have different masses \(m_{i}\), different effective wave numbers \(k_{i}\), different interrogation times \(T_{i}\), different initial positions \(x_{0,i}\), velocities \(v_{0,i}\), position widths \(\sigma_{x,i}\) and velocity widths \(\sigma_{v,i}\), we equip all quantities with index \(i\) and find the phase
\[\begin{split}\Phi_{i}=&-\frac{2k_{i}T_{i}^{2}}{m_{i}}c_{i,1}-2\frac{2k_{i}T_{i}^{2}}{m_{i}}x_{i}(T_{i})c_{i,2}+\kappa_{i}c_{i,3}\\ &+4\left[\kappa_{i}x_{i}(T_{i})+\frac{k_{i}T_{i}^{2}}{m_{i}}\left(4x_{i}^{3}(T_{i})-2T_{i}^{3}v_{0,i}\sigma_{v,i}^{2}\right)\right]c_{i,4},\end{split} \tag{8}\]
with abbreviation
\[\kappa_{i}=-\frac{k_{i}T_{i}^{2}}{m_{i}}\left[6x_{i}^{2}(T_{i})+6\sigma_{x,i}^{2}+T_{i}^{2}\left(v_{0,i}^{2}+\left(\frac{\hbar k_{i}}{m_{i}}\right)^{2}+7\sigma_{v,i}^{2}\right)\right], \tag{9}\]
and \(x_{i}(T_{i})=x_{0,i}+v_{0,i}T_{i}\).
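Equations (8) and (9) lend themselves to direct numerical evaluation once the expansion coefficients are specified; the following is a minimal sketch in SI units, with the species parameters passed in explicitly.

```python
# Minimal sketch of Eqs. (8)-(9): bias phase of one species for a polynomial
# perturbing potential V(x) = sum_n c_n x^n up to n = 4 (SI units throughout).
import numpy as np

HBAR = 1.054571817e-34  # J s

def bias_phase(c, k, T, m, x0=0.0, v0=0.0, sigma_x=0.0, sigma_v=0.0):
    """c = (c1, c2, c3, c4); returns the phase of Eq. (8) in rad."""
    c1, c2, c3, c4 = c
    xT = x0 + v0 * T
    kappa = -(k * T**2 / m) * (6 * xT**2 + 6 * sigma_x**2
                               + T**2 * (v0**2 + (HBAR * k / m) ** 2 + 7 * sigma_v**2))
    phi = (-(2 * k * T**2 / m) * c1
           - 2 * (2 * k * T**2 / m) * xT * c2
           + kappa * c3
           + 4 * (kappa * xT + (k * T**2 / m) * (4 * xT**3 - 2 * T**3 * v0 * sigma_v**2)) * c4)
    return phi

def bias_acceleration(c, k, T, m, **kw):
    """Phase expressed as an acceleration, a_b = Phi_b / (2 k T^2)."""
    return bias_phase(c, k, T, m, **kw) / (2 * k * T**2)
```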
In the following we calculate the constraints on the expansion coefficients including their uncertainties by imposing that the uncertainty in the differential acceleration for each order \(n\) be below the target uncertainty i.e.
\[\Delta\delta a^{(n)}=\Delta\left(\frac{\Phi_{A}^{(n)}}{2k_{A}T_{A}^{2}}-\frac{ \Phi_{B}^{(n)}}{2k_{B}T_{B}^{2}}\right)\leq\eta g_{0}\,, \tag{10}\]
for a self-gravity potential, black body radiation, as well as the second-order Zeeman effect. Note that we do not assign an uncertainty to \(k_{i}\), \(T_{i}\) and \(m_{i}\) as they are known sufficiently well to not be limiting (see e.g. Sec. IV.1.3).
#### iii.2.2 Self-gravity potential
An inhomogeneous distribution of the satellite's mass yields a gravitational potential inducing a spurious bias phase shift limiting the sensitivity to a possible WEP violation. To study this effect we expand the gravitational potential of the satellite around a point corresponding to the nominal position \(x_{0}=0\) of the c.m. of the BECs. For simplicity we only focus our analysis along the sensitive axis of the experiment, where the effect is largest. We write the Newtonian gravitational potential of a spherically symmetric source mass acting on the atoms as
\[V_{i,\mathrm{SG}}=-\frac{GMm_{i}}{x_{M}}\left[1+\sum_{n=1}^{N}\left(\frac{x}{ x_{M}}\right)^{n}\right], \tag{11}\]
where \(G\) is the gravitational constant, \(M\) is the source-mass, \(m_{i}\) is the mass of the atoms, \(x\) is the position of the atoms and \(x_{M}\) is the position of the source mass. Comparing Eq. (11) to Eq. (7), one has, \(c_{i,n}=-GMm_{i}/x_{M}^{n+1}\), \(\forall n\in\mathbb{N}\).
Tab. 2 provides constraints on the different self-gravity contributions and their uncertainties _that are synchronous with the signal_. Note that, as discussed in section II.2, the corresponding static constraints, i.e., the actual knowledge of the satellite's mass distribution, are up to 8 orders of magnitude less stringent.
It should be emphasized here that the \(n=2\) coefficient is a local gravity gradient which can be compensated the same way as the Earth's gravity gradient following the gravity gradient cancellation method discussed in Refs. [52, 59].
#### iii.2.3 Black body radiation
The effect of thermal radiation leads to black body radiation (BBR) acting as an extra external potential of the form [60],
\[V_{i,\mathrm{BBR}}(x)=\frac{2\alpha_{i}\sigma}{c\epsilon_{0}}T_{\mathrm{tube}} ^{4}(x), \tag{12}\]
where \(\alpha_{i}\) is the static polarizability of atomic species \(i\), \(\sigma\) the Stefan-Boltzmann constant, \(\epsilon_{0}\) the vacuum permittivity and \(T_{\mathrm{tube}}(x)\) the temperature profile inside the vacuum tube at position \(x\) along the sensitive axis. To calculate the effect we expand \(T_{\mathrm{tube}}(x)\) around \(x_{0}=0\) analogously to Sec. III.2.2. We write \(T_{\mathrm{tube}}(x)=\sum_{n=0}^{\infty}t_{n}x^{n}\). Comparing Eq. (12) to Eq. (7), one has to leading order in \(t_{0}\) the coefficients \(c_{i,n}=8\sigma\alpha_{i}t_{0}^{3}t_{n}/(c\epsilon_{0})\), \(\forall n\in\mathbb{N}\). The constraints on the temperature gradients _that vary synchronously with the signal_ are given in Tab. 3.
Note that the onboard temperature gradients are expected to vary mainly at orbital frequency and its harmonics. Because of the phase modulation of the signal by controlled rotations, as discussed in point (ii) in Sec. II.2, the error in the knowledge of the amplitude of that variation could be up to three orders of magnitude less stringent than the constraints given in Tab. 3.
#### iii.2.4 Second-order Zeeman effect
We consider an interferometer sequence operated with atoms in the \(m_{F}=0\) state, which is thus, to first order, insensitive to magnetic effects. Here, we study the impact of the second-order Zeeman effect on the differential phase shift [61]. The potential induced by the presence of a magnetic field can be written as
\[V_{i,\mathrm{B}}(x)=\pi\hbar\gamma_{i}B_{\mathrm{tube}}^{2}(x), \tag{13}\]
where \(\gamma_{i}\) is the second-order Zeeman coefficient of atomic species \(i\) and \(B_{\mathrm{tube}}(x)\) is the magnetic field inside the vacuum tube at position \(x\) along the sensitive axis. We evaluate the effect of the magnetic field gradients in the same way as in the previous sections, i.e. we expand the magnetic field in a series expansion \(B_{\mathrm{tube}}(x)=\sum_{n=0}^{\infty}b_{n}x^{n}\) and calculate constraints on the coefficients \(b_{n}\). The constraints on the magnetic field gradients _that vary synchronously with the signal_ are given in Tab. 4.
In the case of a circular orbit, the main time variation of \(B_{\mathrm{tube}}^{2}(x)\) will be at \(2f_{\mathrm{orb}}\) because of the dipolar nature of the Earth's magnetic field, and thus decorrelates well from the EP-violating signal at \(f_{\mathrm{orb}}\). The effect can therefore be modelled and subtracted (cf. (iii) of Sec. II.2), in addition to the reduction by about 3 orders of magnitude because of the phase modulation of the signal by controlled rotations (point (ii) in Sec. II.2).
We emphasize here that magnetic field gradients below the nT/m level [62] are achieved on 30 cm scales on the ground, in a much more perturbed magnetic environment than in space, and at a few nT/m over 10 m scales [63, 30].
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline n & \(GM/x_{M}^{n+1}\) (\(\eta=10^{-15}\)) & \(GM/x_{M}^{n+1}\) (\(\eta=10^{-17}\)) & unit \\ \hline \hline \(2\) & \(7.5\times 10^{-10}\pm 2.1\times 10^{-10}\) & \(1.3\times 10^{-11}\pm 3.4\times 10^{-12}\) & s\({}^{-2}\) \\ \(3\) & \(1.2\times 10^{12}\pm 6.4\times 10^{-13}\) & \(2.3\times 10^{10}\pm 2.1\times 10^{-16}\) & m\({}^{-1}\).s\({}^{-2}\) \\ \(4\) & \(9.5\times 10^{-8}\pm 2.7\times 10^{-8}\) & \(5.1\times 10^{-11}\pm 1.4\times 10^{-11}\) & m\({}^{-2}\).s\({}^{-2}\) \\ \hline \end{tabular}
\end{table}
Table 2: Maximum allowed self gravity variations and their maximum allowed uncertainties that are synchronous with the WEP violation signal, given in terms of \(GM/x_{M}^{n+1}\) for \(\eta=10^{-15}\) and \(\eta=10^{-17}\). Note that the knowledge of the static self-gravity coefficients may be up to 8 orders of magnitude less stringent (cf. (i) and (ii) of Sec. II.2).
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline n & \(t_{n}\) (\(\eta=10^{-15}\)) & \(t_{n}\) (\(\eta=10^{-17}\)) & unit \\ \hline \hline \(1\) & \(\pm 2.5\times 10^{-5}\) & \(\pm 2.5\times 10^{-7}\) & K.m\({}^{-1}\) \\ \hline \end{tabular}
\end{table}
Table 3: Maximum allowed temperature gradient variations \(t_{n}\) that are synchronous with the WEP violation signal, for \(\eta=10^{-15}\) and \(\eta=10^{-17}\).
## IV Constraints on the spacecraft
Accelerations and rotations of the satellite can directly translate into additional phase shifts in the interferometric measurement. We now derive the corresponding requirements on spurious accelerations and rotations of the spacecraft.
### Acceleration
#### iv.1.1 Systematic effect
Any common accelerations of the two test masses can be suppressed by carefully choosing the experimental parameters as well as combining the phase shifts for each species in an optimal way (see Eq. (1)). An obvious choice would be \(\mathcal{A}=1\) and \(\mathcal{B}=-1\), leading to the direct subtraction of both phases. However, a more rigorous analysis reveals that we can exploit this freedom to construct a differential phase shift observable that is insensitive to common accelerations. The transfer function of the differential atom interferometer phase, \(\Phi_{\rm gen}\) in Eq. (1), defining the response of the interferometer with respect to vibration noise, is given by [64]
\[H(\omega)=-4i\left[2\mathcal{A}k_{A}\sin^{2}(\omega T_{A}/2)+2\mathcal{B}k_{B}\sin^{2}(\omega T_{B}/2)\right]. \tag{14}\]
The response to common accelerations can be reduced to zero by setting \(T=T_{A}=T_{B}\) and \(\mathcal{A}k_{A}=-\mathcal{B}k_{B}\). To keep \(\mathcal{A}\approx-\mathcal{B}\approx 1\), we choose \(\mathcal{A}=2k_{B}/(k_{A}+k_{B})\) and \(\mathcal{B}=-2k_{A}/(k_{A}+k_{B})\) such that
\[\Phi_{\rm gen}=\frac{2k_{B}}{k_{A}+k_{B}}\Phi_{A}-\frac{2k_{A}}{k_{A}+k_{B}} \Phi_{B}. \tag{15}\]
Rewriting the accelerations as \(a_{i}=a_{\rm c}\pm a_{\rm nc}\pm\eta g_{0}/2\), where \(a_{\rm c}\) (\(a_{\rm nc}\)) encompasses all the common (non-common) accelerations between the two species and \(\pm\eta g_{0}/2\) is the extra acceleration due to a violation of the WEP, one has:
\[\Phi_{\rm gen}=\frac{4k_{A}k_{B}}{k_{A}+k_{B}}(2a_{\rm nc}+\eta g_{0})T^{2}. \tag{16}\]
The combination is insensitive to any common accelerations \(a_{\rm c}\), but the sensitivity to \(\eta g_{0}\) stays approximately untouched assuming \(k_{A}\approx k_{B}\).
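The cancellation can be checked numerically. The following minimal sketch (Python, with assumed illustrative wave numbers, accelerations and interrogation time rather than the Tab. 1 values) builds the combination of Eq. (15) and reproduces Eq. (16), with the common acceleration dropping out:

```python
import numpy as np

# Assumed, illustrative wave numbers and interrogation time (not the Tab. 1 values)
k_A, k_B = 1.61e7, 1.60e7   # m^-1
T = 25.0                    # s

# Coefficients of the acceleration-free combination, Eq. (15)
A = 2 * k_B / (k_A + k_B)
B = -2 * k_A / (k_A + k_B)

def phase(k, a):
    """Single-species double-diffraction phase 2 k a T^2 for a constant acceleration a."""
    return 2 * k * a * T**2

a_c, a_nc, eta_g0 = 1e-9, 0.0, 1e-17 * 9.81   # common, non-common and WEP-violating parts

phi_gen = A * phase(k_A, a_c + a_nc + eta_g0 / 2) + B * phase(k_B, a_c - a_nc - eta_g0 / 2)
expected = 4 * k_A * k_B / (k_A + k_B) * (2 * a_nc + eta_g0) * T**2   # Eq. (16)
print(phi_gen, expected)   # identical; changing a_c leaves phi_gen unchanged
```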
Of course, the performance of the _acceleration free combination_ defined by Eq. (15) relies on the exact knowledge of the wave numbers \(k_{i}\). Assuming that this knowledge is limited up to an uncertainty \(\Delta k_{i}\), leads to an uncertainty in the differential phase of
\[\Delta\delta\Phi_{\rm gen}=4\frac{k_{B}\Delta k_{A}-k_{A}\Delta k_{B}}{k_{A} +k_{B}}a(t)T^{2}, \tag{17}\]
where \(a(t)=a_{A}(t)=a_{B}(t)\) is the residual acceleration of the satellite common to both species. We require that the component of \(\Delta\delta\Phi\) that is modulated with orbital frequency stays below the Eotvos signal. Thus, we find
\[|\Delta k_{A}/k_{A}-\Delta k_{B}/k_{B}|\times a(t)|_{\omega_{\rm orb}}\leq\eta g_{0}, \tag{18}\]
leaving a requirement on the relative knowledge \(\Delta k_{i}/k_{i}\) for a given \(a(t)|_{\omega_{\rm orb}}\). Note that the laser frequency is well known and the uncertainty \(\Delta k_{i}\) is dominated by pointing errors [65].
#### iv.1.2 Acceleration noise
Although any common accelerations are suppressed by the _acceleration free combination_, Eq. (15), we require that the phase shift induced by acceleration noise stays within a certain region around mid-fringe, i.e., the point of maximum phase sensitivity. We quantify this requirement by setting \(\Delta\Phi_{i}^{a}\leq\pi/10\) where \(\Delta\Phi_{i}^{a}\) is the uncertainty in the phase shift due to acceleration noise with power spectral density \(S_{a}(\omega)\)[64],
\[(\Delta\Phi_{i}^{a})^{2}=\int_{0}^{+\infty}\frac{S_{a}(\omega)}{\omega^{4}} \left|8k_{i}\sin^{2}\left(\frac{\omega T}{2}\right)\right|^{2}d\omega\leq \left(\frac{\pi}{10}\right)^{2}. \tag{19}\]
The integration can be restricted to an area \(\omega\in[2\pi/T_{c},2\pi f_{\rm cutoff}]\) where \(T_{c}\) is the cycle time of the measurement. At high frequencies the linear acceleration transfer function drops steeply as shown in Fig. 2, so we can limit the integration up to a certain cutoff \(f_{\rm cutoff}\). At low frequencies, \(\Delta\Phi_{a}\) reduces to the variance of the atom-interferometer phase \(2k_{i}\langle a\rangle T^{2}\) where \(\langle a\rangle\) is the average acceleration during the interrogation time of the atoms. We emphasize here that the slowly varying acceleration noise can be approximated by a low-order polynomial and used to suppress the noise in following measurements by feed forwarding the information to the laser frequency. In order to stay at mid-fringe, the change in acceleration in-between cycles should follow the inequality,
\[\langle\dot{\Phi}_{i}\rangle=2k_{i}T^{2}\langle\dot{a}\rangle T_{c}\leq\pi/10, \tag{20}\]
where \(\langle\dot{\Phi}_{i}\rangle\) is the average change of the interferometer phase from one cycle to the next.
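Both conditions are simple to evaluate numerically. A minimal sketch (assumed white noise level, wave number and timings, not necessarily the Tab. 1 values) integrates Eq. (19) over the stated frequency band and evaluates the mid-fringe bound of Eq. (20):

```python
import numpy as np

# Assumed, illustrative parameters
k, T, T_c, f_cutoff = 1.6e7, 25.0, 60.0, 0.5     # m^-1, s, s, Hz
S_a = (4e-10)**2                                  # white acceleration noise PSD, (m/s^2)^2/Hz

omega = np.linspace(2 * np.pi / T_c, 2 * np.pi * f_cutoff, 200001)
integrand = S_a / omega**4 * (8 * k * np.sin(omega * T / 2)**2)**2   # Eq. (19)
delta_phi_a = np.sqrt(np.trapz(integrand, omega))
print(delta_phi_a, "rad, requirement:", np.pi / 10)

# Mid-fringe condition between cycles, Eq. (20): <a_dot> <= pi / (10 * 2 k T^2 T_c)
print("max <a_dot>:", np.pi / (10 * 2 * k * T**2 * T_c), "m/s^3")
```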
#### iv.1.3 Application
The _acceleration free combination_, Eq. (15), ensures that any accelerations that are common to both species are suppressed, leaving only the requirement on the knowledge of the relative wave numbers, Eq. (18). Taking the drag-free
\begin{table}
\begin{tabular}{l|c|c|c} \hline n & \(b_{a}\) (\(\eta=10^{-15}\)) & \(b_{a}\) (\(\eta=10^{-17}\)) & unit \\ \hline \hline
1 & \(\pm 2.2\times 10^{-3}\) & \(\pm 2.2\times 10^{-5}\) & nT.m\({}^{-1}\) \\
2 & \(9.8\times 10^{0}\pm 2.8\times 10^{2}\) & \(1.6\times 10^{1}\pm 4.5\) & nT.m\({}^{-2}\) \\
3 & \(1.1\times 10^{8}\pm 3.4\times 10^{-1}\) & \(3.0\times 10^{6}\pm 1.1\times 10^{-4}\) & nT.m\({}^{-3}\) \\
4 & \(6.6\times 10^{6}\pm 2.2\times 10^{4}\) & \(1.8\times 10^{5}\pm 1.1\times 10^{1}\) & nT.m\({}^{-4}\) \\ \hline \end{tabular}
\end{table}
Table 4: Requirements on the magnetic field gradients and their uncertainties that vary synchronously with the signal. We assume here \(b_{0}=100\,\)nT with uncertainty \(\Delta b_{0}=50\,\)pT and \(b_{1}=6\,\)nT/m. From Ref. [61] we have \(\chi_{B_{0}}=575.14\times 10^{8}\,\)Hz/T\({}^{2}\) and \(\chi_{K}=15460\times 10^{8}\,\)Hz/T\({}^{2}\). Note that the knowledge of the main component at \(2f_{orb}\) may be up to 3 orders of magnitude less stringent (cf. (ii) of Sec. II.2) and can be measured independently of the signal (cf. (iii) of Sec. II.2).
controlled MICROSCOPE satellite as an example [66], no residual acceleration exceeding \(a(t)|_{\omega_{\rm orb}}>10^{-12}\) m/s\({}^{2}\) was observed, implying that \((\Delta k_{A}/k_{A}-\Delta k_{B}/k_{B})\leq 6.6\times 10^{-5}\) is sufficient to reach \(\eta\leq 10^{-17}\), well within the reach of present day laser systems.
The mid-fringe requirement leads to requirements on the single-species atom interferometer acceleration noise. Tab. 5 features the constraints evaluated for the parameters in Tab. 1 assuming white acceleration noise and using \(f_{\text{cutoff}}=0.5\) Hz.
### Rotation
Systematic and statistical effects due to rotations are not suppressed by the _acceleration free combination_, Eq. (15), and, thus, are of particular interest when setting constraints on the platform.
We now investigate the configuration shown in Fig. 3 to derive the phase shifts induced by rotations of the mirror or the spacecraft. We work in an inertial frame whose origin coincides with the center of mass of the satellite for all times \(t\). The orientation is chosen such that the sensitive axis of the interferometer is aligned with the \(x\)-axis when the atoms are released. The mirror is assumed to be rectangular with a thickness of \(d_{\text{M}}\) with its center of mass positioned at \(\mathbf{r}_{\text{M}}\). Small rotations of the mirror \(\theta_{\text{M}}\) simply add to the effect of the rotation of the satellite \(\theta_{\text{S}}\) such that the overall rotation can be defined as \(\theta(t)=\theta_{\text{M}}(t)+\theta_{\text{S}}(t)\).
#### iii.2.1 Systematic effect
Spurious rotations of the spacecraft cause additional accelerations scaling with the initial kinematics of the atoms. We first derive constraints on the angular velocity of the satellite by looking at the case of a constant rotation rate \(\Omega\), \(\theta(t)=\Omega t\) around the \(z\)-axis (see Fig. 3).
Using geometric considerations, the atom interferometer phase can be derived (see Appendix. B):
\[\Phi_{i}=4k_{i}v_{y,0,i}\Omega T^{2}+2k_{i}(r_{x,0,i}-v_{x,0,i}(T+T_{0}))(\Omega T)^{2}, \tag{21}\]
to second order in \(\Omega T\). Here \(T_{0}\) is the dead time, corresponding to the time between release and first laser pulse, and \(r_{x,0,i}\) (\(v_{x,0,i}\)) denotes the atoms' initial position (velocity) in the \(x\) direction.
When calculating the differential phase according to Eq. (15), the dependencies on the individual initial kinematics directly translate into dependencies on the differential initial kinematics. Any systematic uncertainty must be below the target signal at \(\omega_{\text{orb}}\). Thus, we find the following requirement on the satellite's angular velocity at orbital frequency
\[\Omega|_{\omega_{\rm orb}}\leq\eta g/(2\delta v_{y,0}), \tag{22}\]
for the first order in Eq. (21). The second order leads to a requirement on \(\Omega^{2}\) at orbital frequency
\[\Omega^{2}|_{\omega_{\rm orb}}\leq\eta g/\delta r_{x,0},\qquad\Omega^{2}|_{\omega_{\rm orb}}\leq\eta g/(\delta v_{x,0}(T+T_{0})), \tag{23}\]
\begin{table}
\begin{tabular}{c|c|c|c|c} Quantity & eq. & \(\eta=10^{-15}\) & \(\eta=10^{-17}\) & Unit \\ \hline \hline \(\sqrt{S_{a}}\) & (19) & \(3.5\times 10^{-9}\) & \(4.1\times 10^{-10}\) & m/s\({}^{2}\)/\(\sqrt{\text{Hz}}\) \\ \(\langle\dot{a}\rangle\) & (20) & \(3.2\times 10^{-11}\) & \(2.6\times 10^{-13}\) & m/s\({}^{3}\) \\ \hline \end{tabular}
\end{table}
Table 5: Constraints on spurious linear accelerations of the spacecraft for \(\eta=10^{-15}\) and \(\eta=10^{-17}\).
Figure 2: Linear acceleration transfer function, \(H_{x}(\omega)/\omega^{2}=8k\sin^{2}(\omega T/2)/\omega^{2}\), Eq. (19), for the \({}^{87}\)Rb atom interferometer using the parameters in Tab. 1. It shows constant behavior for low frequencies and drops steeply \(\sim f^{-2}\) at high frequencies \(f\gg 2/T\). The frequency cutoff \(f_{\text{cutoff}}\) up to which the integration in Eq. (19) is performed is marked in dotted green.
Figure 3: Schematic representation of the experimental setup inside the satellite. Initially, the inertial frame coincides with the satellite frame. After release, the satellite undergoes rotations \(\theta_{\text{S}}(t)\) whereas the mirror (blue) is rotated by \(\theta_{\text{M}}(t)\). The laser head’s position (red) is fixed in the satellite frame. The \(x\)-axis of the inertial frame is chosen to be initially parallel to the sensitive axis of the interferometer (dashed). The incident and reflected laser beam is marked as a red line. The atoms’ position is marked in green and orange.
which translates into a requirement on \(\Omega\) at half the orbital frequency: \(\Omega|_{\omega_{\rm orb}/2}=\sqrt{\Omega^{2}|_{\omega_{\rm orb}}}\) and other cross terms of the form \(\Omega|_{\omega_{1}}\Omega|_{\omega_{2}}\), where \(|\omega_{1}\pm\omega_{2}|=\omega_{\rm orb}\)1.
Footnote 1: Suppose the angular velocity is modulated according to \(\Omega|_{\omega_{1}}=\Omega_{1}\cos(\omega_{1}t)\) and \(\Omega|_{\omega_{2}}=\Omega_{2}\cos(\omega_{2}t)\) where \(|\omega_{1}\pm\omega_{2}|=\omega_{\rm orb}\). Then it follows that \(\Omega|_{\omega_{1}}\Omega|_{\omega_{2}}=\Omega_{1}\Omega_{2}[\cos((\omega_{1 }-\omega_{2})t)+\cos((\omega_{1}+\omega_{2})t)]/2\). In particular, \(|\omega_{1}\pm\omega_{2}|=\omega_{\rm orb}\) leads to \(\Omega|_{\omega_{1}}\Omega|_{\omega_{2}}=(\Omega^{2})|_{\omega_{\rm orb}}\).
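For orientation, the two requirements can be evaluated directly from Eqs. (22) and (23). The sketch below uses assumed placeholder values for the differential kinematics and timings; the actual requirements quoted in Tab. 6 follow from the Tab. 1 parameters:

```python
import numpy as np

# Assumed, illustrative differential kinematics and timing (not the Tab. 1 values)
eta, g = 1e-17, 9.81
dv_y0, dr_x0, dv_x0 = 1e-9, 1e-6, 1e-9     # m/s, m, m/s
T, T0 = 25.0, 5.0                           # s

Omega_orb = eta * g / (2 * dv_y0)                          # Eq. (22)
Omega2_orb = min(eta * g / dr_x0,
                 eta * g / (dv_x0 * (T + T0)))             # Eq. (23), tighter of the two bounds
Omega_half_orb = np.sqrt(Omega2_orb)                       # requirement at omega_orb / 2
print(Omega_orb, Omega_half_orb, "rad/s")
```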
#### iii.2.2 Rotation noise
In the following, we consider arbitrary satellite rotations \(\theta(t)\) to give constraints on rotational noise and derive further suppression techniques.
Up to first order in \(\theta(t)\), the atom interferometer phase is given by (see Appendix B):
\[\begin{split}\Phi_{i}&=2k_{i}[\bar{r}_{y,i}(T_{0})\theta(-T)-2\bar{r}_{y,i}(T_{0}+T)\theta(0)\\ &\qquad\qquad+\bar{r}_{y,i}(T_{0}+2T)\theta(T)],\end{split} \tag{24}\]
where \(\bar{r}_{y,i}(t)=r_{y,0,i}+(v_{y,0,i}+r_{x,0,i}\Omega_{0})t\) denotes the atom's position in the rotated reference frame with \(\Omega_{0}=d\theta/dt|_{t=0}\) and \(t\) is the time after release.
The corresponding transfer function is obtained by taking the Fourier transform of the sensitivity function associated to the signal of Eq. (24),
\[\begin{split} H_{\theta}(\omega)&=4k_{AB}[-2i(\delta r_{y,0}+\delta v_{y,0}(T_{0}+T))\sin^{2}(\omega T/2)\\ &\qquad\qquad+\delta v_{y,0}T\sin(\omega T)],\end{split} \tag{25}\]
where \(k_{AB}=2k_{A}k_{B}/(k_{A}+k_{B})\). The uncertainty of the differential phase induced by rotations \(\theta\) of the satellite is then given by
\[(\Delta\delta\Phi_{\bar{\theta}})^{2}=\int_{0}^{+\infty}|H_{\theta}(\omega)| ^{2}\frac{S_{\bar{\theta}}}{\omega^{4}}(\omega)d\omega, \tag{26}\]
where \(S_{\bar{\theta}}(\omega)\) is the power spectral density of angular acceleration noise.
We require that any uncertainty in the differential phase induced by rotational noise of the satellite per cycle must be below the shot noise limit (see Sec. III.1):
\[\Delta\delta\Phi_{\bar{\theta}}\leq\Delta\delta\Phi_{\rm SN}=\frac{2\sqrt{k_{ B}^{2}+k_{A}^{2}}}{k_{A}+k_{B}}\frac{1}{\sqrt{N}}\approx\sqrt{\frac{2}{N}}. \tag{27}\]
We can restrict the integration in Eq. (26) to \(\omega\in[2\pi/T_{c},2\pi f_{\rm cutoff}]\) as for high frequencies the transfer function drops steeply (see Fig. 4). For low frequencies \(\omega<2\pi/T_{c}\) the uncertainty reduces to the Coriolis phase \(4k_{A}\delta v_{y,0}\langle\Omega\rangle T^{2}\) where \(\langle\Omega\rangle\) is the average value of \(\dot{\theta}\) during the atom interferometer sequence. This can be treated as a systematic effect resulting in a requirement on \(\langle\Omega\rangle\) (see Eq. (22)).
Additional noise constraints arise from the _mid-fringe requirement_ and from the coupling of the atoms' finite velocity spread to \(\langle\Omega\rangle\) inducing shot to shot noise. The requirement to stay at mid-fringe is treated the same way as low frequency acceleration noise by feed forwarding the results of previous measurements. This yields a requirement on the average phase change per cycle,
\[\langle\dot{\Phi}\rangle=4k_{i}T^{2}v_{i,y,0}\langle\dot{\Omega}\rangle T_{c} \leq\pi/10. \tag{28}\]
Additional phase noise arises due to the limited knowledge of the atoms' kinematics. The shot to shot phase noise induced by the position and velocity uncertainty \(\Delta r_{i,0}\) and \(\Delta v_{i,0}\) is required to be smaller than the shot noise,
\[\left(4k_{i}T^{2}\frac{\sigma_{v,i}}{\sqrt{N}}\langle\Omega\rangle\right)^{2}+\left(2k_{i}T^{2}\frac{\sigma_{r,i}}{\sqrt{N}}\langle\Omega^{2}\rangle\right)^{2}\leq\frac{1}{N}, \tag{29}\]
assuming a shot noise limited process of determining the atoms' mean position and velocity: \(\Delta r_{i,0}=\sigma_{r,i}/\sqrt{N}\) and \(\Delta v_{i,0}=\sigma_{v,i}/\sqrt{N}\)[67].
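A minimal numerical sketch of this noise budget (assumed white angular acceleration noise and placeholder kinematics, not the Tab. 1 values) evaluates Eqs. (25)-(27):

```python
import numpy as np

# Assumed, illustrative parameters
k_A, k_B = 1.61e7, 1.60e7
k_AB = 2 * k_A * k_B / (k_A + k_B)
T, T0, T_c, f_cutoff = 25.0, 5.0, 60.0, 0.5
dr_y0, dv_y0 = 1e-6, 1e-9          # differential initial position / velocity
N = 1e6                             # detected atoms per species and cycle
S_ang = (1e-9)**2                   # white angular acceleration PSD, (rad/s^2)^2/Hz

omega = np.linspace(2 * np.pi / T_c, 2 * np.pi * f_cutoff, 200001)
H = 4 * k_AB * (-2j * (dr_y0 + dv_y0 * (T0 + T)) * np.sin(omega * T / 2)**2
                + dv_y0 * T * np.sin(omega * T))                        # Eq. (25)
dphi_rot = np.sqrt(np.trapz(np.abs(H)**2 * S_ang / omega**4, omega))    # Eq. (26)

shot_noise = np.sqrt(2 / N)                                             # Eq. (27) with k_A ~ k_B
print(dphi_rot, "rad vs shot-noise limit", shot_noise, "rad")
```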
#### iii.2.3 Application
Tab. 6 lists the constraints coming from rotations of the spacecraft including their evaluation using the parameters of Tab. 1 assuming white angular acceleration noise and using \(f_{\rm cutoff}=0.5\) Hz in Eq. (26). Note that, in particular for \(\Omega|_{\omega_{\rm orb}}\) and \(\Omega|_{\omega_{\rm orb}/2}\), these requirements do not take into account the phase modulation of the signal by controlled rotations (point (ii) of Sec. II.2) and thus could be relaxed by about 3 orders of magnitude.
### Orbit control
The error mitigation techniques as well as the extraction of the target signal rely on the knowledge of the local gravitational potential. Since the inertial quantum sensor performs a local differential measurement, orbit errors only play a role via perturbing effects from external factors. An error in the
Figure 4: Angular acceleration transfer function, Eq. (25), using the parameters in Tab. 1. For low frequencies \(f\ll 2/T\) it behaves as \(f^{-1}\) while for larger frequencies \(f\gg 2/T\) it follows \(f^{-2}\), similar to the linear acceleration transfer function (see Fig. 2). The frequency cutoff \(f_{\rm cutoff}\) up to which the integration in Eq. (26) is performed is marked in dotted green.
knowledge of the satellite's position at the time of a measurement directly translates to an error due to an incorrect estimation of the corresponding differential gravitational acceleration and its gradients.
#### iii.2.1 Model
To analyze the effect of an orbit uncertainty we utilize a satellite simulator which enables us to study the effects of statistical and systematic uncertainties in the satellite's orbit or attitude. The simulator, called SQUID (Satellite-based QUantum systems for Inertial sensing and Discovery of new physics), allows us to synthetically generate a space-borne atom interferometer signal and to analyze it assuming arbitrary orbit and attitude configurations. Here, we generate a realistic signal using a distorted orbit and fit it with a model assuming a perfect, i.e., circular, orbit to study orbit-uncertainty-induced limitations.
We implement orbit distortions using the Hill model which characterizes the position errors (for weakly eccentric orbits) at time \(t\) according to [68]
\[\Delta R(t) =\tfrac{1}{2}X\cos(\omega_{\text{orb}}t+\varphi_{R})+c_{R}, \tag{30}\] \[\Delta T(t) =-X\sin(\omega_{\text{orb}}t+\varphi_{R})-\tfrac{3}{2}\omega_{\text {orb}}c_{R}t+d_{R},\] \[\Delta N(t) =Y\cos(\omega_{\text{orb}}t+\varphi_{N}),\]
where (\(\Delta R\), \(\Delta T\), \(\Delta N\)) denotes the uncertainty in the (radial, tangential, normal) axis. \(X\), \(Y\), \(c_{R}\) and \(d_{R}\) are amplitude coefficients. For example, a radial uncertainty with \(X>0\) relates to an eccentricity of \(e=\sqrt{X/r}\) where \(r\) is the radius of the inertial circular orbit. A deviation from the circular orbit, e.g., one leading to \(e>0\), introduces additional components of the gravity gradient that are modulated with the orbital frequency [52].
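A small helper implementing Eq. (30) can be used to generate the distorted satellite positions fed to the signal model; this is a sketch, and the orbital frequency and amplitude below are illustrative:

```python
import numpy as np

def hill_distortion(t, omega_orb, X=0.0, Y=0.0, c_R=0.0, d_R=0.0, phi_R=0.0, phi_N=0.0):
    """Radial, tangential and normal position errors of a weakly eccentric orbit, Eq. (30)."""
    dR = 0.5 * X * np.cos(omega_orb * t + phi_R) + c_R
    dT = -X * np.sin(omega_orb * t + phi_R) - 1.5 * omega_orb * c_R * t + d_R
    dN = Y * np.cos(omega_orb * t + phi_N)
    return dR, dT, dN

# Example: purely radial distortion of amplitude X = 250 m on an assumed ~98 min orbit
omega_orb = 2 * np.pi / 5900.0                    # rad/s (illustrative)
t = np.arange(0.0, 10 * 2 * np.pi / omega_orb, 60.0)
dR, dT, dN = hill_distortion(t, omega_orb, X=250.0)
```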
The signal under consideration is given by
\[\delta\Phi(t_{j})=2[\eta g_{x}(t_{j})+\delta r_{x,0}\Gamma_{xx}(t_{j})]k_{AB}T ^{2}, \tag{31}\]
where \(\Gamma_{xx}\) is the \(xx\)-component of the gravity gradient and \(x\) equals the direction of the sensitive axis of the interferometer assuming an inertial attitude. The signal is sampled at certain satellite positions \(r_{S}(t_{j})\) where \(t_{j}\) marks the times a measurement is performed: \(t_{j}\in[0,\,T_{c},\,\dots,\,T_{\text{sc}}]\).
To perform the analysis, a signal is generated according to Eq. (31) using positions \(r_{S}\) computed for a circular orbit with a distortion given by Eq. (30). To estimate the uncertainty introduced by the distortion, we perform a least squares analysis of this signal using a fit model assuming an undistorted circular orbit (see Appendix C).
#### iii.2.2 Application
To estimate the maximum allowed orbit distortion for the STE-QUEST mission proposal, we initialize the unperturbed orbit as circular with the parameters stated in Tab. 1 (\(\eta=10^{-17}\)). We will focus on \(X\) because distortions along the radial and tangential axis directly couple into the signal through the gravity gradient. The normal axis is always perpendicular to the sensitive axis and will lead to less stringent requirements. The simulation was carried out with input values of \(\eta=10^{-17}\) and \(\delta r_{x,0}=1\)\(\mu\)m. The result of the analysis, i.e., the resulting fitted values and uncertainties of \(\eta\) and \(\delta r_{x,0}\) as a function of \(X\), is depicted in Fig. 5.
It is clearly visible that the correct value for \(\delta r_{x,0}\) is recovered in the fit with an uncertainty of \(\pm 0.05\) nm, independent of the orbit error \(X\leq 10^{3}\) m. The resulting \(\eta\), however, drifts away from the expected value \(\eta=10^{-17}\) for increasing orbit errors \(X\). After \(X\approx 250\) m, the expectation value leaves the confidence interval of the fit. This would correspond to a requirement on the maximum tolerable eccentricity, \(e\approx 5.6\times 10^{-3}\).
Note that this analysis does not take any attenuation techniques into account. Normally, the satellite's position is measured together with the differential acceleration leading to a requirement only on the knowledge of the orbit's eccentricity.
Figure 5: Orbit control analysis. The signal was generated using \(\eta=10^{-17}\) and \(\delta r_{x,0}=1\)\(\mu\)m. The fit was performed for various distortion strengths \(X\) (see Eq. (30)) while every other parameter in Eq. (30) was set to zero. Each blue point corresponds to 10 fits with different white noise that have been averaged. The error bars represent the standard deviation of the 10 fits.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline Quantity & eq. & \(\eta=10^{-15}\) & \(\eta=10^{-17}\) & Unit \\ \hline \hline \(\Omega|_{\omega_{\rm orb}}\) & (22) & \(3.3\times 10^{-9}\) & \(3.3\times 10^{-10}\) & rad/s \\ \(\Omega|_{\omega_{\rm orb}/2}\) & (23) & \(3.8\times 10^{-5}\) & \(5.1\times 10^{-6}\) & rad/s \\ \hline \end{tabular}
\end{table}
Table 6: Constraints on spurious rotations of the spacecraft for \(\eta=10^{-15}\) and \(\eta=10^{-17}\).
### Attitude control
The de-correlation technique (see Sec. II.2) to decouple the signal at interest, i.e., \(\eta g_{x}(t)\), from spurious effects modulated at the same frequency relies on the control of the satellite's attitude. By periodically performing discrete rotations of the satellite during the science time, we introduce phase jumps in the Eotvos signal \(\eta g_{x}(t)\) that help to de-correlate it from external influences modulated at orbital frequency that are not affected by these rotations. These rotations, however, need to be controlled up to a certain level to not introduce additional systematics reducing the sensitivity of the sensor. Here, we exploit SQUID to analyze uncertainties in these satellite rotations and set requirements on the attitude control system.
#### iv.4.1 Model
For the numerical analysis, we proceed similarly as in Sec. IV.3. Here, the synthetic signal includes the WEP violation plus some spurious accelerations,
\[\delta\Phi(t_{j})=2\left[\eta g_{x}(t_{j})+\delta a_{\text{DC}}+\delta a_{ \text{orb}}(t_{j})\right]k_{AB}T^{2}, \tag{32}\]
where \(t_{j}\in[0,\,T_{c},\,\dots,\,T_{\text{sc}}]\) denotes the times a measurement is performed. The linear gravitational acceleration \(g_{x}(t_{j})=g_{x}[T_{c}(t_{j}),\theta(t_{j})]\) is determined by the satellite's position \(r_{\text{S}}\) and attitude \(\theta\) at time \(t_{j}\). \(\delta a_{\text{DC}}\) denotes a spurious constant differential acceleration while \(\delta a_{\text{orb}}(t_{j})\) is modulated at orbital frequency, i.e., \(\delta a_{\text{orb}}(t_{j})=\delta a_{\text{orb,max}}\cos(\omega_{\text{orb }}t_{j})\). Note that both of these additional differential accelerations are assumed to be immune to changes in the satellite's attitude.
Here, we focus on a circular orbit where the satellite is kept inertial but rotated by \(10^{\circ}+\Delta\theta_{m}\) every 50 orbits. \(\Delta\theta_{m}\) denotes the rotation noise that is drawn from a Gaussian distribution with zero mean and standard deviation \(\sigma_{\Delta\theta}\) every time the satellite is rotated. To this signal, we additionally add atomic shot noise. The model matrix is constructed as in Eq. (25) assuming rotations of the satellite without any noise: \(\Delta\theta_{m}=0\), \(\forall m\). Finally, we fit the signal using our model for the free parameters \(\eta\), \(\delta a_{\text{DC}}\) and \(\delta a_{\text{orb,max}}\) for various noise levels \(\sigma_{\Delta\theta}\).
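A minimal sketch of this signal generation (assumed parameters throughout; the dependence of \(g_{x}\) on position and attitude is simplified here to a cosine projection on the rotated sensitive axis) illustrates Eq. (32) with discrete, noisy rotations and added shot noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed, illustrative parameters (not the Tab. 1 values)
eta, g0, k_AB, T = 1e-17, 9.81, 1.6e7, 25.0
da_dc, da_orb = g0 * 1e-9, g0 * 1e-14           # spurious DC / orbital-frequency accelerations
omega_orb, T_c = 2 * np.pi / 5900.0, 60.0
rot_every, sigma_dtheta, n_orbits, N = 50, np.deg2rad(0.01), 200, 1e6

t = np.arange(0.0, n_orbits * 2 * np.pi / omega_orb, T_c)
n_rot = (t * omega_orb / (2 * np.pi)).astype(int) // rot_every      # rotations performed so far
noise = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, sigma_dtheta, int(n_rot.max())))])
theta = n_rot * np.deg2rad(10.0) + noise[n_rot]                     # attitude with rotation noise

# Simplified projection of the local gravitational acceleration on the sensitive axis
g_x = g0 * np.cos(omega_orb * t - theta)
signal = 2 * (eta * g_x + da_dc + da_orb * np.cos(omega_orb * t)) * k_AB * T**2   # Eq. (32)
signal = signal + rng.normal(0.0, np.sqrt(2 / N), size=t.shape)                   # shot noise
```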
#### iv.4.2 Application
In the following, we will consider the parameters of STE-QUEST (see Tab. 1, \(\eta=10^{-17}\)) to set a requirement on the attitude control system with a focus on the de-correlation technique defined in Sec. II.2. The result of this analysis is depicted in Fig. 6. Here, we iterate over different rotation noise strengths \(\sigma_{\Delta\theta}\) and try to recover the Eotvos parameter, \(\eta\), a constant differential acceleration, \(\delta a_{\text{DC}}\), and a differential acceleration modulated at orbital frequency, \(\delta a_{\text{orb,max}}\), from the noisy signal. The signal is generated using \(\eta=10^{-17}\), \(\delta a_{\text{DC}}=g_{0}\times 10^{-9}\) and \(\delta a_{\text{orb,max}}=g_{0}\times 10^{-14}\) as these correspond to the orders of magnitude of uncertainties in the differential acceleration induced by black body radiation and magnetic fields (DC effects) and by self-gravity gradients (AC effect) (see Sec. III.2). The differential acceleration signals \(\delta a_{\text{DC}}\) and \(\delta a_{\text{orb,max}}\) could be recovered independent of the rotation noise \(\sigma_{\Delta\theta}\in[10^{-5},\,10^{-1}]^{\circ}\). The Eotvos parameter, however, is only recovered up to an uncertainty of \(10^{-17}\) for rotation noise below a few \(0.01^{\circ}\), which is well within reach of standard star-trackers and attitude control systems.
## V Feasibility
In this section, we want to summarize and compute the previously derived requirements for the parameters in Tab. 1. The technical readiness level of a mission like STE-QUEST greatly benefits from the heritage of platform stability systems of previous missions. Thus, we also evaluate the requirements by analyzing the environment of previous and current missions, i.e., MICROSCOPE, LISA Pathfinder (LPF) and GRACE-FO1.
Footnote 1: Note that, contrary to MICROSCOPE and LPF, GRACE-FO has no active drag-free control, and we only use the performance of the on board accelerometer [69] as an estimate of the expected residual satellite accelerations.
Tab. 7 summarizes the constraints for STE-QUEST as well as the results for MICROSCOPE, LPF and GRACE-FO. For MICROSCOPE, we use the PSD of differential acceler
ation given in Ref. [70], as the satellite drag-free system is servo-controlled by one test mass, thus the differential acceleration between the test masses acts as an out of loop sensor for residual spacecraft accelerations. The rotation PSD is obtained from Ref. [66], based on star-tracker data. For LPF and GRACE-FO, we use the PSDs presented in Ref. [71] and Ref. [69], respectively. Integrating the PSDs together with the respective transfer functions (Eqs. (19), (26)) using a frequency band of \(f\in[1/T_{c},\infty)\) directly yields the uncertainty in the phase for a measurement of the differential acceleration. The quantities \(\langle\,\cdot\,\rangle\) in Tab. 7, that are requirements on the fluctuations in between cycles, are evaluated by integrating the respective PSD using frequencies smaller than the cycle frequency, i.e., \(f\in(0,1/T_{c}]\). The angular velocity component modulated at orbital frequency, \(\Omega_{\omega_{\rm orb}}\), is obtained by \(\Omega_{\omega_{\rm orb}}=S_{\bar{\theta}}(\omega_{\rm orb})/\omega_{\rm orb }^{2}/T_{\rm obs}\) where \(S_{\bar{\theta}}\) is the angular acceleration PSD that was obtained from a measurement with duration \(T_{\rm obs}\). Analogously for \(\Omega_{\omega_{\rm orb}/2}\).
In conclusion, most of the requirements are met with some margin, proving the technical readiness level of even a Q-WEP test at the \(\eta=10^{-17}\) level. For LPF, the change in the average linear acceleration in-between measurements, \(\langle\dot{a}\rangle\), is about twice as large as the requirement. However, this is limited by out of loop noise on LPF, which could be reduced by acting on the laser frequency, which allows increasing the loop bandwidth. The same method can be applied to handle the slightly too large value of GRACE-FO.
## VI Conclusion
We have studied the platform requirements for a satellite-based dual-species atom interferometer testing the WEP beyond current state-of-the-art measurements. In particular, we have derived the rotation, acceleration and orbit control requirements that a satellite needs to fulfill in order to allow a measurement of the Eotvos parameter \(\eta\) to the unprecedented sensitivity of \(10^{-17}\). We have demonstrated that the performance of previous (MICROSCOPE and LPF) and current (GRACE-FO accelerometer) satellite missions is sufficient to achieve the proposed sensitivity, underpinning the technical readiness of the STE-QUEST mission.
Additionally, we have derived requirements on self-gravity, temperature and magnetic field control inside the satellite at the payload location. To do so we have evaluated the effect of perturbing potentials up to order 4 of a polynomial expansion in position, which is beyond the reach of "standard" methods (e.g. [72]).
Ultimately, missions using atom interferometry are limited by atomic shot noise and the necessarily finite number of atoms that can be cooled and used. We anticipate that the development of entangled atomic source strategies [73, 74, 75, 76] could reduce the constraints on the satellite platform and/or lead to better sensitivity to a violation of WEP.
## VII Acknowledgments
The authors thank all contributors to STE-QUEST proposals (see Ref. [53] for a full list). R.C. thanks the Paris Observatory Scientific Council and was funded by "PSL fellowship at Paris Observatory" program. This work was funded by the Deutsche Forschungsgemeinschaft (German Research Foundation) under Germany's Excellence Strategy (EXC-2123 QuantumFrontiers Grants No. 390837967) and through CRC 1227 (DQ-mat) within Projects No. A05, and the German Space Agency at the German Aerospace Center (Deutsche Raumfahrtagentur im Deutschen Zentrum fur Luft- und Raumfahrt, DLR) with funds provided by the German Federal Ministry of Economic Affairs and Climate Action due to an enactment of the German Bundestag under Grants Nos. 50WM2250A and 50WM2250E (QUANTUS+), No. 50WP1700 (BECCAL), No. 50WM2245A (CAL-II), No. 50WM2263A (CARIOQA-GE), No. 50WM2253A (AI-Quadrat), No. 50RK1957 (QYRO), No. 50WM2177 (INTENTAS), as well as No. 50NA2106 (QYRO+).
## Appendix A Phase-shift calculations
The phase shift \(\Phi\) of a single atom interferometer arises from the overlap of two wave packets that travelled along different arms of the interferometer. This propagation is encoded in the unitary time-evolution operator \(\tilde{U}_{j}\) associated with arm
\begin{table}
\begin{tabular}{c||c|c|c|c||c|c|c|c|c} \hline Quantity & eq. & \(\eta=10^{-15}\) & MICROSCOPE & LPF & GRACE-FO & \(\eta=10^{-17}\) & MICROSCOPE & LPF & GRACE-FO & Unit \\ \hline \hline \(\Delta\Phi_{i}^{a}\) & (19) & \(\pi/10\) & \(0.01\) & \(0.11\) & \(0.003\) & \(\pi/10\) & \(0.02\) & \(0.22\) & \(0.03\) & rad \\ \(\langle\dot{a}\rangle\) & (20) & \(3.2\times 10^{-11}\) & \(1.0\times 10^{-12}\) & \(5.3\times 10^{-11}\) & \(2.0\times 10^{-12}\) & \(2.6\times 10^{-13}\) & \(7.6\times 10^{-14}\) & \(5.4\times 10^{-13}\) & \(3.0\times 10^{-13}\) & m/s\({}^{3}\) \\ \(\Omega_{\omega_{\rm orb}}\) & (22) & \(3.3\times 10^{-9a}\) & \(1.2\times 10^{-8}\) & \(4.0\times 10^{-10}\) & - & \(3.3\times 10^{-10a}\) & \(1.2\times 10^{-8}\) & \(4.0\times 10^{-10}\) & - & rad/s \\ \(\Omega_{\omega_{\rm orb}/2}\) & (23) & \(3.8\times 10^{-5a}\) & \(3.7\times 10^{-8}\) & \(3.8\times 10^{-10}\) & - & \(5.1\times 10^{-6a}\) & \(3.7\times 10^{-8}\) & \(3.8\times 10^{-10}\) & - & rad/s \\ \(\Delta\delta\Phi_{\bar{\theta}}\) & (27) & \(4.5\times 10^{-3}\) & \(2.1\times 10^{-5}\) & \(1.9\times 10^{-6}\) & - & \(8.9\times 10^{-4}\) & \(2.6\times 10^{-5}\) & \(1.5\times 10^{-6}\) & - & rad \\ \(\langle\Omega\rangle\) & (29) & \(7.5\times 10^{-6}\) & \(1.3\times 10^{-9}\) & \(2.9\times 10^{-10}\) & - & \(5.4\times 10^{-7}\) & \(3.9\times 10^{-11}\) & \(4.5\times 10^{-12}\) & - & rad/s \\ \(\langle\dot{\Omega}\rangle\) & (28) & \(1.6\times 10^{-5}\) & \(4.1\times 10^{-9}\) & \(8.8\times 10^{-10}\) & - & \(1.3\times 10^{-7}\) & \(5.9\times 10^{-10}\) & \(5.0\times 10^{-11}\) & - & rad/s\({}^{2}\) \\ \hline \end{tabular}
* Can be relaxed by a factor \(10^{3}\) to address the modulation of the EP-violating signal (cf. (ii) of Sec. II.2).
\end{table}
Table 7: Requirements on satellite accelerations and rotations for \(\eta=10^{-15}\) and \(\eta=10^{-17}\). Both scenarios feature different atom interferometer sequences given in Tab. 1. The constraints are checked by analyzing the data from MICROSCOPE ([70] for acceleration and [66] for rotation), LISA Pathfinder (LPF) [71] and GRACE-FO [69]. Note that \(\Delta\Phi_{i}^{a}\) and \(\Delta\delta\Phi_{\bar{\theta}}\) are evaluated at frequencies \(fT_{c}>1\) whereas the quantities \(\langle\,\cdot\,\rangle\) are constraints on smaller frequencies \(fT_{c}<1\).
\(j=1,2\) that ends in the considered exit port. It includes the relevant momentum transfer at the points of atom-light interaction, as detailed below. In a typical geometry, both arms are generated from the same initial wave packet \(\ket{\psi}\), so that the expectation value of the overlap reads
\[\Theta=\bra{\psi}\hat{U}_{1}^{\dagger}\hat{U}_{2}\ket{\psi}=\mathcal{V}\exp({ \rm i}\Phi), \tag{10}\]
where we have defined the visibility \(\mathcal{V}\). There are different methods to calculate the phase shift, such as in phase space [77; 78] or using path integrals [56; 58; 72; 79], as well as representation-free [80; 81] and perturbative techniques [57].
### Path integrals
The most commonly used method is based on path integrals [72], where the overlap is usually obtained in position representation with \(\psi(\xi)=\bra{\xi}\psi\) so that it takes (in one dimension) the form
\[\Theta=\iiint\mathrm{d}x\mathrm{d}\xi_{1}\mathrm{d}\xi_{2}\psi^{*}(\xi_{1}) \bra{\xi_{1}}\hat{U}_{1}^{\dagger}\ket{x}\bra{x}\hat{U}_{2}\ket{\xi_{2}}\psi( \xi_{2}). \tag{11}\]
In principle, the propagator can be calculated by path integrals
\[\bra{x}\hat{U}_{j}\ket{\xi_{j}}=\int\limits_{\xi_{j}}^{x}\mathcal{D}\mathcal{X}\exp({\rm i}S_{j}[\mathcal{X}]/\hbar), \tag{12}\]
where the action functional \(S_{j}[\mathcal{X}]\) depends on the Hamiltonian that describes the motion along arm \(j\), so that Eq. (11) corresponds to the influence functional [82; 83]. For an exact description the complete path integral has to be evaluated, and so far there is no connection to a classical trajectory, let alone the classical action, since the evaluation requires an integration over _all_ trajectories \(\mathcal{X}\). Hence, the path integral is in general not associated with any vanishing variation of the action [84].
However, one can write \(\mathcal{X}=x_{j}+\nu\), where \(x_{j}(t)\) is the classical trajectory with initial condition \(\xi_{j}\) and \(\nu(t)\) are fluctuations with vanishing initial conditions. Expanding the action around the classical trajectories \(x_{j}\), the linear order of expansion vanishes since the classical trajectory follows Euler-Lagrange equations. Hence, one arrives [85] at
\[\bra{x}\hat{U}_{j}\ket{\xi_{j}}=\exp({\rm i}S_{j}[x_{j}]/\hbar)\int\mathcal{D} \nu\exp({\rm i}\delta S_{j}[\nu]/\hbar) \tag{13}\]
with
\[\delta S_{j}[\nu]=\frac{\partial^{2}S_{j}}{2\partial x_{j}^{2}}\nu^{2}+\frac{ \partial^{2}S_{j}}{2\partial\dot{x}_{j}^{2}}\dot{\nu}^{2}+\frac{\partial^{2}S_ {j}}{2\partial x_{j}\partial\dot{x}_{j}}\nu\dot{\nu}+\cdots, \tag{14}\]
where the derivatives are evaluated at \(\nu=0\) and the remaining path integral is the fluctuation integral. It can be calculated exactly for linear potentials and is independent of initial and final conditions. Since the momentum transfer that acts during beam splitter and mirror pulses can be modeled by a linear potential, a calculation for simple cases of atom interferometers is straightforward.
For a symmetric double-diffraction Mach-Zehnder interferometer with interrogation time \(T\) and acceleration \(a\) that closes in phase space, one arrives at a phase
\[(S_{2}-S_{1})/\hbar=2kaT^{2}+\Phi_{\mathrm{L}}(-T)-2\Phi_{\mathrm{L}}(0)+\Phi _{\mathrm{L}}(T) \tag{15}\]
and no further phase contributions arise from the fluctuation integral. In this case, one can associate the phase with the difference of classical action \(S_{2}-S_{1}\). Here, \(\Phi_{\mathrm{L}}(t)\) is the laser phase at time \(t\) and its discrete second derivative enters the phase.
Moreover, the fluctuation integral can also be evaluated for general quadratic Lagrangians, and hence, as an approximation for higher orders using the method of stationary phase. However, in this case the observed phase difference depends not solely on the difference of actions, because the wave packets do not generally overlap perfectly in phase space [86] and a separation phase occurs [58]. In addition, depending on the particular potential, the shape of the wave packets might change and lead to additional contributions. Nevertheless, one can use path integrals to calculate perturbative effects of rotations and quadratic potentials [72].
Such techniques can be used for phase estimations that are based on classical actions, also for potentials beyond quadratic order and without explicitly evaluating the fluctuation integral [56]. While these approaches give some insight into possible phase contributions, they are effectively semiclassical.
When evaluating the difference of actions between both classical trajectories, the virial theorem can be used to express the corresponding integrals solely through the midpoint trajectory \((x_{1}+x_{2})/2\) and the arm separation \(x_{1}-x_{2}\) for a linear potential [87]. One can also perform an expansion of the potential around this midpoint trajectory in orders of the arm separation [58]. This way, one obtains a convenient tool for a calculation of the action difference, but not for obtaining the exact Feynman propagator for arbitrary potentials by evaluating the fluctuation integral. One can circumvent such problems by treating weak potentials as a perturbation.
### Perturbative operator method
We rely on a perturbative but operator-valued method following the work of [57]. Let us assume, that the Hamiltonian inducing the motion along arm \(j\) takes the form \(\hat{H}_{j}=\hat{\mathcal{H}}_{j}+\hat{V}\). The first, unperturbed contribution
\[\hat{\mathcal{H}}_{j}=\frac{\hat{p}^{2}}{2m}-\hbar\sum_{\ell}[k_{j}^{(\ell)}\hat{x}+\Phi_{\mathrm{L}}^{(\ell,j)}(t)]\delta(t-T_{\ell})\]
includes the momentum transfer \(k_{j}^{(\ell)}\) on arm \(j\) of pulse \(\ell\) at time \(T_{\ell}\) (being equal to \(-T\), \(0\) and \(T\) respectively at the first, second and third pulse), and we have included the corresponding laser phase \(\Phi_{\mathrm{L}}^{(\ell,j)}\) in this effective potential. Moreover, we assume
that the perturbing potential has the form
\[V(\hat{x})=\sum_{n=1}^{N}c_{n}\hat{x}^{n} \tag{10}\]
that also incorporates a linear term, which accounts for perturbative accelerations and, in particular, accelerations in microgravity.
We then change into the interaction picture [57] with respect to the unperturbed Hamiltonian \(\hat{\mathcal{H}}_{j}\) to calculate the overlap from Eq. (11). We make use of the fact that the phase for a closed [86], unperturbed atom interferometer can be trivially calculated, e. g., using path integrals [72] as described above or by a representation-free method [80]. The remaining part of the overlap amounts to perturbatively calculating the Schwinger-Keldysh closed-time-path Green's function, which is equivalent to evaluating the influence functional known from the path-integral formalism [83]. Using a combination of Magnus expansion and cumulant expansion, one can show [57] that \(\Phi=\Phi_{\mathrm{L}}(-T)-2\Phi_{\mathrm{L}}(0)+\Phi_{\mathrm{L}}(T)+\Phi_{\mathrm{pert}}\), directly obtained from the overlap, consists of a phase induced by the unperturbed Hamiltonian and additional perturbations \(\Phi_{\mathrm{pert}}\) that can be divided into two contributions, namely
\[\begin{split}\Phi_{\mathrm{pert}}=&-\frac{1}{\hbar}\int\limits_{-T}^{T}dt\ [V(x_{1}(t))-V(x_{2}(t))]\\ &-\frac{1}{2\hbar}\int\limits_{-T}^{T}dt\left[\frac{\partial^{2}V}{\partial x^{2}}\bigg{|}_{x_{1}(t)}-\frac{\partial^{2}V}{\partial x^{2}}\bigg{|}_{x_{2}(t)}\right]\left[\sigma_{x}^{2}+\sigma_{v}^{2}t^{2}\right].\end{split} \tag{11}\]
Here, \(x_{j}(t)\) describes the classical unperturbed trajectory along arm \(j\), i. e., the one induced by the classical analog of \(\hat{\mathcal{H}}_{j}\). Moreover, \(\sigma_{x}^{2}\) is the initial width in position of the wave packet and \(\sigma_{v}^{2}\) is its initial velocity width. For this form, we have assumed that there is initially no correlation between position and momentum. The first contribution is just the perturbing potential \(V\) evaluated at the classical, unperturbed trajectories of both arms, whereas the second contribution accounts for imperfect overlap of wave packets due to their deformation.
We find for harmonic, cubic, and quartic perturbations, i. e., for \(N=4\), in a double-diffraction Mach-Zehnder interferometer
\[\begin{split}\Phi_{\mathrm{pert}}=&-\frac{2kT^{2}} {m}c_{1}-2\frac{2kT^{2}}{m}x(T)c_{2}+\kappa c_{3}\\ &+4\left[\kappa x(T)+\frac{kT^{2}}{m}(4x^{3}(T)-2T^{3}v_{0} \sigma_{v}^{2})\right]c_{4},\end{split} \tag{12}\]
with the abbreviation
\[\kappa=-\frac{kT^{2}}{m}\left[6x^{2}(T)+6\sigma_{x}^{2}+T^{2}\left(v_{0}^{2}+ \left(\frac{\hbar k}{m}\right)^{2}+7\sigma_{v}^{2}\right)\right]\]
and \(x(T)=x_{0}+v_{0}T\), where \(x_{0}\) and \(v_{0}\) correspond to the initial expectation value of position and velocity, respectively. In particular, we observe that wave packet deformations arise for cubic potentials, whereas for quadratic potentials initial conditions enter because the perturbation causes the interferometer to open [86]. The first term, which stems from linear potentials, has exactly the form \(2kaT^{2}\) discussed in Eq. (15).
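Equation (12) is straightforward to evaluate numerically. The following sketch (illustrative coefficients, mass and wave-packet parameters; not values taken from the text) implements it directly:

```python
import numpy as np

hbar = 1.054571817e-34

def phi_pert(c, k, m, T, x0, v0, sigma_x, sigma_v):
    """Perturbative phase of Eq. (12) for a quartic perturbation c = (c1, c2, c3, c4)."""
    c1, c2, c3, c4 = c
    xT = x0 + v0 * T
    kappa = -(k * T**2 / m) * (6 * xT**2 + 6 * sigma_x**2
                               + T**2 * (v0**2 + (hbar * k / m)**2 + 7 * sigma_v**2))
    return (-(2 * k * T**2 / m) * c1
            - 2 * (2 * k * T**2 / m) * xT * c2
            + kappa * c3
            + 4 * (kappa * xT + (k * T**2 / m) * (4 * xT**3 - 2 * T**3 * v0 * sigma_v**2)) * c4)

# Illustrative call with made-up coefficients and wave-packet parameters
print(phi_pert(c=(1e-30, 0.0, 0.0, 0.0), k=1.6e7, m=1.44e-25, T=25.0,
               x0=0.0, v0=1e-4, sigma_x=1e-3, sigma_v=1e-4))
```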
### Deriving constraints on the potential's coefficients
In order to reach the proposed target uncertainty, we require that each spurious phase shift induced by the different coefficients of the potential in Eq. (10) is smaller than the Eotvos signal \(2\eta g_{0}k_{\mathrm{AB}}T^{2}\). It should be noted that the coefficients \(c_{n}\) scale different parameters in the differential phase:
\[\begin{split}\delta\Phi=&\delta\Phi_{1}(k_{i},T,m_{i})c_{1}+\delta\Phi_{2}(k_{i},T,m_{i},x_{0,i},v_{0,i})c_{2}\\ &+\sum_{j=3}^{N}\delta\Phi_{j}(k_{i},T,m_{i},x_{0,i},v_{0,i},\sigma_{x,i},\sigma_{v,i})c_{j},\end{split} \tag{13}\]
where \(i\) marks the species and assuming \(c_{n}\) is species independent. The terms that are independent of statistical parameters can be suppressed by knowing the value of \(c_{n}\) up to an uncertainty \(\Delta c_{n}\) leaving:
\[\begin{split}\Delta\delta\phi=&|\delta\Phi_{1}| \Delta c_{1}+\sum_{j=2}^{N}[|\Delta\delta\Phi_{j}(\Delta\delta x_{0},\Delta \delta v_{0})|c_{j}\\ &+|\delta\Phi_{j}(\delta x_{0},\delta v_{0})|\Delta c_{j}+| \Delta\delta\Phi_{j}(\Delta\delta x_{0},\Delta\delta v_{0})|\Delta c_{j}] \end{split} \tag{14}\]
where \(\Delta\delta\Phi_{j}(\Delta\delta x_{0},\Delta\delta v_{0})=|\partial\delta \Phi_{j}/(\partial\delta x_{0})|\Delta\delta x_{0}+|\partial\delta\Phi_{j}/( \partial\delta v_{0})|\Delta\delta v_{0}\). Note that this is a pessimistic treatment. Since most of these contributions are uncorrelated, the favorable quadratic sum would also suffice.
Thus, we get requirements on the nominal values \(c_{n}\) for \(n>2\) and on the uncertainties \(\Delta c_{n}\) for all \(n\):
\[\begin{split} c_{j}&\leq 2\eta gk_{\mathrm{AB}}T^{2}/| \Delta\delta\Phi_{j}|,\quad j\in\mathbb{N}\setminus\{1\}\\ \Delta c_{j}&\leq 2\eta gk_{\mathrm{AB}}T^{2}/(| \delta\Phi_{j}|+|\Delta\delta\Phi_{j}|),\quad j\in\mathbb{N}.\end{split} \tag{15}\]
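A compact sketch of Eq. (15), assuming the per-unit-coefficient phase responses \(|\delta\Phi_{j}|\) and \(|\Delta\delta\Phi_{j}|\) have already been evaluated (the numbers in the example call are placeholders):

```python
def coefficient_bounds(eta, g0, k_AB, T, dPhi, ddPhi):
    """Requirements of Eq. (15).

    dPhi[j] and ddPhi[j] are |delta Phi_j| and |Delta delta Phi_j| per unit coefficient
    for j = 1, ..., N (index 0 is unused so that indices match the text).
    """
    signal = 2 * eta * g0 * k_AB * T**2
    c_max = {j: signal / ddPhi[j] for j in range(2, len(dPhi))}               # nominal values, j >= 2
    dc_max = {j: signal / (dPhi[j] + ddPhi[j]) for j in range(1, len(dPhi))}  # uncertainties, all j
    return c_max, dc_max

# Placeholder responses for j = 1 ... 4
c_max, dc_max = coefficient_bounds(1e-17, 9.81, 1.6e7, 25.0,
                                   dPhi=[0.0, 1.0, 2.0, 3.0, 4.0],
                                   ddPhi=[0.0, 0.1, 0.2, 0.3, 0.4])
```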
For some applications, the potential might not only be linear in the coefficients of interest. For example, for the black body radiation potential we find
\[V_{\mathrm{BBR}}(x)=\frac{2\alpha_{i}\sigma}{c\epsilon_{0}}T_{\mathrm{tube}}^{ 4}(x),\quad T_{\mathrm{tube}}(x)=\sum_{n=0}t_{n}x^{n} \tag{16}\]
yielding \(c_{n}=f(\{t_{m}\}_{m=0}^{n})\) where \(f\) denotes an arbitrary function. Thus, the constraints on \(t_{n}\) are codependent where the constraint on \(t_{n}\) depends on the value of \(t_{n-1}\),
\[\begin{split} t_{j}&\leq 2\eta gk_{\mathrm{AB}}T^{2}/|\Delta\delta\Phi_{j}(\{t_{m}\}_{m=0}^{j-1})|,\quad j\in\mathbb{N}\setminus\{1\},\\ \Delta t_{j}&\leq 2\eta gk_{\mathrm{AB}}T^{2}/(|\partial_{t_{j}}\delta\Phi(\{t_{m}\}_{m=0}^{n})|+|\partial_{t_{j}}\Delta\delta\Phi(\{t_{m}\}_{m=0}^{n})|),\quad j\in\mathbb{N}.\end{split} \tag{17}\]
This set of equations can be solved by assuming values for \(t_{0}\), \(\Delta t_{0}\) and \(t_{1}\). The second-order Zeeman effect can be treated analogously.
## Appendix B Deriving the angular acceleration transfer function
For the following derivation of the angular acceleration transfer function, we look at the configuration shown in Fig. 3
and work in an inertial frame whose origin coincides with the center of mass of the satellite for all times \(t\). The orientation is chosen such that the sensitive axis of the interferometer is aligned with the \(x\)-axis when the atoms are initialized. The mirror is assumed to be rectangular with a thickness of \(d_{\rm M}\) with its center of mass positioned at \({\bf r}_{\rm M}\). Small rotations of the mirror \(\theta_{\rm M}\) simply add to the effect of the rotation of the satellite \(\theta_{\rm S}\) such that the overall rotation is \(\theta(t)=\theta_{\rm M}(t)+\theta_{\rm S}(t)\).
In the inertial frame, the effective wave vector reads \({\bf k}_{i}=-k_{i}\times(\cos[\theta(t)],\sin[\theta(t)])^{T}\) and the initial velocity of the atoms is defined as \({\bf v}_{0}=\tilde{\bf v}_{0}+\mathbf{\Omega}\times\tilde{\bf r}_{0}\). For simplicity, initial accelerations are neglected. Here, \(\tilde{\bf r}_{0}\) and \(\tilde{\bf v}_{0}\) denote the initial position and velocity of the atoms in the rotating satellite frame. The resulting interferometer phase, \(\Phi_{i}\), can be deduced from the effective laser phase \(\phi_{i}(t)={\bf k}_{i}(t)\cdot{\bf r}_{i}(t)\) where \({\bf k}_{i}(t)\) and \({\bf r}_{i}(t)\) are, respectively, the effective wave vector and the c.m. position of species \(i\) at the pulse at time \(t\in\{-T,0,T\}\). We find
\[\begin{split}\phi_{i}(t)=& k_{i}(\tilde{r}_{x,i}(T_{ 0}+T+t)-(r_{{\rm M},x}+d_{\rm M}))\\ &+k_{i}\tilde{r}_{y,i}(T_{0}+T+t)\theta(t)-\frac{1}{2}k_{i}\tilde {r}_{x,i}(T_{0}+T+t)\theta(t)^{2}\\ &+\Theta(\theta(t)^{3}),\end{split} \tag{13}\]
where \(\tilde{r}_{x,i}(t)=r_{x,0,i}+(v_{x,0,i}-r_{y,0,i}\Omega_{0})t\) and \(\tilde{r}_{y,i}(t)=r_{y,0,i}+(v_{y,0,i}+r_{x,0,i}\Omega_{0})t\) denote the atoms' position in the rotated reference frame and \(T_{0}\) is the dead time measuring the duration from the release of the atoms until the first laser pulse at \(t=-T\) is applied. \(\Omega_{0}=d\theta/dt|_{t=-T_{0}-T}\) denotes the angular velocity of the satellite at the release of the atoms. In the final atom interferometer phase, \(\Phi_{i}=2[\phi_{i}(-T)-2\phi_{i}(0)+\phi_{i}(T)]\), all terms that are constant or linear in time vanish:
\[\begin{split}\Phi_{i}=& 2k_{i}[\tilde{r}_{y,i}(T_{0}) \theta(-T)-2\tilde{r}_{y,i}(T+T_{0})\theta(0)\\ &+\tilde{r}_{y,i}(T_{0}+2T)\theta(T)]\\ &+\Theta(\theta(t)^{2}).\end{split} \tag{14}\]
The sensitivity function of the differential phase \(\delta\Phi\) (see Eq. (15)) for a jump in the satellite rotation \(\Delta\theta(t)\) at time \(t\) is defined as [64]
\[g_{\theta}(t)=\lim_{\Delta\theta\to 0}\frac{\Delta\delta\Phi(\Delta\theta(t))}{ \Delta\theta(t)}. \tag{15}\]
Inserting Eq. (14) yields
\[g_{\theta}(t)=2k_{AB}\begin{cases}0,&t<-T\\ -(\delta r_{y,0}+\delta v_{y,0}T_{0}),&-T<t<0\\ (\delta r_{y,0}+\delta v_{y,0}(T_{0}+2T)),&0<t<T\\ 0,&t>T\end{cases}, \tag{16}\]
where \(k_{AB}=2k_{A}k_{B}/(k_{A}+k_{B})\). With this, we immediately obtain the corresponding transfer function by taking the Fourier transform of the sensitivity function [64]:
\[\begin{split} H_{\theta}(\omega)=4k_{AB}&[-2i(\delta r_{y,0}+\delta v_{y,0}(T_{0}+T))\sin^{2}(\omega T/2)\\ &+\delta v_{y,0}T\sin(\omega T)].\end{split} \tag{17}\]
## Appendix C Analyzing systematic uncertainties using the satellite simulator
In this section, we go into more detail how the SQUID simulator analyzes systematic and statistical uncertainties, using the orbit analysis in Sec. IV.3 as an example. The signal under consideration is given by
\[\delta\Phi(r_{S})=2[\eta g_{x}(r_{S})+2\delta r_{x,0}\Gamma_{xx}(r_{S})]k_{AB} T^{2}, \tag{18}\]
where \(r_{S}\) denotes the satellite's position and \(\Gamma_{xx}\) is the \(xx\)-component of the gravity gradient. For simplicity, we focus only on \(\Gamma_{xx}\), but this can easily extended to include the other components. Note that in the following the signal is sampled at certain satellite positions \(r_{S}(t_{i})\) where \(t_{i}\) marks the times a measurement is performed: \(t_{i+1}=t_{i}+T_{c}\).
For the numerical analysis, we construct the model used for the fit according to
\[\mathcal{M}=2\begin{pmatrix}g_{x}(r_{S}(t_{0}))&\Gamma_{xx}(r_{S}(t_{0}))\\ \cdots&\cdots\\ g_{x}(r_{S}(t_{n-1}))&\Gamma_{xx}(r_{S}(t_{n-1}))\end{pmatrix}kT^{2}, \tag{19}\]
for the free parameters \(p_{f}=(\eta,\delta r_{x,0})^{T}\) such that \(\delta\Phi=(\delta\Phi(r_{S}(t_{0})),\ldots,\delta\Phi(r_{S}(t_{n-1})))^{T}= \mathcal{M}\cdot p_{f}\).
Our analysis is performed in the following steps
1. Generate a signal \(\delta\tilde{\Phi}(r_{S}+\Delta r_{S})\) assuming a certain value for \(\eta\) and \(\delta r_{x,0}\) (see Eq. (18)) for a distorted circular orbit according to the Hill model \(\Delta r_{S}=(\Delta R,\,\Delta T,\,\Delta N)\) (see Eq. (30)) that is additionally subject to white noise (i.e., atomic shot noise).
2. Generate a model matrix \(\mathcal{M}\) according to Eq. (18) using the undistorted circular orbit from (1.).
3. Fit the generated signal (1.) using the model (2.) for the free parameters \(\eta\) and \(\delta r_{x,0}\) for various distortion strengths \(\Delta r_{S}\).
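A toy version of these three steps, with made-up stand-ins for the gravity-model columns and the orbit distortion, shows the mechanics of the fit (the least-squares estimator used here is formalized in Eqs. (20) and (21) below):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for 2*g_x*k_AB*T^2 and 2*Gamma_xx*k_AB*T^2 sampled along the orbit;
# a real run would evaluate them from the gravity model on the two orbits.
t = np.linspace(0.0, 20 * np.pi, 2000)
col_g = np.cos(t)
col_gamma = 1.0e-3 * np.sin(t)

p_true = np.array([1e-17, 1e-6])                               # (eta, delta r_x0)
signal = np.column_stack([col_g * (1 + 1e-4 * np.sin(0.5 * t)),  # "distorted orbit" column
                          col_gamma]) @ p_true
signal = signal + rng.normal(0.0, 1e-21, t.size)               # white (shot) noise

M = np.column_stack([col_g, col_gamma])                        # model built on the undistorted orbit
p_ols = np.linalg.inv(M.T @ M) @ M.T @ signal                  # ordinary least squares estimate
print(p_ols)   # recovered (eta, delta r_x0); a growing orbit distortion biases eta
```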
The fit can be obtained by a Generalized Least Squares (GLS) analysis where the best possible estimate of the free parameters \(p_{f}\) is obtained by
\[p_{f}^{\rm GLS}=(\mathcal{M}^{T}\Omega^{-1}\mathcal{M})^{-1}\mathcal{M}^{T} \Omega^{-1}\delta\tilde{\Phi} \tag{20}\]
where \(\Omega\) is the covariance matrix and \(v_{\rm GLS}=(\mathcal{M}^{T}\Omega^{-1}\mathcal{M})^{-1}\) the variance-covariance matrix. Assuming a white noise model, \(\Omega\) reduces to \(\Omega=\sigma^{2}\mathbf{1}\) where \(\sigma\) is the width of the noise distribution. In this case, the analysis simplifies to the Ordinary Least Squares (OLS) method defined as
\[p_{f}^{\rm OLS}=(\mathcal{M}^{T}\mathcal{M})^{-1}\mathcal{M}^{T}\delta\tilde{\Phi} \tag{21}\]
with its variance-covariance matrix being \(v_{\rm OLS}=\sigma^{2}(\mathcal{M}^{T}\mathcal{M})^{-1}\). |
2301.12331 | Time out of Mind: Generating Rate of Speech conditioned on emotion and
speaker | Voice synthesis has seen significant improvements in the past decade
resulting in highly intelligible voices. Further investigations have resulted
in models that can produce variable speech, including conditional emotional
expression. The problem lies, however, in a focus on phrase-level modifications
and prosodic vocal features. Using the CREMA-D dataset we have trained a GAN
conditioned on emotion to generate worth lengths for a given input text. These
word lengths are relative to neutral speech and can be provided, through speech
synthesis markup language (SSML) to a text-to-speech (TTS) system to generate
more expressive speech. Additionally, a generative model is also trained using
implicit maximum likelihood estimation (IMLE) and a comparative analysis with
GANs is included. We were able to achieve better performances on objective
measures for neutral speech, and better time alignment for happy speech when
compared to an out-of-box model. However, further investigation of subjective
evaluation is required. | Navjot Kaur, Paige Tuttosi | 2023-01-29T02:58:01Z | http://arxiv.org/abs/2301.12331v2 | # Time out of Mind: Generating Rate of Speech conditioned on emotion and speaker
###### Abstract
Voice synthesis has seen significant improvements in the past decade resulting in highly intelligible voices. Further investigations have resulted in models that can produce variable speech, including conditional emotional expression. The problem lies, however, in a focus on phrase-level modifications and prosodic vocal features. Using the CREMA-D dataset we have trained a GAN conditioned on emotion to generate word lengths for a given input text. These word lengths are relative to neutral speech and can be provided, through speech synthesis markup language (SSML), to a text-to-speech (TTS) system to generate more expressive speech. Additionally, a generative model is also trained using implicit maximum likelihood estimation (IMLE) and a comparative analysis with GANs is included. We were able to achieve better performances on objective measures for neutral speech, and better time alignment for happy speech when compared to an out-of-box model. However, further investigation of subjective evaluation is required.
## 1 Introduction
As humans, we are particularly fascinated by the aspects of ourselves that are difficult to put words to, yet are inherent to our intrinsic humanness. We want to be able to define the nature of these fundamental human instincts and, as an extension, be able to replicate them. One of these human aspects of particular interest is emotionality and expressivity. Researchers have been distinctly interested in being able to generate naturalistic human emotions. Yet, the interpretation and generation of image-based features, e.g. creating faces, expressions, and gestures, has seen significantly more research capacity than vocal features, despite the importance of voice as a social cue [15].
For many years, the primary concern for voice generation was intelligibility. Because of this, state-of-the-art generated voices can be mistaken for a human voice, but they still lack contextual adaptation, specifically for non-prosodic features. Features such as pause rate, word length, and spectral features have been poorly explored in vocal synthesis despite their impact on vocal perception [24]. This is in part due to the limitations of SSML and the fact that these features belong to the time, rather than the frequency, domain, which requires sequential generation techniques. A second issue is that most systems focus on phrase-level modifications [1]. In some cases, random variations are incorporated over the phrase given the variance of the feature [17]. However, little attention has been given to word-level manipulation, despite indications that these fine-grained modifications are of particular importance in human speech [7].
We propose a generative model that takes text as input and, when conditioned on an emotion, produces a set of word lengths for this phrase. This model is:

1. Linguistically contextual: linguistic features of the input text are considered when generating word lengths.
2. Granular: the result is per word rather than an average over a phrase.
3. Prosody independent: input prosody is not required to generate word lengths, only text.
4. Sequential: the sequential context of a text is considered.
## 2 Related Works
### Emotional Speech
Most often, human vocal modifications are for the purpose of creating 'deliberately clear speech' [3]. In some cases, speech is not modified for clarity, but rather to communicate a specific emotion or purpose, such as politeness [4]. Vocal modifications are produced without conscious effort to elicit a specific auditory feature; rather, they are produced as a result of achieving the aforementioned goals. A table of emotional vocal modifications for the basic emotions can be seen in Appendix D Fig. 2.
### Speech Synthesis
TTS has become an inexpensive and efficient means to create realistic voices [6, 25, 18]. Companies like Google1, Amazon2, and Microsoft3 all have their own variations of these vocoders. Tacotron-GST even expresses basic emotions; however, the modifications are only in the frequency domain, i.e. prosodic [21, 14, 22, 1]. Furthermore, TTS is constrained by SSML. Although the available features have broadened and include loudness, pitch, and rate-of-speech4, these must be manually manipulated; as such, there is a need to generate values for these features.
Footnote 1: [https://cloud.google.com/text-to-speech/](https://cloud.google.com/text-to-speech/)
Footnote 2: [https://aws.amazon.com/polly/](https://aws.amazon.com/polly/)
Footnote 3: [https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/](https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/)
Footnote 4: [https://cloud.google.com/text-to-speech/docs/ssml](https://cloud.google.com/text-to-speech/docs/ssml)
### Emotion-aware content generation
The recent works on generating content conditioned on emotion use variants of conditional GANs [8, 12, 9]. In this work, we have used a Wasserstein GAN for generating relative word lengths, for the first time to our knowledge. We also experiment with the implicit maximum likelihood estimation model [13] to generate more robust and 'realistic' data.
## 3 Dataset
We used the CREMA-D dataset [5]. This dataset consists of 91 actors, 48 males and 43 females, between the ages of 20 and 74. The actors came from a variety of races and ethnicities (African American, Asian, Caucasian, Hispanic, and Unspecified). Leading up to and during recording, an acting coach was present to help induce emotion in the actors. The actors spoke 12 sentences that were determined to be emotionally neutral [19]. The sentences are listed in Appendix C.
The actors produced 6 basic emotions: anger, disgust, fear, happiness, neutral, and sadness. Surprise, although considered to be one of the basic emotions, was not included as the acting coach suggested this emotion contained too many conflicting sub-emotions to be of use. Each of the emotions was produced at one of 4 levels: low, medium, high, or unspecified.
This work produced 7442 clips that are each available as multi-modal video, audio, or image-only video. The dataset was then validated by 2443 raters. Each validator rated 90 unique clips: 30 audio, 30 visual, and 30 audio-visual. This resulted in 95% of the clips having more than 7 ratings. Of these validations, the intended emotion was selected 40.9%, 58.2%, and 63.6% of the time for audio-only, visual-only, and audio-visual data respectively.
### Data Preparation
To prepare our data we wanted to be sure we were only including audio clips of high-quality emotional replication. To ensure this we used the Krippendorff's alpha provided by the authors. Krippendorff's alpha is a measure of inter-rater agreement that is able to handle categorical and missing responses [10, 11]. An \(\alpha\geq 0.823\) is considered to be good agreement, with \(0.667\leq\alpha\leq 0.823\) considered acceptable agreement [20]. We decided to use the cutoff of 0.667 for the audio-only rating (acceptable agreement), as this resulted in 3413 audio clips, which already greatly reduced our training sample. We then defined the ground truth label as the final label for the model since, given an acceptable rater agreement, this should be representative of perceived emotion.
To extract linguistic information from the 12 phrases we used the spaCy library5. We extracted part-of-speech tagging (POS), dependency parsing (DEP), and lemmatization (lemma). We then extracted word lengths and pause lengths with Gentle Aligner6. Gentle Aligner maps timestamps at both the word and phoneme level, including the start and end times of each word. Using these start and end times we were able to calculate the duration of each word. To ensure that duration is independent of word length, we calculated each of the word lengths relative to the neutral word in the given phrase for each of the emotions. This also results in an optimal encoding for SSML, as a rate-of-speech tag can be provided as a percentage relative to neutral speech, e.g. <prosody rate="50%">.
Footnote 6: [https://github.com/lowerquality/gentle](https://github.com/lowerquality/gentle)
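Since the relative word lengths are ultimately consumed by a TTS engine through SSML, the mapping from aligned durations to prosody-rate tags can be illustrated with a small sketch. This is not the authors' code; the ratio-based definition of relative length and the rate inversion are assumptions for illustration.

```python
# Hypothetical sketch: per-word durations (e.g. from a forced aligner) ->
# relative word lengths -> an SSML string with per-word <prosody rate> tags.

def relative_word_lengths(emotion_durations, neutral_durations):
    """Duration of each word relative to the same word spoken neutrally (assumed ratio)."""
    return [e / n for e, n in zip(emotion_durations, neutral_durations)]

def to_ssml(words, rel_lengths):
    """A rate above 100% speaks faster (shorter word), hence the inversion."""
    parts = []
    for word, rel in zip(words, rel_lengths):
        rate = int(round(100.0 / rel))
        parts.append(f'<prosody rate="{rate}%">{word}</prosody>')
    return "<speak>" + " ".join(parts) + "</speak>"

words = ["It's", "eleven", "o'clock"]
neutral = [0.21, 0.40, 0.45]   # seconds, from the forced aligner
angry = [0.18, 0.52, 0.60]
print(to_ssml(words, relative_word_lengths(angry, neutral)))
```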
## 4 Modelling
Given a text sequence, we want to generate the word length relative to neutral speech for each word in a phrase. We use emotion as an input condition so that the output word lengths can be manipulated to generate speech of varying emotions, like happy or angry. To model this, we consider a conditional generative model architecture, specifically Wasserstein GANs [2], as shown in Fig. 1. The generator architecture takes in emotion and text, along with random noise, and generates a sequence of word lengths. The discriminator network learns to differentiate the generated word lengths from the ground truth.
### GAN architecture
Inspired by [8], the generator network is made up of multiple sub-networks: text, emotion and noise encoders and a word length decoder.
The text encoder takes text as input in the form of a sequence of 1-hot encoding vectors and maps it into a text latent code. Given the limited size of the vocabulary in the CREMA dataset, the size of the 1-hot vector is 54. These vectors are mapped into a dense space using a linear neural layer. The vector sequence is then passed into a bi-directional LSTM module to learn the sequential context, followed by another linear layer to further reduce the dimensionality of the latent code. This results in a sequence of latent codes of length 20. The emotion encoder is a simple stack of two linear layers which takes emotion as a 1-hot encoded vector and maps it to a latent space of length 3. All the outputs of the text and emotion encoders are concatenated together.
The noise encoder is also a simple linear layer that takes noise as its input. The noise is sampled from a normal distribution with a mean of 0 and a variance of 0.7. The variance is chosen such that it is similar to the value of the rest of the encoders. The noise is stacked and reshaped such that the dimensions are the same as the outputs from the rest of the encoders and then added to the outputs. This gives the final latent code for the generator.
The decoder takes in the latent code and learns to construct the word length sequence relative to the neutral speech. Since the output is sequential in nature, we use a stack of two LSTM layers followed by a linear layer to learn the desired sequence. This sequence represents the generated word length for speech. In addition, another head with a linear layer is used to predict the emotion from the generated output to ensure that the emotional context is captured correctly.
The discriminator takes in word length sequences and learns to predict whether it belongs to the real or fake (generated) data class. We use two layers of LSTM modules, along with linear layers for learning the classification.
Figure 1: GAN model pipeline
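For readers who prefer code, the following PyTorch sketch mirrors the architecture described above. Sizes stated in the text are kept (vocabulary of 54, text latent code of length 20, emotion code of length 3); hidden sizes, the noise handling, and other details are assumptions rather than the authors' exact implementation.

```python
# A minimal PyTorch sketch of the generator/discriminator described in Sec. 4.1.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, vocab=54, n_emotions=6, text_dim=20, emo_dim=3, hidden=64):
        super().__init__()
        self.embed = nn.Linear(vocab, hidden)              # 1-hot words -> dense codes
        self.text_rnn = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.text_proj = nn.Linear(2 * hidden, text_dim)   # sequence of length-20 codes
        self.emo_enc = nn.Sequential(nn.Linear(n_emotions, 16), nn.ReLU(),
                                     nn.Linear(16, emo_dim))
        self.dec_rnn = nn.LSTM(text_dim + emo_dim, hidden, num_layers=2,
                               batch_first=True)
        self.len_head = nn.Linear(hidden, 1)               # relative word length per word
        self.emo_head = nn.Linear(hidden, n_emotions)      # auxiliary emotion prediction

    def forward(self, text_onehot, emo_onehot, noise_std=0.7 ** 0.5):
        # text_onehot: (B, T, 54) float, emo_onehot: (B, n_emotions) float
        t, _ = self.text_rnn(self.embed(text_onehot))
        t = self.text_proj(t)                               # (B, T, text_dim)
        e = self.emo_enc(emo_onehot).unsqueeze(1).expand(-1, t.size(1), -1)
        z = torch.cat([t, e], dim=-1)
        z = z + noise_std * torch.randn_like(z)             # additive noise on the latent code
        h, _ = self.dec_rnn(z)
        return self.len_head(h).squeeze(-1), self.emo_head(h.mean(dim=1))

class Discriminator(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(1, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, 1)                     # Wasserstein critic score

    def forward(self, word_lengths):                        # (B, T)
        h, _ = self.rnn(word_lengths.unsqueeze(-1))
        return self.out(h[:, -1]).squeeze(-1)
```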
### Objective function
The discriminator loss is the negative Wasserstein distance between the generator distribution and the real data distribution. It approximates the Earth Mover (EM) distance, which has been shown theoretically to allow gradual optimization of GAN training. The measure is approximated by enforcing the K-Lipschitz constraint on the weights so that they lie in a compact space. One easy approach to enforce this is to clamp the weights within a small region. We consider [-0.03, 0.03] for our experiments. The discriminator \(D\) tries to maximize \(D(x)-D(G(z))\) while the generator \(G\) tries to maximize \(D(G(z))\), where \(x\) is real and \(z\) is generated data. The generator is updated less often than the discriminator, for instance, once in every five epochs.
\[L_{D}=\max_{w\in W}E_{x\sim P_{r}}[D_{w}(x)]-E_{z\sim Z}[D_{w}(G_{\theta}(z))] \tag{1}\]
\[L_{G}=\min_{\theta}-E_{z\sim Z}[D_{w}(G_{\theta}(z))] \tag{2}\]
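A hedged sketch of how Eqs. (1)-(2) translate into a training step is given below: the critic is trained with the Wasserstein loss and its weights are clamped to [-0.03, 0.03], while the generator is updated less frequently. Optimizer choices and the exact schedule are assumptions for illustration.

```python
# Sketch of one WGAN training step with weight clipping (not the authors' code).
import torch

def train_step(G, D, opt_G, opt_D, text, emo, real_lengths, step, clip=0.03, g_every=5):
    # Critic update: maximize D(x) - D(G(z)) (i.e., minimize its negation)
    fake_lengths, _ = G(text, emo)
    d_loss = -(D(real_lengths).mean() - D(fake_lengths.detach()).mean())
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()
    for p in D.parameters():          # enforce the K-Lipschitz constraint by clamping
        p.data.clamp_(-clip, clip)

    # Generator update (less often): maximize D(G(z))
    if step % g_every == 0:
        fake_lengths, _ = G(text, emo)
        g_loss = -D(fake_lengths).mean()
        opt_G.zero_grad()
        g_loss.backward()
        opt_G.step()
    return d_loss.item()
```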
### Experiments
We use the CREMA-D dataset [5] for our experiments, which consists of 6 different emotion classes and 3413 audio clips of 12 different text sentences from 91 speakers (Appendix 3). In order to study the model improvements, we employ two different approaches to present our data (Fig. 17). In the first approach, we consider word length distribution plots for a pair of word indices in a phrase and compare how the word lengths are distributed for the generated data against the real dataset. Since this method only provides insight into a subset of word length space, another approach is to summarize the dataset using mean and variance to compare both distributions. However, the resulting plot is a concentrated graph which is hard to visually understand. So we consider word length distribution plots for all our experiments and compare models trained for 3 emotion classes i.e _Happy_, _Angry_, and _Neutral_ for 1000 epochs each. We also consider mean square error to understand how far generated data is from the mean of the real population, as presented in Table 4.3.4.
#### 4.3.1 Conditioning on speaker
As shown in Fig. 6, there is a significant variance in speech rates of different speakers, and this information is missed when the model is conditioned on only emotion. This was expected given the results of our exploration in Appendix A. This results in low-quality generated word lengths, as seen in distribution plot Fig. 2a. To incorporate speaker information, we introduce a speaker encoder that takes in a 1-hot encoded vector of speaker identity and produces a latent code, which is then concatenated along with the outputs from the text and emotion encoders. Hence, the model is now conditioned on emotion and speaker.
#### 4.3.2 Using reconstruction loss
In addition to the Wasserstein loss defined in Section 4.2, the model is intermittently trained on reconstruction loss between the generated word lengths and the ground truth. This helps the model to converge faster and generates a better data distribution, as seen in Fig. 3a and Fig 2e,c respectively.
#### 4.3.3 Using dynamic length input
The dataset comprises 12 sentences of varying lengths. A usual approach to handle varying-length input is to pad the sequence with 0s or a small constant. Using the padded input, the model does not converge and generates undesired outputs (Figure 2b), which can be explained by the nature of our target data. Since we are dealing with word lengths relative to neutral speech, most of the target data values are 0 or very small, which presents sparsity. With additional padding of 0s, the target data becomes more sparse and misleading. We instead use gradient accumulation over a sequence of inputs to support batch training of the model, which provides better convergence, as shown in Fig. 3c.
#### 4.3.4 Using POS tags as input
The POS tags capture the structure of a sentence, which implies that any model trained on POS tags in place of the actual text should be capable of generalizing better to large datasets. Because of this, we initially used POS tags as input. Although the resulting model converges, the generated dataset shows characteristics of mode collapse and does not capture the spread of the real distribution (Fig. 2f).
### Implicit maximum likelihood estimation (IMLE)
We extend our experiments by training an IMLE model to learn the data distribution and generate diverse word lengths. As presented in Fig. 4, the IMLE model is similar to the GAN architecture (Fig. 1), with differences in noise injection and the loss function. We use text and emotion encoders to get a generated latent code. Next, \(m\) samples of noise from a normal distribution \(N(0,0.7)\) (where 0.7 is the variance of the latent codes from the pre-trained model) are added to the generated latent code independently and fed into the decoder to generate \(m\) different outputs. Among these outputs, the sample with the minimum distance from a ground truth sample is chosen and the Euclidean distance, or L2 loss, is calculated for optimization. To help the model train faster, the encoders and decoder are pre-trained to predict the word length sequence (without using noise as input) and optimized using the L2 loss between the predicted and ground truth sequence.
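A minimal sketch of the IMLE objective just described is given below, assuming `encode` denotes the pre-trained text/emotion encoders and `decode` the word-length decoder; the distance computation and sample selection are simplified.

```python
# Sketch of the IMLE loss: sample m noisy latent codes, keep the nearest
# generated sample per ground truth sequence, and minimize its squared L2 distance.
import torch

def imle_loss(encode, decode, text, emo, real_lengths, m=10, noise_var=0.7):
    z = encode(text, emo)                                   # (B, T, d) latent codes
    best = None
    for _ in range(m):
        noisy = z + noise_var ** 0.5 * torch.randn_like(z)  # m independent noise samples
        gen = decode(noisy)                                 # (B, T) generated word lengths
        dist = ((gen - real_lengths) ** 2).sum(dim=1)       # squared L2 per sample
        best = dist if best is None else torch.minimum(best, dist)
    return best.mean()                                      # pull nearest sample toward truth
```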
Since the IMLE model learns to pull the generated samples closer to the real data, after 1000 epochs the generated samples are more representative of the true data when compared to GAN results after the same number of epochs (Fig. 2c,d). After 4000 epochs, the generated data from the IMLE model still appears more diverse than the GAN results. As shown in Table 4.3.4, the MSE values are slightly higher for IMLE-generated data, which is potentially because GANs learn a few modes of the data very well while not accounting for the rest of the data. On the other hand, the IMLE model has tried to learn the entire distribution, introducing more noise for all the data points, which could improve by training the model further.
Figure 3: Discriminator loss for different settings, as compared to the final model (cyan line)
Figure 2: Generated word length distribution against empirical distribution for different experiments using GAN and IMLE model. The dots and the crosses represent generated and real data respectively.
## 5 Results
### Objective Results
To evaluate our model's performance, we generated ground truth audio samples by directly feeding word lengths to Azure's TTS system7 using "en-US-JennyNeural", conditioned on a given emotion. These word lengths were the average over all participants and intensities for each phrase. We then generated the baselines, which were the out-of-the-box Azure TTS outputs for each emotion, using the same SSML setup as the ground truth without word lengths. We once again used Gentle Aligner to extract word lengths from the baseline.
Footnote 7: [https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/](https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/)
Three evaluation methods were used: RMSE, Pearson's correlation coefficient (PCC), and Dynamic Time Warping (DTW) with frame disturbance (FD). As our model is non-deterministic, the PCC and RMSE are calculated as an average over all generated voices for all phrases. The results can be seen in Table 1.
DTW allows us to assess how well-aligned the amplitudes are for two waveforms. The frame disturbance is the average sum of the squared difference in this alignment. Essentially this allows us to see how well-aligned the timing is for two signals of the same amplitude. The resulting DTW for the baseline as well as one of our generated phrases can be seen in Appendix B.
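The objective measures can be summarized in the following hedged sketch: RMSE and Pearson correlation over word lengths, plus a DTW-based frame disturbance between two waveforms. The fastdtw package is an assumed stand-in for the DTW implementation actually used.

```python
# Illustrative metric helpers (not the authors' evaluation code).
import numpy as np
from scipy.stats import pearsonr
from fastdtw import fastdtw

def rmse(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def pcc(a, b):
    return float(pearsonr(a, b)[0])

def frame_disturbance(sig_a, sig_b):
    """Mean squared amplitude difference along the DTW alignment path."""
    sig_a, sig_b = np.asarray(sig_a, float), np.asarray(sig_b, float)
    _, path = fastdtw(sig_a, sig_b)
    return float(np.mean([(sig_a[i] - sig_b[j]) ** 2 for i, j in path]))
```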
### Subjective Results
Due to the time and resource limitations of human participants, we have not completed a thorough subjective evaluation. Ideally, a mean opinion score (MOS) and AB preference test would be conducted. However, the baseline, ground truth, and generated speech can be listened to in our supplemental material. As our model is non-deterministic, unlike the baseline, we generated 2 versions of each phrase for both angry and happy voices.
## 6 Conclusion
In this work, we generated word lengths for speech relative to neutral emotion, conditioned on emotion and speaker. We experimented with GAN and IMLE models for generation where IMLE provides more diverse outputs while GANs provide lower mean square error from real datasets. The implementation can be further extended to more emotion classes and larger datasets with more diverse text in order to generalize the model. Overall our model performed well for happy and neutral speech but was lacking in its ability to generate accurate angry word lengths. We do, however, only have an objective assessment and the goal of emotion generation is perceptual. Given that objective results often do not reflect subjective performance [23] a subjective assessment with several raters needs to be completed to be sure of the model's performance.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \hline
 & \multicolumn{3}{c|}{Angry} & \multicolumn{3}{c|}{Happy} & \multicolumn{3}{c|}{Neutral} \\ \hline
**Model** & **RMSE** & **PCC** & **FD** & **RMSE** & **PCC** & **FD** & **RMSE** & **PCC** & **FD** \\ \hline
Baseline & 0.059 & 0.968 & 0.201 & 0.054 & 0.965 & 0.139 & 0.040 & 0.982 & n/a \\
GAN & 0.096 & 0.969 & 0.403 & 0.060 & 0.961 & 0.009 & 0.000 & 1.000 & n/a \\ \hline
\end{tabular}
\end{table}
Table 1: Objective results comparison between generated TTS and baseline TTS
Figure 4: IMLE model pipeline |
2310.12470 | RecolorCloud: A Point Cloud Tool for Recoloring, Segmentation, and
Conversion | Point clouds are a 3D space representation of an environment that was
recorded with a high precision laser scanner. These scanners can suffer from
environmental interference such as surface shading, texturing, and reflections.
Because of this, point clouds may be contaminated with fake or incorrect
colors. Current open source or proprietary tools offer limited or no access to
correcting these visual errors automatically.
RecolorCloud is a tool developed to resolve these color conflicts by
utilizing automated color recoloring. We offer the ability to delete or
recolor outlier points automatically, with users only needing to specify
bounding box regions to affect colors. Results show a vast improvement of the
photo-realistic quality of large point clouds. Additionally, users can quickly
recolor a point cloud with set semantic segmentation colors. | Esteban Segarra Martinez, Ryan P. McMahan | 2023-10-19T05:04:22Z | http://arxiv.org/abs/2310.12470v1 | # RecolorCloud: A Point Cloud Tool for Recoloring, Segmentation, and Conversion
###### Abstract.
Point clouds are a 3D space representation of an environment that was recorded with a high precision laser scanner. These scanners can suffer from environmental interference such as surface shading, texturing, and reflections. Because of this, point clouds may be contaminated with fake or incorrect colors. Current open source or proprietary tools offer limited or no access to correcting these visual errors automatically.
RecolorCloud is a tool developed to resolve these color conflicts by utilizing automated color recoloring. We offer the ability to delete or recolor outlier points automatically, with users only needing to specify bounding box regions to affect colors. Results show a vast improvement of the photo-realistic quality of large point clouds. Additionally, users can quickly recolor a point cloud with set semantic segmentation colors.
Point clouds, 3D Point Sets, Tools, Segmentation, Color Correction
### Editors with Recoloring as Feature
Because recoloring processes differ between editors, we separate recoloring into _direct recoloring_ and _segmentation recoloring_. Direct recoloring refers to the ability to directly recolor points in order to improve the quality of the point cloud. Segmentation recoloring refers to recoloring the point cloud with high-contrast colors in order to create coarse segmentation categories of objects in the point cloud.
As seen in **Table 1**, there is an equal number of tools that support direct and segmentation recoloring. However, a majority of the tools are proprietary and closed source, with two of the three direct recoloring tools being closed source. This leaves CloudCompare [(13)] as the only open-source tool that supports direct recoloring.
### Current Open-source Tools
Not all of the point cloud editors provide large-scale point cloud recoloring. Of the open-source tools tested, Point Cloud Visualizer, Semantic Segmentation Editor [(10)], and CloudCompare [(13)] are drastically slow or crash while performing edits to point clouds that are larger than 100M points. This limits the capability of editing large point clouds, which have a high number of points for detail.
Open-source segmentation tools are common; for example, there is the Point Cloud Labeling Tool [(2)], which provides a tool for labeling Velodyne data as collected for the KITTI dataset. Other tools include Semantic Segmentation Editor [(10)], SUSTech Points [(8)], and 3D BAT [(25)]. The custom editing tools developed by Alexandros Peteinarelis [(17)] support segmenting point clouds; however, there does not seem to be a download location for the software discussed in the paper.
Of the discussed editors, Semantic Segmentation Editor[(10)] and 3D BAT[(25)] provide support for creating bounds based on pre-existing clusters, also known as bounding boxes. Bounds provide coarse selection of points in the point cloud but have the ability of enabling and disabling clusters of the point cloud for consideration.
### Impact of Noise on Point Clouds
There are papers that state the importance of clean or "denoised" point clouds. Noise can impact the subjective quality of the point cloud, as detailed in a study [(23)] where colored point clouds were subjectively evaluated. Another study [(9)] created a dataset of objects with various types of artificially introduced noise and corresponding subjective scores from naive subjects. A study [(3)] observed how color and the glossiness or flatness of surface shading affect the distance and color estimated by two laser scanners.
In particular, outdoor datasets are prone to outlier noise, often produced by the laser scanner capturing geometry from moving objects such as flags or trees and merging incorrect colors from the environment. One paper [(22)] recognized this to be a problem in low-quality point clouds and proposed a solution to repair point cloud geometry errors introduced by noisy laser scanning. Another paper [(5)] discussed segmentation and how noise impacted some machine learning algorithms in distinguishing vegetation and building portions.
Point cloud color noise can also be introduced by operations such as data file compression. [(4)] presents BitDance, an automated point cloud quality assessment tool that uses factors such as color and geometry to determine whether a point cloud was affected by data compression.
## 3. RecolorCloud
_RecolorCloud_ is a recoloring and automated deletion tool that uses bounding boxes for selecting and editing the point cloud. The tool is designed to edit large-scale point clouds in order to edit and delete colors from a point cloud. By using bounds, it is also possible to recolor the point cloud with solid colors or split the point cloud into individualized point clouds. As an additional feature, RecolorCloud provides convenient file conversion between different popular point cloud formats.
_RecolorCloud_ provides the following features:
* Large point cloud support (>100 million points)
* Recoloring and deleting points based on coloring criteria including segmentation
* File conversion between popular file formats
* Fragmentation of point cloud into smaller ones based on bounds
* Open source and free of charge
The tool was created to be easy to use for researchers and novice users in point cloud editing. The main computational requirement for running this tool is having enough RAM to load the point cloud under consideration. This project was completed and tested using a computer with an Intel 10850k, 128GB of DDR4 RAM, and a Nvidia GTX 2080.
### Implementation and Architecture
This project was implemented using Python 3.6.3 with an Anaconda environment for dependency control. **Figure 1** shows the user interface that exposes RecolorCloud's functions. The application runs standalone; however, it requires input data representing the bounding boxes that are meant to be edited. Currently we provide those bounding boxes using another Python tool called LabelCloud [(18)]. LabelCloud opens the point cloud and provides a convenient interface for creating and placing bounding boxes. The Python packages used include Open3D [(24)], SciPy [(20)], Numpy [(6)], and
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Editor Name** & **OS?** & **P?** & **Can Recolor?** \\ \hline
RhinoTerrain [(16)] & No & Yes & Segmentation \\ \hline
CloudCompare [(13)] & Yes & No & Direct \\ \hline
Point Cloud Visualizer [(19)] & No & Yes & Direct \\ \hline
Vercator Cloud [(15)] & No & Yes & Segmentation \\ \hline
TCP Point Cloud Editor [(1)] & No & Yes & Direct \\ \hline
Semantic Segmentation Edtr. [(10)] & Yes & No & Segmentation \\ \hline
3D BAT [(25)] & Yes & No & No \\ \hline
SUSTech Points [(8)] & Yes & No & Segmentation \\ \hline
Custom Editing Tools [(17)] & No & No & Segmentation \\ \hline
\end{tabular}
\end{table}
Table 1. Categories of editors and their respective features, Open Source (OS) and Proprietary (P)
PyQt5. The dependency list was explicitly kept short so as to minimize the number of extra packages installed.
### Usage
In all cases, a point cloud has to be loaded before it can be edited; this also applies when the point cloud only needs to be converted between file types. If directly or semantically recoloring the point cloud, a LabelCloud bounding file and a custom text file are used to control the semantic colors and enable the different labeled bounds.
After loading the three files, the user has the option of semantically recoloring the bounded regions, deleting regions of the point cloud, recoloring them using criteria, or splitting the point cloud into smaller ones.
### Feature 1: Recoloring points
Recoloring points is done by the user selecting the option in the UI to recolor the points in the bounding boxes. Recoloring done by _RecolorCloud_ primarily works in the RGB 3D space. The user then can select between two different recoloring algorithms as described in the following subsections:
#### 3.3.1. Spherical Recoloring
This technique utilizes the color space from a bounding box, finds the mean color point from the color space, and generates a sphere around it. All points inside the sphere are preserved, while points outside the sphere are recolored using colors of points inside the sphere. This technique is optimal for finding the average color of a region, such as a tree, pole, or bushes.
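A NumPy sketch of the spherical recoloring idea is given below. It assumes RGB colors in [0, 1], a boolean `mask` selecting the points inside the user's bounding box, and a user-chosen sphere radius; the exact replacement rule used by the tool may differ.

```python
# Illustrative spherical recoloring (not the tool's exact implementation).
import numpy as np

def spherical_recolor(colors, mask, radius=0.15, rng=np.random.default_rng(0)):
    region = colors[mask]                                   # colors of points in the bbox
    center = region.mean(axis=0)                            # mean color of the region
    inside = np.linalg.norm(region - center, axis=1) <= radius
    donors, outliers = region[inside], ~inside
    if len(donors) == 0 or not outliers.any():
        return colors.copy()
    # replace each outlier color with a randomly drawn in-sphere color
    region[outliers] = donors[rng.integers(0, len(donors), outliers.sum())]
    out = colors.copy()
    out[mask] = region
    return out
```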
#### 3.3.2. RGB Bounding Box
This technique takes the color space from all unique colors inside a bounding box and generates a bounding box in the RGB space and corresponding centroid. The user defines a new bounding box region which is used to move the centroid of the first bounding box and scale it down to the size of the user-defined bounding box. This technique is optimal for changing the color of a selected region while retaining the brightness and shading of points inside the selected bounding box.
#### 3.3.3. RGB Color Substitution
This is a direct color replacement of colors based on custom file that define colors for each bounding box. This is useful for recoloring entire portions of a point cloud into a semantically segmented point cloud. This technique only focuses on substituting the colors inside the bounding boxes and recolors them with user-defined colors. Points not inside the bounding boxes are not recolored and removed from the final point cloud.
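A sketch of this substitution step is shown below, assuming `boxes` is a list of (point_mask, rgb) pairs derived from the LabelCloud bounds and the custom color file; points outside every box are dropped, as described above.

```python
# Illustrative semantic color substitution (hypothetical helper, not the tool's code).
import numpy as np

def semantic_recolor(points, colors, boxes):
    keep = np.zeros(len(points), dtype=bool)
    out = colors.copy()
    for mask, rgb in boxes:
        out[mask] = np.asarray(rgb, dtype=out.dtype)        # one flat color per bound
        keep |= mask
    return points[keep], out[keep]                          # drop unbounded points
```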
### Feature 2: Deleting outlier colors
This feature utilizes the same algorithms as outlined in subsections 3.3.1 and 3.3.2. Functionally, however, outlier points are handled differently than in the recoloring algorithms. The behavior of the algorithms changes as follows:
* Spherical Recoloring: Points that lie outside the sphere are deleted rather than recolored.
* RGB Bounding Box: Similarly, points whose colors lie outside the user-defined domain are deleted.
### Feature 3: Splitting the point cloud
This technique splits the point cloud into individual components as defined by bounding boxes. This feature is useful for cases such as a large-scale point cloud that needs to be broken down into individually labeled point cloud objects.
Figure 1. User Interface of the RecolorCloud Software.
### Feature 4: Point cloud conversion
An additional feature provided by RecolorCloud is the ability to convert point clouds between different formats. The application can load and save files in the following formats: las, laz, xyz, xyzn, xyzrgb, pts, ply, and pcd. This allows users to efficiently convert large point clouds and export them quickly without crashing the application due to visualization or unsupported file types.
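As a hedged sketch, the conversion path for the formats Open3D reads natively (xyz, xyzn, xyzrgb, pts, ply, pcd) can be as simple as the snippet below; las/laz would additionally need a reader such as laspy, which is omitted here, and the file names are hypothetical.

```python
# Minimal Open3D-based conversion sketch.
import open3d as o3d

def convert(src_path, dst_path):
    cloud = o3d.io.read_point_cloud(src_path)   # format inferred from the extension
    o3d.io.write_point_cloud(dst_path, cloud)

convert("greek_park.ply", "greek_park.pcd")
```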
## 4. Case Studies
RecolorCloud has been successfully applied to different popular datasets and case studies. Below are the results obtained from different case applications. For consistency in previewing the results, the before and after images of the point clouds are shown using Autodesk Recap (Ghezani et al., 2017) with a solid black background.
### Greek Park Dataset Case Study
This is a dataset that was collected with the help of the University of Central Florida's (UCF) ChronoPoints laser scanning project(Ghezani et al., 2017). ChronoPoints provided a dataset collected in-situ of UCF's Greek Park fraternity road and buildings. The dataset is approximately 192 meters long by 172 meters wide, providing access to environmental elements such as trees and buildings. This dataset contained multiple noisy outliers within its trees. Additionally, a semantically segmented version of this dataset was generated.
#### 4.1.1. Recoloring Outlier Points
This dataset had issues with almost all of the trees, as seen in **figure 2**. As this was an outdoor dataset, the trees became contaminated because the wind moved the leaves, which were registered by the laser scanner and then merged with the photographs captured by the scanner. The errors manifested as white points with a combination of blue-green points from the color of the sky.
These errors were corrected by a combination of removing excess white outlier points and recoloring the remaining outlier blue-green points to be closer to the color of the tree. The results from this process can be seen in **figure 2**.
#### 4.1.2. Semantic Segmentation of Greek Park
Using the same bounding boxes from the recoloring task, we recolored the point cloud using the KITTI dataset's colors. The advantage of this process is that the point cloud can then be rendered using a synthetic camera. The results from the segmentation process are shown in **figure 3**.
### De-noising and Recoloring Case Study
A second case that RecolorCloud has been tested on is the dataset Tanks and Temples(Tanks and Temples, 2018), which features a selection of indoor and outdoor datasets. In particular, there was an outdoor dataset called Barn which featured the noisy outlier errors commonly associated with outdoor datasets. This was done to demonstrate the tool's capability of repairing the noisy colors in the dataset. Below are the results from running RecolorCloud on this case study.
#### 4.2.1. Color Outlier Correction
Similar to the Greek Park dataset, the trees and the buildings had outlier errors in the form of white points as seen on the top panel of **figure 4**. This dataset was corrected by placing bounding boxes on the trees and the buildings and applying the spherical recoloring technique as seen on the bottom panel of **figure 4**.
#### 4.2.2. Color Shift Application
As part of the tool's ability to recolor, the color of items in bounding boxes can be color-shifted to other hues as stated in section 3.3.2. In the example of the Barn dataset, the color of the trees and the corrected regions was
Figure 3. Semantic Coloring of Greek Park Point Cloud.
Figure 2. Greek Park Point Cloud Uncorrected (Top Panel) and Corrected (Bottom Panel).
color shifted to deep blue to indicate the potential of color shifting entire sections of the point cloud. This is different from semantic recoloring, as the result is not one unique color but rather shifted versions of the original colors.
### Semantic and Hue Recoloring Case Study
RecolorCloud was applied to an indoor dataset, the Multisensor Indoor Mapping and Positioning Dataset (Krishnan et al., 2018). This is to demonstrate the capabilities of the tool to semantically recolor the dataset. The point cloud chosen from this collection was the _Colored Indoor Laser Scanning Dataset_, named _20180526haiyunoffloors.ply_. This dataset was separately labeled with bounding boxes by LabelCloud and colored with custom segmentation colors.
This dataset had multiple outlier points from rooms that were partially scanned by the mobile laser scanner. These points were removed before proceeding with the recoloring, with a preview seen in the top panel of **figure 6**. After applying the bounding boxes to the point cloud, the final result of semantically segmenting the point cloud can be seen in the bottom panel of **figure 6**.
## 5. Discussion
The tool has proven its capabilities in performing direct and semantic recoloring. The following sections describe the tool's capabilities, current limitations, and planned future features.
### New Capabilities
The greatest highlight of RecolorCloud's capability is its ability to handle large-scale point clouds and efficiently recolor them or delete points based on desired criteria. In addition, RecolorCloud is designed such that additional features can be easily coded into the application. This way, if someone wants a lightweight semantic or direct recoloring tool, they are able to download the tool and apply their changes easily.
### Limitations
First, the application depends on LabelCloud for generating bounding boxes. This limitation means that RecolorCloud can only perform coarse selection and editing of the point cloud. Granular selection will remain a future work element that will be corrected with improvements such as lasso selection.
Second, the user interface does not directly display the changes that will be applied to the point cloud before editing. This is because
Figure 4. Barn Point Cloud Uncorrected (Top Panel) and Corrected (Bottom Panel).
Figure 5. Color Shifted Barn Point Cloud.
Figure 6. Segmented Point Cloud.
previewing the edits on a large-scale point cloud requires performing the edits, which is a slow process.
Third, this application depends on having a Python back-end to run. This is a limitation for novice users, as they would have to install Conda, install the related dependencies, and run RecolorCloud from a console window. Additionally, users have to install and run LabelCloud to generate the bounding boxes that select points in the point cloud - which additionally requires a down-sampled point cloud simply to view the point cloud in LabelCloud without performance issues.
### Future Work
Future work will make RecolorCloud a PyPI package for ease of installation and distribution. Additionally, on the GitHub distribution of the source code we plan to gather and implement recommendations from the research community. A future version of this tool is currently under development, which will remove the dependency on LabelCloud for generating bounding boxes, incorporate point cloud selection, and include an improved user interface.
## 6. Conclusion
Point clouds will remain an important data format in the future and will require tools to manipulate and correct them. There are a few tools capable of editing the colors of point clouds directly; however, not all are free or open source. Current open-source solutions provide limited capabilities for editing colors, with CloudCompare being an exception but limited to medium-sized point clouds. Tools that provide a dedicated segmentation mechanic, such as 3D BAT, Semantic Segmentation Editor, or SUSTech Points, can create segmented boundaries but are focused on labeling points rather than recoloring the point cloud and crash when opening large point clouds.
**RecolorCloud** aims to address the lack of a tool that provides users the ability to recolor point clouds directly and correct them. By providing researchers with an open-source Python application, users have an accessible option to perform edits and corrections to their point clouds. From the results, RecolorCloud is capable of recoloring by segmentation, correcting outlier color errors, segmenting per bounding box, and converting between multiple popular point cloud formats.
|
2306.00792 | Learning Across Decentralized Multi-Modal Remote Sensing Archives with
Federated Learning | The development of federated learning (FL) methods, which aim to learn from
distributed databases (i.e., clients) without accessing data on clients, has
recently attracted great attention. Most of these methods assume that the
clients are associated with the same data modality. However, remote sensing
(RS) images in different clients can be associated with different data
modalities that can improve the classification performance when jointly used.
To address this problem, in this paper we introduce a novel multi-modal FL
framework that aims to learn from decentralized multi-modal RS image archives
for RS image classification problems. The proposed framework is made up of
three modules: 1) multi-modal fusion (MF); 2) feature whitening (FW); and 3)
mutual information maximization (MIM). The MF module performs iterative model
averaging to learn without accessing data on clients in the case that clients
are associated with different data modalities. The FW module aligns the
representations learned among the different clients. The MIM module maximizes
the similarity of images from different modalities. Experimental results show
the effectiveness of the proposed framework compared to iterative model
averaging, which is a widely used algorithm in FL. The code of the proposed
framework is publicly available at https://git.tu-berlin.de/rsim/MM-FL. | Barış Büyüktaş, Gencer Sumbul, Begüm Demir | 2023-06-01T15:22:53Z | http://arxiv.org/abs/2306.00792v1 | # Learning Across Decentralized Multi-Modal Remote Sensing Archives with Federated Learning
###### Abstract
The development of federated learning (FL) methods, which aim to learn from distributed databases (i.e., clients) without accessing data on clients, has recently attracted great attention. Most of these methods assume that the clients are associated with the same data modality. However, remote sensing (RS) images in different clients can be associated with different data modalities that can improve the classification performance when jointly used. To address this problem, in this paper we introduce a novel multi-modal FL framework that aims to learn from decentralized multi-modal RS image archives for RS image classification problems. The proposed framework is made up of three modules: 1) multi-modal fusion (MF); 2) feature whitening (FW); and 3) mutual information maximization (MIM). The MF module performs iterative model averaging to learn without accessing data on clients in the case that clients are associated with different data modalities. The FW module aligns the representations learned among the different clients. The MIM module maximizes the similarity of images from different modalities. Experimental results show the effectiveness of the proposed framework compared to iterative model averaging, which is a widely used algorithm in FL. The code of the proposed framework is publicly available at [https://git.tu-berlin.de/rsim/MM-FL](https://git.tu-berlin.de/rsim/MM-FL).
Barış Büyüktaş\({}^{1,2}\), Gencer Sumbul\({}^{1}\) and Begüm Demir\({}^{1,2}\)\({}^{1}\)Faculty of Electrical Engineering and Computer Science, Technische Universität Berlin, Germany
\({}^{2}\)BIFOLD - Berlin Institute for the Foundations of Learning and Data, Germany Remote sensing, federated learning, multi-modal image classification.
## 1 Introduction
Remote sensing (RS) image archives can be stored under different databases due to their growth in size and the data storage limitations of gathering all the data in a centralized server. In addition, some RS archives of data providers (e.g., commercial providers) may not be directly accessible due to commercial concerns, legal regulations, etc. Legal restrictions, such as privacy laws and national security concerns, may also prohibit public access to sensitive information present in the RS image archives [1]. However, most of the deep learning (DL) based approaches require full access to data, while learning the model parameters of deep neural networks (DNNs) during training. To overcome these challenges, federated learning (FL) can be used when there is no access to data on decentralized RS image archives. FL aims to learn DNN models on distributed databases (i.e., clients) and to find the optimal model parameters in a global server (i.e., global model) without accessing data on clients. As one of the first FL studies, the federated averaging (FedAvg) algorithm is introduced in [2] to learn a global model by iterative model averaging. In this algorithm, a local model is trained on each client and then its parameters are sent to the global server, in which the parameters of all local models are iteratively averaged and sent back to clients. Although FL is highly studied in computer vision [2, 3, 4, 5], it is seldom considered in RS [6]. As an example, in [6] an FL algorithm is proposed to learn several global models through hierarchical clustering (denoted as FedPHC) when RS images are non-independently and identically distributed among clients. FedPHC assumes that RS images in the clients are associated with the same data modality. However, RS images on different clients can be associated with different data modalities. In addition, multi-modal images associated with the same geographical area allow for a rich characterization of RS images when jointly considered, and thus improve the effectiveness of the considered image analysis task. To jointly exploit multi-modal RS images, the development of image classification methods has attracted great attention in RS [7, 8, 9]. However, these methods assume that all the multi-modal RS images are accessible during training and thus cannot be directly utilized in the framework of FL. In addition, the adaptation of well-known FL algorithms for the cases, where RS images in different clients are associated with different data modalities, may not be always feasible. To adapt iterative model averaging for such cases, one could operate a dedicated FedAvg algorithm for each modality during training and then employ late fusion during inference (denoted as MSFedAvg). In this way, there is one global model learnt for each modality. Fig. 1 shows the inference phase of MSFedAvg. As one can see from the figure, the images on the same geographical area are fed to the global models based on their modality. Then, the prediction is made by averaging the resulting class probabilities. However, MSFedAvg does not extract and exploit the complementary information among
the different modalities during the training stage. This may result in limited image classification performance. We would like to note that most of the existing FL algorithms assume that the same DNN architecture is considered in each client. However, different DNN architectures can be required for RS images with different data modalities.
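For concreteness, a minimal sketch of the iterative model averaging (FedAvg-style aggregation) referred to above is shown below: client model parameters are averaged on the server, weighted by local dataset size, and sent back as the new global model. The variable names are illustrative only.

```python
# Hedged sketch of server-side parameter averaging (not a specific library's API).
import torch

def aggregate(client_states, client_sizes):
    total = float(sum(client_sizes))
    global_state = {}
    for key in client_states[0]:
        global_state[key] = sum(
            (n / total) * state[key].float()
            for state, n in zip(client_states, client_sizes)
        )
    return global_state
```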
To address these issues, in this paper we propose a novel multi-modal FL framework that aims to accurately learn model parameters from decentralized multi-modal RS image archives without having direct access to images in the clients for RS image classification problems.
## 2 Proposed Multi-Modal Federated Learning Framework for RS Image Classification
Let \(K\) be the total number of clients and \(C^{i}\) be the \(i\)th client, where \(1\leq i\leq K\). Each client \(C^{i}\) locally holds the corresponding training set \(D_{i}=\{(\mathbf{x}_{z}^{i},\mathbf{y}_{z}^{i})\}_{z=1}^{M^{i}}\) including \(M^{i}\) samples, where \(\mathbf{x}_{z}^{i}\) is the \(z\)th RS image of the \(i\)th client, and \(\mathbf{y}_{z}^{i}\) is the corresponding class label. Let \(\phi^{i}\) and \(\mu^{i}\) be the image encoder and classifier, respectively, which are employed for local training on \(D^{i}\). We assume that data modalities can be different between clients (i.e., data modalities associated with \(\mathbf{x}_{z}^{i}\) and \(\mathbf{x}_{z}^{j}\) can be different for \(i\neq j\)\(\forall z\)). The proposed framework aims to learn DNNs on a central server for RS image classification problems without accessing data on clients, while different clients can contain RS images associated with different data modalities. To this end, our framework includes three modules: 1) multi-modal fusion (MF); 2) feature whitening (FW); and 3) mutual information maximization (MIM). Fig. 2 shows an illustration of the local training stage of the proposed framework, which is explained in detail in the following.
The MF module aims to employ DNNs for iterative model averaging when clients are associated with different data modalities. To this end, instead of employing a single DNN, we define modality-specific backbones and a common classifier on the server side. Let \(P\) be the total number of modalities associated with clients. Let \(\psi_{m}\) be a modality-specific backbone, where \(1\leq m\leq P\leq K\), and \(\theta\) be a common classifier. After defining DNNs on the server side, the model parameters are shared with the clients. Then, \(\mu^{i}\) is updated with \(\theta\) for the \(i\)th client and \(\phi^{i}\) is updated with the corresponding modality-specific backbone. To jointly utilize information from different modalities during inference, the feature vectors extracted by all modality-specific backbones \(\{\psi_{m}\}_{m=1}^{P}\) are fused through concatenation. However, during local training, a pseudo fusion via concatenation with the zero vector \(\mathbf{v}=\vec{0}\) is applied since there is no access to data from different clients. Then, the resulting vector is fed into \(\mu^{i}\). The MF module allows defining different architectures for different modalities, while jointly extracting and utilizing information from multiple modalities.
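To make the pseudo fusion of the MF module concrete, the following is a minimal PyTorch-style sketch of the local forward pass on one client; the class and argument names (e.g., `LocalMultiModalClient`, `modality_index`) are illustrative assumptions rather than the actual implementation.
```python
import torch
import torch.nn as nn

class LocalMultiModalClient(nn.Module):
    """Sketch of the MF module on one client during local training (assumed names)."""

    def __init__(self, encoder: nn.Module, feat_dim: int, num_modalities: int, num_classes: int):
        super().__init__()
        self.encoder = encoder                       # modality-specific backbone phi^i
        self.classifier = nn.Linear(feat_dim * num_modalities, num_classes)  # common classifier mu^i
        self.feat_dim = feat_dim
        self.num_modalities = num_modalities

    def forward(self, x, modality_index: int):
        feat = self.encoder(x)                       # feature vector of the local modality
        # Pseudo fusion: the other modalities are replaced by zero vectors v = 0,
        # since their images are not accessible during local training.
        parts = [torch.zeros(x.size(0), self.feat_dim, device=x.device)
                 for _ in range(self.num_modalities)]
        parts[modality_index] = feat
        fused = torch.cat(parts, dim=1)
        return self.classifier(fused)
```
During inference on the server side, the zero vectors are simply replaced by the feature vectors produced by the other modality-specific backbones before the common classifier is applied.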
It is worth noting that the distributions of feature vectors may differ between clients because of the modality differences, which may lead to sub-optimal parameters of the aggregated models. To reduce such distribution differences, the FW module aims to project the feature vectors of the images with
Figure 1: An illustration of MSFedAvg during inference, while the clients are assumed to separately include Sentinel-1 (S1) and Sentinel-2 (S2) images for the sake of simplicity.
Figure 2: An illustration of the local training stage of our framework when two clients (a) and (b) are considered and include images associated with Sentinel-1 (S1) and Sentinel-2 (S2), respectively, for the sake of simplicity.
different modalities, which are extracted by the local models, into a common distribution. This is achieved by using batch whitening layers instead of batch normalization layers, inspired by [10]. The batch whitening layer \(BW\) for \(\mathbf{x}_{z,p}^{i}\) is defined as follows:
\[\begin{split}\mathbf{\hat{x}}_{z}^{i}&=W_{B}(\mathbf{x}_{z}^{ i}-\mu_{B}),\\ BW(\mathbf{x}_{z,p}^{i})&=\gamma_{p}\mathbf{\hat{x}}_{z,p}^ {i}+\beta_{p},\end{split} \tag{1}\]
where \(\mathbf{x}_{z,p}^{i}\) is the \(p\)th element of \(\mathbf{x}_{z}^{i}\), \(W_{B}\) is the whitening matrix estimated from the covariance matrix of the considered mini-batch, \(\mu_{B}\) is the mean vector, \(\gamma_{p}\) is the scaling parameter and \(\beta_{p}\) is the shifting parameter. This module aligns the data distributions between clients using covariance matrices. It uses domain-specific alignment layers, which compute domain-specific covariance matrices of intermediate features.
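As a rough illustration of such a layer, the sketch below computes a ZCA-style whitening transform from the mini-batch covariance; it is a simplified stand-in for the batch whitening layers of [10] (the function name and the eigendecomposition-based implementation are assumptions), shown only to make the roles of \(W_{B}\), \(\mu_{B}\), \(\gamma_{p}\) and \(\beta_{p}\) explicit.
```python
import torch

def batch_whitening(x, gamma, beta, eps=1e-5):
    # x: (batch, features); gamma, beta: per-feature affine parameters
    mu_b = x.mean(dim=0, keepdim=True)                 # mean vector mu_B
    xc = x - mu_b
    cov = xc.t() @ xc / (x.size(0) - 1) + eps * torch.eye(x.size(1), device=x.device)
    eigvals, eigvecs = torch.linalg.eigh(cov)          # W_B = Sigma^{-1/2} (ZCA whitening)
    w_b = eigvecs @ torch.diag(eigvals.clamp_min(eps).rsqrt()) @ eigvecs.t()
    x_hat = xc @ w_b.t()                               # whitened features
    return gamma * x_hat + beta                        # per-feature scaling and shifting
```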
The MIM module aims to model the mutual information between different modalities by maximizing the similarity of images which are acquired on the same geographical area and associated with different modalities. To achieve this, we employ the NT-Xent loss function \(\mathcal{L}_{NTX}\) [11] for modelling the similarity between the feature vectors of the local model and those of the aggregated models from different modalities. To this end, the feature vectors are extracted by feeding images into the local and global models. Since the global models are not trained with images that have the same modality as the images in the local client, it is not possible to directly feed them into the global models. Therefore, the images are fed to the first convolutional layer of the local model and the output is fed to the global models. Then, \(\mathcal{L}_{NTX}\) for a given mini-batch \(\mathbf{\mathcal{B}}\) is calculated using the two feature vectors as follows:
\[\mathcal{L}_{NTX}(\mathbf{\mathcal{B}})=-\sum_{\mathbf{x}_{z}^{i}\in\mathbf{\mathcal{B}}}\log\frac{e^{S(\phi^{i}(\mathbf{x}_{z}^{i}),\psi_{m}(\mathbf{x}_{z}^{i}))/\tau}}{\sum_{\mathbf{x}_{t}^{i}\in\mathbf{\mathcal{B}}}\mathbb{1}_{[z\neq t]}e^{S(\phi^{i}(\mathbf{x}_{z}^{i}),\psi_{m}(\mathbf{x}_{t}^{i}))/\tau}}, \tag{2}\]
where \(\tau\) is a temperature parameter, \(\mathbb{1}\) is the indicator function, \(S(.,.)\) measures the cosine similarity and \(\|\) is the concatenation operator. Accordingly, we define the local objective \(\mathcal{L}\) based on \(\mathcal{L}_{NTX}\) and the cross-entropy loss \(\mathcal{L}_{BCE}\) as follows:
\[\mathcal{L}(\mathbf{\mathcal{B}})=\mathcal{L}_{NTX}(\mathbf{\mathcal{B}})+\sum_{(\mathbf{x}_{z}^{i},\mathbf{y}_{z}^{i})\in\mathbf{\mathcal{B}}}\mathcal{L}_{BCE}(\theta(\phi^{i}(\mathbf{x}_{z}^{i})\|\mathbf{v}),\mathbf{y}_{z}^{i}). \tag{3}\]
Once the local training procedure of our framework is completed, the considered local parameters are sent back to the server and aggregated on the server side.
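For illustration, a simplified PyTorch-style sketch of the local objective in Eqs. (2)-(3) is given below; it treats the positive pair as the same image seen by the local backbone and by an aggregated backbone of another modality, keeps the positive term in the denominator (a common simplification of NT-Xent), and uses names such as `nt_xent` and `local_objective` that are assumptions, not part of the actual implementation.
```python
import torch
import torch.nn.functional as F

def nt_xent(local_feats, global_feats, tau=0.1):
    """Simplified NT-Xent term of Eq. (2): positives are the same image seen by the
    local backbone phi^i and by an aggregated backbone psi_m of another modality."""
    z1 = F.normalize(local_feats, dim=1)
    z2 = F.normalize(global_feats, dim=1)
    sim = z1 @ z2.t() / tau                       # cosine similarities S(.,.) / tau
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(sim, targets)          # -log softmax over the mini-batch

def local_objective(logits, labels, local_feats, global_feats, tau=0.1):
    """Eq. (3): multi-label BCE on the pseudo-fused prediction plus the MIM term."""
    bce = F.binary_cross_entropy_with_logits(logits, labels.float())
    return bce + nt_xent(local_feats, global_feats, tau)
```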
## 3 Experimental Results
The experiments were conducted on the BigEarthNet-MM archive [12]. It includes 590,326 multi-modal image pairs, each of which contains Sentinel-1 and Sentinel-2 images acquired on the same geographical area. Each pair in BigEarthNet-MM is annotated with multi-labels. The multi-modal image pairs acquired in summer were used for the experiments. We defined six clients, in which three clients contain Sentinel-1 images and the remaining clients contain Sentinel-2 images. All six clients participate in each round of the local training. We defined two different scenarios: 1) the images are randomly distributed to the clients; and 2) the images acquired in the same country are present in the same client. We utilized the ResNet-50 CNN architecture [13] as the backbone of the proposed framework. We used the Adam optimizer with a learning rate of 0.001 and a momentum of 0.9. We trained our framework for 40 epochs with a mini-batch size of 256. We evaluate the performance of our method in terms of classification accuracy (in \(F_{1}\)-Score) and local training complexity (in seconds).
In the first set of trials, we analyze the effectiveness of each module of our framework. Table 1 shows the corresponding results. One can see from the table that jointly using the three modules achieves higher results than the other combinations for both scenarios. This shows that the FW and MIM modules reduce the data distribution differences between clients, which leads to an accurate characterization of multi-modal RS images during training. The lowest accuracy is obtained by the joint use of the MF and MIM modules for Scenario 1, which is 2.5% lower than that obtained by the joint use of all modules. Moreover, the lowest accuracy is obtained by using only the MF module for Scenario 2, which is 7.6% lower than the joint use of all modules. This indicates that the FW module increases the performance more than the MIM module. One can also observe from the table that the results achieved with Scenario 1 are higher than those achieved with Scenario 2. As an example, using only the MF module provides 74.4% and 61.7% accuracies for Scenario 1 and Scenario 2, respectively. This is due to the higher non-IID level of Scenario 2 compared to Scenario 1.
In the second set of trials, we compare the proposed method with MSFedAvg in terms of accuracy and local training complexity under both scenarios. The local training complexity refers to the average computational cost of training the considered DNN on each client. Table 2 shows the corresponding results. One can observe from the table that our framework achieves higher \(F_{1}\)-scores under both scenarios than MSFedAvg. The proposed framework
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{3}{|c|}{**Module**} & \multicolumn{2}{c|}{**\(F_{1}\)-Score**} \\ \hline MF & FW & MIM & Scenario 1 & Scenario 2 \\ \hline ✓ & ✗ & ✗ & 74.4 & 61.7 \\ \hline ✓ & ✗ & ✓ & 74.2 & 67.1 \\ \hline ✓ & ✓ & ✗ & 75.5 & 68.5 \\ \hline ✓ & ✓ & ✓ & 76.7 & 69.3 \\ \hline \end{tabular}
\end{table}
Table 1: \(F_{1}\)-scores (%) associated with the different combinations of the modules of the proposed framework.
outperforms MSFedAvg by 9.5% and 10.4% for Scenario 1 and Scenario 2, respectively. This shows the effectiveness of our framework compared to MSFedAvg. One can also see from the table that the proposed framework leads to higher accuracies than MSFedAvg at the cost of a slight increase in local training complexity. The average completion time of a training round on one client increases by only 10% when the proposed framework is used instead of MSFedAvg. This increase can be compensated for by using larger mini-batch sizes to reduce the local training complexity of our framework.
## 4 Conclusion
In this paper, we have introduced a novel framework, which is capable of learning DNN model parameters from decentralized multi-modal RS image archives without accessing data on clients in the context of RS image classification. Our framework includes: i) a multi-modal fusion module to perform iterative model averaging when the images in different clients are associated with different data modalities; ii) a feature whitening module to align the representations learned among different clients; and iii) a mutual information maximization module to maximize the similarity of images from different modalities. Experimental results show the success of the proposed framework compared to MSFedAvg [2].
We would like to note that the proposed framework is independent of the number of clients and of the modality-specific backbones selected for the considered modalities. Thus, any DNN architecture specifically designed for each modality (e.g., attention-based CNNs for very high resolution aerial images) can be utilized in our framework. This can allow modality-specific information content to be accurately described, and thus lead to higher RS image classification performance. As a future work, we plan to investigate the joint use of RS images with socio-economic data (e.g., demographic data, household surveys, etc.) in the framework of FL in RS.
## 5 Acknowledgements
This work is supported by the European Research Council (ERC) through the ERC-2017-STG BigEarth Project under Grant 759764 and by the German Research Foundation through the IDEAL-VGI project under Grant 424966858.
|
2301.05895 | Uncovering the nature of transient and metastable non-equilibrium phases
in 1$T$-TaS$_2$ | Complex systems are characterized by strong coupling between different
microscopic degrees of freedom. Photoexcitation of such materials can drive
them into new transient and long-lived hidden phases that may not have any
counterparts in equilibrium. By exploiting femtosecond time- and angle-resolved
photoemission spectroscopy, we probe the photoinduced transient phase and the
recovery dynamics of the ground state in a complex material: the charge density
wave (CDW)-Mott insulator 1$T$-TaS$_2$. We reveal striking similarities between
the band structures of the transient phase and the (equilibrium) structurally
undistorted metallic phase, with evidence for the coexistence of the
low-temperature Mott insulating phase and high-temperature metallic phase.
Following the transient phase, we find that the restoration of the Mott and CDW
order begins around the same time. This highlights that the Mott transition is
tied to the CDW structural distortion, although earlier studies have shown that
the collapse of Mott and CDW phases are decoupled from each other.
Interestingly, as the suppressed order starts to recover, a long-lived
metastable phase emerges before the material recovers to the ground state. Our
results demonstrate that it is the CDW lattice order that drives the material
into this metastable phase, which is indeed a commensurate CDW-Mott insulating
phase but with a smaller CDW amplitude. Moreover, we find that the long-lived
state emerges only under strong photoexcitation and has no evidence when the
photoexcitation strength is weak. | Tanusree Saha, Arindam Pramanik, Barbara Ressel, Alessandra Ciavardini, Fabio Frassetto, Federico Galdenzi, Luca Poletto, Arun Ravindran, Primoz Rebernik Ribic, Giovanni De Ninno | 2023-01-14T11:28:11Z | http://arxiv.org/abs/2301.05895v2 | # Uncovering the nature of transient and metastable non-equilibrium phases in \(1t\)-TaS\({}_{2}\)
###### Abstract
Complex systems are characterized by strong coupling between different microscopic degrees of freedom. Photoexcitation of such materials can drive them into new transient and long-lived hidden phases that may not have any counterparts in equilibrium. By exploiting femtosecond time- and angle-resolved photoemission spectroscopy, we probe the photoinduced transient phase and the recovery dynamics of the ground state in a complex material: the charge density wave (CDW)-Mott insulator \(1T\)-TaS\({}_{2}\). We reveal striking similarities between the band structures of the transient phase and the (equilibrium) structurally undistorted metallic phase, with evidence for the coexistence of the low-temperature Mott insulating phase and high-temperature metallic phase. Following the transient phase, we find that the restoration of the Mott and CDW order begins around the same time. This highlights that the Mott transition is tied to the CDW structural distortion, although earlier studies have shown that the collapse of the Mott and CDW phases is decoupled. Interestingly, as the suppressed order starts to recover, a long-lived metastable phase emerges before the material recovers to the ground state. Our results demonstrate that it is the CDW lattice order that drives the material into this metastable phase, which is indeed a commensurate CDW-Mott insulating phase but with a smaller CDW amplitude. Moreover, we find that the long-lived state emerges only under strong photoexcitation and shows no signature when the photoexcitation strength is weak.
## I Introduction
Materials dominated by strong electron-electron and electron-lattice interactions can undergo phase transitions to insulating ground states, exhibiting charge and lattice order [1; 2; 3; 4; 5; 6; 7]. Under non-equilibrium conditions, such systems display a collapse of the charge and lattice order of the ground state, as well as the occurrence of novel or hidden phases which are thermally inaccessible in equilibrium [8; 9]. Ultrafast pump-probe techniques have paved the way to delve into the non-equilibrium regime of matter [10; 11; 12]. Solid-state systems exhibiting some intriguing phases, such as Mott [13; 14; 15; 16; 17; 18], charge density wave (CDW) [19; 20; 21; 22; 23] and excitonic [24; 25; 26; 27], are being extensively studied using ultrafast spectroscopic and diffraction methods in the femtosecond time domain. The relevant timescales of quenching dynamics, photoinduced phase transitions [28; 29; 30; 31; 32; 33; 34; 35] and the emergence of metastable phases [9; 36; 37; 38] are topics of great interest. While the quenching occurs instantaneously in Mott insulators and the timescale is set by the electronic hopping time given by the bandwidth [28; 39], the Peierls-CDW materials exhibit quenching times that are comparable to the timescales of the slower, lattice-driven processes [33; 40]. For excitonic insulators, the carrier screening time, given by the plasma frequency, determines the characteristic timescale [29].
The layered CDW-Mott insulator \(1T\)-TaS\({}_{2}\) is a prominent example of a complex system since both electron-electron and electron-lattice interactions are simultaneously strong. It exhibits a manifold of electronic and structurally ordered phases [41; 42; 43; 44; 45]: at high temperatures (\(T>550\) K), the system has an undistorted hexagonal structure and is metallic, while cooling results in the formation of various CDW phases - incommensurate \(\rightarrow\) nearly commensurate \(\rightarrow\) commensurate. Below the critical temperature for the commensurate CDW (CCDW) phase, \(T_{C}=180\) K, a periodic lattice distortion (PLD) gives rise to the formation of "Star-of-David (SD)"-shaped clusters consisting of thirteen Ta atoms. Fig. 1(a) shows a schematic of the lattice reconstruction in the plane of Ta atoms and its Brillouin zone in the metallic and CCDW phases of \(1T\)-TaS\({}_{2}\). The \(\sqrt{13}\times\sqrt{13}\) superlattice splits the Ta \(5d\) valence band into three subband manifolds, such that the narrow half-filled band at the Fermi level \(E_{F}\) becomes favorable for a Mott-Hubbard transition [41; 46]. Previous time-resolved ARPES (trARPES) studies have shown an instantaneous collapse of the Mott gap at \(E_{F}\) on timescales \(<50\) fs after photoexcitation [41; 32; 44; 31]. In addition, the CDW gap between the Ta \(5d\) subbands was found to melt faster than the lattice vibrational timescale, suggesting that electron correlations might play a vital role in the CDW ordering [32; 44]. A prompt collapse of charge ordering was also shown using ultrafast core-level PES [48]. Ultrafast electron diffraction studies have identified a suppression of the PLD in the nearly-CCDW phase from the optically induced change in the spatial distribution of the electron density [49]. Lately, single-shot time-resolved techniques were able to capture the emergence of a persistent "hidden" phase in \(1T\)-TaS\({}_{2}\)[9; 36; 37; 50; 51; 52; 53; 54]. However, different characteristics of such a state can be manifested by tuning the experimental conditions [55].
Even though this material has been extensively studied, there has been minimal emphasis on the state of charge and lattice ordering in the non-equilibrium transient
phase. Moreover, the majority of the studies have focused on the early stages of the dynamics, i.e., on the collapse rather than the recovery to the ground state. In the present work, we address the above scenario in \(1T\)-TaS\({}_{2}\) by studying its electronic band structure in the transient phase, as well as the recovery dynamics of the electronic and lattice order. We choose the band structure as the spectroscopic parameter since its various features, such as the bandwidth, the dispersion of the band and the binding energy, provide information about the lattice order, which plays a prominent role in the ground state of \(1T\)-TaS\({}_{2}\). Angle-resolved photoemission spectroscopy [56] (ARPES) in the ultrafast time domain is employed to systematically track the temporal evolution of the Ta \(5d\) subbands in the CCDW-Mott phase. Our trARPES study demonstrates that, after optical excitation, the material enters a transient phase which bears a striking correspondence with the high-temperature unreconstructed phase. Simultaneously, the early dynamics of the photoexcited system also demonstrates the coexistence of the Mott-insulating and unreconstructed metallic phases. Interestingly, the recovery of the Mott and CDW dynamics, after traversing the transient phase, is observed to commence around the same time. It is important to note that although the suppression of the Mott-CDW electronic order and the CDW lattice order is known to occur on two distinct time scales in \(1T\)-TaS\({}_{2}\)[32; 44], the presence of a single timescale observed for the order re-establishment emphasizes that Mott physics is indeed coupled to the CCDW ordering in this material. Moving further, we find that the material recovers to a long-lived hidden phase that is primarily governed by the lattice order of the CDW. Moreover, our results predict that the hidden phase is a CCDW-Mott insulating phase but with a reduced CDW amplitude. Lastly, we also demonstrate that the emergence of a long-lived metastable state is observed only at high photoexcitation strengths and has no signatures under weak photoexcitation.
## II Experimental details
Single crystals of \(1T\)-TaS\({}_{2}\) were purchased from HQ Graphene [57]. The trARPES experiments were performed at the CITIUS high-harmonic generation (HHG) light source [58]. The system is driven by a mode-locked Ti:Sapphire laser delivering 800-nm pulses, with a duration of 40 fs at a repetition rate of 5 kHz. The driving laser was split into two beams: the major part of the intensity was used to generate extreme-ultraviolet (EUV) probe pulses through HHG, with Ar as the generating medium, and the remaining part was used as the pump.
Figure 1: (a) (Top) In-plane structural distortion in the CCDW phase of \(1T\)-TaS\({}_{2}\) produces “Star-of-David” clusters having inequivalent “a”, “b”, and “c” atoms. Red and blue dashed lines indicate the unit cells in the CCDW and unreconstructed phases, respectively. The arrows indicate the displacement of the Ta atoms from their initial positions. (Bottom) Brillouin zone in the unreconstructed (blue) and distorted phases (red) with the high-symmetry points \(\Gamma,M,K\). (b) A schematic of the pump-probe experimental geometry where the electric field \(\vec{E}\) of s- and p-polarized photons are indicated by blue (along \(y\)-axis) and green (in the _xz_-plane) double-headed arrows, respectively.
Figure 2: (a) Time evolution of the electronic band structure in \(1T\)-TaS\({}_{2}\) about \(M\)-point (along \(MK\) direction). The peak positions of the energy distribution curves (EDCs) have been plotted as a function of \(k_{||}\) at each pump-probe delay \(\Delta t\). (b) ARPES snapshots acquired before and after (\(\Delta t=+300\) fs) photoexcitation. (c) Corresponding EDC stacking where the blue curve represents the EDC at \(M\). The black curves are guide to the eye for the band dispersion. (d) Comparison of the band dispersion before photoexcitation and in the transient state of the system, where there is an energy shift towards \(E_{F}\) and the band is more dispersive. All the data correspond to a high pump fluence of 3.6 mJ/cm\({}^{2}\) and the dashed lines in (b) and (c) indicate \(E_{F}\). Binding energy is abbreviated to B. E.
The intensity of the pump pulses on the sample was controlled with a variable attenuator - in all experimental plots, the fluence refers to the (incident) peak energy density (in mJ/cm\({}^{2}\)), determined from the expression \(2E_{p}/(\pi w^{2})\), where \(E_{p}\) is the energy per pulse and \(w\) is the beam waist at the sample position. A schematic of the experimental geometry showing the polarization of the pulses is shown in Fig. 1(b). The photon energy of the probe was selected by a monochromator grating with off-plane geometry, which preserved the pulse duration [59]. During the experiments, the fundamental frequency of the laser (\(h\nu=1.55\) eV) was used for optical excitation (pump pulse). A photon energy \(h\nu\sim 20\) eV (harmonic 13 of the fundamental laser) was selected for the probe pulse due to the higher photoionization cross-section of the Ta \(5d\) bands and a high photon flux. To preserve the ultrafast response, the energy resolution of the source was limited to about 150 meV. This allowed us to achieve a temporal resolution of around 50 fs. The ultra-high vacuum setup at CITIUS is equipped with an R3000 hemispherical electron analyser from VG Scienta. A closed-cycle helium cryostat was used to control the sample temperature and all the measurements were performed at an initial sample temperature \(T=100\) K. Prior to ARPES measurements, clean sample surfaces were obtained via cleaving in the direction perpendicular to the atomic planes. The samples were cleaved under UHV pressure better than \(6\times 10^{-9}\) mbar and the measurements were performed at a base pressure \(<1\times 10^{-10}\) mbar. p-polarized pump and probe pulses [green arrows in Fig. 1(b)] were used for the obtained data, unless specified.
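As a quick aid for reproducing the quoted fluence values, the incident peak energy density follows directly from the expression \(2E_{p}/(\pi w^{2})\) given above; the short helper below is only an illustrative convenience (its name and units handling are our own choices).
```python
import math

def peak_energy_density(pulse_energy_mj, beam_waist_cm):
    """Incident peak energy density (mJ/cm^2) from 2*E_p / (pi * w^2)."""
    return 2.0 * pulse_energy_mj / (math.pi * beam_waist_cm ** 2)
```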
## III Results
We will refer to the (equilibrium) electronic band structure of \(1T\)-TaS\({}_{2}\) reported in Ref. [44] while presenting the trARPES results on the different Ta \(5d\) subbands. Firstly, we will demonstrate the nature of the photoinduced transient phase by characterizing the evolved band structure. For a high photoexcitation strength (\(3.6\) mJ/cm\({}^{2}\)), the time evolution of the Ta \(5d\) subband along the high symmetry \(MK\) direction (we call it the \(B_{2}\) band) [44] is plotted in Fig. 2(a). We observe that a shift in binding energy towards \(E_{F}\) and an enhancement of the bandwidth characterize the evolution, which occurs on a 200 fs timescale. Since this timescale corresponds to half an oscillation cycle of the CDW amplitude mode [60; 61], the temporal changes indicate the collapse of the CDW lattice order after photoexcitation. Subsequent recovery of the suppressed order is observed to occur after 300 fs (red and yellow circles). The characterization of the transient phase at pump-probe delay \(\Delta t=+300\) fs is presented in Figs. 2(b)-(d). An energy shift of the band minimum by 0.16 eV towards \(E_{F}\) and a substantial increase of the bandwidth [see Fig. 2(d)] are in excellent agreement with the dispersion of the \(B_{2}\) band in the unreconstructed phase [44]. According to theoretical calculations [41; 44], the dispersion crosses \(E_{F}\) at \(k_{\parallel}\) away from \(M\), which is, however, not evident in our data at 300 fs. This is because \(B_{2}\) might have traversed such a feature within a few tens of fs before 300 fs and it could not be captured due to the large time interval (50 fs) used in the experiments. This particular characteristic of the dispersion is reported in Ref. [31]. Despite the correspondence of the transient band dispersion with that of the (equilibrium) high-temperature phase, the evolved band structure does not reflect phase transitions due to a rise in effective lattice temperature. This is because the observed changes occur much faster than the time scale needed to transfer the energy from the electronic subsystem to the lattice through phonon emission. According to the partial density of states in \(1T\)-TaS\({}_{2}\)[41], photoexcitation involves a redistribution of the conduction electron density within the SD clusters. This results in a radial motion of the Ta atoms towards the outer ring of the SD clusters ["c" atom in Fig. 1(a)] and hence, a relaxation of the periodic lattice distortion. The electrons can adjust instantaneously to the atomic positions (Born-Oppenheimer approximation), which is evidenced by the band structures obtained at different time delays in Fig. 2(a). Hence, the relaxation of the PLD demonstrated in
Figure 3: (a) Temporal evolution of the EDCs at early pump-probe delays integrated over a \(k_{\parallel}\) range of \(\pm 0.1\) Å\({}^{-1}\) about \(\Gamma\)-point (along \(\Gamma M\) direction). (b) ARPES snapshots acquired before and after (\(\Delta t=+300\) fs) photoexcitation. (c) Corresponding EDC stacking where the blue curve denotes the EDC at \(\Gamma\). Smooth curves are guide to the eye to emphasize the change in the band dispersion around \(\Gamma\) in the transient phase. (d) ARPES snapshots about \(\Gamma\) taken at different delays using s-polarized probe pulses. (e) Corresponding EDC stacking. The smooth black line indicates the flat upper Hubbard band and its dynamics is obtained by changing the probe polarization from horizontal (p-pol) to vertical (s-pol). The data acquired using p-polarized and s-polarized probe pulses correspond to pump fluences of \(3.2\) mJ/cm\({}^{2}\) and \(3.6\) mJ/cm\({}^{2}\), respectively, and the dashed lines indicate \(E_{F}\). Binding energy is abbreviated to B. E.
our results is driven by the redistribution of charge density and is not an effect related to the increase in lattice temperature.
We will now look at the dynamics of the lower Hubbard band (LHB) [44] along the high symmetry \(\Gamma M\) direction at a similar photoexcitation strength (3.2 mJ/cm\({}^{2}\)). The EDCs at various time delays extracted from the \(k\)-integrated trARPES spectrum are shown in Fig. 3(a). The early dynamics show a collapse of the Mott phase as the spectral weight in the LHB is suppressed and transferred to binding energies at and above \(E_{F}\), similar to earlier studies [31; 32; 42]. The recovery of the spectral weight begins after 300 fs; it is to be noted that this is the same time at which the CDW lattice order starts to reform in Fig. 2(a). In spite of the established scenario where the suppression of electronic and lattice order occurs on different time scales [32], we find that the re-establishment of the Mott electronic order and the CDW lattice order begins at the same time. This provides evidence that the CCDW lattice reconstruction is the mechanism behind the Mott transition in this material [41; 44]. Figures 3(b)-(e) display the characteristics of the band structure in the transient phase at \(\Delta t=+300\) fs. We find that the spectral weight from the LHB has shifted to an energy band above \(E_{F}\), which (i) is dispersive about \(\Gamma\), unlike the flat LHB, and (ii) has its band minimum at \(\approx\) -0.1 eV [see Figs. 3(b), (c)]. [It is to be noted that the dispersive feature beyond \(\pm 0.15\) Å\({}^{-1}\) in Figs. 3(b), (c) (left panels) is a contribution from other Ta 5\(d\) subbands.] More importantly, the dispersive band at 300 fs does not correspond to the flat upper Hubbard band (UHB). This has been verified from the UHB dynamics that could be tracked at 20 eV probe energy by changing the polarization of the probe pulses [see Fig. 1(b)] from horizontal (p-pol) to vertical (s-pol). In Figs. 3(d) and 3(e), the UHB lying at \(\approx\) -0.25 eV can be distinctly observed at \(\Delta t=+50\) fs, and it eventually shifts towards \(E_{F}\) with time. At \(\Delta t=+300\) fs, the UHB lies across \(E_{F}\) and cannot be spectrally resolved, as shown in Fig. 3(e) (right panel). All the observed characteristics of the dispersive band have a close resemblance to the band structure of the unreconstructed metallic phase about \(\Gamma\)[44]. Therefore, the above results demonstrate two features near \(E_{F}\): (i) depletion of the LHB intensity and emergence of a dispersive band above \(E_{F}\) and (ii) a shift of the UHB towards \(E_{F}\) indicating a reduction of the Coulomb repulsion strength [62; 44]. The former corresponds to the relaxation of the PLD towards the undistorted high-temperature (metallic) phase, whereas the latter indicates a photoinduced modification of the Mott-Hubbard gap. These provide evidence for phase coexistence in \(1T\)-TaS\({}_{2}\) under non-equilibrium conditions, which might arise due to a particular lattice structure comprising hexagonal or striped SD domains separated by metallic islands. The manifestation of such a lattice configuration in the electronic band structure can be addressed through ARPES studies on the nearly commensurate and triclinic CDW phases of \(1T\)-TaS\({}_{2}\). Altogether, our trARPES results at early time delays show that, upon destruction of the electronic and lattice order, \(1T\)-TaS\({}_{2}\) enters a transient phase that has remarkable similarities with the unreconstructed metallic phase, along with coexistence of the metallic (high-temperature) and insulating (Mott) phases.
Now, we will move on to the recovery dynamics and identify the nature of the phase where it settles at longer time delays. Figure 4 captures such dynamics under strong photoexcitation (3.6 mJ/cm\({}^{2}\)) for the probed Ta 5\(d\) subbands (\(B_{2}\) and LHB). We observe that, as the relaxed lattice structure of the transient phase starts to recover after 300 fs, there is only a partial recovery of the lattice order till \(\Delta t=+600\) fs shown in Fig. 4(a). We call it partial since \(B_{2}\) band does not exhibit the dispersion corresponding to that of before photoexcitation (\(\Delta t=-1.2\) ps). Any further recovery occurs on extremely long time scales which can be clearly identified from the negligible changes in the band dispersion from 600 fs to \(\Delta t=+3.5\) ps. This signifies the emergence of a long-lived metastable state in photoexcited 1\(T\)-TaS\({}_{2}\). The ARPES snapshots taken before and after (3.5 ps) photoexcitation, and their EDCs are shown in
Figure 4: (a) Time evolution of the electronic band dispersion about \(M\)-point (along \(MK\) direction). For each pump-probe delay \(\Delta t\), the peak position of the EDCs are plotted as a function of \(k_{\parallel}\). (b) ARPES snapshots acquired before and after (\(\Delta t=+3.5\) ps) photoexcitation. (c) Corresponding EDC stacking where the blue curve represents the EDC at \(M\). The black curves are guide to the eye for the band dispersion. (d) Comparison of the band dispersion before photoexcitation and in the long-lived state of the system. The energy shifts around the band minimum and maximum are indicated by arrows. (e) ARPES snapshots acquired before and after (\(\Delta t=3.5\) ps) photoexcitation about \(\Gamma\)-point (along \(\Gamma M\) direction). (f) Temporal evolution of the EDCs at longer delays integrated over a \(k_{\parallel}\) range of \(\pm\)0.1 Å\({}^{-1}\) about \(\Gamma\). All the data correspond to a high pump fluence of 3.6 mJ/cm\({}^{2}\) and the dashed lines in (b), (c), (e) denote \(E_{F}\). Binding energy is abbreviated to B. E.
Fig. 4(b) and Fig. 4(c), respectively. In the long-lived hidden phase, \(B_{2}\) exhibits a weaker band dispersion in comparison to the transient phase [compare red and yellow curves in Fig. 4(a)]. However, the band minimum is still shifted by \(\approx\) 0.08 eV towards \(E_{F}\) and \(B_{2}\) has a larger bandwidth with respect to the ground state dispersion [see Fig. 4(d)]. On the other hand, the dynamics of the LHB display a complete recovery of the Mott phase. This can be claimed from the following features of the LHB at \(\Delta t\) = +3.5 ps: (i) the spectral weight recovery in the LHB and no additional weight at the tail of the EDC in Fig. 4(f), and (ii) the peak of the EDC lying at a similar binding energy as before photoexcitation [see Figs. 4(e) and 4(f)]. However, the recovery of the LHB intensity slows down after 600 fs, with no pronounced changes at longer time delays. It is not known whether the slow dynamics of the LHB can be linked to the destruction of the CDW order; fluence-dependent studies will be required in the future to clarify this point.
Finally, we will look at the features of the metastable state under strong (4.2 mJ/cm\({}^{2}\)) and weak (1.2 mJ/cm\({}^{2}\)) photoexcitation by tracking the dynamics of the Ta 5\(d\) subband lying at 0.5 eV below \(E_{F}\) (we call it \(B_{1}\)) [44] in Fig. 5. For a high photoexcitation strength, the band dispersion at long time delays is stronger and shifted towards \(E_{F}\) while this is not the case at a low photoexcitation strength [see Figs. 5(c), (d)]. We show the data at \(\Delta t\) = +30 ps for pump fluence 4.2 mJ/cm\({}^{2}\) in Figs. 5(a), (b) to emphasize that the dispersion (CDW lattice order) has not recovered even at longer times. The quantitative changes in the band structure at \(\Delta t\) = +2 ps are persistent till \(\Delta t\) = +30 ps and longer under strong photoexcitation in Fig. 5(c). This, once again, provides evidence that the system is driven to a long-lived metastable state prior to the complete recovery of the CDW lattice order. On the contrary, we do not find any signatures of the metastable state under weak photoexcitation since the small bandshifts are completely recovered within \(\Delta t\) = +3 ps [compare black and green curves in Fig. 5(d)]. At low photoexcitation strengths (\(\sim\) 1.3 mJ/cm\({}^{2}\) in this study), the LHB dynamics show complete recovery of the Mott phase. Hence, the long time dynamics of the Ta 5\(d\) subbands (LHB, \(B_{1}\), \(B_{2}\)) provide insights into the metastable phase in 1\(T\)-TaS\({}_{2}\), which is a hidden phase having no counterparts in equilibrium.
## IV Discussion
The correspondence between the (photoinduced) transient and (equilibrium) structurally undistorted phases implies that the ordering in the CCDW-Mott phase is destroyed as the lattice order relaxes to the undistorted metallic phase. Although the recovery of both the CDW and Mott phases begins at the same time, the CDW phase undergoes only a partial recovery while the Mott phase fully recovers within one ps. The metastable state attained by the system after its partial recovery does not correspond to any of the thermally accessible equilibrium phases. The signatures of the metastable phase are exhibited only by the \(B_{1}\) and \(B_{2}\) bands, while the LHB shows no evidence of such a long-lived state. Since the LHB is derived from electron-electron interactions and \(B_{1}\) and \(B_{2}\) have a dominant contribution from electron-lattice interactions [41], it can be inferred that it is primarily the interaction of the electrons with the lattice that pushes the material towards a long-lived state. Such a state could be mediated by mode-selective electron-phonon coupling due to the destruction of the CDW order, as has been shown for the similar compound \(1T\)-TaSe\({}_{2}\)[38].
It is the electronic and lattice configuration in the low-temperature CCDW phase that makes the material susceptible to a Mott-Hubbard transition. Even though the CDW phase is not observed to reform completely, the ordering of the electronic and lattice degrees of freedom is such that the intracluster Coulomb repulsion (\(U\)) is larger than the electronic hopping strength (\(W\)), i.e., \(U/W\gtrsim 1.2\)[43]. This tends to localize the electrons at the atomic sites, leading to the recovery of the Mott phase. Therefore, it can be deduced that the metastable state is indeed a Mott insulating phase but with a reduced CDW amplitude as compared to the CCDW phase in equilibrium. A clear and direct investigation of the structural configuration in the metastable non-equilibrium phase can be obtained from time-resolved electron diffraction, which will be used in future studies to probe the long-lived hidden phases in this compound. It is also important to identify the critical fluence above which such a long-lived hidden phase emerges. Further time-resolved studies in this direction would involve a deeper investigation
Figure 5: (a) ARPES snapshots of the Ta 5\(d\) subband at 0.5 eV below \(E_{F}\) along \(\Gamma M\) direction taken before and after photoexcitation: delay \(\Delta t\) = - 1 ps (left), \(\Delta t\) = + 30 ps at pump fluence 4.2 mJ/cm\({}^{2}\) (middle), \(\Delta t\) = + 3 ps at pump fluence 1.2 mJ/cm\({}^{2}\) (right). (b) Corresponding stacked EDCs representing the band dispersion. Smooth black curves are guide to the eye for the dispersion and dashed lines denote \(E_{F}\). (c) Peak positions of the EDCs plotted as a function of \(k_{||}\) at various time delays for high fluence, 4.2 mJ/cm\({}^{2}\). (d) The same for low fluence, 1.2 mJ/cm\({}^{2}\). The data at high fluence shows the presence of a long-lived state. Binding energy is abbreviated to B. E.
of how the microscopic interactions evolve as the material changes its state under non-equilibrium conditions.
## V Conclusion
In summary, we demonstrated the characteristics of the non-equilibrium phases in photoexcited \(1T\)-TaS\({}_{2}\) using time-resolved ARPES. In the transient phase, the Mott-CDW order is suppressed and the band structure bears an excellent resemblance to that of the unreconstructed metallic phase. Together with the complete relaxation of the PLD driven by charge redistribution, the dynamics at early time delays also exhibit signatures of phase coexistence in photoexcited \(1T\)-TaS\({}_{2}\). The Mott and CDW orders begin recovering around the same time, but only to settle in a long-lived metastable phase. In this "hidden" phase, \(1T\)-TaS\({}_{2}\) is a CCDW-Mott insulator but with a reduced CDW amplitude, and the emergence of this phase is driven by the lattice order. In addition, the metastable state emerges only under strong photoexcitation of the system. A distinct characterization of these phases provides deeper insights into the state of charge and lattice order under non-equilibrium conditions and the prominent role played by the different degrees of freedom in governing these phases in a complex system.
## Acknowledgements
We are thankful to E. Nicolini and G. Bortoletto for characterization of the samples using Laue diffraction. We acknowledge fruitful discussions with M. Capone, Z. Bacciconi and A. Amaricci. This work was supported by the FLAG-ERA grant DIMAG, by the Research Foundation - Flanders (FWO), the Agence Nationale pour la Recherche (ANR), the Deutsche Forschungsgemeinschaft (DFG), the Slovenian Research Agency (ARRS).
|
2305.17966 | Quafu-RL: The Cloud Quantum Computers based Quantum Reinforcement
Learning | With the rapid advent of quantum computing, hybrid quantum-classical machine
learning has shown promising computational advantages in many key fields.
Quantum reinforcement learning, as one of the most challenging tasks, has
recently demonstrated its ability to solve standard benchmark environments with
formally provable theoretical advantages over classical counterparts. However,
despite the progress of quantum processors and the emergence of quantum
computing clouds in the noisy intermediate-scale quantum (NISQ) era, algorithms
based on parameterized quantum circuits (PQCs) are rarely conducted on NISQ
devices. In this work, we take the first step towards executing benchmark
quantum reinforcement problems on various real devices equipped with at most
136 qubits on BAQIS Quafu quantum computing cloud. The experimental results
demonstrate that the Reinforcement Learning (RL) agents are capable of
achieving goals that are slightly relaxed both during the training and
inference stages. Moreover, we meticulously design hardware-efficient PQC
architectures in the quantum model using a multi-objective evolutionary
algorithm and develop a learning algorithm that is adaptable to Quafu. We hope
that the Quafu-RL be a guiding example to show how to realize machine learning
task by taking advantage of quantum computers on the quantum cloud platform. | BAQIS Quafu Group | 2023-05-29T09:13:50Z | http://arxiv.org/abs/2305.17966v2 | # Quafu-RL: The Cloud Quantum Computers based Quantum Reinforcement Learning
###### Abstract
With the rapid advent of quantum computing, hybrid quantum-classical machine learning has shown promising computational advantages in many key fields. Quantum reinforcement learning, as one of the most challenging tasks, has recently demonstrated its ability to solve standard benchmark environments with formally provable theoretical advantages over classical counterparts. However, despite the progress of quantum processors and the emergence of quantum computing clouds in the noisy intermediate-scale quantum (NISQ) era, algorithms based on parameterized quantum circuits (PQCs) are rarely conducted on NISQ devices. In this work, we take the first step towards executing benchmark quantum reinforcement problems on various real devices equipped with at most 136 qubits on the BAQIS Quafu quantum computing cloud. The experimental results demonstrate that the Reinforcement Learning (RL) agents are capable of achieving goals that are slightly relaxed both during the training and inference stages. Moreover, we meticulously design hardware-efficient PQC architectures in the quantum model using a multi-objective evolutionary algorithm and develop a learning algorithm that is adaptable to Quafu. We hope that Quafu-RL can be a guiding example of how to realize machine learning tasks by taking advantage of quantum computers on the quantum cloud platform.
## I Introduction
Recent years have witnessed a huge development of quantum computing, ranging from physics experiments and quantum hardware to abundant software platforms. However, the limitations of the current noisy intermediate-scale quantum (NISQ) regime still prevent achieving the so-called quantum supremacy [1, 2, 3], due to a restricted number of qubits, decoherence of the quantum state and imprecise quantum operations [4, 5]. Since constructing ideal quantum processing units (QPUs) remains an obstacle in the short term, a current trend is to build hybrid quantum-classical infrastructure to explore useful applications while making efforts to scale up QPUs [6]. Quantum computing clouds such as IBMQ, Amazon Braket and Quafu [7] serve as bridges to connect developers of quantum software applications with advanced quantum computers in the NISQ era. Although quantum computing promises to solve some classically intractable NP-hard problems, today's imperfect quantum devices are better suited to hybrid algorithms that leverage both quantum circuits and classical computing.
Variational quantum algorithms (VQAs), which run parameterized quantum circuits (PQCs) on quantum devices while integrating a classical optimizer for parameter optimization, satisfy the desire to demonstrate applications on NISQ devices [8] and have been widely explored. Research on VQAs mainly focuses on quantum chemistry [9, 10, 11], quantum optimization [12, 13, 14] and quantum machine learning (QML) [15, 16], including subfields such as classification [17, 18, 19, 20], generative adversarial networks [21, 22, 23, 24] and related theoretical analysis [25, 26, 27].
Reinforcement learning (RL) [28], as one of the most challenging tasks in modern machine learning research, has received comparatively late attention in VQA-based approaches [29, 30, 31, 32]. Nonetheless, these proposed PQC agents do not reach satisfactory performance in benchmark environments from OpenAI Gym [33]. Only recently have policy-based [34] and value-based [35] quantum reinforcement learning made breakthroughs, both in solving standard benchmark tasks and in establishing a theoretical learning advantage over classical algorithms. In [34, 35], apart from hyperparameter tuning, both works emphasize the importance of PQC architectures for successful RL training and better performance. This issue has inspired evolutionary methods [36, 37, 38] that search for suitable PQC architectures without human ingenuity, with the aim of balancing expressivity [39, 40] and trainability.
In this study, we leverage the theoretical guarantees and experimental validation of parameterized quantum policies for reinforcement learning, along with the availability of quantum computing clouds such as Quafu, which offer scalable and stable services over the long term. With these resources, we are able to apply a benchmark RL task, CartPole, to real quantum devices and conduct numerous experiments. Firstly, we utilize a multi-objective evolutionary approach to search for the most suitable PQCs that can construct policies with higher performance and lower entanglement, and we design an adaptive learning algorithm for Quafu. Next, we select the best policy and execute it on available quantum resources that include 10-qubit, 18-qubit, and 136-qubit quantum computers. Our experiments demonstrate that the environment can be successfully solved with some relaxation of the original objectives. Overall, our proposed methods and pipelines offer a perspective on running PQC-based RL policies and other QML experiments on NISQ devices.
## II Reinforcement Learning
Reinforcement learning describes a problem in which an agent learns to make the best decisions to obtain maximum cumulative
rewards through interactions with the environment [28; 41]. Concretely speaking, as shown in Fig. 1, an agent first observes a state \(s\in\mathcal{S}\) from the environment. Then, according to the policy \(\pi\), which can be a deterministic table or a neural network with trainable parameters, the agent takes an action \(a\) from all possible actions \(\mathcal{A}\). Additionally, a policy can be further described as a mapping from states to the probability of performing each possible action, \(\pi(a\mid s)\). After executing the action \(a\), the environment returns a new state \(s^{\prime}\) and a reward \(r\). These transitions, usually formulated as \(p\left(s^{\prime}\mid s,a\right)\), describe the dynamics of the environment. In order to formalize the RL objective, a discount factor \(\gamma\), which lies in \([0,1]\), is introduced. The corresponding discounted return is defined as the weighted sum of future rewards, \(G_{t}=\sum_{k=0}^{T-t-1}\gamma^{k}r_{t+k+1}\), which indicates the rewards obtained starting at time \(t\) until the final time \(T\). In essence, gathering all the above elements, reinforcement learning is mathematically modeled as a Markov Decision Process (MDP), described by the tuple \((\mathcal{S},\mathcal{A},p,G,\gamma)\).
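As a small worked illustration of the return defined above, the following snippet computes \(G_{t}\) for every step of one finished episode via a backward pass; it is a generic helper (the function name is ours), not code from the Quafu-RL implementation.
```python
def discounted_returns(rewards, gamma):
    """G_t = sum_k gamma^k * r_{t+k+1}, computed backwards over one episode."""
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return returns[::-1]

# Example: three rewards of 1 with gamma = 0.9 -> [2.71, 1.9, 1.0]
print(discounted_returns([1.0, 1.0, 1.0], 0.9))
```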
### Policy gradient methods
Reinforcement learning algorithms can be classified into two main categories: value-based and policy-based methods. Value-based methods aim to find the policy that maximizes the value function, while policy-based methods directly attempt to find the optimal policy by proposing a parameterized \(\pi_{\boldsymbol{\theta}}(a\mid s)\) and optimizing its parameters \(\boldsymbol{\theta}\). In this paper, we mainly focus on the policy gradient methods.
In most cases, policy gradient methods compute the gradient of the expected total return \(\nabla_{\boldsymbol{\theta}}\mathbb{E}\left[G\mid\pi_{\boldsymbol{\theta}}\right]\) and apply gradient ascent to update the parameters. Specifically, the expectation value is taken over the trajectories \(\tau\), which collect all states, actions and rewards \(s_{0},a_{0},r_{1},s_{1},a_{1},r_{2},s_{2},a_{2},\ldots,s_{T}\) within one episode. Then, consistent with the policy gradient theorem in [42], the expectation can be rewritten as Eq. 1
\[\nabla_{\boldsymbol{\theta}}\mathbb{E}\left[G\mid\pi_{\boldsymbol{\theta}} \right]=\mathbb{E}\left[G\sum_{t=0}^{T-1}\nabla_{\boldsymbol{\theta}}\log\pi_ {\boldsymbol{\theta}}\left(a_{t}\mid s_{t}\right)\mid\pi_{\boldsymbol{\theta}} \right]. \tag{1}\]
### PQC-based quantum reinforcement learning
In the quantum realm, parameterized quantum circuits, which can be represented by a unitary \(U(s,\boldsymbol{\theta})\) with an input state \(s\) and trainable parameters \(\boldsymbol{\theta}\), are utilized to build reinforcement learning policies, analogous to the function of classical neural networks. Recent studies [16; 39; 40; 43] have revealed some hardware-efficient PQCs, typically composed of variational PQCs, data-encoding PQCs, entanglements and a final measurement. In this context, as displayed in Fig. 2, a variational PQC (\(U_{var}(\varphi)\)) refers to a circuit made up of single-qubit rotations \(R_{x},R_{y},R_{z}\) acting on each qubit, assigning rotation angles \(\varphi\) as trainable parameters. Moreover, a data-encoding PQC (\(U_{enc}(s,\lambda)\)) is formed by \(R_{x}\) applied to each qubit, with a state vector \(s=(s_{0},s_{1},\ldots)\) scaled by trainable parameters \(\lambda\). Also, an entanglement (\(U_{\text{ent}}\)) achieves circular entanglement by using multiple controlled-Z gates. At the end, measurement is performed following a final variational PQC.
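To make these building blocks concrete, the following Cirq sketch assembles the three parameterized components described above; the function names and the exact parameter bookkeeping are illustrative assumptions rather than the circuits actually deployed on Quafu.
```python
import cirq
import sympy

def variational_layer(qubits, params):
    # U_var(phi): Rx, Ry, Rz rotations with trainable angles on every qubit
    for q, (a, b, c) in zip(qubits, params):
        yield cirq.rx(a)(q)
        yield cirq.ry(b)(q)
        yield cirq.rz(c)(q)

def encoding_layer(qubits, state_symbols, scaling_symbols):
    # U_enc(s, lambda): Rx rotation by the scaled input component lambda_k * s_k
    for q, s, lam in zip(qubits, state_symbols, scaling_symbols):
        yield cirq.rx(lam * s)(q)

def entangling_layer(qubits):
    # U_ent: circular entanglement with controlled-Z gates
    for q0, q1 in zip(qubits, qubits[1:] + qubits[:1]):
        yield cirq.CZ(q0, q1)
```
These generators can be chained into a circuit in whatever order the chosen architecture dictates, which is exactly what the genome decoding sketched later in this section does.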
### Parametrized quantum policies
The four basic operations described above are the building blocks of the most common PQC
Figure 1: **Quantum reinforcement learning with Quafu.** The entire pipeline can be described as follows: the RL agent, which is composed of a hardware-efficient PQC designed specifically for this task, receives a state \(s\) from the environment. The agent is then evaluated on Quafu and generates a policy \(\pi_{\boldsymbol{\theta}}\left(a\mid s\right)\) to sample an action \(a\) and receive a feedback reward \(r\). The parameters within the PQC are updated using a classical optimizer and training algorithm introduced later.
architectures. However, in order to solve RL environments and obtain good performance, some data-encoding techniques and readout strategies are crucial for success, as suggested in [34; 35]. To enhance the expressive power, data re-uploading, which means using the data-encoding PQC repeatedly in a circuit, is a widely adopted technique. In [34; 35], the authors assemble a variational PQC, an entanglement and a data-encoding PQC as an alternating layer and apply it recurrently to create highly expressive parametrized quantum policies. Meanwhile, to handle the range of the measurement outputs, weighted observables associated with action \(a\) are utilized, with trainable parameters \(\omega\). The expectation value of the weighted observables can then be formulated as Eq. 2,
\[\left\langle O_{a}\right\rangle_{s,\mathbf{\theta}}=\left\langle 0^{\otimes n} \left|U(s,\varphi,\lambda)^{\dagger}O_{a}U(s,\varphi,\lambda)\right|0^{ \otimes n}\right\rangle\cdot\omega_{a}, \tag{2}\]
considering an \(n\)-qubit PQC with input state \(s\), rotation angles \(\varphi\), scaling parameters \(\lambda\) and observable operators \(O_{a}\), where \(\mathbf{\theta}=(\varphi,\lambda,\omega)\). Furthermore, the non-linear softmax activation function is applied to the expectation values \(\left\langle O_{a}\right\rangle_{s,\mathbf{\theta}}\), which defines a SOFTMAX-PQC policy as Eq. 3,
\[\pi_{\mathbf{\theta}}(a\mid s)=\frac{e^{\beta\left\langle O_{a}\right\rangle_{s, \mathbf{\theta}}}}{\sum_{a^{\prime}}e^{\beta\left\langle O_{a^{\prime}}\right\rangle _{s,\mathbf{\theta}}}}, \tag{3}\]
where \(\beta\) is an inverse-temperature parameter.
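The post-processing in Eqs. (2)-(3) is purely classical; a minimal sketch of it is given below, where `expectations` stands for the measured \(\langle O_{a}\rangle_{s,\boldsymbol{\theta}}\) returned by the device (the function name is an assumption).
```python
import numpy as np

def softmax_pqc_policy(expectations, weights, beta=1.0):
    # logits = beta * <O_a>_{s,theta} * w_a, followed by a softmax over actions
    logits = beta * np.asarray(expectations) * np.asarray(weights)
    logits -= logits.max()                  # subtract the max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()
```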
### Quantum architecture search
It is effective to adopt pre-selected alternating layers to construct the quantum policy architecture, as shown in [34], but a recent work [36] proposes an evolutionary quantum architecture search method that builds a more flexible PQC policy from the four fundamental architecture components in Fig. 2, achieving higher RL performance with lower depths.
To be more precise, the four elementary PQC structures, namely \(U_{var}(\varphi)\), \(U_{enc}(s,\lambda)\), \(U_{\text{ent}}\), and measurement, are represented as genes and encoded into an integer space with the values 1, 2, 3, and 0, respectively. Afterwards, the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) [44] is implemented to optimize the quantum architecture search process, which iteratively generates a population of candidates through genetic operations such as crossover and mutation on the given parents, and selects parents for the next generation based on a fitness evaluation that uses the average collected RL reward as the objective. More details can be found in [36].
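For illustration, the sketch below decodes such an integer genome into a circuit by reusing the layer builders sketched earlier; the symbol bookkeeping is deliberately simplified and the function name is an assumption.
```python
import cirq
import sympy

def circuit_from_genome(genome, qubits, state_symbols):
    """Decode a genome such as [3, 1, 1, 2, ..., 0] into a PQC (gene 0 terminates with measurement)."""
    circuit = cirq.Circuit()
    n, var_count, enc_count = len(qubits), 0, 0
    for gene in genome:
        if gene == 1:                                          # U_var(phi)
            flat = sympy.symbols(f"phi{var_count}_0:{3 * n}")
            params = [flat[3 * k: 3 * k + 3] for k in range(n)]
            circuit.append(variational_layer(qubits, params))
            var_count += 1
        elif gene == 2:                                        # U_enc(s, lambda)
            lambdas = sympy.symbols(f"lam{enc_count}_0:{n}")
            circuit.append(encoding_layer(qubits, state_symbols, lambdas))
            enc_count += 1
        elif gene == 3:                                        # U_ent
            circuit.append(entangling_layer(qubits))
        else:                                                  # 0: final measurement
            circuit.append(cirq.measure(*qubits, key="m"))
            break
    return circuit
```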
## III Quantum reinforcement learning with Quafu
With the assistance of the SOFTMAX-PQC policy and evolutionary quantum architecture search method outlined in the preceding section, we are able to devise a PQC that is both efficient in hardware usage and effective in implementing quantum reinforcement learning tasks on the Quafu cloud platform.
### Architecture selection
To select a hardware-efficient PQC for executing RL agents on Quafu, we utilize the method of evolutionary quantum architecture search (EQAS-PQC) [36], but take a further step in refining our objective. We design a multi-objective function that not only takes into account RL training performance, but also considers the number of entanglement components. The optimal architecture discovered during the search process for the CartPole environment can be represented using genes as follows: 3 - 1 - 1 - 2 - 1 - 1 - 3 - 1 - 3 - 2 - 1 - 2 - 0. To evaluate the ability and efficiency of our selected architecture, we
Figure 2: **PQC architecture components.** The example 4-qubit circuit consists of four basic PQC architecture components including variational PQC denoted as \(U_{var}(\varphi)\) with trainable parameters \(\varphi\), data-encoding PQC termed as \(U_{enc}(s,\lambda)\) with input state \(s\) and scaling parameters \(\lambda\), entanglement identified as \(U_{\text{ent}}\) and the final measurement part.
compare it with the best architecture proposed in [36] and the alternating scheme presented in [34].
As illustrated in Fig. 3, the left subplot demonstrates that our PQC architecture achieves the best performance and is the first to reach the goal, whereas both the 5-alternating-layer method and EQAS-PQC fail to solve the environment within 500 training episodes. Moreover, the Quafu cloud platform provides a compilation scheme for quantum circuits prior to their execution on the quantum computer. Therefore, certain hardware-sensitive characteristics of the compiled circuits (such as the total number of gates, the number of two-qubit CNOT gates and the depth of the circuit) are crucial for achieving optimal performance on Quafu. As shown in the right subplot of Fig. 3, our proposed architecture exhibits the fewest gates and CNOTs, as well as the shortest depth of the compiled circuit. In contrast, the other two methods have significantly more gates and CNOTs, and deeper circuits, which clearly indicates their lower hardware efficiency. One implementation detail to note for the experiments shown in Fig. 3 is that we have removed the non-nearest-neighbor CNOTs from the entanglement part. This is because including these CNOTs would drastically increase the number of gates and the circuit depth, without adding significant benefits to the overall success of the task.
In conclusion, our proposed multi-objective evolutionary quantum architecture search method enables us to identify a suitable PQC architecture that exhibits proficient RL performance and is also hardware-adaptive to the Quafu platform, as described in detail above.
### Learning algorithm
After confirming the architecture used in the SOFTMAX-PQC policy, it is important to specify an algorithm that connects classical optimizers and the Quafu platform. Both [34, 36] implement the classical REINFORCE algorithm [45] with a quantum simulator. However, with the involvement of real quantum devices, slight modifications are needed, as shown in Algorithm 1.
```
Input: initialized SOFTMAX-PQC policy \(\pi_{\boldsymbol{\theta}}(a\mid s)\), learning rate \(\eta\), number of trajectories \(N\), maximum time \(T\)
while True do
  for \(i=1\) to \(N\) do
    Initialize \(s_{0}\)
    for \(t=0\) to \(T-1\) do
      Execute \(\text{PQC}_{\theta}\) on Quafu and get policy \(\pi_{\theta}\left(a_{t}\mid s_{t}\right)\)
      Take action \(a_{t}\sim\pi_{\theta}\left(a_{t}\mid s_{t}\right)\)
      Move to next state \(s_{t+1}\) and store reward \(r_{t+1}\)
    end for
    Execute \(\text{PQC}_{\theta}\) on simulator and get policy \(\widehat{\pi}_{\theta}\left(a_{t}\mid s_{t}\right)\)
    \(G^{(i)}\leftarrow\sum_{t}\gamma^{t}r_{t+1}\)
    Compute \(\epsilon=\pi_{\theta}\left(a_{t}\mid s_{t}\right)-\widehat{\pi}_{\theta}\left(a_{t}\mid s_{t}\right)\)
    \(z^{(i)}\leftarrow\sum_{t}\nabla_{\boldsymbol{\theta}}\log\left(\widehat{\pi}_{\theta}\left(a_{t}\mid s_{t}\right)+\epsilon\right)\)
  end for
  \(\Delta\theta\leftarrow(1/N)\sum_{i}G^{(i)}z^{(i)}\)
  \(\boldsymbol{\theta}\leftarrow\boldsymbol{\theta}+\eta\Delta\boldsymbol{\theta}\)
end while
```
**Algorithm 1** REINFORCE with Quafu
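As a structural sketch of the update step in Algorithm 1, the following Python fragment mirrors the gradient estimator; the helper `policy_on_simulator` is a hypothetical placeholder returning the simulated probabilities \(\widehat{\pi}_{\theta}(\cdot\mid s)\) together with the gradient of their logarithm, while the hardware probabilities \(\pi_{\theta}(a_t\mid s_t)\) are assumed to have been recorded during the rollout on Quafu.

```python
import numpy as np

# Minimal sketch of the parameter update in Algorithm 1 (not the authors' code).
# Each trajectory stores (states, actions, rewards, probs_hw), where probs_hw[t]
# is the hardware policy value pi_theta(a_t|s_t) recorded during the rollout.
def reinforce_update(theta, trajectories, policy_on_simulator, eta=0.01, gamma=1.0):
    delta = np.zeros_like(theta)
    for states, actions, rewards, probs_hw in trajectories:
        G = sum(gamma**t * r for t, r in enumerate(rewards))        # G^(i)
        z = np.zeros_like(theta)
        for s, a, p_hw in zip(states, actions, probs_hw):
            pi_hat, dlog_pi_hat = policy_on_simulator(theta, s)      # simulator pass
            eps = p_hw - pi_hat[a]                                   # hardware/simulator mismatch
            # grad_theta log(pi_hat + eps) = grad_theta pi_hat / (pi_hat + eps)
            z += dlog_pi_hat[a] * pi_hat[a] / (pi_hat[a] + eps)
        delta += G * z
    return theta + eta * delta / len(trajectories)                   # theta <- theta + eta*Delta
```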
Figure 3: **Architecture comparison. Our proposed architecture, 5-alternating-layer method, and EQAS-PQC are compared based on their training performance and compiled circuit properties. The left subplot displays the performance within 500 episodes and the right subplot shows the number of gates, CNOTs, and circuit depth of the compiled circuits.**
## IV Numerical results
In this section, we investigate a classical benchmark environment called CartPole from the OpenAI Gym [33] and apply our proposed PQC policy and algorithm to train the agent on three different quantum devices from Quafu [7]: ScQ-P10, ScQ-P18, and ScQ-P136, which have 10-qubit, 18-qubit, and 136-qubit capabilities, respectively.
### Experimental setups
We determine the hyperparameters, such as learning rates and observables, based on the standard practices outlined in [34], which are also summarized in Table 1. In addition, the classical optimization process is simulated with TensorFlow Quantum [46].
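To make the policy evaluation concrete, a minimal sketch of a SOFTMAX-PQC decision rule for the two-action CartPole setting is given below; pairing the two action scores as \(\pm\langle Z_{0}Z_{1}Z_{2}Z_{3}\rangle\) is an illustrative assumption here rather than the exact construction of [34].

```python
import numpy as np

# Sketch of a SOFTMAX-PQC policy for a two-action environment, assuming the
# action scores are +/- the expectation value of the observable Z0 Z1 Z2 Z3
# listed in Table 1 (this pairing is an illustrative assumption).
def softmax_pqc_policy(expectation_z0z1z2z3, beta=1.0):
    """Return action probabilities pi(a|s) from a measured expectation value."""
    scores = beta * np.array([expectation_z0z1z2z3, -expectation_z0z1z2z3])
    scores -= scores.max()                 # numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()

# Example: an expectation value of 0.3 mildly favours action 0.
print(softmax_pqc_policy(0.3))
```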
### Results
As presented in fig. 4, we conduct a series of experiments on the Quafu cloud platform. In the left column of fig. 4, an RL agent has been trained on the ScQ-P10 device and reaches a maximum reward of 132 within 100 training episodes. Even in the noiseless settings demonstrated in fig. 3, it is not easy to achieve such objectives with only 100 training episodes. Moreover, observing the rewards averaged over a time window of 5 episodes, the agent is able to reach a stable reward of over 50, which lasts for a certain period of time after 80 training episodes. Additionally, the mean reward over 100 training episodes is approximately 30, which is higher than the reward achieved by random choice (20). The middle and right subplots in fig. 4 depict the training process of RL agents on ScQ-P18 and ScQ-P136, which accomplish maximum rewards of 86 and 94, respectively.
We also evaluate the pre-trained model on ScQ-P10 within 100 episodes as shown in fig. 5. The results indicate that the RL agent trained on Quafu is capable of attaining a goal of over 100 rewards in the inference stage, with an overall mean reward of approximately 50. These
\begin{table}
\begin{tabular}{l c c c c c c c} \hline Environment & Qubits & Actions & Reward & Learning rates & \(\gamma\) & \(\beta\) & Observables \\ \hline CartPole-v1 & 4 & 2 & +1 & \(0.01,0.1,0.1\) & 1.0 & 1.0 & \([Z_{0}Z_{1}Z_{2}Z_{3}]\) \\ \hline \end{tabular}
\end{table}
Table 1: Hyperparameters of the RL environment. Learning rates correspond to the parameters \(\varphi,\omega,\lambda\).
Figure 4: RL training performance with Quafu within 100 episodes. RL agents have been trained on ScQ-P10, ScQ-P18 and ScQ-P136. The blue lines indicate episode rewards within 100 training episodes, the orange dashed lines take a moving average over episode rewards with a window of 5, and the dotted green lines show the mean rewards over all training process.
Figure 5: Inference on ScQ-P10 within 100 episodes. The interpretation of the lines is similar to that in Figure 4.
signs provide evidence of the agent's successful training on noisy devices, albeit with some relaxation of the original goals. However, we only conduct training and inference for 100 episodes in this study, as continuing the training for a longer duration would have required an unaffordable amount of runtime.
## Conclusion
In this study, we implement quantum reinforcement learning on ScQ-P10, ScQ-P18, and ScQ-P136 from the Quafu cloud platform. We diligently choose our hardware-efficient PQCs using a multi-objective evolutionary algorithm and refine the REINFORCE learning algorithm to ensure the feasibility of training RL agents. Experimental results show that RL agents can be successfully trained and evaluated with some relaxation of the original goals within 100 episodes.
Moving forward, there is potential to train agents on quantum devices with higher objectives, longer episode lengths, and a wider range of test environments. Additionally, the PQC search process could be conducted on real devices under finer restricted conditions to find a more suitable architecture for a specific device. Finally, other algorithms such as PPO could be introduced to stabilize the training process on quantum computers.
**Code available:** The corresponding software can be found in: [https://github.com/enchanted123/quantum-RL-with-quat](https://github.com/enchanted123/quantum-RL-with-quat).
###### Acknowledgements.
This work is supported by the Beijing Academy of Quantum Information Sciences.
|
2303.15227 | Tri-Resonant Leptogenesis | We present a class of leptogenesis models where the light neutrinos acquire
their observed mass through a symmetry-motivated construction. We consider an
extension of the Standard Model, which includes three singlet neutrinos which
have mass splittings comparable to their decay widths. We show that this
tri-resonant structure leads to an appreciable increase in the observed CP
asymmetry over that found previously in typical bi-resonant models. To analyse
such tri-resonant scenarios, we solve a set of coupled Boltzmann equations,
crucially preserving the variations in the relativistic degrees of freedom. We
highlight the fact that small variations at high temperatures can have major
implications for the evolution of the baryon asymmetry when the singlet
neutrino mass scale is below $100$ GeV. We then illustrate how this variation
can significantly affect the ability to find successful leptogenesis at these
low masses. Finally, the parameter space for viable leptogenesis is delineated,
and comparisons are made with current and future experiments. | P. Candia da Silva, D. Karamitros, T. McKelvey, A. Pilaftsis | 2023-03-27T14:06:20Z | http://arxiv.org/abs/2303.15227v1 | # Tri-Resonant Leptogenesis
###### Abstract:
We present a class of leptogenesis models where the light neutrinos acquire their observed mass through a symmetry-motivated construction. We consider an extension of the Standard Model, which includes three singlet neutrinos which have mass splittings comparable to their decay widths. We show that this tri-resonant structure leads to an appreciable increase in the observed CP asymmetry over that found previously in typical bi-resonant models. To analyse such tri-resonant scenarios, we solve a set of coupled Boltzmann equations, crucially preserving the variations in the relativistic degrees of freedom. We highlight the fact that small variations at high temperatures can have major implications for the evolution of the baryon asymmetry when the singlet neutrino mass scale is below \(100\,\,\mathrm{GeV}\). We then illustrate how this variation can significantly affect the ability to find successful leptogenesis at these low masses. Finally, the parameter space for viable leptogenesis is delineated, and comparisons are made with current and future experiments.
## 1 Introduction
Observations made by the Wilkinson Microwave Anisotropy Probe (WMAP) and the Planck observatory indicate that the Baryon Asymmetry of the Universe (BAU) amounts to [1, 2]
\[\eta_{B}^{\rm CMB}=\left(6.104\pm 0.058\right)\times 10^{-10}. \tag{1}\]
Hence, explaining the observed BAU has been one of the central themes of Particle Cosmology for decades. The existence of this non-zero BAU is one of the greatest pieces of evidence for physics beyond the Standard Model (SM). Moreover, the observation of neutrino oscillation phenomena [3, 4, 5] indicates the existence of non-zero neutrino masses in contradiction with the SM prediction. A minimal resolution to both of these problems is to introduce additional neutrinos, which are singlets of the SM gauge group: \(\mathrm{SU}(3)_{c}\times\mathrm{SU}(2)_{L}\times\mathrm{U}(1)_{Y}\). The inclusion of a lepton number violating Majorana mass term permits these additional neutrinos to have large masses whilst suppressing the masses of the SM neutrinos. This mechanism is aptly referred to as the seesaw mechanism [6, 7, 8, 9]. The violation of lepton number by two units satisfies one of the famous Sakharov conditions [10] for the generation of appreciable particle asymmetries. Further to this, the expansion of the FRW Universe provides a cosmic arrow of time as well as satisfying the out-of-equilibrium condition, again provided by Sakharov. In combination with the CP violation present in the Yukawa sector, these properties allow for the generation of large lepton asymmetries, which may then be reprocessed into a baryon asymmetry through equilibrium (\(B+L\))-violating sphaleron transitions. This mechanism of generating appreciable BAU is widely known as _leptogenesis_[11, 12].
In [13], we consider a class of leptogenesis models which can provide naturally light SM neutrino masses as well as generate appreciable levels of BAU to match the observed CMB data. We assume these models to contain three singlet neutrinos, which have mass splittings comparable to their decay widths, permitting maximal mixing between the heavy eigenstates. This framework is commonly referred to as Resonant Leptogenesis (RL). To this end, we compute the CP asymmetries associated with this tri-resonant model and follow this up with a set of complete Boltzmann equations to calculate the generated BAU. These Boltzmann equations describe the evolution of the neutrino and lepton number densities prior to the sphaleron freeze-out when the temperature of the universe falls below \(T_{\rm sph}\approx 132\) GeV [14]. A particular highlight of our study is the preservation of the variations in the relativistic degrees of freedom. In much of the literature, the degrees of freedom are taken to be constant due to the high-temperature scales. However, we show that the small variations, which pervade even above temperatures of \(T=100\) GeV, can have a significant impact on the generated BAU.
Finally, we present results for the allowed regions of parameter space which can achieve successful leptogenesis and compare these results with current and future experiments. In particular, we make comparisons with coherent flavour-changing processes within nuclei from experiments such as MEG [15, 16] and PRISM [17], as well as collider experiments such as the LHC and FCC.
## 2 Flavour Symmetric Model
We utilise a minimal extension of the SM, with the inclusion of three right-handed neutrinos, which are singlets of the SM gauge group: \(\mathrm{SU}(3)_{c}\times\mathrm{SU}(2)_{L}\times\mathrm{U}(1)_{Y}\), and have lepton number
\(\mathrm{L}=1\). Given this additional particle content and quantum number assignment, the SM is extended through the additional Lagrangian terms
\[\mathcal{L}_{\nu_{R}}=i\overline{\nu}_{R}\slashed{\partial}\nu_{R}-\left( \overline{L}\,\mathbf{h}^{\nu}\bar{\Phi}\,\nu_{R}+\frac{1}{2}\overline{\nu}_{R }^{C}\,\mathbf{m}_{M}\,\nu_{R}+\mathrm{H.c.}\,\right)\,. \tag{2}\]
Here, \(L_{i}=\left(\nu_{L,i},e_{L,i}\right)^{\mathsf{T}}\), with \(i=1,2,3\), are left-handed lepton doublets; \(\nu_{R,\alpha}\), with \(\alpha=1,2,3\), are right-handed neutrino singlet fields; and \(\bar{\Phi}\) is the isospin conjugate Higgs doublet. The matrices \(\mathbf{h}^{\nu}_{i\alpha}\) and \((\mathbf{m}_{M})_{\alpha\beta}\) are the neutrino Yukawa couplings and Majorana mass matrix, respectively. It is worth pointing out that the inclusion of the Majorana mass matrix explicitly breaks lepton number conservation by two units, \(\Delta L=2\), satisfying one of the three Sakharov conditions for the generation of appreciable lepton asymmetry.
Without loss of generality, we may select a basis for the singlet neutrino sector such that the Majorana mass term is diagonalised, _i.e_\(\mathbf{m}_{M}=\mathrm{diag}(m_{N_{1}},m_{N_{2}},m_{N_{3}})\). In this basis, the Lagrangian in the unbroken phase takes the form
\[\mathcal{L}_{\nu_{R}}=i\overline{N}\slashed{\partial}N-\left(\overline{L}\,\mathbf{h} ^{\nu}\bar{\Phi}\,P_{R}N+\mathrm{H.c.}\right)-\frac{1}{2}\overline{N}\, \mathbf{m}_{M}N. \tag{3}\]
In this expression, \(N_{\alpha}=\nu_{R,\alpha}+\nu_{R,\alpha}^{C}\), and \(P_{R/L}=\frac{1}{2}\left(\mathds{1}_{4}\pm\gamma^{5}\right)\) are right/left-chiral projection operators.
In the broken phase, the addition of a Dirac mass term from the Yukawa sector results in the mixing between left- and right-chiral neutrinos, with the mass basis in the broken phase a particular combination of left- and right-chiral neutrinos
\[P_{R}\begin{pmatrix}\nu\\ N\end{pmatrix}=\begin{pmatrix}U_{\nu\nu_{L}^{C}}&U_{\nu\nu_{R}}\\ U_{N\nu_{L}^{C}}&U_{N\nu_{R}}\end{pmatrix}\begin{pmatrix}\nu_{L}^{C}\\ \nu_{R}\end{pmatrix}\,. \tag{4}\]
In the above, we have defined \(\nu_{i}\) as light neutrino mass eigenstates and \(N_{i}\) as heavy neutrino mass eigenstates. Furthermore, the unitary matrix, \(U\), diagonalises the full neutrino mass matrix. To leading order in the quantity \(\xi_{i\alpha}=(\mathbf{m}_{D}\mathbf{m}_{M}^{-1})_{i\alpha}\)[18], the light neutrino mass matrix may be written as
\[\mathbf{m}^{\nu}=-\mathbf{m}_{D}\mathbf{m}_{M}^{-1}\mathbf{m}_{D}^{\mathsf{T }}\,\,, \tag{5}\]
with \(\mathbf{m}_{D}=\mathbf{h}^{\nu}v/\sqrt{2}\) the Dirac mass matrix, with Higgs VEV, \(v\simeq 246\) GeV [7]. By virtue of this relation, it is clear that to satisfy observed neutrino data, the Majorana mass matrix would have to be GUT scale if the Dirac matrix is of electroweak scale (\(|\mathbf{m}_{D}|\sim v\)), and of general structure. As a result, the impact of singlet neutrinos on experimental signatures would be minimal as the charged current interactions are suppressed through the mixing parameter \(B_{i\alpha}=\xi_{i\alpha}\)[18]
\[\mathcal{L}_{\mathrm{int}}^{W}=-\frac{g_{w}}{\sqrt{2}}W_{\mu}^{- }\overline{e}_{iL}B_{i\alpha}\gamma^{\mu}P_{L}N_{\alpha}+\mathrm{H.c.}\,. \tag{6}\]
Consequently, there is a motivation to identify models which allow for low-scale heavy neutrino masses whilst remaining in alignment with the observed neutrino data.
One approach which may be taken to address this problem is to assume the existence of a symmetry on the flavour structure of the Yukawa couplings, \(\mathbf{h}_{0}^{\nu}\), which would render the light neutrino eigenstates massless
\[-\mathbf{m}_{D}\mathbf{m}_{M}^{-1}\mathbf{m}_{D}^{\mathsf{T}}=- \frac{\nu^{2}}{2}\mathbf{h}_{0}^{\nu}\mathbf{m}_{M}^{-1}(\mathbf{h}_{0}^{\nu}) ^{\mathsf{T}}=\mathbf{0}_{3}. \tag{7}\]
From this, small neutrino masses may be generated through perturbations about the symmetric Yukawa couplings
\[(\mathbf{h}_{0}^{\nu}+\delta\mathbf{h}^{\nu})\,\mathbf{m}_{M}^{ -1}\,(\mathbf{h}_{0}^{\nu}+\delta\mathbf{h}^{\nu})^{\mathsf{T}} =\,\frac{2}{\nu^{2}}\,\mathbf{m}^{\nu}. \tag{8}\]
In the case of a near degenerate heavy neutrino mass spectrum, the condition on the symmetric Yukawa couplings given in (7) may be approximately satisfied by
\[\mathbf{h}_{0}^{\nu}(\mathbf{h}_{0}^{\nu})^{\mathsf{T}}=\mathbf{ 0}_{3}. \tag{9}\]
This motivates a nilpotent structure of the Yukawa matrix. In particular, we have identified the structure
\[\mathbf{h}_{0}^{\nu}=\begin{pmatrix}a&a\,\omega&a\,\omega^{2}\\ b&b\,\omega&b\,\omega^{2}\\ c&c\,\omega&c\,\omega^{2}\end{pmatrix}\, \tag{10}\]
with \(a\), \(b\), \(c\in\mathbb{C}\), and \(\omega=\exp\left(\frac{2\pi i}{6}\right)\) the generator of the \(\mathbb{Z}_{6}\) group. This structure is not unique in satisfying the constraint given in (9). Other similar structures, such as \(\mathbb{Z}_{3}\), with generator \(\omega^{\prime}=\exp\left(\frac{2\pi i}{3}\right)\), would also produce a vanishing light neutrino mass spectrum at leading order. Most interestingly, this symmetry-motivated structure offers large CP-violating phases which contribute significantly to the generation of appreciable BAU. In contrast, this possibility is not easily achievable in bi-resonant models, where the CP-odd phases are strongly correlated with the light-neutrino masses.
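A quick numerical check (illustrative, with arbitrary couplings) confirms that the structure (10) satisfies (9) exactly, since every entry of \(\mathbf{h}_{0}^{\nu}(\mathbf{h}_{0}^{\nu})^{\mathsf{T}}\) carries the factor \(1+\omega^{2}+\omega^{4}=0\):

```python
import numpy as np

# Numerical check of eq. (9) for the flavour-symmetric structure in eq. (10):
# with omega = exp(2*pi*i/6), each entry of h0 h0^T picks up 1 + omega^2 + omega^4 = 0,
# so the leading-order light-neutrino mass matrix of eq. (7) vanishes identically.
omega = np.exp(2j * np.pi / 6)
a, b, c = 0.3 + 0.1j, -0.7j, 1.2                 # arbitrary complex couplings
h0 = np.outer([a, b, c], [1, omega, omega**2])   # rows (a, a*w, a*w^2), etc.
print(np.max(np.abs(h0 @ h0.T)))                 # ~1e-16, i.e. zero to machine precision
```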
As a further insight, since this symmetry exists within the flavour structure, any additional contributions to the light neutrino mass matrix with an identical flavour structure will vanish. In particular, the first-order loop correction to the light neutrino mass matrix [18] may be incorporated into the zero mass condition of the symmetric Yukawa matrix
\[\frac{\nu^{2}}{2}\mathbf{h}_{0}^{\nu}\left[\mathbf{m}_{M}^{-1}- \frac{\alpha_{w}}{16\pi M_{W}^{2}}\mathbf{m}_{M}^{\dagger}\,f(\mathbf{m}_{M} \mathbf{m}_{M}^{\dagger})\right]\mathbf{h}_{0}^{\nu\mathsf{T}}=\,\mathbf{0}_{ 3}\, \tag{11}\]
where
\[f(\mathbf{m}_{M}\mathbf{m}_{M}^{\dagger})=\frac{M_{H}^{2}}{ \mathbf{m}_{M}\mathbf{m}_{M}^{\dagger}-M_{H}^{2}\,\mathbb{1}_{3}}\ln\left( \frac{\mathbf{m}_{M}\mathbf{m}_{M}^{\dagger}}{M_{H}^{2}}\right)+\frac{3M_{Z}^ {2}}{\mathbf{m}_{M}\mathbf{m}_{M}^{\dagger}-M_{Z}^{2}\,\mathbb{1}_{3}}\ln \left(\frac{\mathbf{m}_{M}\mathbf{m}_{M}^{\dagger}}{M_{Z}^{2}}\right). \tag{12}\]
In the above, \(\alpha_{w}\equiv g_{w}^{2}/(4\pi)^{2}\) is the electroweak gauge-coupling parameter, and \(M_{W}\), \(M_{Z}\), and \(M_{H}\) are the masses of the \(W\), \(Z\), and Higgs bosons, respectively.
## 3 Leptonic Asymmetries
In models of thermal leptogenesis, CP-violating effects enter through the difference in the decay rates of heavy neutrinos into leptons and Higgs bosons (\(N\to L\Phi\)), and the conjugate process (\(N\to L^{c}\Phi^{\dagger}\)) [11, 12]. This difference appears at the loop level, with the wavefunction contribution particularly dominant in models of RL, where mass splittings are of a similar size to the decay widths of the heavy neutrinos (for a review, see [19]). To aid the discussion of analytic results regarding the leptonic CP asymmetries, we introduce the coefficients [20, 21, 22]
\[A_{\alpha\beta} =\sum_{l=1}^{3}\frac{\mathbf{h}_{\ell\alpha}^{\nu}\mathbf{h}_{l \beta}^{\nu*}}{16\pi}=\frac{\left(\mathbf{h}^{\nu\dagger}\mathbf{h}^{\nu} \right)_{\alpha\beta}^{*}}{16\pi}, \tag{13}\] \[V_{l\alpha} =\sum_{k=1}^{3}\sum_{\gamma\neq\alpha}\frac{\mathbf{h}_{\ell \alpha}^{\nu*}\mathbf{h}_{k\gamma}^{\nu}\mathbf{h}_{l\gamma}^{\nu}}{16\pi}f \left(\frac{m_{N_{\gamma}}^{2}}{m_{N_{\alpha}}}\right), \tag{14}\]
which correspond to absorptive transition rates for the wavefunction and vertex, respectively. In (14), \(f(x)=\sqrt{x}\left[1-(1+x)\ln\left(\frac{1+x}{x}\right)\right]\) is the Fukugita-Yanagida 1-loop function [11, 12].
Completing a full re-summation of the loop corrections, including all three Majorana neutrinos, generates an effective \(NL\dot{\Phi}\) coupling [21, 22, 23]
\[(\bar{\mathbf{h}}_{+}^{\nu})_{l\alpha} =\mathbf{h}_{l\alpha}^{\nu}+iV_{l\alpha}-i\sum_{\beta,\gamma=1}^{ 3}\left|\epsilon_{\alpha\beta\gamma}\right|\mathbf{h}_{l\beta}^{\nu}\] \[\times\frac{m_{N_{\alpha}}\left(M_{\alpha\alpha\beta}+M_{\beta \beta\alpha}\right)-iR_{\alpha\gamma}\left[M_{\alpha\gamma\beta}\left(M_{ \alpha\alpha\gamma}+M_{\gamma\gamma\alpha}\right)+M_{\beta\beta\gamma}\left(M_ {\alpha\gamma\alpha}+M_{\gamma\alpha\gamma}\right)\right]}{m_{N_{\alpha}}^{2}- m_{N_{\beta}}^{2}+2im_{N_{\alpha}}^{2}A_{\beta\beta}+2i\Im{n_{\alpha}} \left(m_{N_{\alpha}}^{2}|A_{\beta\gamma}|^{2}+m_{N_{\beta}}m_{N_{\gamma}} \Re eA_{\beta\gamma}^{2}\right)}\, \tag{15}\]
where \(\epsilon_{\alpha\beta\gamma}\) is the anti-symmetric Levi-Civita symbol, \(M_{\alpha\beta\gamma}\equiv m_{N_{\alpha}}A_{\beta\gamma}\) and
\[R_{\alpha\beta}\equiv\frac{m_{N_{\alpha}}^{2}}{m_{N_{\alpha}}^{2}-m_{N_{\beta }}^{2}+2im_{N_{\alpha}}^{2}A_{\beta\beta}}. \tag{16}\]
The conjugate \(NL^{c}\dot{\Phi}^{\dagger}\) couplings, denoted by \((\bar{\mathbf{h}}^{\nu})_{l\alpha}\), are found through the replacement of \(\mathbf{h}_{l\alpha}^{\nu}\) by \((\mathbf{h}^{\nu})_{l\alpha}^{*}\) in (15). These effective couplings capture both _bi-resonant_ and _tri-resonant_ effects, corresponding to maximal CP asymmetries through the mixing of two and three singlet neutrinos, respectively. In particular, one may recover the bi-resonant expressions by simply taking \(R_{\alpha\gamma}\) to zero.
Utilising these re-summed effective couplings, we may calculate the partial decay widths of the heavy neutrinos as
\[\Gamma(N_{\alpha}\to L_{l}\Phi)=\frac{m_{N_{\alpha}}}{8\pi}\left|(\bar{ \mathbf{h}}_{+}^{\nu})_{l\alpha}\right|^{2},\qquad\Gamma(N_{\alpha}\to L_{l}^ {C}\Phi^{\dagger})=\frac{m_{N_{\alpha}}}{8\pi}\left|(\bar{\mathbf{h}}_{-}^{ \nu})_{l\alpha}\right|^{2}. \tag{17}\]
From this, we identify the size of the CP asymmetries within the model using the dimensionless quantity
\[\delta_{\alpha l}\equiv\frac{\Gamma(N_{\alpha}\to L_{l}\Phi)-\Gamma(N_{ \alpha}\to L_{l}^{C}\Phi^{\dagger})}{\sum_{k=e,\mu,\tau}\Gamma(N_{\alpha}\to L _{k}\Phi)+\Gamma(N_{\alpha}\to L_{k}^{C}\Phi^{\dagger})}=\frac{\left|(\bar{ \mathbf{h}}_{+}^{\nu})_{l\alpha}\right|^{2}-\left|(\bar{\mathbf{h}}_{-}^{\nu})_ {l\alpha}\right|^{2}}{(\bar{\mathbf{h}}_{+}^{\nu\dagger}\bar{\mathbf{h}}_{+}^{ \nu})_{\alpha\alpha}+(\bar{\mathbf{h}}_{-}^{\nu\dagger}\bar{\mathbf{h}}_{-}^{ \nu})_{\alpha\alpha}}. \tag{18}\]
Furthermore, we may define the total CP asymmetry associated with each neutrino species by summing over the lepton families
\[\delta_{\alpha}=\sum_{l}\delta_{\alpha l}=\frac{(\bar{\mathbf{h}}_{+}^{\nu\dagger }\bar{\mathbf{h}}_{+}^{\nu})_{\alpha\alpha}-(\bar{\mathbf{h}}_{-}^{\nu\dagger }\bar{\mathbf{h}}_{-}^{\nu})_{\alpha\alpha}}{(\bar{\mathbf{h}}_{+}^{\nu\dagger }\bar{\mathbf{h}}_{+}^{\nu})_{\alpha\alpha}+(\bar{\mathbf{h}}_{-}^{\nu\dagger }\bar{\mathbf{h}}_{-}^{\nu})_{\alpha\alpha}}. \tag{19}\]
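Given the resummed couplings \(\bar{\mathbf{h}}_{\pm}^{\nu}\), the asymmetries (18)-(19) reduce to simple matrix algebra, sketched below; the couplings themselves must be built from (15) and are taken as inputs here.

```python
import numpy as np

# Sketch of the total CP asymmetry of eq. (19); h_plus and h_minus are the 3x3
# complex resummed coupling matrices (inputs, constructed from eq. (15)).
def cp_asymmetry(h_plus, h_minus, alpha):
    Hp = (h_plus.conj().T @ h_plus)[alpha, alpha].real
    Hm = (h_minus.conj().T @ h_minus)[alpha, alpha].real
    return (Hp - Hm) / (Hp + Hm)
```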
At this point, it is important to mention that the existence of non-zero CP asymmetries is only possible in the event that the CP-odd invariant
\[\Delta_{\text{CP}} =\Im m\left\{\text{Tr}\left[(\mathbf{h}^{\nu})^{\dagger}\mathbf{ h}^{\nu}\mathbf{m}_{M}^{\dagger}\mathbf{m}_{M}\mathbf{m}_{M}^{\dagger}( \mathbf{h}^{\nu})^{\mathsf{T}}(\mathbf{h}^{\nu})^{*}\mathbf{m}_{M}\right]\right\} \tag{20}\] \[=\sum_{\alpha<\beta}m_{N_{\alpha}}m_{N_{\beta}}\left(m_{N_{\alpha }}^{2}-m_{N_{\beta}}^{2}\right)\,\Im m\left[\left(\mathbf{h}^{\nu\dagger} \mathbf{h}^{\nu}\right)_{\beta\alpha}^{2}\right] \tag{21}\]
does not vanish [24, 25, 26, 21]. When all neutrinos are exactly degenerate, this quantity is trivially zero, and hence CP asymmetries are not possible. However, if mass splittings are permitted, we see that in the \(\mathbb{Z}_{6}\) model presented earlier, this CP-odd quantity is proportional to the \(\mathbb{Z}_{6}\) element \(\omega^{2}\)
\[\Delta_{\text{CP}}\approx\left(|a|^{2}+|b|^{2}+|c|^{2}\right)^{2}\,\sum_{ \alpha<\beta}m_{N_{\alpha}}m_{N_{\beta}}\left(m_{N_{\alpha}}^{2}-m_{N_{\beta} }^{2}\right)\,\Im m\Big{(}\omega^{2(\alpha-\beta)}\Big{)}. \tag{22}\]
Accordingly, the \(\mathbb{Z}_{6}\) structure we have proposed offers both naturally light SM neutrino masses and significant levels of CP asymmetry, due to the large CP-violating phases present.
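The flavour invariant (21) is likewise straightforward to evaluate numerically; the following sketch takes the heavy masses and the Yukawa matrix as inputs.

```python
import numpy as np

# Sketch of the CP-odd invariant of eq. (21): it vanishes for an exactly degenerate
# spectrum and, for the Z_6 structure, is proportional to Im(omega^{2(alpha-beta)})
# as in eq. (22). Inputs: heavy masses m (length-3 array) and Yukawa matrix h.
def delta_CP(m, h):
    hh = h.conj().T @ h
    return sum(m[a] * m[b] * (m[a]**2 - m[b]**2) * np.imag(hh[b, a]**2)
               for a in range(3) for b in range(a + 1, 3))
```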
In the literature, there are several examples of the bi-resonant approximation being used in RL scenarios to enhance the contribution to the CP asymmetry through the mixing of two singlet neutrinos whilst permitting the third neutrino to decouple, either through suppressed couplings or a higher mass scale. However, in a model where all three neutrinos satisfy the resonance condition
\[|m_{N_{\alpha}}-m_{N_{\beta}}|\sim\frac{\Gamma_{N_{\alpha,\beta}}}{2}, \tag{23}\]
Figure 1: _Left panel:_ CP asymmetries generated by the decays of \(N_{1}\), \(N_{2}\) and \(N_{3}\), together with the total CP asymmetry \(\delta_{T}=\sum_{\alpha}\delta_{\alpha}\), as a function of the mass of \(N_{3}\). _Centre panel:_ Comparison of the CP asymmetry in the decay of \(N_{2}\) vs. \(m_{N_{3}}\) as calculated from two neutrino mixing (\(\delta_{2}^{(2)}\)) and three-neutrino mixing (\(\delta_{2}^{(3)}\)). _Right panel:_ CP asymmetry in the decay of \(N_{3}\) vs. \(m_{N_{3}}\) calculated from two-neutrino mixing (\(\delta_{3}^{(2)}\)) and three-neutrino mixing (\(\delta_{3}^{(3)}\)). We indicate the values of \(m_{N_{1}}\), \(m_{N_{2}}\) and the tri-resonant value of \(m_{N_{3}}\) with grey dashed lines.
the contributions to CP asymmetries may be enhanced through constructive interference between all three neutrinos [24]. Figure 1 shows the variation in the generated CP asymmetry through the decay of singlet neutrinos, as well as the total CP asymmetry \(\delta_{T}=\sum_{\alpha}\delta_{\alpha}\). In this figure, \(m_{N_{1}}\) and \(m_{N_{2}}\) are fixed to satisfy the resonance condition, and \(m_{N_{3}}\) is permitted to vary. As is expected, the total CP asymmetry is seen to vanish in the case that \(m_{N_{3}}\) is equal to either \(m_{N_{1}}\) or \(m_{N_{2}}\); however, it is maximised when \(m_{N_{3}}=m_{N_{2}}+\frac{1}{2}\Gamma_{N_{2}}\). This maximum of the CP asymmetry is 35% larger than what can be produced in models with only two neutrino mixing. Furthermore, at this maximum, it can be seen that \(\delta_{1}\simeq\delta_{3}\), while \(\delta_{2}\) is significantly enhanced. Consequently, \(\delta_{2}\) is the dominant contributor to \(\delta_{T}\).
The latter two panels in Figure 1 highlight the difference between two neutrino mixing, \(\delta_{\alpha}^{(2)}\), and three neutrino mixing \(\delta_{\alpha}^{(3)}\). It is clear from the second panel that the proper inclusion of three neutrino mixing is important in the resonant region, as a sizeable difference becomes apparent in the CP asymmetry of \(N_{2}\). A similar effect is present in the CP asymmetry of \(N_{3}\), shown in the final panel, although to a lesser extent.
In general, Figure 1 highlights the importance of full and proper accounting for the mixing of three neutrinos when these neutrinos are in consecutive resonance. As a consequence, this tri-resonant structure saturates the available CP asymmetry and maximises the generated BAU at a given mass scale with specified couplings. This is in contrast to the bi-resonant models commonly studied in the literature, which neglect contributions to the CP asymmetry from the mixing of a third neutrino species.
## 4 Boltzmann Equations
The generation of appreciable BAU requires not only significant CP asymmetries but also a departure from equilibrium and baryon number violation. Here, we will introduce a complete set of Boltzmann equations which describe the out-of-equilibrium dynamics in the early universe, which allows for a dynamical generation of appreciable lepton asymmetry. This lepton asymmetry may be reprocessed into a baryon asymmetry through \((B+L)\)-violating sphaleron transitions [27].
At temperature scales pertinent to leptogenesis, it is assumed that the Universe is in the radiation-domination era, with energy and entropy densities
\[\rho(T)=\frac{\pi^{2}}{30}g_{\rm eff}(T)\,T^{4},\qquad s(T)=\frac{2\pi^{2}}{45 }h_{\rm eff}(T)\,T^{3}, \tag{24}\]
respectively. Here, \(T\) is the temperature of the Universe, with \(g_{\rm eff}\) and \(h_{\rm eff}\) relativistic degrees of freedom of the SM plasma. We include the variations in the relativistic degrees of freedom since these are not constant, even when the temperature is well above \(100~{}{\rm GeV}\). These variations are small in magnitude but may have drastic implications for the generation of appreciable BAU with low-scale neutrino masses. For our numerical simulations, we utilise the data set labelled 'EOS C' provided in [28].
The evolution of the neutrino and lepton asymmetry number densities are described by their respective Boltzmann equations, written as a function of the dimensionless parameter \(z_{\alpha}=m_{N_{\alpha}}/T\). To align with previously used conventions, we define \(z=z_{1}\) to be the dynamical evolution parameter.
In addition, we normalise the number density of a species, \(i\), to the photon density,
\[n_{\gamma}(z_{\alpha})=\frac{2\zeta(3)T^{3}}{\pi^{2}}=\frac{2\zeta(3)}{\pi^{2}} \left(\frac{m_{N_{\alpha}}}{z_{\alpha}}\right)^{3}. \tag{25}\]
This normalisation simplifies the Boltzmann equations and relates the number density to an observable quantity,
\[\eta_{i}(z_{\alpha})=\frac{n_{i}(z_{\alpha})}{n_{\gamma}(z_{\alpha})}. \tag{26}\]
In the case of the neutrino Boltzmann equations, it is convenient to express the evolution in terms of a departure-from-equilibrium quantity
\[\delta\eta_{\alpha}(z_{\alpha})=\frac{\eta_{\alpha}(z_{\alpha})}{\eta_{\alpha }^{\rm eq}(z_{\alpha})}-1. \tag{27}\]
In this definition, we have used the equilibrium value of \(\eta_{\alpha}\), which may be explicitly calculated to be
\[\eta_{\alpha}^{\rm eq}\approx\frac{z_{\alpha}^{2}}{2\zeta(3)}K_{2}(z_{\alpha}), \tag{28}\]
with \(\zeta(3)\) Apery's constant, and \(K_{n}(z_{\alpha})\) a modified Bessel function of the second kind.
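For completeness, the equilibrium abundance (28) can be evaluated directly with standard special functions, e.g.:

```python
import numpy as np
from scipy.special import kn, zeta

# Sketch of the normalised equilibrium abundance of eq. (28), with z = m_N / T.
def eta_eq(z):
    return z**2 * kn(2, z) / (2.0 * zeta(3))

print(eta_eq(0.1), eta_eq(1.0), eta_eq(10.0))   # relativistic to Boltzmann-suppressed regime
```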
With these considerations, we may write a set of coupled Boltzmann equations, including decay terms, \(\Delta L=1\) and \(\Delta L=2\) scattering processes, as well as the running of the degrees of freedom,
\[\frac{d\delta\eta_{N_{\alpha}}}{d\ln z_{\alpha}}= -\frac{\delta_{h}(z_{\alpha})}{H(z_{\alpha})\ \eta_{N_{\alpha}}^{\rm eq}(z_{\alpha})}\left[\delta\eta_{N_{ \alpha}}\left(\Gamma^{D(\alpha)}+\Gamma_{Y}^{S(\alpha)}+\Gamma_{G}^{S(\alpha)} \right)+\frac{2}{9}\,\eta_{L}\,\delta_{\alpha}\left(\bar{\Gamma}^{D(\alpha)}+ \bar{\Gamma}_{Y}^{S(\alpha)}+\bar{\Gamma}_{G}^{S(\alpha)}\right)\right]\] \[+\left(\delta\eta_{N_{\alpha}}+1\right)\ \left[z_{\alpha}\frac{K_{1}(z_{ \alpha})}{K_{2}(z_{\alpha})}-3(\delta_{h}(z_{\alpha})-1)\right]\, \tag{29}\] \[\frac{d\eta_{L}}{d\ln z}= -\frac{\delta_{h}(z)}{H(z)}\Bigg{\{}\sum_{\alpha=1}^{3}\delta\eta _{N_{\alpha}}\delta_{\alpha}\left(\Gamma^{D(\alpha)}+\Gamma_{Y}^{S(\alpha)}+ \Gamma_{G}^{S(\alpha)}\right)\] \[+\frac{2}{9}\eta_{L}\left[\sum_{\alpha=1}^{3}\left(\bar{\Gamma}^ {D(\alpha)}+\bar{\Gamma}_{Y}^{S(\alpha)}+\bar{\Gamma}_{G}^{S(\alpha)}+\Gamma_ {Y}^{W(\alpha)}+\Gamma_{G}^{W(\alpha)}\right)+\Gamma^{\Delta L=2}\right]\] \[+\frac{2}{27}\eta_{L}\,\sum_{\alpha=1}^{3}\delta_{\alpha}^{2}\ \left(\Gamma_{Y}^{W(\alpha)}+\Gamma_{G}^{W(\alpha)}\right)\Bigg{\}}-3\eta_{L}( \delta_{h}(z)-1). \tag{30}\]
In these equations, we have utilised the well-known expression for the temperature-dependent Hubble parameter
\[H(z_{\alpha})=\sqrt{\frac{4\pi^{3}g_{\rm eff}(z_{\alpha})}{45}}\frac{m_{N_{ \alpha}}^{2}}{M_{\rm Pl}}\frac{1}{z_{\alpha}^{2}}, \tag{31}\]
with \(M_{\rm Pl}\approx 1.221\times 10^{19}\)GeV the Planck mass. The relevant collision terms, denoted by \(\Gamma_{Y}^{\rm X}\), are readily available in the literature [22].
As presented here, the Boltzmann equations are complete up to \(\Delta L=2\) scattering processes. Moreover, the terms included take into account the subtraction of real intermediate states (RIS).
Such terms may contribute negatively to the Boltzmann equations due to the lack of an on-shell contribution to the scattering amplitude.
As briefly alluded to earlier, the lepton asymmetry generated is partially re-processed into a baryon asymmetry through \((B+L)\)-violating sphaleron transitions. As may be found in the literature [29], the generated BAU from a lepton asymmetry is given by
\[\eta_{B}=-\frac{28}{51}\eta_{L}. \tag{32}\]
However, sphaleron transitions become suppressed once the temperature of the Universe falls below the critical temperature \(T_{\rm sph}\approx 132\ \mathrm{GeV}\). Consequently, no leptons are re-processed once the Universe cools to a temperature below \(T_{\rm sph}\).
Moreover, observations of the BAU produce the value at the recombination epoch; however, due to the expansion of the Universe, this results in a dilution of the overall baryon asymmetry present at \(T_{\rm sph}\). To compare these two values meaningfully, we assume that as the Universe cools from \(T_{\rm sph}\) to \(T_{\rm rec}\), there are no entropy-releasing processes. Consequently, the entropy of the Universe is constant, and we may calculate the ratio
\[\frac{\eta_{B}(T_{\rm rec})}{\eta_{B}(T_{\rm sph})}=\frac{n_{\gamma}(T_{\rm sph })s(T_{\rm rec})}{n_{\gamma}(T_{\rm rec})s(T_{\rm sph})}=\frac{h_{\rm eff}(T_{ \rm rec})}{h_{\rm eff}(T_{\rm sph})}. \tag{33}\]
For this ratio, we take the approximate value \(1/27\)[12, 21], and as a result, the observed baryon asymmetry is related to the generated lepton asymmetry by
\[\eta_{B}^{\rm obs}=-\frac{1}{27}\frac{28}{51}\eta_{L}(T_{\rm sph}). \tag{34}\]
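To fix the numerical ingredients that recur in this section, the sketch below implements the Hubble rate (31), the freeze-out value of \(z\), and the conversion (34). The constant default value of \(g_{\rm eff}\) is purely illustrative, since the full temperature dependence is precisely what the next section emphasises; integrating the system (29)-(30) additionally requires the collision rates of [22] and is only indicated in a comment.

```python
import numpy as np

M_PL = 1.221e19            # Planck mass in GeV
T_SPH = 132.0              # sphaleron freeze-out temperature in GeV

def hubble(z, m_N1, g_eff=106.75):
    """Temperature-dependent Hubble rate of eq. (31), with z = m_N1 / T.
    The constant g_eff default is illustrative only."""
    return np.sqrt(4.0 * np.pi**3 * g_eff / 45.0) * m_N1**2 / (M_PL * z**2)

def z_sphaleron(m_N1):
    """Value of z = m_N1 / T at which sphaleron transitions freeze out."""
    return m_N1 / T_SPH

def eta_B_observed(eta_L_at_sph):
    """Convert the lepton asymmetry at freeze-out into eta_B via eq. (34)."""
    return -(1.0 / 27.0) * (28.0 / 51.0) * eta_L_at_sph

# The full system (29)-(30) would be integrated in ln z for the state vector
# (delta_eta_N1, delta_eta_N2, delta_eta_N3, eta_L) from z ~ 1e-2 up to
# z_sphaleron(m_N1), with the collision rates supplied separately.
print(hubble(z_sphaleron(120.0), 120.0), eta_B_observed(-3e-8))  # illustrative values
```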
## 5 Variations in the Relativistic Degrees of Freedom of the SM Plasma
In this section, we consider the effect of variations in the relativistic degrees of freedom of the SM plasma. In the Boltzmann equations, we introduce this effect through the inclusion of the factor
\[\delta_{h}(z)=1-\frac{1}{3}\frac{d\ln h_{\rm eff}}{d\ln z}, \tag{35}\]
which is greater than or equal to 1, reducing to 1 in the case of constant degrees of freedom.
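In practice, \(\delta_{h}(z)\) is obtained from tabulated data; the sketch below computes it through a numerical log-derivative, noting that \(z=m_{N_{1}}/T\) implies \(d/d\ln z=-d/d\ln T\). The smooth \(h_{\rm eff}\) profile used here is a placeholder, not the 'EOS C' table of [28].

```python
import numpy as np

# Sketch of eq. (35): delta_h(z) = 1 - (1/3) d ln h_eff / d ln z, from a table of h_eff(T).
def delta_h_of_z(T_table, heff_table, m_N1):
    dlnh_dlnT = np.gradient(np.log(heff_table), np.log(T_table))
    z = m_N1 / T_table
    return z, 1.0 + dlnh_dlnT / 3.0            # sign flip from d ln z = -d ln T

T = np.geomspace(10.0, 1e3, 200)               # temperature grid in GeV
heff = 106.75 - 10.0 * np.exp(-T / 100.0)      # placeholder: h_eff mildly grows with T
z_vals, dh_vals = delta_h_of_z(T, heff, m_N1=120.0)
print(dh_vals.min(), dh_vals.max())            # stays >= 1, as noted in the text
```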
While the size of this quantity does not differ significantly from unity, the final term of (29) may be dominant for low values of \(z\). Consequently, early in the evolution of \(\delta\eta_{N_{\alpha}}\), negative values may be observed. Accordingly, the dependence of \(\eta_{L}\) on \(\delta\eta_{N_{\alpha}}\) will result in negative values for the baryon asymmetry, \(\eta_{B}\). In the case that the singlet neutrino mass scale is sufficiently low, the sphaleron freeze-out at \(T_{\rm sph}\) may preserve this feature.
In Figure 2, we can see the impact of the variations in the relativistic degrees of freedom. In these figures, the dashed lines represent regions where the relevant quantity takes on negative values, typically \(z\lesssim 10^{-1}\), and solid lines represent positive values, typically \(z\gtrsim 10^{-1}\). Whilst these two figures are qualitatively similar, the importance of the mass scale becomes clear when we consider the values at the sphaleron freeze-out temperature. In Figure 2a, we see that for singlet neutrinos of mass \(m_{N_{1}}=35\leavevmode\nobreak\ \mathrm{GeV}\), the sphaleron freeze-out occurs prior to the evolution 'bouncing
back' to positive values, resulting in an overall negative sign for the generated BAU. Conversely, in Figure 2b, where we consider singlet neutrinos of mass \(m_{N_{1}}=45\) GeV, the baryon asymmetry has had enough time to return to positive values before the sphaleron freeze-out, and we find the expected positive values for the generated BAU.
We highlight the impact of the variations in the relativistic degrees of freedom in Figure 3. From these figures, it is clear that when we utilise models with sub-TeV masses, the variations in the relativistic degrees of freedom may result in drastically different values for the observed BAU. In particular, we call attention to the fact that for singlet neutrino masses below 100 GeV, the observed BAU may take on negative values once the variations in the degrees of freedom are accounted for.
As a concluding remark, we note that this effect is stable under perturbations of the initial conditions since the solutions of the Boltzmann equations quickly reach attractor solutions. Moreover, we expect that this behaviour would pervade even with the inclusion of additional CP-violating phenomena, such as coherent heavy neutrino oscillations.
## 6 Results
We now present the numerical solutions of the Boltzmann equations given in (29) and (30). We assume a tri-resonant mass spectrum for the singlet neutrinos and a democratic \(a=b=c\) structure for the Yukawa couplings. We consider masses in the range 40 GeV to 1 TeV, as numerical solutions below 40 GeV are more limited due to the neglect of thermal masses, which may cause phase space suppression. In addition, a study at these low masses may require the inclusion of additional CP-violating effects, such as coherent oscillations of singlet neutrinos.
The inclusion of scattering processes generates a delay in the maximum of the baryon asymmetry evolution. As a result, the evolution of the baryon asymmetry becomes dependent on the
Figure 2: Evolution of \(\delta_{N_{1}}\) (red) and \(\eta_{L}\) (black) for \(m_{N_{1}}=35\) GeV (a) and \(m_{N_{1}}=45\) GeV (b), with \(|\mathbf{h}^{\nu}_{iJ}|\approx 4.5\times 10^{-5}\). The black and red solid (dashed) lines indicate where \(\eta_{B}\) and \(\delta_{N_{1}}\) are positive (negative). Grey dotted lines indicate the sphaleron freeze-out value, \(z_{\rm sph}\), and the observed Baryon asymmetry at \(z_{\rm sph}\).
mass scale of the singlet neutrinos. In Figure 4, we analyse this phenomenon. In both panels, we take the initial conditions \(\delta\eta_{N_{\alpha}}(z_{0})=0\) and \(\eta_{L}(z_{0})=0\), with \(z_{0}=10^{-2}\), although due to the heavily attractive nature of the solution, the results remain unchanged for any other sensible choice of the initial conditions. Figure 4(a) considers neutrinos of mass scale \(m_{N_{1}}=1\) TeV and
Figure 4: Evolution plots for the deviation from equilibrium of the neutrino densities \(\delta\eta_{N_{\alpha}}\) (blue and red solid lines) and the baryon asymmetry \(\eta_{B}\) (black solid line). Input parameters for the mass of the lightest singlet neutrino, and the scale of the Yukawa coupling can be seen on each panel, with the grey (dotted) line indicating the value \(z=z_{\rm sph}\) at which the sphaleron processes freeze out. The orange dot-dashed line indicates the observed value of the baryon asymmetry of \(\eta_{B}^{\rm CMB}=6.104\times 10^{-10}\).
Figure 3: _Left panel:_ The generated \(\eta_{B}\) for \(|\mathbf{h}_{ij}^{\nu}|\approx 3\times 10^{-4}\) in the tri-resonant scenario as a function of \(m_{N_{1}}\) for \(h_{\rm eff}\) as given in [28] (black), [30] (blue), and taking \(h_{\rm eff}={\rm const.}\approx 105\) (red). The grey dotted line shows the observed baryon asymmetry, \(\eta_{B}^{\rm CMB}=6.104\times 10^{-10}\). _Right panel:_ The ratio of \(|\eta_{B}|\) with varying \(h_{\rm eff}\). The black (blue) line corresponds to [28] ([30]), with respect to \(h_{\rm eff}={\rm const.}\)
Yukawa couplings \(|\mathbf{h}_{ij}^{\nu}|=3\times 10^{-4}\). It can be seen in this figure that for TeV scale neutrinos, the baryon asymmetry reaches a maximum value before a significant amount is washed out prior to the sphaleron freeze-out at \(T_{\rm sph}\). Conversely, Figure 4(b) shows the evolution for neutrinos of mass \(m_{N_{1}}=120\) GeV and Yukawa couplings of size \(|\mathbf{h}_{ij}^{\nu}|=2\times 10^{-4}\). In this figure, we see that the generation of the BAU occurs at the maximum of the evolution. In general, lighter singlet neutrino masses result in the generated BAU freezing out earlier in the evolution. Finally, we observe that in both of the panels in Figure 4, at high values of \(z\), there is a significant difference between the evolution of \(\delta\eta_{N_{2}}\) and \(\delta\eta_{N_{1,3}}\). This is due to the significantly higher CP asymmetry associated with \(N_{2}\), as highlighted in Figure 1.
Figure 5 shows the parameter space on the \(\sum_{\alpha}B_{l\alpha}B_{k\alpha}^{*}\) vs. \(m_{N_{1}}\) plane. As before, we assume a democratic flavour structure with \(a=b=c\) and a tri-resonant mass spectrum. Moreover, we take the initial conditions \(\delta\eta_{N_{\alpha}}(z_{0})=0\) and \(\eta_{L}(z_{0})=0\), with \(z_{0}=10^{-2}\). We highlight the region in which successful generation of the BAU is possible, with the solid green line indicating points in the parameter space where the generated BAU is equal to the observed value \(\eta_{B}^{\rm CMB}=6.104\times 10^{-10}\). Points within the green-shaded region may be made to match \(\eta_{B}^{\rm CMB}\) by softly relaxing the tri-resonant condition, and hence this region also permits the successful generation of the BAU.
In Figure 5, the yellow dashed lines represent bounds when additional sources of CP asymmetry are included. The upper dashed line is obtained by scaling up the total CP asymmetry by a factor of
Figure 5: Parameter space for the TRL model, including current limits (solid lines) and projected sensitivities of future experiments (dashed lines). _Left panel:_ Projected sensitivities of cLFV searches for \(\mu\to e\gamma\) (orange dashed line), \(\mu\to eee\) (dashed red line), coherent \(\mu\to e\) conversion in titanium (dashed blue line), and in gold (solid blue line). _Right panel:_ Projected sensitivities for collider searches at LHC14 (blue dashed line), FCC-ee (red dashed line), and current limits from DELPHI (orange solid line). The green region in these panels indicates points in the parameter space where successful leptogenesis is possible, and the green solid line corresponds to the points that reproduce exactly the observed value for a tri-resonant model. The upper and lower yellow dot-dashed lines were obtained by scaling the total CP asymmetry \(\delta_{T}\) by a factor of 2 and 0.1, respectively, then matching the observed baryon asymmetry. These lines represent an uncertainty estimate in the calculation of the solid green line due to the oscillations of singlet neutrinos.
2, and the lower dashed line is obtained by scaling down the CP asymmetry by a factor of 10. These represent the theoretical uncertainty due to the neglect of coherent oscillation effects between heavy neutrinos. These estimates were generated by assuming that the CP asymmetry from coherent oscillations is additive to the CP asymmetry arising through singlet neutrino mixing. However, there may be constructive or destructive interference between these two effects, highlighting the current lack of consensus regarding whether mixing and oscillations are distinct phenomena or whether mixing is contained within the oscillation formalism. Consequently, the numerical results from a detailed study containing both effects may not be as extreme as the bounds presented in Figure 5.
Figure 5(a) compares the available parameter space with sensitivity limits on current cLFV experiments involving muons. In particular, we consider coherent muon-to-electron transitions within nuclei, as well as \(\mu\to e\gamma\) and \(\mu\to eee\) experiments. As may be seen in this figure, the only experiment which may probe the parameter space of successful leptogenesis is the coherent \(\mu\to e\) transition in Titanium at PRISM [17].
Figure 5(b) considers the projected and current limits for various collider experiments. The estimate denoted by LHC14 (blue dashed line) presents conservative projections for the LHC with 300 fb\({}^{-1}\) of data operating at \(\sqrt{s}=14\) TeV for the sensitivity to the process \(pp\to N\ell^{\pm}jj\) [31, 32]. The orange solid line represents 95% C.L. limits found by comparing LEP data with the prediction for signals of decaying heavy neutrinos that are produced via \(Z\to N\nu_{L}\) [33] at DELPHI. Similar limits have been derived by the L3 collaboration [34]. The red dashed line shows the Future Circular Collider (FCC) sensitivity to the same signals for electron-positron collisions, assuming the normal ordering of the light neutrino spectrum and considering the lifetime of the heavy neutrinos [35].
## 7 Conclusions
We have showcased a class of leptogenesis models characterised by their \(\mathbb{Z}_{6}\) or \(\mathbb{Z}_{3}\) symmetries. These models offer naturally light SM neutrino masses with large CP-violating phases. When utilised in a tri-resonant framework, this model can fully saturate the available CP asymmetry. We have highlighted how the full and consistent incorporation of a third singlet neutrino can lead to significantly higher scales of CP asymmetry when compared with the bi-resonant approximation commonly utilised in the literature.
Furthermore, we have presented a complete set of Boltzmann equations, accounting for various effects such as varying degrees of freedom and chemical potential corrections. In addition, we have included scattering terms up to \(\Delta L=2\) processes with proper RIS subtraction. In our analysis, we have explicitly demonstrated the importance of proper implementation of the variation of the degrees of freedom since this feature can have a significant impact on the generated BAU.
In addition, we have illustrated that an enhanced parameter space is possible when a tri-resonant mass spectrum is considered when compared with the expectations from a typical seesaw model. While the parameter space found is out of range for many current experiments, it is still possible for certain mass ranges to be probed, particularly through \(\mu\to e\) conversion in Titanium at PRISM or through collision experiments at the FCC. Due to the democratic structure of the Tri-RL models, flavour effects will not be significant, and hence the results we provide give an upper bound on the scale of the neutrino Yukawa couplings. However, in principle, we may expand this parameter space
through the inclusion of additional phenomena, such as coherent oscillations and supersymmetry (SUSY).
## 8 Acknowledgements
The work of AP and DK is supported in part by the Lancaster-Manchester-Sheffield Consortium for Fundamental Physics under STFC Research Grant ST/T001038/1. The work of PCdS is funded by Agencia Nacional de Investigacion y Desarrollo (ANID) through the Becas Chile Scholarship No. 72190359. TM acknowledges support from the STFC Doctoral Training Partnership under STFC training grant ST/V506898/1.
|
2301.00735 | Failure of curvature-dimension conditions on sub-Riemannian manifolds
via tangent isometries | We prove that, on any sub-Riemannian manifold endowed with a positive smooth
measure, the Bakry-\'Emery inequality for the corresponding sub-Laplacian
implies the existence of enough Killing vector fields on the tangent cone to
force the latter to be Euclidean at each point, yielding the failure of the
curvature-dimension condition in full generality. Our approach does not apply
to non-strictly-positive measures. In fact, we prove that the weighted Grushin
plane does not satisfy any curvature-dimension condition, but, nevertheless,
does admit an a.e. pointwise version of the Bakry-\'Emery inequality. As
recently observed by Pan and Montgomery, one half of the weighted Grushin plane
satisfies the RCD(0,N) condition, yielding a counterexample to gluing theorems
in the RCD setting. | Luca Rizzi, Giorgio Stefani | 2023-01-02T16:01:32Z | http://arxiv.org/abs/2301.00735v2 | # Failure of curvature-dimension conditions on sub-Riemannian manifolds via tangent isometries
###### Abstract.
We prove that, on any sub-Riemannian manifold endowed with a positive smooth measure, the Bakry-Emery inequality for the corresponding sub-Laplacian,
\[\frac{1}{2}\Delta(\|\nabla u\|^{2})\geq\mathrm{g}(\nabla u,\nabla\Delta u)+K\| \nabla u\|^{2},\quad K\in\mathbb{R},\]
implies the existence of enough Killing vector fields on the tangent cone to force the latter to be Euclidean at each point, yielding the failure of the curvature-dimension condition in full generality. Our approach does not apply to non-strictly-positive measures. In fact, we prove that the weighted Grushin plane does not satisfy any curvature-dimension condition, but, nevertheless, does admit an a.e. pointwise version of the Bakry-Emery inequality. As recently observed by Pan and Montgomery, one half of the weighted Grushin plane satisfies the \(\mathsf{RCD}(0,N)\) condition, yielding a counterexample to gluing theorems in the \(\mathsf{RCD}\) setting.
Key words and phrases: Sub-Riemannian manifold, \(\mathsf{CD}(K,\infty)\) condition, Bakry-Emery inequality, infinitesimally Hilbertian, Grushin plane, privileged coordinates
implies \(\mathsf{BE}(K,N)\) in _infinitesimal Hilbertian_ metric-measure spaces, as introduced in [25], while the converse implication requires further technical assumptions.
Such synthetic theory of curvature-dimension conditions, besides being consistent with the classical notions of Ricci curvature and dimension on smooth Riemannian manifolds, is stable under pointed-measure Gromov-Hausdorff convergence. Furthermore, it yields a comprehensive approach for establishing all results typically associated with Ricci curvature lower bounds, like Poincare, Sobolev, log-Sobolev and Gaussian isoperimetric inequalities, as well as Brunn-Minkowski, Bishop-Gromov and Bonnet-Myers inequalities.
### The sub-Riemannian framework
Although the aforementioned synthetic curvature-dimension conditions embed a large variety of metric-measure spaces, a relevant and widely-studied class of smooth structures is left out--the family of _sub-Riemmanian manifolds_. A sub-Riemannian structure is a natural generalization of a Riemannian one, in the sense that its distance is induced by a scalar product that is defined only on a smooth sub-bundle of the tangent bundle, whose rank possibly varies along the manifold. See the monographs [40, 2, 45] for a detailed presentation.
The first result in this direction was obtained by Driver-Melcher [23], who proved that an integrated version of the \(\mathsf{BE}(K,\infty)\) inequality, the so-called _pointwise gradient estimate_ for the heat flow, is false for the three-dimensional _Heisenberg group_.
In [31], Juillet proved the failure of the \(\mathsf{CD}(K,\infty)\) property for all Heisenberg groups (and even for the strictly related _Grushin plane_, see [32]). Later, Juillet [33] extended his result to any sub-Riemannian manifold endowed with a possibly rank-varying distribution of rank _strictly_ smaller than the manifold's dimension, and with any positive smooth measure, by exploiting the notion of _ample curves_ introduced in [1]. The idea of [31, 33] is to construct a counterexample to the _Brunn-Minkowski inequality_.
The 'no-\(\mathsf{CD}\) theorem' of [31] was extended to all _Carnot groups_ by Ambrosio and the second-named author in [8, Prop. 3.6] with a completely different technique, namely, by exploiting the optimal version of the _reverse Poincare inequality_ obtained in [16].
In the case of sub-Riemannian manifolds endowed with an _equiregular distribution_ and a positive smooth measure, Huang-Sun [29] proved the failure of the \(\mathsf{CD}(K,N)\) condition for all values of \(K\in\mathbb{R}\) and \(N\in(1,\infty)\) contradicting a bi-Lipschitz embedding result.
Very recently, in order to address the structures left out in [33], Magnabosco-Rossi [37] extended the 'no-\(\mathsf{CD}\) theorem' to _almost-Riemannian manifolds_ \(M\) that are either of dimension \(2\) or _strongly regular_. The approach of [37] relies on the localization technique developed by Cavalletti-Mondino [19] in metric-measure spaces.
To complete the picture, we mention that several replacements for the Lott-Sturm-Villani curvature-dimension property have been proposed and studied in the sub-Riemannian framework in recent years. Far from being complete, we refer the reader to [11, 12, 13, 14, 15, 38] for an account on the Lagrangian approach, to [17] concerning the Eulerian one, and finally to [47] for a first link between entropic inequalities and contraction properties of the heat flow in the special setting of metric-measure groups.
**Main aim.** At the present stage, a 'no-\(\mathsf{CD}\) theorem' for sub-Riemannian structures in full generality is missing, since the aforementioned approaches [8, 23, 37, 31, 29, 33] either require the ambient space to satisfy some structural assumptions, or leave out the infinite dimensional case \(N=\infty\).
The main aim of the present paper is to fill this gap by showing that (possibly rank-varying) sub-Riemannian manifolds do not satisfy any curvature bound in the sense of Lott-Sturm-Villani or Bakry-Emery when equipped with a positive smooth measure, i.e., a Radon measure whose density in local charts with respect to the Lebesgue measure is a strictly positive smooth function.
### Failure of the Bakry-Emery inequality
The starting point of our strategy is the weakest curvature-dimension condition, as we now define.
**Definition 1.1** (Bakry-Emery inequality).: We say that a sub-Riemannian manifold \((M,\mathsf{d})\) endowed with a positive smooth measure \(\mathsf{m}\) satisfies the _Bakry-Emery_\(\mathsf{BE}(K,\infty)\)_inequality_, for \(K\in\mathbb{R}\), if
\[\frac{1}{2}\,\Delta(\|\nabla u\|^{2})\geq\mathrm{g}(\nabla u,\nabla\Delta u)+ K\|\nabla u\|^{2}\quad\text{for all $u\in C^{\infty}(M)$}, \tag{1.1}\]
where \(\Delta\) is the corresponding sub-Laplacian, and \(\nabla\) the sub-Riemannian gradient.
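As an illustration of how inequality (1.1) can be tested on a concrete sub-Riemannian structure, the sketch below evaluates both sides symbolically on the Heisenberg group for a candidate function. The horizontal frame chosen is the standard one (an assumption used only for illustration), and the computation merely displays the pointwise difference of the two sides for a chosen \(K\); it does not by itself reproduce the conclusion of Theorem 1.2 below.

```python
import sympy as sp

x, y, z, K = sp.symbols('x y z K', real=True)

# Horizontal frame of the Heisenberg group (illustrative assumption):
# X = d/dx - (y/2) d/dz,  Y = d/dy + (x/2) d/dz.
def X(f):
    return sp.diff(f, x) - y / 2 * sp.diff(f, z)

def Y(f):
    return sp.diff(f, y) + x / 2 * sp.diff(f, z)

def grad_sq(f):            # ||grad f||^2 = (Xf)^2 + (Yf)^2
    return X(f)**2 + Y(f)**2

def lap(f):                # sub-Laplacian as the sum of squares of the frame
    return X(X(f)) + Y(Y(f))

u = x * z                  # an arbitrary candidate function
lhs = sp.expand(sp.Rational(1, 2) * lap(grad_sq(u)))
rhs = sp.expand(X(u) * X(lap(u)) + Y(u) * Y(lap(u)) + K * grad_sq(u))
print(sp.simplify(lhs - rhs))   # inspect the sign pointwise for a chosen K
```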
Our first main result is the following rigidity property for sub-Riemannian structures supporting the Bakry-Emery inequality (1.1).
**Theorem 1.2** (no-Be).: _Let \((M,\mathsf{d})\) be a complete sub-Riemannian manifold endowed with a positive smooth measure \(\mathsf{m}\). If \((M,\mathsf{d},\mathsf{m})\) satisfies the \(\mathsf{BE}(K,\infty)\) inequality for some \(K\in\mathbb{R}\), then \(\operatorname{rank}\mathscr{D}_{x}=\dim M\) at each \(x\in M\), so that \((M,\mathsf{d})\) is Riemannian._
The idea behind our proof of Theorem 1.2 is to show that the _metric tangent cone_ in the sense of Gromov [26] at each point of \((M,\mathsf{d})\) is Euclidean. This line of thought is somehow reminiscent of the deep structural result for \(\mathsf{RCD}(K,N)\) spaces, with \(K\in\mathbb{R}\) and \(N\in(1,\infty)\), proved by Mondino-Naber [39]. However, differently from [39], Theorem 1.2 provides information about the metric tangent cone at _each_ point of the manifold. Showing that the distribution \(\mathscr{D}\) is Riemannian at _almost every_ point in fact would not be enough, as this would not rule out almost-Riemannian structures.
Starting from (1.1), we first blow-up the sub-Riemannian structure and pass to its metric-measure tangent cone, showing that (1.1) is preserved with \(K=0\). Note that, in this blow-up procedure, the positivity of the density of \(\mathsf{m}\) is crucial, since otherwise the resulting metric tangent cone would be endowed with the null measure.
The resulting blown-up sub-Riemannian space is isometric to a homogeneous space of the form \(G/H\), where \(G=\exp\mathfrak{g}\) is the Carnot group associated to the underlying (finite-dimensional and stratified) Lie algebra \(\mathfrak{g}\) of bracket-generating vector fields, and \(H=\exp\mathfrak{h}\) is its subgroup corresponding to the Lie subalgebra \(\mathfrak{h}\) of vector fields vanishing at the origin, see [18]. Of course, the most difficult case is when \(H\) is non-trivial, that is, the tangent cone is not a Carnot group.
At this point, the key idea is to show that the Bakry-Emery inequality \(\mathsf{BE}(K,\infty)\) implies the existence of special isometries on the tangent cone.
**Definition 1.3** (Sub-Riemannian isometries).: Let \(M\) be a sub-Riemannian manifold, with distribution \(\mathscr{D}\) and metric \(\mathrm{g}\). A diffeomorphism \(\phi:M\to M\) is an _isometry_ if
\[(\phi_{*}\mathscr{D})|_{x}=\mathscr{D}_{\phi(x)}\quad\text{for all $x\in M$}, \tag{1.2}\]
and, furthermore, \(\phi_{*}\) is an orthogonal map with respect to \(\mathrm{g}\). We say that a smooth vector field \(V\) is _Killing_ if its flow \(\phi_{t}^{V}\) is an isometry for all \(t\in\mathbb{R}\).
For precise definitions of \(\mathfrak{g}\) and \(\mathfrak{h}\) in the next statement, we refer to Section 2.4.
**Theorem 1.4** (Existence of Killing fields).: _Let \((M,\mathsf{d})\) be a complete sub-Riemannian manifold equipped with a positive smooth measure \(\mathfrak{m}\). If \((M,\mathsf{d},\mathfrak{m})\) satisfies the \(\mathsf{BE}(K,\infty)\) inequality for some \(K\in\mathbb{R}\), then, for the nilpotent approximation at any given point, there exists a vector space \(\mathsf{i}\subset\mathfrak{g}^{1}\) such that_
\[\mathfrak{g}^{1}=\mathsf{i}\oplus\mathfrak{h}^{1} \tag{1.3}\]
_and every \(Y\in\mathsf{i}\) is a Killing vector field._
The existence of the space of isometries \(\mathsf{i}\) forces the Lie algebra \(\mathfrak{g}\) to be commutative and of maximal rank, thus implying that the original manifold \((M,\mathsf{d})\) was in fact Riemannian.
**Theorem 1.5** (Killing implies commutativity).: _If there exists a subspace \(\mathsf{i}\subset\mathfrak{g}^{1}\) of Killing vector fields such that \(\mathfrak{g}^{1}=\mathsf{i}\oplus\mathfrak{h}^{1}\), then \(\mathfrak{g}\) is commutative._
Theorem 1.5 states that, if a Carnot group contains enough _horizontal symmetries_, then it must be commutative. As will be evident from its proof, Theorem 1.5 holds simply assuming that, for each \(V\in\mathsf{i}\), the flow \(\phi_{t}^{V}\) is pointwise distribution-preserving, namely it satisfies (1.2), without necessarily being an isometry.
### Infinitesimal Hilbertianity
The Bakry-Emery inequality \(\mathsf{BE}(K,\infty)\) in (1.1) is a consequence of the \(\mathsf{CD}(K,\infty)\) condition as soon as the ambient metric-measure space is infinitesimally Hilbertian, as defined in [25].
Let \((X,\mathsf{d})\) be a complete separable metric space, \(\mathfrak{m}\) be a locally bounded Borel measure, and \(q\in[1,\infty)\). We let \(|\mathsf{D}u|_{w,q}\in\mathrm{L}^{q}(X,\mathfrak{m})\) be the _minimal \(q\)-upper gradient_ of a measurable function \(u:X\to\mathbb{R}\), see [5, Sec. 4.4]. We define the Banach space
\[\mathrm{W}^{1,q}(X,\mathsf{d},\mathfrak{m})=\left\{u\in\mathrm{L}^{q}(X, \mathfrak{m}):|\mathsf{D}u|_{w,q}\in\mathrm{L}^{q}(X,\mathfrak{m})\right\}\]
with the norm
\[\|u\|_{\mathrm{W}^{1,q}(X,\mathsf{d},\mathfrak{m})}=\left(\|u\|_{\mathrm{L}^{ q}(X,\mathfrak{m})}^{q}+\||\mathsf{D}u|_{w,q}\|_{\mathrm{L}^{q}(X,\mathfrak{m})}^ {q}\right)^{1/q}.\]
**Definition 1.6** (Infinitesimal Hilbertianity).: A metric measure space \((X,\mathsf{d},\mathfrak{m})\) is _infinitesimally Hilbertian_ if \(\mathrm{W}^{1,2}(X,\mathsf{d},\mathfrak{m})\) is a Hilbert space.
The infinitesimal Hilbertianity of sub-Riemannian structures has been recently proved in [35], with respect to any Radon measure. In particular, Theorem 1.2 immediately yields the following 'no-\(\mathsf{CD}\) theorem' for sub-Riemannian manifolds, thus extending all the aforementioned results [8, 23, 29, 31, 33, 37].
**Corollary 1.7** (no-\(\mathsf{CD}\)).: _Let \((M,\mathsf{d})\) be a complete sub-Riemannian manifold endowed with a positive smooth measure \(\mathfrak{m}\). If \((M,\mathsf{d},\mathfrak{m})\) satisfies the \(\mathsf{CD}(K,\infty)\) condition for some \(K\in\mathbb{R}\), then \((M,\mathsf{d})\) is Riemannian._
However, since the measure in Corollary 1.7 is positive and smooth, we can avoid relying on the general result of [35], instead providing a simpler and self-contained proof of the infinitesimal Hilbertianity property. In particular, we prove the following result, which actually refines [35, Th. 5.6] in the case of smooth measures. In the following, \(\mathrm{HW}^{1,q}(M,\mathfrak{m})\) denotes the sub-Riemannian Sobolev space (see Section 2.2).
**Theorem 1.8** (Infinitesimal Hilbertianity).: _Let \(q\in(1,\infty)\). Let \((M,\mathsf{d})\) be a complete sub-Riemannian manifold equipped with a positive smooth measure \(\mathsf{m}\). The following hold._
1. \(\mathrm{W}^{1,q}(M,\mathsf{d},\mathsf{m})=\mathrm{HW}^{1,q}(M,\mathsf{m})\)_, with_ \(|\mathsf{D}u|_{w,q}=\|\nabla u\|\) \(\mathsf{m}\)_-a.e. on_ \(M\) _for all_ \(u\in\mathrm{W}^{1,q}(M,\mathsf{d},\mathsf{m})\)_. In particular, taking_ \(q=2\)_,_ \((M,\mathsf{d},\mathsf{m})\) _is infinitesimally Hilbertian._
2. _If_ \((M,\mathsf{d},\mathsf{m})\) _satisfies the_ \(\mathsf{CD}(K,\infty)\) _condition for some_ \(K\in\mathbb{R}\)_, then the Bakry-Emery_ \(\mathsf{BE}(K,\infty)\) _inequality_ (1.1) _holds on_ \(M\)_._
Note that Theorem 1.8 holds for less regular measures, see Remark 3.6.
**Remark 1.9** (The case of a.e. smooth measures).: Theorem 1.8 can be adapted also to the case of a Borel and locally finite measure \(\mathsf{m}\) which is smooth and positive only on \(\overline{\Omega}\), where \(\Omega\subset M\) is an open set with \(\mathsf{m}(\partial\Omega)=0\). In this case, we obtain \(\mathrm{HW}^{1,q}(\Omega,\mathsf{m})=\mathrm{W}^{1,q}(\overline{\Omega}, \mathsf{d},\mathsf{m})\), with \(|\mathsf{D}u|_{w,q}=\|\nabla u\|\)\(\mathsf{m}\)-a.e. on \(\Omega\) for all \(u\in\mathrm{W}^{1,q}(\overline{\Omega},\mathsf{d},\mathsf{m})\). In particular, if \(\mathsf{m}\) is smooth and positive out of a closed set \(\mathcal{Z}\), with \(\mathsf{m}(\mathcal{Z})=0\), an elementary approximation argument proves that \((M,\mathsf{d},\mathsf{m})\) is infinitesimally Hilbertian and, if \((M,\mathsf{d},\mathsf{m})\) satisfies the \(\mathsf{CD}(K,\infty)\) condition for \(K\in\mathbb{R}\), then the Bakry-Emery \(\mathsf{BE}(K,\infty)\) inequality (1.1) holds on \(M\setminus\mathcal{Z}\). This is the case, for example, of the Grushin planes and half-planes with weighted measures of Section 1.5. The proof follows the same argument of the one of Theorem 1.8, exploiting the locality of the \(q\)-upper gradient, see for example [5, Sec. 8.2] and [25, Prop. 2.6], and similar properties for the distributional derivative.
### An alternative approach to the 'no-\(\mathsf{CD}\) theorem'
We mention an alternative proof of the 'no-\(\mathsf{CD}\) theorem' for almost-Riemannian structures (i.e., sub-Riemannian structures that are Riemannian outside a closed nowhere dense singular set). The strategy relies on the Gromov-Hausdorff continuity of the metric tangent at interior points of geodesics in \(\mathsf{RCD}(K,N)\) spaces, with \(N<\infty\), proved by Deng in [22].
For example, consider the standard Grushin plane (introduced in Section 1.5) equipped with a smooth positive measure. The curve \(\gamma(t)=(t,0)\), \(t\in\mathbb{R}\), is a geodesic between any two of its points. The metric tangent at \(\gamma(t)\) is (isometric to) the Euclidean plane for every \(t\neq 0\), while it is (isometric to) the Grushin plane itself for \(t=0\). Since the Grushin plane cannot be bi-Lipschitz embedded into the Euclidean plane, the two spaces are at positive Gromov-Hausdorff distance, contradicting the continuity result.
This strategy has a few drawbacks. On the one hand, it relies on the (non-trivial) machinery developed in [22]. Consequently, this argument does not work in the case \(N=\infty\). On the other hand, the formalization of this strategy for general almost-Riemannian structures requires certain quantitative bi-Lipschitz non-embedding results for almost-Riemannian structures into Euclidean spaces, which we are able to prove only under the same assumptions of [37].
### Weighted Grushin structures
When the density of the smooth measure is allowed to vanish, the 'no-\(\mathsf{CD}\) theorem' breaks down. In fact, in this situation, the following two interesting phenomena occur:
1. the Bakry-Emery \(\mathsf{BE}(K,\infty)\) inequality no longer implies the \(\mathsf{CD}(K,\infty)\) condition;
2. there exist almost-Riemannian structures with boundary satisfying the \(\mathsf{CD}(0,N)\) condition for \(N\in[1,\infty]\).
We provide examples of both phenomena on the so-called _weighted Grushin plane_. This is the sub-Riemannian structure on \(\mathbb{R}^{2}\) induced by the family \(\mathscr{F}=\{X,Y\}\), where
\[X=\partial_{x},\quad Y=x\,\partial_{y},\quad(x,y)\in\mathbb{R}^{2}. \tag{1.4}\]
The induced distribution \(\mathscr{D}=\operatorname{span}\{X,Y\}\) has maximal rank outside the singular region \(S=\{x=0\}\) and rank \(1\) on \(S\). Since \([X,Y]=\partial_{y}\) on \(\mathbb{R}^{2}\), the resulting sub-Riemannian metric space \((\mathbb{R}^{2},\mathsf{d})\) is Polish and geodesic. It is _almost-Riemannian_ in the sense that, out of \(S\), the metric is locally equivalent to the Riemannian one given by the metric tensor
\[\mathrm{g}=\mathrm{d}x\otimes\mathrm{d}x+\frac{1}{x^{2}}\,\,\mathrm{d}y \otimes\mathrm{d}y,\quad x\neq 0. \tag{1.5}\]
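Equivalently, for \(x\neq 0\), the frame \(\{X,Y\}\) in (1.4) is orthonormal for the tensor (1.5): a direct check gives
\[\mathrm{g}(X,X)=1,\qquad\mathrm{g}(Y,Y)=x^{2}\cdot\frac{1}{x^{2}}=1,\qquad\mathrm{g}(X,Y)=0,\quad x\neq 0.\]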
We endow the metric space \((\mathbb{R}^{2},\mathsf{d})\) with the weighted Lebesgue measure
\[\mathsf{m}_{p}=|x|^{p}\,\mathrm{d}x\,\mathrm{d}y,\]
where \(p\in\mathbb{R}\) is a parameter. The choice \(p=-1\) corresponds to the Riemannian density
\[\operatorname{vol}_{\mathrm{g}}=\frac{1}{|x|}\,\,\mathrm{d}x\,\,\mathrm{d}y, \quad x\neq 0, \tag{1.6}\]
so that
\[\mathsf{m}_{p}=e^{-V}\operatorname{vol}_{\mathrm{g}},\quad V(x)=-(p+1)\log|x|,\quad x\neq 0. \tag{1.7}\]
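Indeed, a direct computation confirms (1.7): since \(e^{-V}=e^{(p+1)\log|x|}=|x|^{p+1}\), we have
\[e^{-V}\operatorname{vol}_{\mathrm{g}}=|x|^{p+1}\,\frac{1}{|x|}\;\mathrm{d}x\,\mathrm{d}y=|x|^{p}\,\mathrm{d}x\,\mathrm{d}y=\mathsf{m}_{p},\quad x\neq 0.\]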
We call the metric-measure space \(\mathbb{G}_{p}=(\mathbb{R}^{2},\mathsf{d},\mathsf{m}_{p})\) the _(\(p\)-)weighted Grushin plane_.
We can now state the following result, illustrating phenomenon (A).
**Theorem 1.10**.: _Let \(p\in\mathbb{R}\) and let \(\mathbb{G}_{p}=(\mathbb{R}^{2},\mathsf{d},\mathsf{m}_{p})\) be the weighted Grushin plane._
_(i) If \(p\geq 0\), then \(\mathbb{G}_{p}\) does not satisfy the \(\mathsf{CD}(K,\infty)\) property for all \(K\in\mathbb{R}\)._
_(ii) If \(p\geq 1\), then \(\mathbb{G}_{p}\) satisfies the \(\mathsf{BE}(0,\infty)\) inequality (1.1) almost everywhere._
To prove (i), we show that the corresponding Brunn-Minkowski inequality is violated. In fact, the case \(p=0\) is due to Juillet [32], while the case \(p>0\) can be handled via a simple argument, which was pointed out to us by J. Pan. Claim (ii), instead, is obtained by direct computations.
Somewhat surprisingly, the weighted Grushin _half_-plane \(\mathbb{G}_{p}^{+}\)--obtained by restricting the metric-measure structure of \(\mathbb{G}_{p}\) to the (closed) half-plane \([0,\infty)\times\mathbb{R}\)--does satisfy the \(\mathsf{CD}(0,N)\) condition for sufficiently large \(N\in[1,\infty]\). Precisely, we can prove the following result, illustrating phenomenon (B).
**Theorem 1.11**.: _Let \(p\geq 1\). The weighted Grushin half-plane \(\mathbb{G}_{p}^{+}\) satisfies the \(\mathsf{CD}(0,N)\) condition if and only if \(N\geq N_{p}\), where \(N_{p}\in(2,\infty]\) is given by_
\[N_{p}=\frac{(p+1)^{2}}{p-1}+2, \tag{1.8}\]
_with the convention that \(N_{1}=\infty\). Furthermore, \(\mathbb{G}_{p}^{+}\) is infinitesimally Hilbertian, and it is thus an \(\mathsf{RCD}(0,N)\) space for \(N\geq N_{p}\)._
While we were completing this work, Pan and Montgomery [41] observed that the spaces built in [42, 20] as _Ricci limits_ are actually the weighted Grushin half-spaces presented above. Our construction and method of proof are more direct with respect to the approach of [42, 20], and easily yield sharp dimensional bounds.
### Counterexample to gluing theorems
We end this introduction with an interesting by-product of our analysis, in connection with the so-called _gluing theorems_.
Perelman's Doubling Theorem [43, Sect. 5.2] states that a finite dimensional Alexandrov space with a curvature lower bound can be doubled along its boundary, yielding an Alexandrov space with the _same_ curvature lower bound and dimension. This result has been extended by Petrunin [44, Th. 2.1] to the gluing of Alexandrov spaces.
It is interesting to understand whether these classical results hold true for general metric-measure spaces satisfying synthetic Ricci curvature lower bounds in the sense of Lott-Sturm-Villani. In [34], the gluing theorem was proved for \(\mathsf{CD}(K,N)\) spaces with Alexandrov curvature bounded from below (while it is false for \(\mathsf{MCP}\) spaces, see [46]).
Here we obtain that, in general, the assumption of Alexandrov curvature bounded from below cannot be removed from the results in [34]. More precisely, Theorems 1.10 and 1.11, and the fact that the metric-measure double of the Grushin half-plane \(\mathbb{G}_{p}^{+}\) is \(\mathbb{G}_{p}\) (see [46, Prop. 6]) yield the following corollary.
**Corollary 1.12** (Counterexample to gluing in \(\mathsf{RCD}\) spaces).: _For all \(N\geq 10\), there exists a geodesically convex \(\mathsf{RCD}(0,N)\) metric-measure space with boundary such that its metric-measure double does not satisfy the \(\mathsf{CD}(K,\infty)\) condition for any \(K\in\mathbb{R}\)._
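The threshold \(N\geq 10\) in Corollary 1.12 can be traced back to (1.8): for \(p>1\),
\[\frac{\mathrm{d}}{\mathrm{d}p}N_{p}=\frac{(p+1)(p-3)}{(p-1)^{2}},\]
so \(N_{p}\) is minimized over \(p\geq 1\) at \(p=3\), with \(N_{3}=\frac{16}{2}+2=10\). The corollary is thus realized, for instance, by the weighted Grushin half-plane \(\mathbb{G}_{3}^{+}\), whose metric-measure double \(\mathbb{G}_{3}\) fails the \(\mathsf{CD}(K,\infty)\) condition for every \(K\in\mathbb{R}\) by Theorem 1.10.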
In [34, Conj. 1.6], the authors conjecture the validity of the gluing theorem for _non-collapsed_\(\mathsf{RCD}(K,N)\), with \(N\) the Hausdorff dimension of the metric-measure space. As introduced in [21], a non-collapsed \(\mathsf{RCD}(K,N)\) space is an infinitesimally Hilbertian \(\mathsf{CD}(K,N)\) space with \(\mathsf{m}=\mathscr{H}^{N}\), where \(\mathscr{H}^{N}\) denotes the \(N\)-dimensional Hausdorff measure of \((X,\mathsf{d})\). Since the weighted half-Grushin spaces are indeed collapsed, Corollary 1.12 also shows that the non-collapsing assumption cannot be removed from [34, Conj. 1.6].
### Acknowledgments
We wish to thank Michel Bonnefont for fruitful discussions and, in particular, for bringing to our attention some technical details in [23] that inspired the strategy of the proof of Theorem 1.2.
This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 945655) and the ANR grant 'RAGE' (ANR-18-CE40-0012). The second-named author is a member of the Istituto Nazionale di Alta Matematica (INdAM), Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA), and is partially supported by the INdAM-GNAMPA 2022 Project _Analisi geometrica in strutture subriemanniane_, CUP code E55F22000270001.
## 2. Preliminaries
In this section, we introduce some notation and recall some results about sub-Riemannian manifolds and curvature-dimension conditions.
### Sub-Riemannian structures
For \(L\in\mathbb{N}\), we let \(\mathscr{F}=\{X_{1},\dots,X_{L}\}\) be a family of smooth vector fields globally defined on a smooth \(n\)-dimensional manifold \(M\), \(n\geq 2\). The (generalized) _sub-Riemannian distribution_ induced by the family \(\mathscr{F}\) is defined by
\[\mathscr{D}=\bigsqcup_{x\in M}\mathscr{D}_{x},\quad\mathscr{D}_{x}=\operatorname {span}\{X_{1}|_{x},\dots,X_{L}|_{x}\}\subset T_{x}M,\quad x\in M. \tag{2.1}\]
Note that we do not require the dimension of \(\mathscr{D}_{x}\) to be constant as \(x\in M\) varies, that is, we may consider _rank-varying_ distributions. With a standard abuse of notation, we let
\[\Gamma(\mathscr{D})=C^{\infty}\text{-module generated by }\mathscr{F}.\]
Notice that, for any smooth vector field \(V\), it holds
\[V\in\Gamma(\mathscr{D})\implies V_{x}\in\mathscr{D}_{x}\text{ for all }x\in M,\]
but the converse is false in general. We let
\[\|V\|_{x}=\min\biggl{\{}|u|:u\in\mathbb{R}^{L}\text{ such that }V=\sum_{i=1}^{L}u_{i}\,X_{i}|_{x},\ X_{i}\in \mathscr{F}\biggr{\}} \tag{2.2}\]
whenever \(V\in\mathscr{D}_{x}\) and \(x\in M\). The _norm_ \(\|\cdot\|_{x}\) induced by the family \(\mathscr{F}\) satisfies the _parallelogram law_ and, consequently, it is induced by a _scalar product_
\[\mathrm{g}_{x}\colon\mathscr{D}_{x}\times\mathscr{D}_{x}\to\mathbb{R}.\]
An _admissible curve_ is a path \(\gamma\colon[0,1]\to M\), locally Lipschitz in charts, for which there exists a _control_ \(u\in\mathrm{L}^{\infty}([0,1];\mathbb{R}^{L})\) such that
\[\dot{\gamma}(t)=\sum_{i=1}^{L}u_{i}(t)X_{i}|_{\gamma(t)}\quad\text{for a.e. }t \in[0,1].\]
The _length_ of an admissible curve \(\gamma\) is defined via the norm (2.2) as
\[\operatorname{length}(\gamma)=\int_{0}^{1}\|\dot{\gamma}(t)\|_{\gamma(t)}\, \mathrm{d}t\]
and the _Carnot-Caratheodory_ (or _sub-Riemannian_) _distance_ between \(x,y\in M\) is
\[\mathsf{d}(x,y)=\inf\{\operatorname{length}(\gamma):\gamma\text{ admissible with }\gamma(0)=x,\ \gamma(1)=y\}.\]
We assume that the family \(\mathscr{F}\) satisfies the _bracket-generating condition_
\[T_{x}M=\{X|_{x}:X\in\operatorname{Lie}(\mathscr{F})\}\quad\text{for all }x\in M, \tag{2.3}\]
where \(\operatorname{Lie}(\mathscr{F})\) is the smallest Lie subalgebra of vector fields on \(M\) containing \(\mathscr{F}\), namely,
\[\operatorname{Lie}(\mathscr{F})=\operatorname{span}\Bigl{\{}[X_{i_{1}},\dots, [X_{i_{j-1}},X_{i_{j}}]]:X_{i_{\ell}}\in\mathscr{F},\ j\in\mathbb{N}\Bigr{\}}.\]
Under the assumption (2.3), the Chow-Rashevskii Theorem implies that \(\mathsf{d}\) is a well-defined finite distance on \(M\), inducing the same topology as that of the ambient manifold.
### Gradient, sub-Laplacian and Sobolev spaces
The _gradient_ of a function \(u\in C^{\infty}(M)\) is the unique vector field \(\nabla u\in\Gamma(\mathscr{D})\) such that
\[\mathrm{g}(\nabla u,V)=du(V)\quad\text{for all }V\in\Gamma(\mathscr{D}). \tag{2.4}\]
One can check that \(\nabla u\) can be globally represented as
\[\nabla u=\sum_{i=1}^{L}X_{i}u\,X_{i},\quad\text{with}\quad\|\nabla u\|^{2}= \sum_{i=1}^{L}(X_{i}u)^{2}, \tag{2.5}\]
even if the family \(\mathscr{F}\) is not linearly independent, see Corollary A.2 for a proof.
We equip the manifold \(M\) with a _positive smooth_ measure \(\mathsf{m}\). The _sub-Laplacian_ of a function \(u\in C^{\infty}(M)\) is the unique function \(\Delta u\in C^{\infty}(M)\) such that
\[\int_{M}\operatorname{g}(\nabla u,\nabla v)\,\mathrm{d}\mathsf{m}=-\int_{M}v\, \Delta u\,\mathrm{d}\mathsf{m} \tag{2.6}\]
for all \(v\in C^{\infty}_{c}(M)\). One can check that \(\Delta u\) can be globally represented as
\[\Delta u=\sum_{i=1}^{L}\left(X_{i}^{2}u+X_{i}u\,\operatorname{div}_{\mathsf{m} }(X_{i})\right), \tag{2.7}\]
see Corollary A.2 for a proof. In (2.7), \(\operatorname{div}_{\mathsf{m}}V\) is the divergence of the vector field \(V\) computed with respect to \(\mathsf{m}\), that is,
\[\int_{M}v\,\operatorname{div}_{\mathsf{m}}(V)\,\mathrm{d}\mathsf{m}=-\int_{M} \operatorname{g}(\nabla v,V)\,\mathrm{d}\mathsf{m}\quad\text{for all $v\in C^{ \infty}_{c}(M)$}.\]
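For instance, on the weighted Grushin plane \(\mathbb{G}_{p}\) of Section 1.5, a direct computation from (2.5) and (2.7) gives, away from the singular set \(\{x=0\}\),
\[\nabla u=\partial_{x}u\,\partial_{x}+x^{2}\,\partial_{y}u\,\partial_{y},\qquad\|\nabla u\|^{2}=(\partial_{x}u)^{2}+x^{2}(\partial_{y}u)^{2},\qquad\Delta u=\partial_{x}^{2}u+x^{2}\,\partial_{y}^{2}u+\frac{p}{x}\,\partial_{x}u,\]
since \(\operatorname{div}_{\mathsf{m}_{p}}(X)=p/x\) and \(\operatorname{div}_{\mathsf{m}_{p}}(Y)=0\) for \(x\neq 0\).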
For \(q\in[1,\infty)\), we say that \(u\in\operatorname{L}^{1}_{\mathrm{loc}}(M,\mathsf{m})\) has \(q\)-integrable _distributional \(X_{i}\)-derivative_ if there exists a function \(X_{i}u\in\operatorname{L}^{q}(M,\mathsf{m})\) such that
\[\int_{M}vX_{i}u\,\mathrm{d}\mathsf{m}=\int_{M}uX_{i}^{*}v\,\mathrm{d}\mathsf{ m}\quad\text{for all $v\in C^{\infty}_{c}(M)$},\]
where \(X_{i}^{*}v=-X_{i}v-v\operatorname{div}_{\mathsf{m}}(X_{i})\) denotes the adjoint action of \(X_{i}\). We thus let
\[\operatorname{HW}^{1,q}(M,\mathsf{m})=\{u\in\operatorname{L}^{q}(M,\mathsf{ m}):X_{i}u\in\operatorname{L}^{q}(M,\mathsf{m}),\ i=1,\dots,L\}\]
be the usual _horizontal \(\operatorname{W}^{1,q}\) Sobolev space_ induced by the family \(\mathscr{F}\) and the measure \(\mathsf{m}\) on \(M\), endowed with the natural norm
\[\|u\|_{\operatorname{HW}^{1,q}(M,\mathsf{m})}=\left(\|u\|_{\operatorname{L}^ {q}(M,\mathsf{m})}^{q}+\|\nabla u\|_{\operatorname{L}^{q}(M,\mathsf{m})}^{q} \right)^{1/q}\]
for all \(u\in\operatorname{HW}^{1,q}(M,\mathsf{m})\), where \(\nabla u=\sum_{i=1}^{L}X_{i}u\,X_{i}\) in accordance with (2.5) and
\[\|\nabla u\|_{\operatorname{L}^{q}(M,\mathsf{m})}^{q}=\int_{M}\|\nabla u\|^{ q}\,\mathrm{d}\mathsf{m}.\]
### Privileged coordinates
Following [18, 30], we introduce _privileged coordinates_, a fundamental tool in the description of the tangent cone of sub-Riemannian manifolds.
Given a multi-index \(I\in\{1,\dots,L\}^{\times i}\), \(i\in\mathbb{N}\), we let \(|I|=i\) be its _length_ and we set
\[X_{I}=[X_{I_{1}},[\dots,[X_{I_{i-1}},X_{I_{i}}]]].\]
Accordingly, we define
\[\mathscr{D}_{x}^{i}=\operatorname{span}\{X_{I}|_{x}:|I|\leq i\} \tag{2.8}\]
and
\[k_{i}(x)=\dim\mathscr{D}_{x}^{i}\]
for all \(x\in M\) and \(i\in\mathbb{N}\). In particular, \(\mathscr{D}_{x}^{0}=\{0\}\) and \(\mathscr{D}_{x}^{1}=\mathscr{D}_{x}\) as in (2.1) for all \(x\in M\). The spaces defined in (2.8) naturally yield the filtration
\[\{0\}=\mathscr{D}_{x}^{0}\subset\mathscr{D}_{x}^{1}\subset\dots\subset \mathscr{D}_{x}^{s(x)}=T_{x}M\]
for all \(x\in M\), where \(s=s(x)\in\mathbb{N}\) is the _step_ of the sub-Riemannian structure at the point \(x\). We say that \(x\in M\) is a _regular_ point if the dimension of each space \(\mathscr{D}_{y}^{i}\) remains constant as \(y\in M\) varies in an open neighborhood of \(x\), otherwise \(x\) is a _singular_ point.
**Definition 2.1** (Adapted and privileged coordinates).: Let \(o\in M\) and let \(U\subset M\) be an open neighborhood of \(o\). We say that the local coordinates given by a diffeomorphism \(z\colon U\to\mathbb{R}^{n}\) are _adapted at \(o\)_ if they are _centered at \(o\)_, i.e. \(z(o)=0\), and \(\partial_{z_{1}}|_{0},\dots,\partial_{z_{k_{i}}}|_{0}\) form a basis for \(\mathscr{D}_{o}^{i}\) in these coordinates for all \(i=1,\dots,s(o)\). We say that the adapted coordinate \(z_{i}\) has _weight_\(w_{i}=j\) if \(\partial_{z_{i}}|_{0}\in\mathscr{D}_{o}^{j}\setminus\mathscr{D}_{o}^{j-1}\). Furthermore, we say that the coordinates \(z\) are _privileged at \(o\)_ if they are adapted at \(o\) and, in addition, \(z_{i}(x)=O(\mathsf{d}(x,o)^{w_{i}})\) for all \(x\in U\) and \(i=1,\dots,n\).
Privileged coordinates exist in a neighborhood of any point, see [18, Th. 4.15].
### Nilpotent approximation
From now on, we fix a set of privileged coordinates \(z\colon U\to\mathbb{R}^{n}\) around a point \(o\in M\) in the sense of Definition 2.1. Without loss of generality, we identify the coordinate domain \(U\subset M\) with \(\mathbb{R}^{n}\) and the base point \(o\in M\) with the origin \(0\in\mathbb{R}^{n}\). Similarly, the vector fields in \(\mathscr{F}\) defined on \(U\) are identified with vector fields on \(\mathbb{R}^{n}\), and the restriction of the sub-Riemannian distance \(\mathsf{d}\) to \(U\) is identified with a distance function on \(\mathbb{R}^{n}\), which is induced by the family \(\mathscr{F}\), for which we keep the same notation.
On \((\mathbb{R}^{n},\mathscr{F})\), we define a family of _dilations_, for \(\lambda\geq 0\), by letting
\[\operatorname{dil}_{\lambda}\colon\mathbb{R}^{n}\to\mathbb{R}^{n},\quad \operatorname{dil}_{\lambda}(z_{1},\dots,z_{n})=(\lambda^{w_{1}}z_{1},\dots, \lambda^{w_{n}}z_{n})\]
for all \(z=(z_{1},\dots,z_{n})\in\mathbb{R}^{n}\), where the \(w_{i}\)'s are the weights given by Definition 2.1. We say that a differential operator \(P\) is _homogeneous of degree \(-d\in\mathbb{Z}\)_ if
\[P(f\circ\operatorname{dil}_{\lambda})=\lambda^{-d}(Pf)\circ\operatorname{dil }_{\lambda}\quad\text{for all $\lambda>0$ and $f\in C^{\infty}(\mathbb{R}^{n})$.} \tag{2.9}\]
Note that the monomial \(z_{i}\) is homogeneous of degree \(w_{i}\), while the vector field \(\partial_{z_{i}}\) is homogeneous of degree \(-w_{i}\), for \(i=1,\dots,n\). As a consequence, the differential operator
\[z_{1}^{\mu_{1}}\cdot\dots\cdot z_{n}^{\mu_{n}}\frac{\partial^{|\nu|}}{ \partial z_{1}^{\nu_{1}}\cdots\partial z_{n}^{\nu_{n}}},\qquad\nu_{i},\mu_{j} \in\mathbb{N}\cup\{0\},\]
is homogeneous of degree \(\sum_{i=1}^{n}w_{i}(\mu_{i}-\nu_{i})\). For more details, see [18, Sec. 5].
We can now introduce the new family
\[\widehat{\mathscr{F}}=\left\{\widehat{X}_{1},\dots,\widehat{X}_{L}\right\}\]
by defining
\[\widehat{X}_{i}=\lim_{\varepsilon\to 0}X_{i}^{\varepsilon},\quad X_{i}^{ \varepsilon}=\varepsilon\,(\operatorname{dil}_{1/\varepsilon})_{*}X_{i}, \tag{2.10}\]
for all \(i=1,\dots,L\), where \((\operatorname{dil}_{1/\varepsilon})_{*}\) stands for the usual push-forward via the differential of the dilation map \(\operatorname{dil}_{1/\varepsilon}\), see [18, Sec. 5.3]. The convergence in (2.10) can be actually made more precise, in the sense that
\[X_{i}^{\varepsilon}=\widehat{X}_{i}+\mathscr{R}_{i}^{\varepsilon},\quad i=1, \dots,L,\]
where \(\mathscr{R}_{i}^{\varepsilon}\) locally uniformly converges to zero as \(\varepsilon\to 0\), see [18, Th. 5.19].
The family \(\widehat{\mathscr{F}}\) is a set of complete vector fields on \(\mathbb{R}^{n}\), homogeneous of degree \(-1\), with polynomial coefficients, and can be understood as the 'principal part' of \(\mathscr{F}\) upon blow-up by dilations. Since \(\mathscr{F}\) satisfies the bracket-generating condition (2.3), also the new family \(\widehat{\mathscr{F}}\) is bracket-generating at all points of \(\mathbb{R}^{n}\), and thus induces a finite sub-Riemannian distance \(\widehat{\mathsf{d}}\), see [18, Prop. 5.17]. The resulting \(n\)-dimensional sub-Riemannian structure \((\mathbb{R}^{n},\widehat{\mathscr{F}})\) is called _nilpotent approximation_ of \((\mathbb{R}^{n},\mathscr{F})\) at \(0\in\mathbb{R}^{n}\).
The family \(\widehat{\mathscr{F}}=\left\{\widehat{X}_{1},\ldots,\widehat{X}_{L}\right\}\) generates a finite-dimensional stratified Lie algebra
\[\mathfrak{g}=\operatorname{Lie}(\widehat{\mathscr{F}})=\mathfrak{g}^{1}\oplus \cdots\oplus\mathfrak{g}^{s}\]
of step \(s=s(0)\in\mathbb{N}\), where the grading is given by the degree of the vector fields, according to the definition in (2.9), that is, the layer \(\mathfrak{g}^{i}\) corresponds to vector fields homogeneous of degree \(-i\) with respect to dilations, see [18, Sec. 5.4]. In particular, \(\mathfrak{g}^{1}=\operatorname{span}\bigl{\{}\widehat{X}_{1},\ldots,\widehat{ X}_{L}\bigr{\}}\), so that \(\mathfrak{g}\) is generated by its first stratum, namely,
\[\mathfrak{g}^{j+1}=[\mathfrak{g}^{1},\mathfrak{g}^{j}],\qquad\forall j=1, \ldots,s-1. \tag{2.11}\]
Finally, define the Lie subalgebra of vector fields vanishing at \(0\),
\[\mathfrak{h}=\left\{\widehat{X}\in\mathfrak{g}:\widehat{X}|_{0}=0\right\}= \mathfrak{h}^{1}\oplus\cdots\oplus\mathfrak{h}^{s},\]
which inherits the grading from the one of \(\mathfrak{g}\),
\[\mathfrak{h}^{j+1}=[\mathfrak{h}^{1},\mathfrak{h}^{j}],\qquad\forall j=1, \ldots,s-1. \tag{2.12}\]
It is a fundamental fact [18, Th. 5.21] that the nilpotent approximation \((\mathbb{R}^{n},\widehat{\mathscr{F}})\) is diffeomorphic to the homogeneous sub-Riemannian space \(G/H\), where \(G\) is the Carnot group \(G=\exp\mathfrak{g}\) (explicitly realized as the subgroup of the flows of the vector fields of \(\mathfrak{g}\) acting on \(\mathbb{R}^{n}\) from the right) and \(H=\exp\mathfrak{h}\) is the Carnot subgroup induced by \(\mathfrak{h}\).
In particular, if \(0\in\mathbb{R}^{n}\) is a regular point, then \(H=\{0\}\), and so the nilpotent approximation \((\mathbb{R}^{n},\widehat{\mathscr{F}})\) is diffeomorphic to the Carnot group \(G\), see [18, Prop. 5.22].
Recall that the smooth measure \(\mathsf{m}\) on the original manifold \(M\) can be identified with a smooth measure on \(U\simeq\mathbb{R}^{n}\), for which we keep the same notation. In particular, \(\mathsf{m}\) is absolutely continuous with respect to the \(n\)-dimensional Lebesgue measure \(\mathscr{L}^{n}\) on \(\mathbb{R}^{n}\), with \(\mathsf{m}=\rho\,\mathscr{L}^{n}\) for some positive smooth function \(\rho\colon\mathbb{R}^{n}\to(0,\infty)\). The corresponding blow-up measure on the nilpotent approximation is naturally given by
\[\widehat{\mathsf{m}}=\lim_{\varepsilon\to 0}\mathsf{m}^{\varepsilon}=\rho(0) \,\mathscr{L}^{n},\quad\mathsf{m}^{\varepsilon}=\varepsilon^{Q}\,(\operatorname {dil}_{1/\varepsilon})_{\#}\mathsf{m},\]
in the sense of weak\({}^{*}\) convergence of measures in \(\mathbb{R}^{n}\), where
\[Q=\sum_{i=1}^{n}w_{i}\in\mathbb{N}\]
is the so-called homogeneous dimension of \((\mathbb{R}^{n},\widehat{\mathscr{F}})\) and \((\operatorname{dil}_{1/\varepsilon})_{\#}\) stands for the push-forward in the measure-theoretic sense via the dilation map \(\operatorname{dil}_{1/\varepsilon}\). Consequently, without loss of generality, we can assume that \(\rho(0)=1\), thus endowing \((\mathbb{R}^{n},\widehat{\mathscr{F}})\) with the \(n\)-dimensional Lebesgue measure. Notice that \(\operatorname{div}_{\mathscr{L}^{n}}\widehat{X}_{i}=0\), for all \(i=1,\ldots,L\), since each \(\widehat{X}_{i}\) is homogeneous of degree \(-1\). Hence, by (2.7), the sub-Laplacian of a function \(u\in C^{\infty}(\mathbb{R}^{n})\) can be globally represented as
\[\widehat{\Delta}u=\sum_{i=1}^{L}\widehat{X}_{i}^{2}u. \tag{2.13}\]
It is worth noticing that the metric space \((\mathbb{R}^{n},\widehat{\mathsf{d}})\) induced by the nilpotent approximation \((\mathbb{R}^{n},\widehat{\mathscr{F}})\) actually coincides with the _metric tangent cone_ at \(o\in M\) of the metric space \((M,\mathsf{d})\) in the sense of Gromov [26], see [18, Th. 7.36] for the precise statement.
In fact, the sub-Riemannian distance \(\mathsf{d}^{\varepsilon}\) induced by the vector fields \(X_{i}^{\varepsilon}\), \(i=1,\ldots,L\), defined in (2.10) is uniformly converging to the distance \(\widehat{\mathsf{d}}\) on compact sets as \(\varepsilon\to 0\).
It is not difficult to check that the family \(\{(\mathbb{R}^{n},\mathsf{d}^{\varepsilon},\mathsf{m}^{\varepsilon},0)\}_{\varepsilon>0}\) of pointed metric-measure spaces converges to the pointed metric-measure space \((\mathbb{R}^{n},\widehat{\mathsf{d}},\mathscr{L}^{n},0)\) as \(\varepsilon\to 0\) in the _pointed measure Gromov-Hausdorff topology_, see [13, Sec. 10.3] for details.
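As an illustration, consider the Grushin plane of Section 1.5 and a point \(o\) on the singular set \(S=\{x=0\}\). There \(k_{1}=1\), \(k_{2}=2\), the coordinates \((x,y)\) (centered at \(o\)) are privileged with weights \(w_{1}=1\) and \(w_{2}=2\), and \(\operatorname{dil}_{\lambda}(x,y)=(\lambda x,\lambda^{2}y)\). The fields \(X=\partial_{x}\) and \(Y=x\,\partial_{y}\) are already homogeneous of degree \(-1\), so \(\widehat{\mathscr{F}}=\mathscr{F}\) and the nilpotent approximation is the Grushin plane itself. In this case
\[\mathfrak{g}=\operatorname{span}\{\partial_{x},\,x\partial_{y},\,\partial_{y}\},\qquad\mathfrak{h}=\operatorname{span}\{x\partial_{y}\},\qquad Q=1+2=3,\]
so \(G\) is the Heisenberg group, \(H=\exp\mathfrak{h}\) is a non-trivial one-parameter subgroup, and the metric tangent cone \(G/H\) is the Grushin plane, consistently with the fact, recalled in the introduction, that the metric tangent to the Grushin plane at a singular point is the Grushin plane itself.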
### The curvature-dimension condition
We end this section by recalling the definition of the curvature-dimension conditions introduced in [48, 49, 36].
On a Polish (i.e., separable and complete) metric space \((X,\mathsf{d})\), we let \(\mathscr{P}(X)\) be the set of probability Borel measures on \(X\) and define the _Wasserstein (extended) distance_\(\mathsf{W}_{2}\)
\[\mathsf{W}_{2}^{2}(\mu,\nu)=\inf\biggl{\{}\int_{X\times X}\mathsf{d}^{2}(x,y) \,\mathrm{d}\pi:\pi\in\mathsf{Plan}(\mu,\nu)\biggr{\}}\in[0,\infty],\]
for \(\mu,\nu\in\mathscr{P}(X)\), where
\[\mathsf{Plan}(\mu,\nu)=\{\pi\in\mathscr{P}(X\times X):(\mathrm{p}_{1})_{\#} \pi=\mu,\ (\mathrm{p}_{2})_{\#}\pi=\nu\},\]
where \(\mathrm{p}_{i}\colon X\times X\to X\), \(i=1,2\), are the projections on each component and \(T_{\#}\mu\in\mathscr{P}(X)\) denotes the push-forward measure given by any \(\mu\)-measurable map \(T\colon X\to X\). The function \(\mathsf{W}_{2}\) is a distance on the _Wasserstein space_
\[\mathscr{P}_{2}(X)=\biggl{\{}\mu\in\mathscr{P}(X):\int_{X}\mathsf{d}^{2}(x,x_ {0})\,\mathrm{d}\mu(x)<\infty\ \text{for some, and thus any, }x_{0}\in X\biggr{\}}.\]
Note that \((\mathscr{P}_{2}(X),\mathsf{W}_{2})\) is a Polish metric space which is geodesic as soon as \((X,\mathsf{d})\) is. In addition, letting \(\mathrm{Geo}(X)\) be the set of geodesics of \((X,\mathsf{d})\), namely, curves \(\gamma\colon[0,1]\to X\) such that \(\mathsf{d}(\gamma_{s},\gamma_{t})=|s-t|\,\mathsf{d}(\gamma_{0},\gamma_{1})\), for all \(s,t\in[0,1]\), any \(W_{2}\)-geodesic \(\mu\colon[0,1]\to\mathscr{P}_{2}(X)\) can be (possibly non-uniquely) represented as \(\mu_{t}=(e_{t})_{\sharp}\nu\) for some \(\nu\in\mathscr{P}(\mathrm{Geo}(X))\), where \(e_{t}\colon\mathrm{Geo}(X)\to X\) is the evaluation map at time \(t\in[0,1]\).
We endow the metric space \((X,\mathsf{d})\) with a non-negative Borel measure \(\mathsf{m}\) which is finite on bounded sets and such that \(\operatorname{supp}(\mathsf{m})=X\).
We define the _(relative) entropy_ functional \[\mathsf{Ent}_{\mathsf{m}}\colon\mathscr{P}_{2}(X)\to[-\infty,+\infty]\] by letting
\[\mathsf{Ent}_{\mathsf{m}}(\mu)=\int_{X}\rho\log\rho\,\mathrm{d}\mathsf{m}\]
if \(\mu=\rho\mathsf{m}\) and \(\rho\log\rho\in\mathrm{L}^{1}(X,\mathsf{m})\), while we set \(\mathsf{Ent}_{\mathsf{m}}(\mu)=+\infty\) otherwise.
**Definition 2.2** (\(\mathsf{CD}(K,\infty)\) property).: We say that a metric-measure space \((X,\mathsf{d},\mathsf{m})\) satisfies the \(\mathsf{CD}(K,\infty)\) property if, for any \(\mu_{0},\mu_{1}\in\mathscr{P}_{2}(X)\) with \(\mathsf{Ent}_{\mathsf{m}}(\mu_{i})<+\infty\), \(i=0,1\), there exists a \(W_{2}\)-geodesic \([0,1]\ni s\mapsto\mu_{s}\in\mathscr{P}_{2}(X)\) joining them such that
\[\mathsf{Ent}_{\mathsf{m}}(\mu_{s})\leq(1-s)\,\mathsf{Ent}_{\mathsf{m}}(\mu_{0 })+s\,\mathsf{Ent}_{\mathsf{m}}(\mu_{1})-\frac{K}{2}\,s(1-s)\,\mathsf{W}_{2}^{2 }(\mu_{0},\mu_{1}) \tag{2.14}\]
for every \(s\in[0,1]\).
The geodesic \(K\)-convexity of \(\mathsf{Ent}_{\mathsf{m}}\) in (2.14) can be reinforced to additionally encode an upper bound on the dimension on the space, as recalled below. For \(N\in(1,\infty)\), we let
\[S_{N}(\mu,\mathsf{m})=-\int_{X}\rho^{-1/N}\,\mathrm{d}\mu,\qquad\mu=\rho \mathsf{m}+\mu^{\perp},\]
be the \(N\)_-Renyi entropy_ of \(\mu\in\mathscr{P}_{2}(X)\) with respect to \(\mathsf{m}\), where \(\mu=\rho\mathsf{m}+\mu^{\perp}\) denotes the Radon-Nikodym decomposition of \(\mu\) with respect to \(\mathsf{m}\).
**Definition 2.3** (\(\mathsf{CD}(K,N)\) property).: We say that a metric-measure space \((X,\mathsf{d},\mathsf{m})\) satisfies the \(\mathsf{CD}(K,N)\) property for some \(N\in[1,\infty)\) if, for any \(\mu_{0},\mu_{1}\in\mathscr{P}_{2}(X)\) with \(\mu_{i}=\rho_{i}\mathsf{m}\), \(i=0,1\), there exists a \(W_{2}\)-geodesic \([0,1]\ni s\mapsto\mu_{s}\in\mathscr{P}_{2}(X)\) joining them, with \(\mu_{s}=(e_{s})_{\sharp}\nu\) for some \(\nu\in\mathscr{P}(\operatorname{Geo}(X))\) such that
\[S_{N^{\prime}}(\mu_{s},\mathsf{m})\leq-\int_{\operatorname{Geo}(X)}\left[ \tau_{K,N^{\prime}}^{(1-s)}(\mathsf{d}(\gamma_{0},\gamma_{1}))\rho_{0}^{-1/N^{ \prime}}(\gamma_{0})+\tau_{K,N^{\prime}}^{(s)}(\mathsf{d}(\gamma_{0},\gamma_{1 }))\rho_{1}^{-1/N^{\prime}}(\gamma_{1})\right]\mathrm{d}\nu(\gamma)\]
for every \(s\in[0,1]\), \(N^{\prime}\geq N\). Here \(\tau_{K,N}^{(s)}\) is the _model distortion coefficient_, see [49, p. 137].
**Remark 2.4**.: The \(\mathsf{CD}(0,N)\) condition corresponds to the convexity of the \(N^{\prime}\)-Renyi entropy
\[S_{N^{\prime}}(\mu_{s},\mathsf{m})\leq(1-s)S_{N^{\prime}}(\mu_{0},\mathsf{m})+ sS_{N^{\prime}}(\mu_{1},\mathsf{m}),\]
for every \(s\in[0,1]\) and \(N^{\prime}\geq N\), with \(\mu_{0},\mu_{1}\in\mathscr{P}_{2}(X)\) as in Definition 2.3.
**Remark 2.5**.: For a \(\mathsf{CD}(K,N)\) metric-measure space, \(K\) and \(N\) represent a lower bound on the Ricci tensor and an upper bound on the dimension, respectively, and we have
\[\mathsf{CD}(K,N) \implies\mathsf{CD}(K,N^{\prime}) \text{for all }N^{\prime}\geq N,\ N,N^{\prime}\in[1,\infty],\] \[\mathsf{CD}(K,N) \implies\mathsf{CD}(K^{\prime},N) \text{for all }K^{\prime}\leq K,\ K,K^{\prime}\in\mathbb{R}.\]
In particular, the \(\mathsf{CD}(K,\infty)\) condition (2.14) is the weakest of all the curvature-dimension conditions for fixed \(K\in\mathbb{R}\).
## 3. Proofs
We first deal with Theorems 1.4 and 1.5, from which Theorem 1.2 immediately follows.
### Proof of Theorem 1.4
We divide the proof into four steps.
_Step 1: passing to the nilpotent approximation via blow-up._
Let \((\mathbb{R}^{n},\widehat{\mathscr{F}})\) be the nilpotent approximation of \((M,\mathscr{F})\) at some fixed point \(o\in M\) as explained in Section 2.4. Let \(u\in C_{c}^{\infty}(M)\) and, without loss of generality, let us assume that \(\operatorname{supp}u\) is contained in the domain of the privileged coordinates at \(o\in M\). In particular, we identify \(u\) with a \(C_{c}^{\infty}\) function on \(\mathbb{R}^{n}\). We now apply (1.1) to the dilated function
\[u_{\varepsilon}=u\circ\operatorname{dil}_{1/\varepsilon}\in C_{c}^{\infty}( \mathbb{R}^{n}),\quad\text{for }\varepsilon>0,\]
and evaluate this expression at the point \(\operatorname{dil}_{\varepsilon}(x)\in\mathbb{R}^{n}\). Exploiting the expressions in Corollary A.2, we get that
\[\sum_{i,j=1}^{L}X_{i}^{\varepsilon}u\left(X_{ijj}^{\varepsilon}u-X_{jji}^{ \varepsilon}u\right)-(X_{ij}^{\varepsilon}u)^{2}+\mathscr{R}_{i,j}^{ \varepsilon}\,u\leq 0, \tag{3.1}\]
where \(X_{i}^{\varepsilon}\) is as in (2.10), \(X_{ijk}=X_{i}X_{j}X_{k}\) whenever \(i,j,k\in\{1,\ldots,L\}\), and \(\mathscr{R}_{i,j}^{\varepsilon}\) is a remainder locally uniformly vanishing as \(\varepsilon\to 0\). Therefore, letting \(\varepsilon\to 0\) in (3.1), by the convergence in (2.10) we get
\[\sum_{i,j=1}^{L}\widehat{X}_{i}u\left(\widehat{X}_{ijj}u-\widehat{X}_{jji}u \right)-\left(\widehat{X}_{ij}u\right)^{2}\leq 0, \tag{3.2}\]
which is (1.1) with \(K=0\) for the nilpotent approximation \((\mathbb{R}^{n},\widehat{\mathscr{F}})\).
_Step 2: improvement via homogeneous structure._ We now show that (3.2) implies a stronger identity, see (3.4) below, obtained from (3.2) by removing the squared term and replacing the inequality with an equality. Recall, in particular, the definition of weight of (privileged) coordinates in Definition 2.1. We take \(u\in C^{\infty}(\mathbb{R}^{n})\) of the form
\[u=\alpha+\gamma,\]
where \(\alpha\) and \(\gamma\) are homogeneous polynomials of weighted degree \(1\) and at least \(3\), respectively. Since \(X_{I}\alpha=0\) as soon as the multi-index satisfies \(|I|\geq 2\) (see [18, Prop. 4.10]), we can take the terms with lowest homogeneous degree in (3.2) to get
\[\sum_{i,j=1}^{L}\widehat{X}_{i}\alpha\left(\widehat{X}_{ijj}\gamma-\widehat{X }_{jji}\gamma\right)=\sum_{i=1}^{L}\widehat{X}_{i}\alpha\left[\widehat{X}_{i},\widehat{\Delta}\right](\gamma)\leq 0\]
for all such \(\alpha\) and \(\gamma\). In the second equality, we used the fact that the sub-Laplacian \(\widehat{\Delta}\) is a sum of squares as in (2.13). Since \(\alpha\) can be replaced with \(-\alpha\), we must have that
\[\sum_{i=1}^{L}\widehat{X}_{i}\alpha\left[\widehat{X}_{i},\widehat{\Delta} \right](\gamma)=0. \tag{3.3}\]
Observing that \(\widehat{X}_{i}\alpha\) is homogeneous of degree \(0\), and thus a constant function, we can rewrite (3.3) as
\[\left[\sum_{i=1}^{L}\widehat{X}_{i}\alpha\,\widehat{X}_{i},\widehat{\Delta} \right](\gamma)=0, \tag{3.4}\]
which is the sought improvement of (3.2).
_Step 3: construction of the space \(\mathfrak{i}\subset\mathfrak{g}^{1}\)._ Let \(\mathbb{P}_{1}^{n}\) be the vector space of homogeneous polynomials of weighted degree \(1\) on \(\mathbb{R}^{n}\). Notice that
\[\mathbb{P}_{1}^{n}=\operatorname{span}\{z_{i}\mid i=1,\ldots,k_{1}\},\quad k_ {1}=\dim\mathscr{D}|_{0},\]
that is, \(\mathbb{P}_{1}^{n}\) is generated by the monomials given by the coordinates of lowest weight. We now define a linear map \(\phi\colon\mathbb{P}_{1}^{n}\to\mathfrak{g}^{1}\) by letting
\[\phi[\alpha]=\widehat{\nabla}\alpha=\sum_{i=1}^{L}\widehat{X}_{i}\alpha\, \widehat{X}_{i}\]
for all \(\alpha\in\mathbb{P}_{1}^{n}\) (recall Corollary A.2). We claim that \(\phi\) is injective. Indeed, if \(\phi[\alpha]=0\) for some \(\alpha\in\mathbb{P}_{1}^{n}\), then, by applying the operator \(\phi[\alpha]\) to the polynomial \(\alpha\), we get
\[0=\phi[\alpha](\alpha)=\left(\sum_{i=1}^{L}\widehat{X}_{i}\alpha\,\widehat{X} _{i}\right)(\alpha)=\sum_{i=1}^{L}(\widehat{X}_{i}\alpha)^{2}.\]
Thus \(\widehat{X}_{i}\alpha=0\) for all \(i=1,\ldots,L\). Hence \(\alpha\) must have weighted degree at least \(2\). However, since \(\alpha\) is homogeneous of weighted degree \(1\), we conclude that \(\alpha=0\), proving that \(\ker\phi=\{0\}\). We can thus define the subspace
\[\mathfrak{i}=\phi[\mathbb{P}_{1}^{n}]\subset\mathfrak{g}^{1}.\]
By (3.4), any \(\widehat{X}\in\mathfrak{i}\) is such that \([\widehat{X},\widehat{\Delta}](\gamma)=0\) for any homogeneous polynomial \(\gamma\) of degree at least \(3\). Exploiting the definitions given in Section 2.4, we observe that a differential operator \(P\), homogeneous of weighted degree \(-d\in\mathbb{Z}\), has the form
\[P=\sum_{\mu,\nu}a_{\mu,\nu}z^{\mu}\frac{\partial^{|\nu|}}{\partial z^{\nu}}, \tag{3.5}\]
where \(\mu=(\mu_{1},\dots,\mu_{n})\), \(\nu=(\nu_{1},\dots,\nu_{n})\), \(\mu_{i},\nu_{j}\in\mathbb{N}\cup\{0\}\), \(a_{\mu,\nu}\in\mathbb{R}\), and the weighted degree of every addend in (3.5) is equal to \(-d\), namely, \(\sum_{i=1}^{n}(\mu_{i}-\nu_{i})w_{i}=-d\).
Thus, since \(\widehat{X}\) and \(\widehat{\Delta}\) are homogeneous differential operators of order \(-1\) and \(-2\), respectively, the bracket \([\widehat{X},\widehat{\Delta}]\) has order \(-3\), see [18, Prop. 5.16]. It follows that \([\widehat{X},\widehat{\Delta}]=0\) as a differential operator acting on \(C^{\infty}(\mathbb{R}^{n})\): indeed, being homogeneous of degree \(-3\), it maps polynomials of weighted degree at most \(2\) to polynomials of negative weighted degree, which are necessarily zero; hence it vanishes on all polynomials, and thus it is the zero operator.
We now show (1.3). Let us first observe that \(\mathfrak{i}\cap\mathfrak{h}=\{0\}\). Indeed, if \(\phi[\alpha]\in\mathfrak{h}\) for some \(\alpha\in\mathbb{P}^{n}_{1}\), that is, \(\phi[\alpha]|_{0}=0\), then \(\widehat{X}_{i}\alpha|_{0}=0\) for all \(i=1,\dots,L\). Since \(\widehat{X}_{i}\alpha\) is a constant function, this implies \(\phi[\alpha]=0\), as claimed. Therefore, since \(\dim\mathfrak{i}=\dim\mathbb{P}^{n}_{1}=k_{1}\), we must have \(\mathfrak{g}^{1}=\mathfrak{i}\oplus\mathfrak{h}^{1}\) thanks to Lemma 3.1 below.
**Lemma 3.1**.: _With the same notation of Section 2.4, if \(\mathfrak{g}^{1}=\mathfrak{v}\oplus\mathfrak{h}^{1}\), then \(\dim\mathfrak{v}=k_{1}\)._
Proof.: We claim that the dimension of \(\mathfrak{v}\) is preserved by evaluation at zero, that is, \(\dim\mathfrak{v}|_{0}=\dim\mathfrak{v}\), where \(\dim\mathfrak{v}|_{0}\) is the dimension of \(\mathfrak{v}|_{0}\) as a subspace of \(T_{0}\mathbb{R}^{n}\), while \(\dim\mathfrak{v}\) is the dimension of \(\mathfrak{v}\) as a subspace of \(\mathfrak{g}\). Indeed, we have the trivial inequality \(\dim\mathfrak{v}|_{0}\leq\dim\mathfrak{v}\). On the other hand, if strict inequality holds, then \(\mathfrak{v}\) must contain non-zero vector fields vanishing at zero, contradicting the fact that \(\mathfrak{v}\cap\mathfrak{h}=\{0\}\). Therefore, since \(\dim\mathfrak{g}^{1}|_{0}=k_{1}\) and \(\dim\mathfrak{h}^{1}|_{0}=0\), we get \(\dim\mathfrak{v}=\dim\mathfrak{v}|_{0}=k_{1}\) as desired.
_Step 4: proof of the Killing property._ We have so far proved the existence of \(\mathfrak{i}\) such that \(\mathfrak{g}^{1}=\mathfrak{i}\oplus\mathfrak{h}^{1}\), and such that any element \(Y\in\mathfrak{i}\) commutes with the sub-Laplacian \(\widehat{\Delta}\). We now show that any such \(Y\) is a Killing vector field.
Let \(Y\in\mathfrak{i}\). Since \([Y,\widehat{\Delta}]=0\), the induced flow \(\phi_{s}^{Y}\), for \(s\in\mathbb{R}\), commutes with \(\widehat{\Delta}\) when acting on smooth functions, that is,
\[\widehat{\Delta}(u\circ\phi_{s}^{Y})=(\widehat{\Delta}u)\circ\phi_{s}^{Y} \tag{3.6}\]
for all \(u\in C^{\infty}(\mathbb{R}^{n})\) and \(s\in\mathbb{R}\). Recall the sub-Riemannian Hamiltonian \(\widehat{H}:T^{*}\mathbb{R}^{n}\to\mathbb{R}\),
\[\widehat{H}(\lambda)=\frac{1}{2}\sum_{i=1}^{L}\langle\lambda,\widehat{X}_{i} \rangle^{2}, \tag{3.7}\]
for all \(\lambda\in T^{*}\mathbb{R}^{n}\). By (2.13), \(\widehat{H}\) is the principal symbol of \(\widehat{\Delta}\). Thus, from (3.6) it follows
\[\widehat{H}\circ\left(\phi_{s}^{Y}\right)^{*}=\widehat{H},\]
for all \(s\in\mathbb{R}\), where the star denotes the pull-back, and thus \(\left(\phi_{s}^{Y}\right)^{*}\) is a diffeomorphism on \(T^{*}\mathbb{R}^{n}\). This means that \(\phi_{s}^{Y}\) is an isometry, as we now show. Indeed, for any given \(x\in\mathbb{R}^{n}\), the restriction \(\widehat{H}|_{T^{*}_{x}\mathbb{R}^{n}}\) is a quadratic form on \(T^{*}_{x}\mathbb{R}^{n}\), so \((\phi_{s}^{Y})^{*}\) must preserve its kernel, that is,
\[(\phi_{s}^{Y})^{*}\ker\widehat{H}|_{T^{*}_{\phi_{s}^{Y}(x)}\mathbb{R}^{n}}= \ker\widehat{H}|_{T^{*}_{x}\mathbb{R}^{n}} \tag{3.8}\]
for all \(x\in\mathbb{R}^{n}\). By (3.7), it holds \(\ker\widehat{H}|_{T_{x}^{*}\mathbb{R}^{n}}=\widehat{\mathscr{D}}_{x}^{\perp}\), where \(\perp\) denotes the annihilator of a vector space. By duality, from (3.8) we obtain that \((\phi_{s}^{Y})_{*}\widehat{\mathscr{D}}_{x}=\widehat{\mathscr{D}}_{\phi_{s}^{Y }(x)}\) for all \(x\in\mathbb{R}^{n}\) as required by (1.2). Finally, for \(\lambda\in T_{x}^{*}M\), let \(\lambda^{\#}\in\mathscr{D}_{x}\) be uniquely defined by \(\mathrm{g}_{x}(\lambda^{\#},V)=\left\langle\lambda,V\right\rangle_{x}\) for all \(V\in\mathscr{D}_{x}\), and notice that the map \(\lambda\mapsto\lambda^{\#}\) is surjective on \(\mathscr{D}_{x}\). Then it holds \(\|\lambda^{\#}\|_{x}^{2}=2\widehat{H}(\lambda)\), see Lemma A.1. Thus, since \((\phi_{s}^{Y})^{*}\) preserves \(\widehat{H}\), the map \((\phi_{s}^{Y})_{*}\) preserves the sub-Riemannian norm, and thus \(\mathrm{g}\). This means that \(\phi_{s}^{Y}\) is an isometry, concluding the proof of Theorem 1.4.
### Proof of Theorem 1.5
We claim that
\[\mathfrak{g}^{j}=\mathfrak{h}^{j}\quad\text{for all }j\geq 2. \tag{3.9}\]
Note that (3.9) is enough to conclude the proof of Theorem 1.5, since, from (3.9) combined with (2.11) and (2.12), we immediately get that
\[\mathfrak{g}=\mathfrak{g}^{1}\oplus\mathfrak{h}^{2}\oplus\cdots\oplus \mathfrak{h}^{s}.\]
In particular, we deduce that \(\mathfrak{g}|_{0}=\mathfrak{g}^{1}|_{0}\), which in turn implies that \(\mathfrak{g}\) must be commutative, otherwise the bracket-generating condition would fail. To prove (3.9), we proceed by induction on \(j\geq 2\) as follows.
_Proof of the base case \(j=2\)._ We begin by proving the base case \(j=2\) in (3.9). To this aim, let \(\widehat{X}\in\mathfrak{i}\) and \(\widehat{Y}\in\mathfrak{g}^{1}\). By definition of Lie bracket, we can write
\[\left(\phi_{-s}^{\widehat{X}}\right)_{*}\widehat{Y}=s\left[\widehat{X}, \widehat{Y}\right]+o(s)\quad\text{as }s\to 0,\]
where \(\phi_{s}^{\widehat{X}}\), for \(s\in\mathbb{R}\), is the flow of \(\widehat{X}\). Since \(\mathfrak{g}^{1}|_{x}=\widehat{\mathscr{D}}|_{x}\) for all \(x\in\mathbb{R}^{n}\), and since \(\widehat{X}\) is Killing (in particular (1.2) holds for its flow), we have that \([\widehat{X},\widehat{Y}]|_{x}\in\widehat{\mathscr{D}}|_{x}\) for all \(x\in\mathbb{R}^{n}\). Since \([\widehat{X},\widehat{Y}]\in\mathfrak{g}^{2}\) and so, in particular, \([\widehat{X},\widehat{Y}]\) is homogeneous of degree \(-2\), we have
\[[\widehat{X},\widehat{Y}]|_{0}=\sum_{j\,:\,w_{j}=2}a_{j}\,\partial_{z_{j}}|_{0},\]
for some constants \(a_{j}\in\mathbb{R}\). But we also must have that \([\widehat{X},\widehat{Y}]|_{0}\in\widehat{\mathscr{D}}|_{0}\) and so, since
\[\widehat{\mathscr{D}}|_{0}=\mathrm{span}\Big{\{}\partial_{z_{j}}:w_{j}=1\Big{\}}\]
according to Definition 2.1, \([\widehat{X},\widehat{Y}]|_{0}=0\), that is, \([\widehat{X},\widehat{Y}]\in\mathfrak{h}\). We thus have proved that \([\mathfrak{i},\mathfrak{g}^{1}]\subset\mathfrak{h}^{2}\). In particular, since \(\mathfrak{g}^{1}=\mathfrak{i}\oplus\mathfrak{h}^{1}\), we get
\[[\mathfrak{i},\mathfrak{i}]\subset\mathfrak{h}^{2}\quad\text{and}\quad[ \mathfrak{i},\mathfrak{h}^{1}]\subset\mathfrak{h}^{2}, \tag{3.10}\]
from which we readily deduce (3.9) for \(j=2\).
_Proof of the induction step._ Let us assume that (3.9) holds for some \(j\in\mathbb{N}\), \(j\geq 2\). Since \(\mathfrak{g}^{1}=\mathfrak{i}\oplus\mathfrak{h}^{1}\), by the induction hypothesis we can write
\[\mathfrak{g}^{j+1}=[\mathfrak{g}^{1},\mathfrak{g}^{j}]=[\mathfrak{g}^{1}, \mathfrak{h}^{j}]=[\mathfrak{i},\mathfrak{h}^{j}]+[\mathfrak{h}^{1},\mathfrak{ h}^{j}]=[\mathfrak{i},\mathfrak{h}^{j}]+\mathfrak{h}^{j+1}.\]
We thus just need to show that \([\mathfrak{i},\mathfrak{h}^{j}]\subset\mathfrak{h}^{j+1}\) for all \(j\in\mathbb{N}\) with \(j\geq 2\). Note that we actually already proved the case \(j=1\) in (3.10). Again arguing by induction (taking \(j=1\) as base case), by the Jacobi identity and (3.10) we have
\[[\mathfrak{i},\mathfrak{h}^{j+1}]=[\mathfrak{i},[\mathfrak{h}^{1},\mathfrak{h}^{ j}]]=[\mathfrak{h}^{1},[\mathfrak{h}^{j},\mathfrak{i}]]+[\mathfrak{h}^{j},[ \mathfrak{i},\mathfrak{h}^{1}]]\subset[\mathfrak{h}^{1},\mathfrak{h}^{j+1}]+[ \mathfrak{h}^{j},\mathfrak{h}^{2}]=\mathfrak{h}^{j+2}\]
as desired, concluding the proof of the induction step.
**Remark 3.2** (Proof of Theorem 1.5 in the case \(\mathfrak{h}=\{0\}\)).: The proof of Theorem 1.5 is much simpler if the nilpotent approximation \((\mathbb{R}^{n},\widehat{\mathscr{F}})\) is a Carnot group, i.e., \(\mathfrak{h}=\{0\}\). Indeed, in this case, the base case \(j=2\) in (3.9) immediately implies that \(\mathfrak{g}^{2}=\mathfrak{h}^{2}=\{0\}\), which in turn gives \(\mathfrak{g}=\mathfrak{g}^{1}\), so that \(\mathfrak{g}\) is commutative.
### Proof of Theorem 1.8
In the following, we assume that the reader is familiar with the notions of _upper gradient_ and of \(q\)_-upper gradient_, see [5] for the precise definitions. The next two lemmas are proved in [27] for sub-Riemannian structures on \(\mathbb{R}^{n}\) equipped with the Lebesgue measure, and are immediately extended to the weighted case.
**Lemma 3.3**.: _Let \((M,\mathsf{d},\mathfrak{m})\) be as in Theorem 1.8. If \(u\in C(M)\) and \(0\leq g\in\mathrm{L}^{1}_{\mathrm{loc}}(M,\mathfrak{m})\) is an upper gradient of \(u\), then \(u\in\mathrm{HW}^{1,1}_{\mathrm{loc}}(M,\mathfrak{m})\) with \(\|\nabla u\|\leq g\) \(\mathfrak{m}\)-a.e. In particular, if \(u\in\mathrm{Lip}(M,\mathsf{d})\), then \(\|\nabla u\|\leq\mathrm{Lip}(u)\)._
Proof.: Without loss of generality we may assume that \(M=\Omega\subset\mathbb{R}^{n}\) is a bounded open set, the sub-Riemannian structure is induced by a family of smooth bracket-generating vector fields \(\mathscr{F}=\{X_{1},\ldots,X_{L}\}\) on \(\Omega\) and \(\mathfrak{m}=\theta\mathscr{L}^{n}\), where \(\theta\colon\Omega\to[0,\infty)\) is smooth and satisfies \(0<\inf_{\Omega}\theta\leq\sup_{\Omega}\theta<\infty\). Hence, \(\mathrm{L}^{1}(\Omega,\theta\mathscr{L}^{n})=\mathrm{L}^{1}(\Omega,\mathscr{L }^{n})\) as sets, with equivalent norms, so that \(0\leq g\in\mathrm{L}^{1}_{\mathrm{loc}}(\Omega,\mathscr{L}^{n})\) is an upper gradient of \(u\in C(\Omega)\). Hence, by [27, Th. 11.7], we get that \(u\in\mathrm{HW}^{1,1}_{\mathrm{loc}}(\Omega,\mathscr{L}^{n})\), with \(\|\nabla u\|\leq g\)\(\mathscr{L}^{n}\)-a.e., and thus \(\theta\mathscr{L}^{n}\)-a.e., on \(\Omega\). By definition of distributional derivative, we can write
\[\int_{\Omega}v\,X_{i}u\,\mathrm{d}x=\int_{\Omega}u\,[-X_{i}v+\mathrm{div}(X_{i })v]\,\mathrm{d}x,\quad\forall\,v\in C^{1}_{c}(\Omega),\ i=1,\ldots,L,\]
where \(\mathrm{div}\) denotes the Euclidean divergence. We apply the above formula with test function \(v=\theta w\), for any \(w\in C^{1}_{c}(\Omega)\), getting
\[\int_{\Omega}w\,X_{i}u\,\theta\,\mathrm{d}x=\int_{\Omega}u\left[-X_{i}w+ \mathrm{div}(X_{i})w+\frac{X_{i}\theta}{\theta}w\right]\theta\,\mathrm{d}x, \quad\forall\,w\in C^{1}_{c}(\Omega),\ i=1,\ldots,L.\]
The function within square brackets is the adjoint \(X_{i}^{*}w\) with respect to the measure \(\theta\mathscr{L}^{n}\). It follows that \(\mathrm{HW}^{1,q}(\Omega,\theta\mathscr{L}^{n})=\mathrm{HW}^{1,q}(\Omega,\mathscr{L}^{n})\) as sets, with equivalent norms. In particular, \(u\in\mathrm{HW}^{1,1}_{\mathrm{loc}}(\Omega,\theta\mathscr{L}^{n})\) as desired.
**Lemma 3.4** (Meyers-Serrin).: _Let \((M,\mathsf{d},\mathfrak{m})\) be as in Theorem 1.8 and let \(q\in[1,\infty)\). Then \(\mathrm{HW}^{1,q}(M,\mathfrak{m})\cap C^{\infty}(M)\) is dense in \(\mathrm{HW}^{1,q}(M,\mathfrak{m})\)._
Proof.: Up to a partition of unity and exhaustion argument, we can reduce to the case \(M=\Omega\subset\mathbb{R}^{n}\) is a bounded open set and \(\mathfrak{m}=\theta\mathscr{L}^{n}\), where \(\theta\colon\Omega\to[0,\infty)\) is as in the previous proof, so that \(\mathrm{HW}^{1,q}(\Omega,\mathscr{L}^{n})=\mathrm{HW}^{1,q}(\Omega,\theta \mathscr{L}^{n})\) as sets, with equivalent norms. In particular, we can assume that \(\theta\equiv 1\). This case is proved in [27, Th. 11.9].
**Lemma 3.5**.: _Let \((M,\mathsf{d},\mathfrak{m})\) be as in Theorem 1.8 and let \(q\in[1,\infty)\). If \(u\in\mathrm{HW}^{1,q}(M,\mathfrak{m})\), then \(\|\nabla u\|\) is the minimal \(q\)-upper gradient of \(u\)._
Proof.: Let us first prove that \(\|\nabla u\|\) is a \(q\)-upper gradient of \(u\). Indeed, by Lemma 3.4, we can find \((u_{k})_{k\in\mathbb{N}}\subset\mathrm{HW}^{1,q}(M,\mathfrak{m})\cap C^{ \infty}(M)\) such that \(u_{k}\to u\) in \(\mathrm{HW}^{1,q}(M,\mathfrak{m})\) as \(k\to\infty\).
It is well-known that the sub-Riemannian norm of the gradient of a smooth function is an upper gradient, see [27, Prop. 11.6]. Thus, for \(u_{k}\) it holds
\[|u_{k}(\gamma(1))-u_{k}(\gamma(0))|\leq\int_{\gamma}\|\nabla u_{k}\|\,\mathrm{d}s.\]
Arguing as in [28, p. 179], using Fuglede's lemma (see [28, Lem. 7.5 and Sec. 10]), we pass to the limit for \(k\to\infty\) in the previous inequality, outside a \(q\)-exceptional family of curves. This proves that any Borel representative of \(\|\nabla u\|\) is a \(q\)-upper gradient of \(u\).
We now prove that \(\|\nabla u\|\) is indeed minimal. Let \(0\leq g\in\mathrm{L}^{q}(M,\mathsf{m})\) be any \(q\)-upper gradient of \(u\). Arguing as in [28, p. 194], we can find a sequence \((g_{k})_{k\in\mathbb{N}}\subset\mathrm{L}^{q}(M,\mathsf{m})\) of upper gradients of \(u\) such that \(g_{k}\geq g\) for all \(k\in\mathbb{N}\) and \(g_{k}\to g\) both pointwise \(\mathsf{m}\)-a.e. on \(M\) and in \(\mathrm{L}^{q}(M,\mathsf{m})\) as \(k\to\infty\). By Lemma 3.3, we thus must have that \(\|\nabla u\|\leq g_{k}\)\(\mathsf{m}\)-a.e. on \(M\) for all \(k\in\mathbb{N}\). Hence, passing to the limit, we conclude that \(\|\nabla u\|\leq g\)\(\mathsf{m}\)-a.e. on \(M\) for any \(q\)-upper gradient \(g\), concluding the proof.
We are now ready to deal with the proof of Theorem 1.8.
Proof of (i).: Recall that, here, \(q>1\). We begin by claiming that
\[\mathrm{W}^{1,q}(M,\mathsf{d},\mathsf{m})\subset\mathrm{HW}^{1,q}(M,\mathsf{ m}) \tag{3.11}\]
isometrically, with \(\|\nabla u\|=|\mathsf{D}u|_{w,q}\). Indeed, let \(u\in\mathrm{W}^{1,q}(M,\mathsf{d},\mathsf{m})\). By a well-known approximation argument, combining [5, Prop. 4.3, Th. 5.3 and Th. 7.4], we find \((u_{k})_{k\in\mathbb{N}}\subset\mathrm{Lip}(M,\mathsf{d})\cap\mathrm{W}^{1,q }(M,\mathsf{d},\mathsf{m})\) such that
\[u_{k}\to u\quad\text{and}\quad|\mathsf{D}u_{k}|_{w,q}\to|\mathsf{D}u|_{w,q} \quad\text{in}\;\;\mathrm{L}^{q}(M,\mathsf{m}). \tag{3.12}\]
Since \(u_{k}\in\mathrm{Lip}(M,\mathsf{d})\), by Lemma 3.3 we know that \(u_{k}\in\mathrm{HW}^{1,q}(M,\mathsf{m})\). Hence, by Lemma 3.5, \(|\mathsf{D}u_{k}|_{w,q}=\|\nabla u_{k}\|\), and we immediately get that
\[\sup_{k\in\mathbb{N}}\int_{M}\|\nabla u_{k}\|^{q}\,\mathrm{d}\mathsf{m}<\infty.\]
Therefore, up to passing to a subsequence, \((X_{i}u_{k})_{k\in\mathbb{N}}\) is weakly convergent in \(\mathrm{L}^{q}(M,\mathsf{m})\), say \(X_{i}u_{k}\rightharpoonup\alpha_{i}\in\mathrm{L}^{q}(M,\mathsf{m})\), for all \(i=1,\ldots,L\). We thus get that \(u\in\mathrm{HW}^{1,q}(M,\mathsf{m})\) with \(X_{i}u=\alpha_{i}\) and thus \(\nabla u=\sum_{i=1}^{L}\alpha_{i}X_{i}\). By stability of \(q\)-upper gradients, [5, Th. 5.3 and Thm. 7.4], \(\|\nabla u\|\) is a \(q\)-upper gradient of \(u\). By semi-continuity of the norm, we obtain
\[\int_{M}\|\nabla u\|^{q}\,\mathrm{d}\mathsf{m}\leq\liminf_{k\to\infty}\int_ {M}\|\nabla u_{k}\|^{q}\,\mathrm{d}\mathsf{m}=\int_{M}|\mathsf{D}u|_{w,q}^{q} \,\mathrm{d}\mathsf{m},\]
where we used (3.12). By definition of minimal \(q\)-upper gradient we thus get that \(\|\nabla u\|=|\mathsf{D}u|_{w,q}\)\(\mathsf{m}\)-a.e., and the claimed inclusion in (3.11) immediately follows.
We now observe that it also holds
\[\mathrm{HW}^{1,q}(M,\mathsf{m})\cap C^{\infty}(M)\subset\mathrm{W}^{1,q}(M, \mathsf{d},\mathsf{m}), \tag{3.13}\]
with \(\|\nabla u\|=|\mathsf{D}u|_{w,q}\). We just need to notice that, if \(u\in C^{\infty}(M)\), then \(\|\nabla u\|\) is an upper gradient of \(u\), see [27, Prop. 11.6]. Therefore, by Lemma 3.3, \(\|\nabla u\|\) must coincide with the minimal \(q\)-upper gradient of \(u\), i.e., \(\|\nabla u\|=|\mathsf{D}u|_{w,q}\) \(\mathsf{m}\)-a.e., and (3.13) readily follows. In view of the isometric inclusions (3.11) and (3.13), and of the density provided by Lemma 3.4, this concludes the proof of (i).
**Proof of** (ii).: Let us assume that \((M,\mathsf{d},\mathsf{m})\) satisfies the \(\mathsf{CD}(K,\infty)\) property for some \(K\in\mathbb{R}\). By the previous point (i), we know that \((M,\mathsf{d},\mathsf{m})\) satisfies the \(\mathsf{RCD}(K,\infty)\) property. Consequently, since clearly \(C_{c}^{\infty}(M)\subset\mathrm{W}^{1,2}(M,\mathsf{d},\mathsf{m})\) by (3.13), [6, Rem. 6.3] (even if the measure \(\mathsf{m}\) is \(\sigma\)-finite, see [4, Sec. 7] for a discussion) implies that
\[\frac{1}{2}\int_{M}\Delta v\left\|\nabla u\right\|^{2}\mathrm{d}\mathsf{m}- \int_{M}v\ \mathrm{g}(\nabla u,\nabla\Delta u)\,\mathrm{d}\mathsf{m}\geq K\int_{M}v\left\| \nabla u\right\|^{2}\mathrm{d}\mathsf{m}\]
for all \(u,v\in C_{c}^{\infty}(M)\) with \(v\geq 0\) on \(M\), from which we readily deduce (1.1).
**Remark 3.6**.: The above proofs work for more general measures \(\mathsf{m}\). Namely, we can assume that, locally on any bounded coordinate neighborhood \(\Omega\subset\mathbb{R}^{n}\), \(\mathsf{m}=\theta\mathscr{L}^{n}\) with \(\theta\in\mathrm{W}^{1,1}(\Omega,\mathscr{L}^{n})\cap\mathrm{L}^{\infty}( \Omega,\mathscr{L}^{n})\). In this case, the positivity of \(\mathsf{m}\) corresponds to the requirement that \(\theta\) is locally essentially bounded from below away from zero, in charts.
### Proof of Theorem 1.10
We prove the two points in the statement separately.
**Proof of** (i).: The case \(p=0\) has already been considered by Juillet in [32]. For \(p>0\), we can argue as follows. Let \(A_{0}=[-\ell-1,-\ell]\times[0,1]\) and \(A_{1}=[\ell,\ell+1]\times[0,1]\) for \(\ell>0\). We will shortly prove that the _midpoint set_
\[A_{1/2}=\left\{q\in\mathbb{R}^{2}:\exists\,q_{0}\in A_{0},\ \exists\,q_{1}\in A_{1}\ \text{with}\ \mathsf{d}(q,q_{0})=\mathsf{d}(q,q_{1})=\frac{1}{2}\,\mathsf{d}(q_{0},q_{1})\right\}\]
satisfies
\[A_{1/2}\subset[-1-\varepsilon_{\ell},1+\varepsilon_{\ell}]\times[0,1] \tag{3.14}\]
for some \(\varepsilon_{\ell}>0\), with \(\varepsilon_{\ell}\downarrow 0\) as \(\ell\to\infty\). Since \(\mathsf{m}_{p}(A_{0})=\mathsf{m}_{p}(A_{1})\sim\ell^{p}\) as \(\ell\to\infty\), we get
\[\sqrt{\mathsf{m}_{p}(A_{0})\,\mathsf{m}_{p}(A_{1})}>\mathsf{m}_{p}(A_{1/2})\]
for large \(\ell>0\). This contradicts the logarithmic Brunn-Minkowski \(\mathsf{BM}(0,\infty)\) inequality, which is a consequence of the \(\mathsf{CD}(0,\infty)\) condition, see [50, Th. 30.7].
To prove (3.14), let \(q_{i}\in A_{i}\), \(q_{i}=(x_{i},y_{i})\), and let \(\gamma(t)=(x(t),y(t))\), \(t\in[0,1]\), be a geodesic such that \(\gamma(i)=q_{i}\), with \(i=0,1\). We first note that
\[\min\{y_{0},y_{1}\}\leq y(t)\leq\max\{y_{0},y_{1}\}\quad\text{for all $t\in[0,1]$}, \tag{3.15}\]
since any curve that violates (3.15) can be replaced by a strictly shorter one satisfying (3.15). In particular, we get that \(A_{1/2}\subset\mathbb{R}\times[0,1]\). Let us now observe that
\[|x_{a}-x_{b}|\leq\mathsf{d}(a,b)\leq|x_{a}-x_{b}|+\frac{|y_{a}-y_{b}|}{\max\{ |x_{a}|,|x_{b}|\}}\]
for all \(a=(x_{a},y_{a})\) and \(b=(x_{b},y_{b})\) with \(x_{a},x_{b}\neq 0\). Therefore, if \(q=(x,y)\in A_{1/2}\), then
\[|x-x_{0}|\leq\mathsf{d}(q,q_{0})=\frac{1}{2}\,\mathsf{d}(q_{0},q_{1})\leq\ell +1+O(1/\ell)\]
and, similarly, \(|x-x_{1}|\leq\ell+1+O(1/\ell)\). Since \(x_{0}\in[-\ell-1,-\ell]\) and \(x_{1}\in[\ell,\ell+1]\), we deduce that \(|x|\leq 1+O(1/\ell)\), concluding the proof of the claimed (3.14).
**Proof of (ii).** Out of the negligible set \(\{x=0\}\), the metric \(\mathrm{g}\) on \(\mathbb{G}_{p}\) given by (1.5) is locally Riemannian. Recalling (1.6) and (1.7), the \(\mathsf{BE}(K,\infty)\) inequality (1.1) is implied by the lower bound \(\mathrm{Ric}_{\infty,V}\geq K\) via Bochner's formula, where \(\mathrm{Ric}_{\infty,V}\) is the \(\infty\)-Bakry-Emery Ricci tensor of \((\mathbb{R}^{2},\mathrm{g},e^{-V}\mathrm{vol}_{\mathrm{g}})\), see [50, Ch. 14, Eqs. (14.36) - (14.51)]. By Lemma 3.7 below, we have \(\mathrm{Ric}_{\infty,V}\geq 0\) for all \(p\geq 1\), concluding the proof.
**Lemma 3.7**.: _Let \(p\in\mathbb{R}\) and \(N>2\). The \(N\)-Bakry-Emery Ricci tensor of the Grushin metric (1.5), with weighted measure \(\mathsf{m}_{p}=|x|^{p}\,\mathrm{d}x\,\mathrm{d}y\), for all \(x\neq 0\) is_
\[\mathrm{Ric}_{N,V}=\frac{p-1}{x^{2}}\,\mathrm{g}-\frac{(p+1)^{2}}{N-2}\frac{ \mathrm{d}x\otimes\mathrm{d}x}{x^{2}},\]
_with the convention that \(1/\infty=0\)._
Proof.: The \(N\)-Bakry-Emery Ricci tensor of an \(n\)-dimensional weighted Riemannian structure \((\mathrm{g},e^{-V}\mathrm{vol}_{g})\), for \(N>n\), is given by
\[\mathrm{Ric}_{N,V}=\mathrm{Ric}_{\mathrm{g}}+\mathrm{Hess}_{\mathrm{g}}V- \frac{\mathrm{d}V\otimes\mathrm{d}V}{N-n}, \tag{3.16}\]
see [50, Eq. (14.36)]. In terms of the frame (1.4), the Levi-Civita connection is given by
\[\nabla_{X}X=\nabla_{X}Y=0,\quad\nabla_{Y}X=-\frac{1}{x}Y,\quad\nabla_{Y}Y= \frac{1}{x}X,\]
whenever \(x\neq 0\). Recalling that, from (1.7), \(V(x)=-(p+1)\log|x|\), for \(x\neq 0\), we obtain
\[\mathrm{Ric}_{\mathrm{g}}=-\frac{2}{x^{2}}\,\mathrm{g},\quad\mathrm{Hess}_{ \mathrm{g}}V=\frac{(p+1)}{x^{2}}\,\mathrm{g},\quad\mathrm{d}V=-\frac{p+1}{x} \,\mathrm{d}x, \tag{3.17}\]
whenever \(x\neq 0\). The conclusion thus follows by inserting (3.17) into (3.16).
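Explicitly, since the underlying manifold is two-dimensional, inserting (3.17) into (3.16) gives
\[\mathrm{Ric}_{N,V}=-\frac{2}{x^{2}}\,\mathrm{g}+\frac{p+1}{x^{2}}\,\mathrm{g}-\frac{(p+1)^{2}}{N-2}\frac{\mathrm{d}x\otimes\mathrm{d}x}{x^{2}}=\frac{p-1}{x^{2}}\,\mathrm{g}-\frac{(p+1)^{2}}{N-2}\frac{\mathrm{d}x\otimes\mathrm{d}x}{x^{2}},\]
which is the claimed expression.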
### Proof of Theorem 1.11
The statement is a consequence of the geodesic convexity of \(\mathbb{G}_{p}^{+}\) and the computation of the \(N\)-Bakry-Emery curvature in Lemma 3.7. Since the proof uses quite standard arguments, we simply sketch its main steps.
The interior of \(\mathbb{G}_{p}^{+}\), i.e., the open half-plane, can be regarded as a (non-complete) weighted Riemannian manifold with metric \(\mathrm{g}\) as in (1.5) and weighted volume as in (1.7). Let \(\mu_{0},\mu_{1}\in\mathscr{P}_{2}(\mathbb{G}_{p}^{+})\), \(\mu_{0},\mu_{1}\ll\mathsf{m}_{p}\), with bounded support contained in the Riemannian region \(\{x>\varepsilon\}\), for some \(\varepsilon\geq 0\).
Let \((\mu_{s})_{s\in[0,1]}\) be a \(W_{2}\)-geodesic joining \(\mu_{0}\) and \(\mu_{1}\). By a well-known representation theorem (see [50, Cor. 7.22]), there exists \(\nu\in\mathscr{P}(\mathrm{Geo}(\mathbb{G}_{p}^{+}))\), supported on the set \(\Gamma=(e_{0}\times e_{1})^{-1}(\mathrm{supp}\,\mu_{0}\times\mathrm{supp}\,\mu_{1})\), such that \(\mu_{s}=(e_{s})_{\sharp}\nu\) for all \(s\in[0,1]\). Since the set \(\{x\geq\varepsilon\}\) is a geodesically convex subset of the full Grushin plane \(\mathbb{G}_{p}\) (by the same argument of [46, Prop. 5]), any \(\gamma\in\Gamma\) is contained for all times in the region \(\{x>0\}\). Therefore, \(\Gamma\) is a set of Riemannian geodesics contained in the weighted Riemannian structure \((\{x>0\},\mathrm{g},e^{-V}\mathrm{vol}_{\mathrm{g}})\). By Lemma 3.7, we have \(\mathrm{Ric}_{N,V}\geq 0\) for all \(N\geq N_{p}\), where \(N_{p}\) is as in (1.8). At this point, a standard argument shows that the Renyi entropy is convex along Wasserstein geodesics joining \(\mu_{0}\) with \(\mu_{1}\), see the proof of [49, Th. 1.7] for example.
The extension to \(\mu_{0},\mu_{1}\in\mathscr{P}_{2}(\mathbb{G}_{p}^{+})\), with \(\mu_{0},\mu_{1}\ll\mathsf{m}_{p}\) and compact support possibly touching the singular region \(\{x=0\}\), is achieved via a standard approximation argument. More precisely, one reduces to the previous case and exploits the stability of optimal transport [50, Th. 28.9] and the lower semi-continuity of the Renyi entropy [50, Th. 29.20].
Finally, the extension to general \(\mu_{0},\mu_{1}\in\mathscr{P}_{2}(\mathbb{G}_{p}^{+})\) follows the routine argument outlined in [9, Rem. 2.12], which works when \(\mu_{s}=(e_{s})_{\sharp}\nu\), \(s\in[0,1]\), and \(\nu\) is concentrated on a set of non-branching geodesics. This proves the 'if' part of the statement.
The 'only if' part is also standard. The \(\mathsf{CD}(0,N)\) condition for \(N>2\) implies that, on the Riemannian region \(\{x>0\}\), \(\operatorname{Ric}_{N,V}\geq 0\), but this is false for \(N<N_{p}\).
The fact that \(\mathbb{G}_{p}^{+}\) is infinitesimally Hilbertian follows from Remark 1.9, by noting that \(\mathsf{m}_{p}\) is positive and smooth out of the closed set \(\{x=0\}\), which has zero measure. An alternative proof follows from the observation that \(\mathbb{G}_{p}^{+}\) is a Ricci limit, see [42].
## Appendix A Gradient and Laplacian representations formulas
For the reader's convenience, in this appendix we provide a short proof of the representation formulas (2.5) and (2.7), in the rank-varying case.
**Lemma A.1**.: _For \(\lambda\in T^{*}M\), let \(\lambda^{\#}\in\mathscr{D}\) be uniquely defined by_
\[\mathrm{g}(\lambda^{\#},V)=\langle\lambda,V\rangle\]
_for all \(V\in\mathscr{D}\), where \(\langle\cdot,\cdot\rangle\) denotes the action of covectors on vectors. Then_
\[\|\lambda^{\#}\|^{2}=\sum_{i=1}^{L}\bigl{\langle}\lambda^{\#},X_{i}\bigr{\rangle} ^{2}.\] (A.1)
_As a consequence, if \(\lambda,\mu\in T^{*}M\), then_
\[\mathrm{g}(\lambda^{\#},\mu^{\#})=\sum_{i=1}^{L}\langle\lambda,X_{i}\rangle \langle\mu,X_{i}\rangle.\] (A.2)
Proof.: Given \(u\in\mathbb{R}^{L}\), we set \(X_{u}=\sum_{i=1}^{L}u_{i}X_{i}\) and define
\[u^{*}\in\operatorname{argmin}\bigl{\{}v\mapsto|v|:v\in\mathbb{R}^{L},\ X_{v}=X_{u} \bigr{\}}.\]
In other words, for \(X_{u}\in\mathscr{D}\), \(u^{*}\) is the element of minimal Euclidean norm such that \(X_{u^{*}}=X_{u}\). Note that, by definition, it holds \(\|X_{u}\|=|u^{*}|\). We thus have
\[\|\lambda^{\#}\|=\sup\Bigl{\{}\mathrm{g}(\lambda^{\#},X):\|X\|=1,\ X\in \mathscr{D}\Bigr{\}}=\sup\Bigl{\{}\mathrm{g}(\lambda^{\#},X_{u}):|u^{*}|=1,\ u\in\mathbb{R}^{L}\Bigr{\}}.\]
We now claim that
\[\sup\Bigl{\{}\mathrm{g}(\lambda^{\#},X_{u}):|u^{*}|=1,\ u\in\mathbb{R}^{L} \Bigr{\}}=\sup\Bigl{\{}\mathrm{g}(\lambda^{\#},X_{u}):|u|=1,\ u\in\mathbb{R}^{L }\Bigr{\}}.\] (A.3)
Indeed, the inequality \(\leq\) in (A.3) is obtained by observing that \(X_{u}=X_{u^{*}}\) for any \(u\in\mathbb{R}^{L}\). To prove the inequality \(\geq\) in (A.3), we observe that, if \(u\in\mathbb{R}^{L}\) is such that \(|u|=1\) and \(0<|u^{*}|<1\), then \(v=u/|u^{*}|\) satisfies \(|v^{*}|=1\) and gives
\[\mathrm{g}(\lambda^{\#},X_{v})>\mathrm{g}(\lambda^{\#},X_{v})\,|u^{*}|= \mathrm{g}(\lambda^{\#},X_{u}).\] (A.4)
Furthermore, if \(|u|=1\) and \(u^{*}=0\), then \(X_{u}=0\), so also in this case we find \(v\in\mathbb{R}^{L}\) with \(|v^{*}|=1\) such that (A.4) holds. This ends the proof of the claimed (A.3). Hence, since
\[\mathrm{g}(\lambda^{\#},X_{u})=\sum_{i=1}^{L}\mathrm{g}(\lambda^{\#},X_{i})\, u_{i},\]
we easily conclude that
\[\|\lambda^{\#}\|=\sup\Bigl{\{}\mathrm{g}(\lambda^{\#},X_{u}):|u|=1,\ u\in\mathbb{R} ^{L}\Bigr{\}}=\sqrt{\sum_{i=1}^{L}\mathrm{g}(\lambda^{\#},X_{i})^{2}},\]
proving (A.1). Equality (A.2) then follows by polarization.
**Corollary A.2**.: _The following formulas hold:_
\[\nabla u=\sum_{i=1}^{L}X_{i}u\,X_{i},\] (A.5)
\[\Delta u=\sum_{i=1}^{L}\left(X_{i}^{2}u+X_{i}u\ \mathrm{div}_{\mathfrak{m}}(X_ {i})\right),\] (A.6)
\[\mathrm{g}(\nabla u,\nabla v)=\sum_{i=1}^{L}X_{i}u\,X_{i}v,\] (A.7)
_for all \(u,v\in C^{\infty}(M)\). In particular, \(\|\nabla u\|^{2}=\sum_{i=1}^{L}(X_{i}u)^{2}\) for all \(u\in C^{\infty}(M)\)._
Proof.: We prove each formula separately.
_Proof of (A.5)._ Recalling the definition in (2.4), we can pick \(\lambda=du\) in (A.2) to get
\[\left\langle du,\mu^{\#}\right\rangle =\mathrm{g}(\nabla u,\mu^{\#})=\sum_{i=1}^{L}\langle du,X_{i} \rangle\langle\mu,X_{i}\rangle\] \[=\sum_{i=1}^{L}X_{i}u\,\langle\mu,X_{i}\rangle=\mathrm{g}\left( \mu^{\#},\sum_{i=1}^{L}X_{i}u\,X_{i}\right)\]
whenever \(\mu\in T_{x}^{*}M\). Since the map \(\#\colon T_{x}^{*}M\to\mathscr{D}_{x}\) is surjective, we immediately get (A.5).
_Proof of (A.6)._ Recall that
\[\mathrm{div}_{\mathfrak{m}}(fX)=Xf+f\,\mathrm{div}_{\mathfrak{m}}(X)\]
for any \(f\in C^{\infty}(M)\) and \(X\in\Gamma(TM)\). Hence, from the definition in (2.6), we can compute
\[\Delta u=\mathrm{div}_{\mathfrak{m}}(\nabla u)=\sum_{i=1}^{L}\mathrm{div}_{\mathfrak{m}}(X_{i}u\,X_{i})=\sum_{i=1}^{L}\left(X_{i}^{2}u+X_{i}u\ \mathrm{div}_{\mathfrak{m}}(X_{i})\right),\]
which is the desired (A.6).
_Proof of (A.7)._ Choosing \(\lambda=du\) and \(\mu=dv\) in (A.2), we can compute
\[\mathrm{g}(\nabla u,\nabla v)=\sum_{i=1}^{L}\langle du,X_{i}\rangle\,\langle dv,X_{i}\rangle=\sum_{i=1}^{L}X_{i}u\,X_{i}v\]
and the proof is complete. |
2305.03913 | A Hilbertian projection method for constrained level set-based topology
optimisation | We present an extension of the projection method proposed by Challis et al.
(Int J Solids Struct 45(14–15):4130–4146, 2008)
for constrained level set-based topology optimisation that harnesses the
Hilbertian velocity extension-regularisation framework. Our Hilbertian
projection method chooses a normal velocity for the level set function as a
linear combination of (1) an orthogonal projection operator applied to the
extended optimisation objective shape sensitivity and (2) a weighted sum of
orthogonal basis functions for the extended constraint shape sensitivities.
This combination aims for the best possible first-order improvement of the
optimisation objective in addition to first-order improvement of the
constraints. Our formulation utilising basis orthogonalisation naturally
handles linearly dependent constraint shape sensitivities. Furthermore, use of
the Hilbertian extension-regularisation framework ensures that the resulting
normal velocity is extended away from the boundary and enriched with additional
regularity. Our approach is generally applicable to any topology optimisation
problem to be solved in the level set framework. We consider several benchmark
constrained microstructure optimisation problems and demonstrate that our
method is effective with little-to-no parameter tuning. We also find that our
method performs well when compared to a Hilbertian sequential linear
programming method. | Zachary J. Wegert, Anthony P. Roberts, Vivien J. Challis | 2023-05-06T03:32:24Z | http://arxiv.org/abs/2305.03913v4 | # A Hilbertian projection method for constrained level set-based topology optimisation
###### Abstract
We present an extension of the projection method proposed by Challis et al (_Int J Solids Struct._ Volume **45**(14-15) (2008) 4130-4146) for constrained level set-based topology optimisation that harnesses the Hilbertian velocity extension-regularisation framework. Our _Hilbertian projection method_ chooses a normal velocity for the level set function as a linear combination of: 1) an orthogonal projection operator applied to the extended optimisation objective shape sensitivity; and 2) a weighted sum of orthogonal basis functions for the extended constraint shape sensitivities. This combination aims for the best possible first-order improvement of the optimisation objective in addition to first-order improvement of the constraints. Our formulation utilising basis orthogonalisation naturally handles linearly dependent constraint shape sensitivities. Furthermore, use of the Hilbertian extension-regularisation framework ensures that the resulting normal velocity is extended away from the boundary and enriched with additional regularity. Our approach is generally applicable to any topology optimisation problem to be solved in the level set framework. We consider several benchmark constrained microstructure optimisation problems and demonstrate that our method is effective with little-to-no parameter tuning. We also find that our method performs well when compared to a Hilbertian sequential linear programming method.
**Keywords:** Level set method, topology optimisation, constraints, Hilbertian projection method
## 1 Introduction
The field of topology optimisation has enjoyed rapid growth owing to improved computing power, new optimisation techniques, and application to a wide range of design problems (Bendsoe and Sigmund, 2004; Deaton and Grandhi, 2013; Sigmund and Maute, 2013). Classical computational methods for topology optimisation include density-based methods (Bendsoe, 1989; Rozvany et al, 1992) in which the design variables are material densities of elements/nodes in a mesh, and level set-based methods (Wang et al, 2003; Allaire et al, 2004) in which the boundary of the shape is implicitly tracked as the zero level set of a higher dimensional level set function. Conventional level set methods rely on the Hamilton-Jacobi evolution equation to update the design according to a normal velocity field defined on the boundary (e.g., Wang et al, 2003; Allaire et al, 2004).
An important aspect of these methods is extending the normal velocity away from the boundary. To this end, the Hilbertian velocity extension-regularisation framework, which is well known in the context of level set methods (see discussion by Allaire et al (2021)), can be used to generate a velocity field that guarantees a descent direction and has additional regularity (smoothness) over the whole computational domain.
Topology optimisation problems often include multiple constraints. In density-based topology optimisation (Bendsoe, 1989; Rozvany et al, 1992) the application of constraints is usually straightforward and handled by the optimisation algorithm (e.g., Method of Moving Asymptotes (Svanberg, 1987)). In the context of conventional level set-based methods, applying constraints is more complicated. The _augmented Lagrangian method_(Nocedal and Wright, 2006; Birgin and Martinez, 2009) is a classical approach for constrained optimisation problems. It converts a constrained optimisation problem into a sequence of unconstrained problems that are a combination of the classical Lagrangian method and quadratic penalty method. In a level set framework, applying the method is straightforward: the shape sensitivity of the augmented Lagrangian is used to inform the normal velocity for the Hamilton-Jacobi equation (e.g., Guo et al, 2014; Allaire et al, 2016; Cao et al, 2021). However, the difficulty associated with tuning the accompanying parameters is problem dependent and scales with the number of constraints (Allaire et al, 2021). The level-set _sequential linear programming method_(SLP) (Dunning et al, 2015; Dunning and Kim, 2015) involves linearising the optimisation problem into a number of sub-problems that are then solved using a linear programming method (e.g., the simplex method (Kambampati et al, 2020)). For level set-based topology optimisation, applying SLP is fairly straightforward except for implementing appropriate trust region constraints (Dunning and Kim, 2015). The _projection method_(Wang and Wang, 2004; Challis et al, 2008) projects the objective shape sensitivity onto a space that will leave the constraints unchanged and combines this with constraint shape sensitivities. This approach has been used to successfully design material microstructures subject to isotropy constraints (Challis et al, 2008) but has not been widely adopted in the literature. However, similar methods have more recently been proposed in the literature by Barbarosie et al (2020) and Feppon et al (2020) for level set topology optimisation. The projection method and these two recent works are examples of general null-space gradient methods (e.g., Nocedal and Wright, 2006).
It is natural to consider methods of constrained optimisation that take advantage of the Hilbertian extension-regularisation framework. For example, Allaire et al (2021) recently presented an SLP method in the Hilbertian framework. In this paper we revisit the projection method from Challis et al (2008) and combine it with the Hilbertian extension-regularisation procedure. Our method constructs an orthogonal basis that spans the set of extended constraint shape sensitivities and an orthogonal projection operator that projects onto the set perpendicular to the extended constraint shape sensitivities. We then define the normal velocity for the level set function as a linear combination of the orthogonal projection operator applied to the extended objective function shape sensitivity and a weighted sum of basis functions for the extended constraint shape sensitivities. This normal velocity is naturally extended onto the bounding domain and endowed with additional regularity due to the Hilbertian extension-regularisation. While our method is similar to other recently proposed approaches (Barbarosie et al, 2020; Feppon et al, 2020), our formulation utilising an orthogonal basis provides significant benefits.
To demonstrate our presented _Hilbertian projection method_ we consider several linear elastic microstructure optimisation (i.e., inverse homogenisation) problems with multiple constraints. The constraints naturally arise under the enforcement of symmetries for the effective material properties, such as isotropy. Irrespective of optimisation method, microstructure optimisation has been used successfully for a range of design problems including linear elastic materials with extremal properties (e.g., Gibiansky and Sigmund, 2000; Andreassen et al, 2014), multifunctional composites (e.g., Challis et al, 2008), auxetic materials (e.g., Vogiatzis et al, 2017), piezoelectric materials (e.g., Silva et al, 1998; Wegert et al, 2022), and multi-material composites (e.g., Zhuang et al, 2010; Faure et al, 2017). In
this work we consider maximising the bulk modulus with and without isotropy constraints and the design of auxetic and multi-phase materials. We compare our Hilbertian projection method with a Hilbertian sequential linear programming (SLP) method (Sec. 5.3.2, Allaire et al, 2021) and show that the Hilbertian projection method is able to successfully handle these optimisation problems with little-to-no parameter tuning.
The remainder of the paper is as follows. In Section 2 we discuss the mathematical background for the level set method, Hilbertian extension-regularisation procedure, and linear elastic microstructure optimisation. In Section 3 we formulate the Hilbertian projection method and compare our formulation to the null space method presented by Feppon et al (2020). In Section 4 we discuss our numerical implementation. In Section 5 we present and discuss the example optimisation problems and results. Finally, in Section 6 we present our concluding remarks.
## 2 Mathematical background
In this section we give a brief introduction to the level set method for topology optimisation and shape derivatives. We then discuss the Hilbertian extension-regularisation framework. We conclude by describing linear elastic microstructure optimisation for single- and multi-phase materials.
### The level set method
Level set methods track the boundary of a domain \(\Omega\) inside a bounding domain \(D\subset\mathbb{R}^{d}\) implicitly via the zero level set of a function \(\phi:D\rightarrow\mathbb{R}\)(Sethian, 1996; Osher and Fedkiw, 2006). For a domain \(\Omega\) inside a bounding domain \(D\), the level set function \(\phi\) is typically defined as
\[\begin{cases}\phi(\mathbf{x})<0&\text{if }\mathbf{x}\in\Omega,\\ \phi(\mathbf{x})=0&\text{if }\mathbf{x}\in\partial\Omega,\\ \phi(\mathbf{x})>0&\text{if }\mathbf{x}\in D\backslash\bar{\Omega}.\end{cases} \tag{1}\]
Using this definition and assuming that the interface may evolve in time, a material derivative of \(\phi\) on \(\partial\Omega\) gives
\[\frac{\partial\phi}{\partial t}(t,\mathbf{x})+v(t,\mathbf{x})|\nabla\phi(t,\mathbf{x})|=0, \tag{2}\]
where \(v\) is the normal velocity of the interface. In practice, the above is solved over the whole bounding domain \(D\) instead of only on the interface \(\partial\Omega\) by extending the velocity \(v\) away from the boundary. Assuming that the time interval \((0,T)\) is small so that the velocity does not vary in time gives the Hamilton-Jacobi evolution equation (Sethian, 1996; Osher and Fedkiw, 2006; Allaire et al, 2021):
\[\begin{cases}\frac{\partial\phi}{\partial t}(t,\mathbf{x})+v(\mathbf{x})|\nabla\phi(t,\mathbf{x})|=0,\\ \phi(0,\mathbf{x})=\phi_{0}(\mathbf{x}),\\ \mathbf{x}\in D,\ t\in(0,T),\end{cases} \tag{3}\]
where \(\phi_{0}(\mathbf{x})\) is the initial condition for \(\phi\) at \(t=0\).
It is often useful to reinitialise the level set function as the _signed distance function_\(d_{\Omega}\). This ensures that the level set function is neither too steep nor too flat near the boundary of \(\Omega\)(Osher and Fedkiw, 2006). The signed distance function may be defined as (Allaire et al, 2021):
\[d_{\Omega}(\mathbf{x})=\begin{cases}-d(\mathbf{x},\partial\Omega)&\text{if }\mathbf{x}\in \Omega,\\ 0&\text{if }\mathbf{x}\in\partial\Omega,\\ d(\mathbf{x},\partial\Omega)&\text{if }\mathbf{x}\in D\backslash\bar{\Omega},\end{cases} \tag{4}\]
where \(d(\mathbf{x},\partial\Omega):=\min_{\mathbf{p}\in\partial\Omega}|\mathbf{x}-\mathbf{p}|\) is the minimum Euclidean distance from \(\mathbf{x}\) to the boundary \(\partial\Omega\). Several methods are available for constructing the signed distance function and the reader is referred to Osher and Fedkiw (2006) and Allaire et al (2021) and the references therein for a detailed discussion. In this work we use the following _reinitialisation equation_(Peng et al, 1999; Osher and Fedkiw, 2006) to reinitialise a pre-existing level set function \(\phi_{0}(\mathbf{x})\) as the signed distance function:
\[\begin{cases}\frac{\partial\phi}{\partial t}(t,\mathbf{x})+S(\phi_{0}(\mathbf{x})) \left(|\nabla\phi(t,\mathbf{x})|-1\right)=0,\\ \phi(0,\mathbf{x})=\phi_{0}(\mathbf{x}),\\ \mathbf{x}\in D,t>0.\end{cases} \tag{5}\]
Here \(S\) is the sign function and Equation 5 is solved until close to steady state. Similar numerical schemes may be used to solve for Hamilton-Jacobi evolution and reinitialisation (Eqs. 3 and 5).
### Shape derivatives
To find a normal velocity \(v\) that reduces some functional \(J(\Omega)\) via solution of the Hamilton-Jacobi equation (Eq. 3) we use the notion of shape derivatives. We recall the following useful results from Allaire et al (2004, 2021).
Suppose that we consider smooth variations of the domain \(\Omega\) of the form \(\Omega_{\mathbf{\theta}}=(\mathbf{I}+\mathbf{\theta})(\Omega)\), where \(\mathbf{\theta}\in W^{1,\infty}(\mathbb{R}^{d},\mathbb{R}^{d})\). Then the following definition and lemma follow:
**Definition 1** (Allaire et al (2004)): The shape derivative of \(J(\Omega)\) at \(\Omega\) is defined as the Frechet derivative in \(W^{1,\infty}(\mathbb{R}^{d},\mathbb{R}^{d})\) at \(\mathbf{\theta}\) of the application \(\mathbf{\theta}\to J(\Omega_{\mathbf{\theta}})\), i.e.,
\[J(\Omega_{\mathbf{\theta}})=J(\Omega)+J^{\prime}(\Omega)(\mathbf{\theta})+\mathrm{o}(\mathbf{\theta}) \tag{6}\]
with \(\lim_{\mathbf{\theta}\to 0}\frac{|\mathrm{o}(\mathbf{\theta})|}{\|\mathbf{\theta}\|}=0\), where the shape derivative \(J^{\prime}(\Omega)\) is a continuous linear form on \(W^{1,\infty}(\mathbb{R}^{d},\mathbb{R}^{d})\).
**Lemma 1** (Allaire et al (2004)): Let \(\Omega\) be a smooth bounded open set and \(f\in W^{1,1}(\mathbb{R}^{d})\). Define
\[J(\Omega)=\int_{\Omega}f\ \mathrm{d}\Omega. \tag{7}\]
Then \(J\) is differentiable at \(\Omega\) and
\[J^{\prime}(\Omega)(\mathbf{\theta})=\int_{\Omega}\mathrm{div}(\mathbf{\theta}f)\ \mathrm{d}\Omega=\int_{\partial\Omega}f\ \mathbf{\theta}\cdot\mathbf{n}\ \mathrm{d}\Gamma \tag{8}\]
for any \(\mathbf{\theta}\in W^{1,\infty}(\mathbb{R}^{d},\mathbb{R}^{d})\).
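For instance, taking \(f\equiv 1\) gives the volume functional \(\mathrm{Vol}(\Omega)=\int_{\Omega}\mathrm{d}\Omega\), whose shape derivative is \(\mathrm{Vol}^{\prime}(\Omega)(\mathbf{\theta})=\int_{\partial\Omega}\mathbf{\theta}\cdot\mathbf{n}\ \mathrm{d}\Gamma\).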
Cea's formal method (Cea, 1986) can be applied to find the shape derivative of a functional \(J\) that depends on fields that satisfy specified state equations (e.g., Allaire et al, 2004, 2021). The method relies on defining a Lagrangian functional \(\mathcal{L}\) that satisfies the two following properties:
1. The state equations are generated by stationarity of \(\mathcal{L}\) under variations of the fields.
2. \(\mathcal{L}\) is equal to the functional of interest \(J\) at the solution to the state equations.
Once these properties are satisfied the shape derivative of the functional of interest can be found using Lemma 1(Allaire et al, 2004).
### Hilbertian extension-regularisation
To infer a descent direction from \(J^{\prime}(\Omega)\) we utilise the Hilbertian extension-regularisation method as discussed by Allaire et al (2021). This involves solving an identification problem over a Hilbert space \(H\) on \(D\) with inner product \(\langle\cdot,\cdot\rangle_{H}\): _Find \(g_{\Omega}\in H\) such that_
\[\langle g_{\Omega},w\rangle_{H}=-J^{\prime}(\Omega)(w\mathbf{n})\ \forall w\in H. \tag{9}\]
For an unconstrained optimisation problem the resulting field \(g_{\Omega}\) is the extended shape sensitivity that is used to evolve the interface with \(\mathbf{\theta}=\tau g_{\Omega}\mathbf{n}\) where \(\tau>0\) is sufficiently small.
The Hilbertian extension-regularisation method provides two important benefits: it naturally extends the shape sensitivity from \(\partial\Omega\) onto the bounding domain \(D\), and ensures a descent direction for \(J(\Omega)\) with additional regularity (i.e., \(H\) as opposed to \(L^{2}(\partial\Omega)\)) (Allaire et al, 2021). As discussed by Allaire et al (2021), this may be viewed as an analog to the sensitivity filtering used in density-based topology optimisation algorithms.
A common choice for the Hilbert space \(H\) is \(H^{1}(D)\) with the inner product
\[\langle u,v\rangle_{H}=\beta^{2}\int_{D}\nabla u\cdot\nabla v\ \mathrm{d}\Omega+\int_{D}uv\ \mathrm{d}\Omega, \tag{10}\]
where \(\beta\) is the so-called regularisation length scale (e.g., Allaire et al, 2016; Feppon et al, 2019; Allaire et al, 2021). For microstructure optimisation we use the periodic Sobolev space \(H=H^{1}_{\mathrm{per}}(D)\) and use the inner product defined in Equation 10.
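To make the identification problem concrete, the following is a minimal one-dimensional periodic finite-difference sketch of Equations 9 and 10 in Julia; the actual implementation in this work uses finite elements, as described in Section 4, and the grid size, length scale and source term below are illustrative placeholders only. With the inner product of Equation 10, the identification problem corresponds in strong form to solving \(g-\beta^{2}g^{\prime\prime}=s\), where \(s\) stands for the (extended) negative shape sensitivity density.

```
using LinearAlgebra

# 1D periodic finite-difference sketch of the Hilbertian extension-regularisation
# problem (Eqs. 9-10): β² ∫ ∇g·∇w dΩ + ∫ g w dΩ = ∫ s w dΩ for all test functions w,
# i.e. g - β² g'' = s with periodic boundary conditions. The grid size n, the
# length scale β and the source s are placeholders chosen for illustration.

n  = 128
Δx = 1.0 / n
β  = 4Δx                                   # regularisation length scale (cf. Table 1)
x  = Δx .* (0:n-1)
s  = exp.(-100 .* (x .- 0.5).^2)           # an artificial localised sensitivity density

A = zeros(n, n)                            # discretisation of (identity - β²Δ)
for i in 1:n
    A[i, i] = 1 + 2β^2 / Δx^2
    A[i, mod1(i - 1, n)] -= β^2 / Δx^2
    A[i, mod1(i + 1, n)] -= β^2 / Δx^2
end

g = A \ s                                  # extended, regularised sensitivity field
```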
### Linear elastic microstructure optimisation
In this section we briefly discuss computational homogenisation and topology optimisation in the context of periodic microstructure design.
The state equations for linear elastic homogenisation over a domain \(\Omega\) contained in a representative volume element (RVE) \(D\subset\mathbb{R}^{d}\) under an applied strain field \(\bar{\varepsilon}_{ij}\) are (e.g., Yvonnet, 2019)
\[-\sigma_{ij,i} =0\ \text{in}\ \Omega, \tag{11}\] \[\sigma_{ij}n_{j} =0\ \text{on}\ \partial\Omega, \tag{12}\] \[\sigma_{ij} =C_{ijkl}\varepsilon_{kl}, \tag{13}\] \[\varepsilon_{ij} =\frac{1}{2}\left(u_{i,j}+u_{j,i}\right), \tag{14}\]
\[\frac{1}{\text{Vol}(D)}\int_{\Omega}\varepsilon_{kl}\ \text{d}\Omega=\bar{ \varepsilon}_{kl}, \tag{15}\]
where \(\sigma_{ij}\) is the stress tensor, \(\varepsilon_{ij}=\varepsilon_{ij}(\mathbf{u})\) is the \(D\)-periodic strain field with displacement \(\mathbf{u}\), and \(C_{ijkl}\) is the spatially dependent elasticity tensor. Note that in the above we use summation notation for indices and comma notation for derivatives.
To compute the homogenised stiffness tensor \(\bar{C}_{ijkl}\) of a periodic material, the above state equations are solved over \(\Omega\) for three (\(d=2\)) or six (\(d=3\)) different combinations of macroscopic strain fields. These macroscopic strain fields are applied by decomposing the strain into the constant macroscopic strain field and fluctuation strain field as \(\varepsilon_{ij}=\bar{\varepsilon}_{ij}+\tilde{\varepsilon}_{ij}\). The macroscopic strain fields are then given by the unique components of \(\bar{\varepsilon}_{ij}^{(kl)}=\frac{1}{2}\left(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\right)\) in \(k\) and \(l\). For example, in two dimensions the unique macroscopic strains are: \(\bar{\varepsilon}_{ij}^{(11)}\), \(\bar{\varepsilon}_{ij}^{(22)}\), \(\bar{\varepsilon}_{ij}^{(12)}\). The notation \(\tilde{\varepsilon}_{ij}^{(kl)}\) is used to denote the strain field fluctuation arising from the applied strain field \(\bar{\varepsilon}_{ij}^{(kl)}\).
In practice, Equations 11-15 are solved using a finite element method and the weak formulation given here:
**Weak form 1**: _For each unique constant macroscopic strain field \(\bar{\varepsilon}_{ij}^{(kl)}\), find \(\tilde{\mathbf{u}}^{(kl)}\in H_{per}^{1}(\Omega)^{d}\) such that_
\[\int_{\Omega}C_{pqrs}\varepsilon_{rs}(\tilde{\mathbf{u}}^{(kl)}) \varepsilon_{pq}(\mathbf{v})\ \text{d}\Omega \tag{16}\] \[\quad=-\int_{\Omega}C_{pqrs}\bar{\varepsilon}_{rs}^{(kl)} \varepsilon_{pq}(\mathbf{v})\ \text{d}\Omega\ \ \forall\,\mathbf{v}\in H_{per}^{1}(\Omega)^{d}\]
_where \(\varepsilon_{ij}(\mathbf{v})=\frac{1}{2}\left(v_{i,j}+v_{j,i}\right).\)_
#### Single-phase problems
For single-phase problems (one solid and a void phase), once the solution \(\tilde{\mathbf{u}}^{(ij)}\) to Weak Form 1 has been found for each unique macroscopic strain \(\bar{\varepsilon}_{pq}^{(ij)}\), the resulting homogenised stiffness tensor may be computed via (Yvonnet, 2019)
\[\bar{C}_{ijkl}(\Omega)=\int_{\Omega}C_{pqrs}(\varepsilon_{pq}(\tilde{\mathbf{u}} ^{(ij)})+\bar{\varepsilon}_{pq}^{(ij)})\bar{\varepsilon}_{rs}^{(kl)}\ \text{d}\Omega, \tag{17}\]
assuming that \(\text{Vol}(D)=1\).
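For illustration, a minimal Julia sketch of the quadrature in Equation 17 on a uniform grid with \(\text{Vol}(D)=1\) is given below. The cell-wise stiffness tensors and the full strains (fluctuation plus macroscopic) are assumed to be available from a finite element solution of Weak Form 1, which is not shown; all argument names are illustrative.

```
# Quadrature sketch of Eq. 17 on a uniform grid with Vol(D) = 1.
# Cc[c]          :: 2×2×2×2 array, stiffness tensor in cell c
# eps_full[a][c] :: 2×2 array, strain ε(ũ^(a)) + ε̄^(a) in cell c for load case a
# eps_bar[b]     :: 2×2 array, constant macroscopic strain ε̄^(b) of load case b
# w              :: volume of one cell
function homogenised_stiffness(Cc, eps_full, eps_bar, w)
    ncases = length(eps_bar)              # 3 unique load cases in 2D
    Cbar = zeros(ncases, ncases)          # entry (a,b) stores C̄ for the case pair (a,b)
    for a in 1:ncases, b in 1:ncases, c in eachindex(Cc)
        for p in 1:2, q in 1:2, r in 1:2, s in 1:2
            Cbar[a, b] += w * Cc[c][p, q, r, s] * eps_full[a][c][p, q] * eps_bar[b][r, s]
        end
    end
    return Cbar
end
```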
To evaluate Weak Form 1 and the homogenised stiffness tensor above, we utilise the _ersatz material approximation_. This method, which is classical in the literature (e.g., Allaire et al, 2004), fills the void phase with a soft material so that the state equations can be resolved without a body-fitted mesh. To this end, for small \(\varepsilon_{\text{void}}\) we take
\[C_{ijkl}(\mathbf{x})=\begin{cases}C_{ijkl},&\mathbf{x}\in\Omega\\ \varepsilon_{\text{void}}C_{ijkl},&\mathbf{x}\in D\setminus\Omega\end{cases} \tag{18}\]
and relax integration to be over \(D\). We can provide a smooth approximation to Equation 18 using a smoothed Heaviside function \(H_{\eta}\)
\[H_{\eta}(\phi)=\begin{cases}0&\text{if }\phi<-\eta,\\ \frac{1}{2}+\frac{\phi}{2\eta}+\frac{1}{2\pi}\sin\left(\frac{\pi\phi}{\eta} \right)&\text{if }|\phi|\leq\eta,\\ 1&\text{if }\phi>\eta,\end{cases} \tag{19}\]
where \(\eta\) is half the length of the small transition region of \(H_{\eta}(\phi)\) between 0 and 1. Equation 18 can then be replaced with
\[C_{ijkl}(\phi)=C_{ijkl}(1-H_{\eta}(\phi))+\varepsilon_{\text{void}}C_{ijkl}H_ {\eta}(\phi). \tag{20}\]
It is important to note that the ersatz material approximation is consistent (Allaire et al, 2021). That is, as \(\varepsilon_{\text{void}}\to 0\), the approximation becomes exact.
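A minimal Julia sketch of the smoothed Heaviside function (Eq. 19) and the ersatz interpolation (Eq. 20) is given below; the default values of \(\eta\) and \(\varepsilon_{\text{void}}\) are placeholders standing in for the choices in Table 1.

```
# Smoothed Heaviside function of Eq. 19
function smoothed_heaviside(φ, η)
    φ < -η && return 0.0
    φ >  η && return 1.0
    return 0.5 + φ / (2η) + sin(π * φ / η) / (2π)
end

# Ersatz interpolation of Eq. 20; C stands for any component of C_ijkl.
# The defaults for η and ε_void are illustrative (cf. Table 1, where η = 1.5Δx).
ersatz_stiffness(C, φ; η = 0.015, ε_void = 0.001) =
    C * (1 - smoothed_heaviside(φ, η)) + ε_void * C * smoothed_heaviside(φ, η)
```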
We conclude this section by stating the shape derivative of the homogenised stiffness tensor:
**Lemma 2**: The shape derivative of Equation 17 is given by
\[\bar{C}_{ijkl}^{\prime}(\Omega)(\mathbf{\theta})= \int_{\partial\Omega}C_{pqrs}(\varepsilon_{pq}(\tilde{\mathbf{u}}^{( ij)})+\bar{\varepsilon}_{pq}^{(ij)}) \tag{21}\] \[\times(\varepsilon_{rs}(\tilde{\mathbf{u}}^{(kl)})+\bar{\varepsilon}_{ rs}^{(kl)})\ \mathbf{\theta}\cdot\mathbf{n}\ \text{d}\Gamma.\]
_Proof_ See Appendix A. \(\Box\)
#### Multi-phase problems
For multi-phase problems, we utilise the colour level set method in which up to \(2^{M}\) phases can be represented by the sign of \(M\) level set functions (Wang and Wang, 2004; Allaire et al, 2014). For example, in the case of four phases \(\Omega_{1},\Omega_{2},\Omega_{3}\) and
\(\Omega_{4}\) with two level set functions \(\phi_{1}\) and \(\phi_{2}\), we have
\[\begin{cases}\phi_{1}<0\ \&\ \phi_{2}>0,&x\in\Omega_{1}\\ \phi_{1}>0\ \&\ \phi_{2}<0,&x\in\Omega_{2}\\ \phi_{1}<0\ \&\ \phi_{2}<0,&x\in\Omega_{3}\\ \phi_{1}>0\ \&\ \phi_{2}>0,&x\in\Omega_{4}\end{cases}. \tag{22}\]
We further denote the domains associated with each level set function \(\phi_{1}\) and \(\phi_{2}\) as \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\), respectively. Figure 1 shows an illustration of this case.
In a similar way to Equation 20 we can interpolate the value of the stiffness tensor between these domains via
\[\begin{split} C_{ijkl}(d_{\mathcal{D}_{1}},d_{\mathcal{D}_{2}}) \\ \quad=C_{ijkl,1}(1-H_{\eta}(d_{\mathcal{D}_{1}}))H_{\eta}(d_{ \mathcal{D}_{2}})\\ \quad+C_{ijkl,2}H_{\eta}(d_{\mathcal{D}_{1}})(1-H_{\eta}(d_{ \mathcal{D}_{2}}))\\ \quad+C_{ijkl,3}(1-H_{\eta}(d_{\mathcal{D}_{1}}))(1-H_{\eta}(d_{ \mathcal{D}_{2}}))\\ \quad+C_{ijkl,4}H_{\eta}(d_{\mathcal{D}_{1}})H_{\eta}(d_{ \mathcal{D}_{2}}),\end{split} \tag{23}\]
where \(C_{ijkl,\alpha}\) is the elasticity tensor for the phase occupying \(\Omega_{\alpha}\). In this multi-phase case we have replaced the level set functions \(\phi_{1}\) and \(\phi_{2}\) with \(d_{\mathcal{D}_{1}}\) and \(d_{\mathcal{D}_{2}}\) denoting their respective signed distance functions. This change facilitates shape differentiation. Unlike the situation discussed by Allaire et al (2016), periodicity of \(D\) ensures that this replacement is valid provided the level set functions are reinitialised often. Additional care should then be taken regarding the calculation of certain quantities and their shape derivatives. Namely, the homogenised stiffness tensor (Eq. 17) becomes
\[\begin{split}&\bar{C}_{ijkl}(\mathcal{D}_{1},\mathcal{D}_{2})\\ &=\int_{D}C_{pqrs}(d_{\mathcal{D}_{1}},d_{\mathcal{D}_{2}})( \varepsilon_{pq}(\tilde{\boldsymbol{u}}^{(ij)})+\bar{\varepsilon}_{pq}^{(ij) })\bar{\varepsilon}_{rs}^{(kl)}\ \mathrm{d}\Omega.\end{split} \tag{24}\]
The volume of \(\Omega_{1}\) is given by
\[\mathrm{Vol}_{\Omega_{1}}(\mathcal{D}_{1},\mathcal{D}_{2})=\int_{D}(1-H_{\eta }(d_{\mathcal{D}_{1}}))H_{\eta}(d_{\mathcal{D}_{2}})\ \mathrm{d}\Omega. \tag{25}\]
Similar expressions are used for \(\Omega_{2}\), \(\Omega_{3}\) and \(\Omega_{4}\).
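A corresponding Julia sketch of the four-phase interpolation (Eqs. 22-23) and of the \(\Omega_{1}\) volume integrand (Eq. 25), reusing the smoothed Heaviside function from the earlier single-phase sketch, is as follows; the argument names are illustrative.

```
# Four-phase colour interpolation of Eq. 23; d1 and d2 are the signed distance
# functions of the domains D1 and D2 evaluated at a point, and C1..C4 are the
# phase stiffnesses (any component, or arrays).
function colour_stiffness(C1, C2, C3, C4, d1, d2, η)
    H1 = smoothed_heaviside(d1, η)
    H2 = smoothed_heaviside(d2, η)
    return C1 * (1 - H1) * H2 + C2 * H1 * (1 - H2) +
           C3 * (1 - H1) * (1 - H2) + C4 * H1 * H2
end

# Integrand of the Ω₁ volume in Eq. 25
vol1_integrand(d1, d2, η) = (1 - smoothed_heaviside(d1, η)) * smoothed_heaviside(d2, η)
```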
Integration over the whole cell \(D\) in Equations 24 and 25 with dependence on the signed distance functions differs from the single-phase case where integration is over the domain \(\Omega\) occupied by the solid phase (see Eq. 17). Shape differentiability of the signed distance function and a coarea formula can be used in this multi-phase case to derive the shape derivative. We utilise the "approximate" formula discussed by Allaire et al (2014). This assumes that the Heaviside smoothing parameter \(\eta\) is small and that the principal curvatures of \(\partial\Omega\) vanish.
Figure 1: An illustration of colour level sets with two level set functions and four phases. (a) and (b) show the domain \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\) represented via the level set function \(\phi_{1}\) and \(\phi_{2}\) respectively. (c) shows the colour representation of \(\Omega_{1}\), \(\Omega_{2}\), \(\Omega_{3}\), and \(\Omega_{4}\) for different signs of \(\phi_{1}\) and \(\phi_{2}\).
**Lemma 3**: The approximate shape derivatives of Equations 24 and 25 under variation of the domain \(\mathcal{D}_{1}\) by \(\mathbf{\theta}_{1}\) are
\[\bar{C}^{\prime}_{ijkl}(\mathcal{D}_{1},\mathcal{D}_{2})(\mathbf{ \theta}_{1})\] \[\quad\approx-\int_{\partial\mathcal{D}_{1}}\frac{\partial C_{pqrs }}{\partial g}(\varepsilon_{pq}(\tilde{\mathbf{u}}^{(ij)})+\bar{\varepsilon}^{( ij)}_{pq}) \tag{26}\] \[\qquad\qquad\times(\varepsilon_{rs}(\tilde{\mathbf{u}}^{(kl)})+\bar{ \varepsilon}^{(kl)}_{rs})\ \mathbf{\theta}_{1}\cdot\mathbf{n}\ \mathrm{d}\Gamma,\]
where \(g=H_{\eta}(d_{\mathcal{D}_{1}})\), and
\[\mathrm{Vol}^{\prime}_{\Omega_{1}}(\mathcal{D}_{1},\mathcal{D}_{2})(\mathbf{ \theta}_{1})\approx\int_{\partial\mathcal{D}_{1}}H_{\eta}(d_{\mathcal{D}_{2}} )\ \mathbf{\theta}_{1}\cdot\mathbf{n}\ \mathrm{d}\Gamma. \tag{27}\]
Analogous expressions follow for \(\mathbf{\theta}_{2}\) and \(\Omega_{2}\), \(\Omega_{3}\) and \(\Omega_{4}\).
_Proof_ See Appendix B. \(\Box\)
We note that comparisons between the "true" formula, "Jacobian-free" formula (zero principal curvatures), and "approximate" formula have been discussed for compliance elsewhere in the literature (Allaire et al, 2014, 2016). It suffices to mention that Allaire et al (2016) found that the "approximate" formula does not capture the distortion that arises due to the ray integration and approximation of the principal curvatures in the "true" formula.
## 3 Hilbertian projection method
The Hilbertian framework yields a descent direction \(\mathbf{\theta}=\tau g_{\Omega}\mathbf{n}\) for unconstrained optimisation problems. However, for constrained optimisation problems such as
\[\min_{\Omega\in\mathcal{U}_{\mathrm{ad}}} J(\Omega) \tag{28}\] \[\mathrm{s.t.} C_{i}(\Omega)=0,\ i=1,\ldots,N,\] \[a(\mathbf{u},\mathbf{v})=l(\mathbf{v}),\ \forall\mathbf{v}\in V,\]
the choice of \(\mathbf{\theta}\) is more difficult.
In the literature a variety of optimisation methods deal with this problem but few of these take advantage of the Hilbertian framework. Allaire et al (2021) recently presented a sequential linear programming (SLP) method in the Hilbertian framework. The projection method uses orthogonal projections to evolve the design in a direction that aims for best possible improvement of the objective functional while improving the constraint functionals (Challis et al, 2008). In the following we present a Hilbertian extension of the projection method for constrained topology optimisation.
### Preliminaries
We proceed by first solving the following set of scalar Hilbertian extension-regularisation problems over \(H\) for an objective functional \(J(\Omega)\) and constraint functionals \(C_{i}(\Omega)\):
_Find \(g_{\Omega}\in H\) and \(\mu_{\Omega i}\in H\) such that_
\[\langle g_{\Omega},v\rangle_{H} =-J^{\prime}(\Omega)(v\mathbf{n}),\ \forall v\in H,\text{ and } \tag{29}\] \[\langle\mu_{\Omega i},v\rangle_{H} =-C^{\prime}_{i}(\Omega)(v\mathbf{n}),\ \forall v\in H, \tag{30}\]
_for all \(i=1,\ldots,N,\) with inner product \(\langle\cdot,\cdot\rangle_{H}\) and norm \(\|\cdot\|_{H}=\sqrt{\langle\cdot,\cdot\rangle_{H}}\)._
Next we use Gram-Schmidt orthogonalisation to remove linearly dependent constraints from the set \(\{\mu_{\Omega i}\}_{i=1}^{N}\) to obtain the set \(\{\mu_{\Omega p}\}_{p=1}^{\bar{N}}\), where \(\bar{N}\leq N\). We use \(\{\bar{\mu}_{\Omega p}\}\) to denote the corresponding orthogonal basis that spans the set \(C\subset H\) of extended constraint shape sensitivities. The basis \(\{\bar{\mu}_{\Omega p}\}\) can be used to construct an orthogonal projection operator \(P_{C^{\perp}}\) that projects the shape sensitivity \(g_{\Omega}\) onto the set \(C^{\perp}\) perpendicular to the set of extended constraint shape sensitivities. We define this operator as
\[P_{C^{\perp}}g_{\Omega}=g_{\Omega}-\sum_{p=1}^{\bar{N}}\frac{\langle\bar{\mu} _{\Omega p},g_{\Omega}\rangle_{H}}{\|\bar{\mu}_{\Omega p}\|_{H}^{2}}\bar{\mu} _{\Omega p}. \tag{31}\]
Then, by construction, evolving the level set function using normal velocity \(P_{C^{\perp}}g_{\Omega}\) would to first order improve the objective functional \(J(\Omega)\) while leaving the constraint functionals \(C_{i}(\Omega)\) unchanged. On the other hand, the set of basis functions \(\{\bar{\mu}_{\Omega p}\}\) describes directions that to first order improve the constraint functionals \(C_{i}(\Omega)\).
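The following Julia sketch illustrates this construction for discretised sensitivities: a Gram-Schmidt loop that drops (numerically) linearly dependent members, followed by the projection of Equation 31. The function inner stands for a discrete approximation of \(\langle\cdot,\cdot\rangle_{H}\) (e.g., assembled from Equation 10); here it is a plain dot product purely for illustration, and the tolerance is a placeholder.

```
using LinearAlgebra

inner(u, v) = dot(u, v)            # placeholder for the discrete H-inner product
hnorm(u)    = sqrt(inner(u, u))

# Gram-Schmidt orthogonalisation of the extended constraint sensitivities,
# dropping members that are (numerically) linearly dependent on earlier ones.
function orthogonal_basis(μs; tol = 1e-12)
    basis = Vector{Vector{Float64}}()
    for μ in μs
        r = copy(μ)
        for b in basis
            r .-= (inner(b, r) / inner(b, b)) .* b
        end
        hnorm(r) > tol * max(hnorm(μ), 1.0) && push!(basis, r)
    end
    return basis
end

# Orthogonal projection onto the complement of the constraint sensitivities (Eq. 31)
function project_perp(g, basis)
    Pg = copy(g)
    for b in basis
        Pg .-= (inner(b, g) / inner(b, b)) .* b
    end
    return Pg
end
```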
### Formulation
The Hilbertian projection method can then be formulated as follows: For some rate parameter \(\lambda\in\mathbb{R}\), suppose we choose \(v_{\Omega}\in H\) in the deformation field \(\mathbf{\theta}=\tau v_{\Omega}\mathbf{n}\) so that \(J(\Omega)\) and \(C_{i}(\Omega)\)
decrease via (Challis et al, 2008)
\[\begin{cases}J^{\prime}(\Omega)(v_{\Omega}\mathbf{n})=\text{ min possible},\\ C^{\prime}_{i}(\Omega)(v_{\Omega}\mathbf{n})=-\lambda C_{i}.\end{cases} \tag{32}\]
It is important to note that we purposefully pose the former requirement as "min possible" so that the objective functional may increase when required to improve constraints. Furthermore, linearly dependent constraints in the optimisation problem need to be consistent to ensure that the second line of Equation 32 is well posed. Specifying constraints that have linearly dependent shape sensitivities but for which the directions of improvement are in contradiction would violate this requirement.
In the Hilbertian framework, Equation 32 may be rewritten as
\[\begin{cases}\langle g_{\Omega},v_{\Omega}\rangle_{H}=\text{ max possible},\\ \langle\mu_{\Omega i},v_{\Omega}\rangle_{H}=\lambda C_{i}.\end{cases} \tag{33}\]
Note that the change in sign comes from the application of Equations 29 and 30. We choose the following linear combination for \(v_{\Omega}\):
\[v_{\Omega}=\sqrt{1-\sum_{p=1}^{\bar{N}}\alpha_{p}^{2}}\frac{P_{C^{\perp}}g_{ \Omega}}{\|P_{C^{\perp}}g_{\Omega}\|_{H}}+\sum_{p=1}^{\bar{N}}\alpha_{p}\frac {\bar{\mu}_{\Omega p}}{\|\bar{\mu}_{\Omega p}\|_{H}}, \tag{34}\]
where \(\alpha_{p}\in\mathbb{R}\) are determined using \(\langle\mu_{\Omega p},v_{\Omega}\rangle_{H}=\lambda C_{p}\) for \(p=1,\ldots,\bar{N}\). This generates a lower-triangular linear system of the form
\[\lambda C_{p}=\sum_{l=1}^{p-1}\alpha_{l}\frac{\langle\bar{\mu}_{\Omega l},\mu _{\Omega p}\rangle_{H}}{\|\bar{\mu}_{\Omega l}\|_{H}}+\alpha_{p}\|\bar{\mu}_{ \Omega p}\|_{H}, \tag{35}\]
which can easily be solved via forward substitution (Challis et al, 2008).
To first order the first term of Equation 34 improves the objective while leaving the constraints unchanged due to orthogonality, while the second term improves the constraints with extended shape sensitivities that contribute to the basis \(\{\bar{\mu}_{\Omega p}\}\). In the numerical examples we have observed satisfaction of all constraints at convergence of the optimisation algorithm, including those that have linearly dependent shape sensitivities. The square root term of Equation 34 is included to facilitate a balance between improving the objective and constraints.
### Parameters
As discussed by Challis et al (2008), the rate parameter \(\lambda\) should be chosen to ensure \(1-\sum_{p=1}^{\bar{N}}\alpha_{p}^{2}\geq 0\) and \(\sum_{p=1}^{\bar{N}}\alpha_{p}^{2}\geq\alpha_{\text{min}}^{2}\). The new parameter \(\alpha_{\text{min}}\) then controls the balance between improving the objective or constraints. For example, \(\alpha_{\text{min}}=1\) ignores the objective function in Equation 28 and instead solves a constraint satisfaction problem. As a result, the method only has a single parameter \(\alpha_{\text{min}}\) while \(\lambda\) is dictated by the inequalities above. In general we find that \(\alpha_{\text{min}}\) does not require fine tuning and unless otherwise stated we choose \(\alpha_{\text{min}}^{2}=0.1\).
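A minimal Julia sketch of Equations 34 and 35, together with one possible way of enforcing the two inequalities above, is given below. It reuses the functions inner and hnorm from the preceding sketch; since the coefficients \(\alpha_{p}\) depend linearly on \(\lambda\), the inequalities can be enforced by clamping \(\lambda\) after a solve with \(\lambda=1\). The clamping rule is an illustrative choice, not one prescribed by the method.

```
# Inputs: Pg    = P_{C⊥} g_Ω (e.g. from project_perp),
#         basis = orthogonal basis {μ̄_p} (e.g. from orthogonal_basis),
#         μs    = re-indexed extended constraint sensitivities {μ_p},
#         Cs    = constraint values, λ and αmin2 = α_min².
function projection_velocity(Pg, basis, μs, Cs, λ, αmin2)
    Nb   = length(basis)
    αhat = zeros(Nb)                        # solution of Eq. 35 with λ = 1
    for p in 1:Nb
        acc = Cs[p]
        for l in 1:p-1
            acc -= αhat[l] * inner(basis[l], μs[p]) / hnorm(basis[l])
        end
        αhat[p] = acc / hnorm(basis[p])     # forward substitution
    end
    S = sum(abs2, αhat)                     # Σ α_p² = λ² S, since α is linear in λ
    λ = S > 0 ? clamp(λ, sqrt(αmin2 / S), 1 / sqrt(S)) : λ
    α = λ .* αhat
    v = sqrt(max(1 - sum(abs2, α), 0)) .* Pg ./ hnorm(Pg)     # first term of Eq. 34
    for p in 1:Nb
        v .+= α[p] .* basis[p] ./ hnorm(basis[p])             # second term of Eq. 34
    end
    return v
end
```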
### Comparison to null space methods
Our formulation is similar to null space methods recently developed by Barbarosie et al (2020) and Feppon et al (2020), both of which present a similar formulation. In the following we discuss some differences between our Hilbertian projection method and the null space method proposed by Feppon et al (2020).
Most notably, our formulation makes use of an orthogonal basis for the set of extended constraint shape sensitivities. This avoids a possibly expensive matrix inversion that appears in the algorithm presented by Feppon et al (2020). Our use of the orthogonal basis also avoids reliance on the 'Linear Independence Constraint Qualification' (LICQ) condition (Sec. 2.1, Feppon et al, 2020). Our method therefore naturally handles linearly dependent constraint shape sensitivities. Such dependencies often appear in microstructure optimisation problems when symmetries are imposed on the effective material properties (e.g., Secs. 5.2 and 5.4 below). Multi-phase level set-based topology optimisation via the colour level set method (Wang and Wang, 2004; Allaire et al, 2014) can also give rise to such linear dependency (e.g., Sec. 5.4 below). The ability of the Hilbertian projection method to naturally handle these situations gives the user more freedom in how topology optimisation problems are posed
and avoids additional special treatment of linearly dependent constraint sensitivities.
For equality constrained problems when LICQ is satisfied, both the null space method (Feppon et al, 2020) and our Hilbertian projection method give equivalent directions for improvement of the objective and violated constraints, up to coefficients \(\alpha_{p}\) and constant \(\lambda\). However, the method of attaining this improvement is quite different. In this work, the second term of Equation 34 is a linear combination of the orthogonal basis of the set of extended constraint shape sensitivities. The coefficients \(\alpha_{p}\) are chosen to improve constraints exponentially, as per the second requirement in Equation 33. The null space method instead uses a Gauss-Newton direction for ensuring exponential decay of violated constraints (Lemma 2.5 & Prop. 2.6, Feppon et al, 2020). This again relies on LICQ and the possibly expensive matrix inversion discussed above.
Finally, unlike the null space methods proposed by Barbarosie et al (2020) and Feppon et al (2020), the Hilbertian projection method as formulated above is unable to handle inequality constraints. Such an extension could be considered in future using slack variables or by adopting the procedure used by either Barbarosie et al (2020) or Feppon et al (2020). Interestingly, in terms of the total length covered to reach the optimum, Feppon et al (2020) found that their dual quadratic programming method for handling inequality constraints yielded equivalent performance when compared to the method of slack variables. However, using slack variables introduces additional computational cost and possibly further parameter tuning (Feppon et al, 2020). For our implementation this additional cost should be small owing to the use of orthogonalisation. As such the slack variable method would be an appropriate first recourse for implementing inequality constraints within the Hilbertian projection method. This would be similar to the approach taken by Schropp and Singer (2000).
## 4 Numerical implementation
In the following we describe the numerical implementation of our topology optimisation algorithm. We first discuss the resolution of state equations and Hamilton-Jacobi type equations followed by an overview of the optimisation algorithm. We finish with a brief discussion of the Hilbertian SLP method which is compared to our presented Hilbertian projection method.
### Resolving state and Hamilton-Jacobi-type equations
To solve the state equations and the Hilbertian extension-regularisation problems we use the finite element package _Gridap_ (Badia and Verdugo, 2020; Verdugo and Badia, 2022) in the programming language _Julia_. In particular, we discretise the periodic domain \(D\subset\mathbb{R}^{d}\) into \(n^{d}\) linear quadrilateral (\(d\)=2) or hexahedral (\(d\)=3) elements with element width \(\Delta x\) and discretise the level set function at the nodes of the triangulation. To reduce computational cost when solving the state equations, we remove any elements that are completely void phase and leave a strip of ersatz material near the phase interface. The resulting linear systems for the state equations and Hilbertian extension-regularisation problems are then solved using a direct method in 2D or a GPU-based Jacobi pre-conditioned conjugate gradient method in 3D.
For the Hamilton-Jacobi evolution equation and signed distance reinitialisation equation (Eqs. 3 and 5) we use standard first order Godunov upwind finite difference schemes (Peng et al, 1999; Allaire et al, 2004; Osher and Fedkiw, 2006) that have been implemented on GPUs using _CUDA.jl_ (Besard et al, 2018). For the Hamilton-Jacobi evolution equation we use \(\lfloor n/10\rfloor\) time steps in two dimensions and \(\lfloor n/3\rfloor\) time steps in three dimensions. We are more conservative in three dimensions because we have fewer elements along each axial direction. For the reinitialisation equation we iterate until reaching a stopping condition
\[\|\phi^{q}-\phi^{q-1}\|_{\infty}<5\times 10^{-5}, \tag{36}\]
where \(q\) is the iteration number. In addition, for the sign function \(S\) we use the common approximation
\[S(\phi)=\frac{\phi}{\sqrt{\phi^{2}+|\nabla\phi|^{2}\Delta x^{2}}}, \tag{37}\]
that applies on a Cartesian grid with square elements of side length \(\Delta x\)(Osher and Fedkiw, 2006).
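As an illustration of the schemes used here, the following Julia sketch performs one explicit time step of the Hamilton-Jacobi equation (Eq. 3) with the first-order Godunov upwind approximation of \(|\nabla\phi|\) on a periodic \(n\times n\) grid, using the CFL time step of Equation 38, together with the smoothed sign function of Equation 37. It is a plain CPU sketch rather than the _CUDA.jl_ implementation used in this work.

```
# One explicit Godunov upwind step of φ_t + v|∇φ| = 0 on a periodic n×n grid.
function hj_step!(φ, v, Δx, γ)
    n = size(φ, 1)
    Δt = γ * Δx / maximum(abs, v)                          # Eq. 38
    φold = copy(φ)
    for j in 1:n, i in 1:n
        Dxm = (φold[i, j] - φold[mod1(i-1, n), j]) / Δx    # backward differences
        Dxp = (φold[mod1(i+1, n), j] - φold[i, j]) / Δx    # forward differences
        Dym = (φold[i, j] - φold[i, mod1(j-1, n)]) / Δx
        Dyp = (φold[i, mod1(j+1, n)] - φold[i, j]) / Δx
        if v[i, j] > 0
            ∇φ = sqrt(max(Dxm, 0)^2 + min(Dxp, 0)^2 + max(Dym, 0)^2 + min(Dyp, 0)^2)
        else
            ∇φ = sqrt(min(Dxm, 0)^2 + max(Dxp, 0)^2 + min(Dym, 0)^2 + max(Dyp, 0)^2)
        end
        φ[i, j] = φold[i, j] - Δt * v[i, j] * ∇φ
    end
    return φ
end

# Smoothed sign function of Eq. 37, used when solving the reinitialisation
# equation (Eq. 5); |∇φ| is again approximated with the upwind scheme above.
smoothed_sign(φ, ∇φ, Δx) = φ / sqrt(φ^2 + (∇φ * Δx)^2)
```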
### Algorithm overview
In Algorithm 1 we present our optimisation algorithm that is based on the theory discussed in Sections 2 and 3. Algorithm 1 is similar to Algorithm 5 presented by Allaire et al (2021), with the addition of a line search method for determining the Courant-Friedrichs-Lewy (CFL) coefficient \(\gamma\) for solving the Hamilton-Jacobi evolution equation (e.g., Allaire et al, 2021) with time step (Osher and Fedkiw, 2006)
\[\Delta t=\frac{\gamma\Delta x}{\|v_{\Omega}\|_{\infty}}. \tag{38}\]
Note that we omit the indices on \(\gamma\) that appear in Algorithm 1 for the sake of clarity. In general, the line search method helps to remove oscillations in the optimisation history and improves convergence. For the stopping criterion we require that the current objective value is stationary with respect to the previous five iterations and that the constraints are satisfied within specified tolerances.
The Hilbertian projection method is implemented using the package _DoubleFloats.jl_ that gives a machine epsilon of roughly \(5\times 10^{-32}\). This prevents accumulation of round-off error when generating the projection operator that can affect the optimisation history. All other computations are completed in standard double precision.
Table 1 gives the parameter values used for all the optimisation examples unless otherwise stated in Section 5.
```
1:Initialise \(\Omega^{0}\) inside a computational domain \(D\) with mesh \(\mathcal{T}\) and a level set function \(\phi^{0}\).
2:Find the initial solution \(\mathbf{u}^{(ij)}\) to the homogenisation problem for each unique \(\bar{\varepsilon}^{(ij)}\).
3:for\(q=1,\ldots,q_{\max}\)do
4: Calculate the shape sensitivity of the objective \(J(\Omega^{q-1})\) and constraints \(C_{i}(\Omega^{q-1})\).
5: Solve scalar Hilbertian extension-regularisation problems for the objective and constraints with length scale \(\beta\).
6: Apply the Hilbertian projection method to find \(v_{\Omega}\).
7:for\(k=1,\ldots,k_{\max}\)do
8: Solve H-J evolution equation with CFL coefficient \(\gamma^{q-1,k}\) to find new domain \(\bar{\Omega}^{k}\) with associated level set function \(\bar{\phi}^{k}\).
9: Solve reinitialisation equation.
10: Solve state equations for linear elastic homogenisation and calculate the new objective \(J_{\text{new}}\).
11:if\(J_{\text{new}}<J(\Omega^{q-1})+\xi|J(\Omega^{q-1})|\) or \(\gamma^{q-1,k}<\gamma_{\min}\)then
12: Increase the CFL coefficient: \(\gamma^{q,k}=\min(\delta_{\text{inc}}\gamma^{q-1,k},\gamma_{\max})\).
13: Accept the new iteration with \(\Omega^{q}=\bar{\Omega}^{k}\) and \(\phi^{q}=\bar{\phi}^{k}\).
14: Break inner loop.
15:else
16: Decrease the CFL coefficient: \(\gamma^{q-1,k+1}=\max(\delta_{\text{dec}}\gamma^{q-1,k},\gamma_{\min})\).
17: Reject the new iteration and continue loop.
18:endif
19:endfor
20:if\(|J(\Omega^{q})-J(\Omega^{q-j})|\leq\epsilon_{1}|J(\Omega^{q})|,\forall j=1, \ldots,j_{\max}\) and \(|C_{i}(\Omega^{q})|<\epsilon_{2}\ \forall i\)then
21:return\(\Omega^{q}\) and \(\phi^{q}\).
22:endif
23:endfor
```
**Algorithm 1** Optimisation Algorithm
\begin{table}
\begin{tabular}{l|l|l} Parameter & Value & Type \\ \hline \(q_{\max}\) & 1000 & Max iterations \\ \(k_{\max}\) & 10 & Max line search iterations \\ \(\Delta x\) & \(1/n\) & Mesh spacing \\ \(\varepsilon_{\text{void}}\) & 0.001 & Ersatz material coeff. \\ \(\eta\) & \(1.5\Delta x\) & Heaviside smoothing \\ \(\beta\) & \(4\Delta x\) & Hilbertian ext.-reg. \\ \(\alpha_{\min}^{2}\) & 0.1 & Hilbertian proj. meth. \\ \(\lambda\) & 0.5 & Hilbertian proj. meth. \\ \(\gamma_{\min}\) & 0.001 & H-J equation \\ \(\gamma_{\max}\) & 0.1 & H-J equation \\ \(\gamma_{\text{reinit}}\) & 0.1 & Reinit. equation \\ \(\xi\) & 0.005 & Line search \\ \(\delta_{\text{inc}}\), \(\delta_{\text{dec}}\) & 1.1, 0.7 & Line search \\ \(\epsilon_{1}\), \(\epsilon_{2}\) & 0.01, 0.0001 & Stopping criteria \\ \(j_{\max}\) & 5 & Stopping criteria \\ \end{tabular}
\end{table}
Table 1: Parameter values used for optimisation examples. Note: Any variation in the parameters from these values will be specifically stated in the relevant example problem section.
### Comparison to sequential linear programming (SLP)
We compare the Hilbertian projection method to the Hilbertian SLP method presented by Allaire et al (2021) (Sec. 5.3.2). We make two adjustments to the method. Firstly, to match our formulation we replace the inequality constraints with equality constraints. In addition, we change the trust region constraints for the constraint functionals to be
\[|\lambda_{i}|\leq\frac{\Delta x}{2\|\boldsymbol{\theta}_{i}\|_{H}}, \tag{39}\]
for \(i=1,\ldots,N\). For several two-dimensional problems we find that this choice promotes convergence and better optimisation results. However, as we will discuss later, choosing trust region constraints is not straightforward for our example optimisation problems.
We implement Hilbertian SLP by adjusting line 5 of Algorithm 1 accordingly. To solve the resulting linearised optimisation problem we use the Julia packages _JuMP.jl_(Dunning et al, 2017) and _Ipopt.jl_(Wachter and Biegler, 2006).
## 5 Example problems
In the following we give the optimisation results for several example problems that have been solved with both the Hilbertian projection method and Hilbertian SLP method.
### Example 1: Maximum bulk modulus
In this example we consider a bounding domain \(D=[0,1]^{d}\) that contains a solid phase and void phase. The solid phase is constructed from an isotropic medium with \(E=1\) and \(\nu=0.3\). Subject to a volume constraint \(\mathrm{Vol}(\Omega)=1/2\), we maximise the effective bulk modulus \(\bar{\kappa}(\Omega)\) of the material. The bulk modulus is a measure of stiffness to volumetric strain given by
\[\bar{\kappa}=\frac{1}{4}(\bar{C}_{1111}+\bar{C}_{2222}+2\bar{C}_{1122}) \tag{40}\]
in two dimensions, or
\[\begin{split}\bar{\kappa}=\frac{1}{9}(&\bar{C}_{11 11}+\bar{C}_{2222}+\bar{C}_{3333}\\ &+2(\bar{C}_{1122}+\bar{C}_{1133}+\bar{C}_{2233}))\end{split} \tag{41}\]
in three dimensions. In other words, we seek to solve the optimisation problem:
\[\begin{split}\min_{\Omega\in\mathcal{U}_{\mathrm{ad}}}& -\bar{\kappa}(\Omega)\\ \mathrm{s.t.}&\mathrm{Vol}(\Omega)=1/2,\\ & a(\boldsymbol{u},\boldsymbol{v})=l(\boldsymbol{v}),\ \forall \boldsymbol{v}\in V.\end{split} \tag{42}\]
The last line represents the satisfaction of the state equations.
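For reference, the objective in Equations 40 and 41 can be evaluated directly from the homogenised stiffness. In the short Julia sketch below, Cb stores \(\bar{C}\) over the unique index pairs, ordered (11, 22, 12) in two dimensions and (11, 22, 33, 23, 13, 12) in three dimensions; this ordering is an illustrative convention, not one fixed by the text.

```
# Effective bulk modulus of Eq. 40 (2D) and Eq. 41 (3D); only the entries
# appearing in those equations are accessed.
bulk_modulus_2d(Cb) = (Cb[1, 1] + Cb[2, 2] + 2Cb[1, 2]) / 4
bulk_modulus_3d(Cb) = (Cb[1, 1] + Cb[2, 2] + Cb[3, 3] +
                       2 * (Cb[1, 2] + Cb[1, 3] + Cb[2, 3])) / 9
```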
In two dimensions we use a periodic starting structure with four equally spaced holes. For three dimensions the initial boundary between void and solid material is given by a Schwarz P minimal surface. It is well-known that in two dimensions hole nucleation is not possible under Hamilton-Jacobi evolution (e.g., Allaire et al, 2004). For this reason we initialise the two-dimensional optimisation problems with more holes than required. Topological derivatives could be incorporated to rectify this, but this is outside of the scope of the current paper. We also note that different starting structures could be used provided they are periodic and have non-zero stiffness.
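The exact expressions for the initial level set functions are not given in the text; the Julia sketch below shows representative choices of the kind described above: a periodic two-dimensional function whose zero level set encloses four equally spaced circular holes, and the usual nodal approximation of the Schwarz P surface in three dimensions (solid where \(\phi<0\), as in Equation 1).

```
# 2D: four holes of radius r centred at (1/4,1/4), (3/4,1/4), (1/4,3/4), (3/4,3/4);
# φ > 0 inside the holes (void) and φ < 0 in the solid. The radius is illustrative.
function φ0_four_holes(x, y; r = 0.15)
    centres = ((0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75))
    return maximum(r - hypot(x - cx, y - cy) for (cx, cy) in centres)
end

# 3D: nodal approximation of the Schwarz P surface on the unit cell
φ0_schwarz_p(x, y, z) = cos(2π * x) + cos(2π * y) + cos(2π * z)
```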
Figures 2 and 3 show the starting structures and optimisation results for two and three dimensions, respectively for both the Hilbertian projection method and SLP. In addition, we compare the objective value with the Hashin-Shtrikman (HS) upper bound (Hashin and Shtrikman, 1963). Table 2 shows a summary of the results.
Figure 3: Three-dimensional optimisation results for Example 1: maximum bulk modulus. For the starting structure (a), (b) and (c) show the final structures for the Hilbertian projection method and SLP respectively, while (d) and (e) show the respective iteration histories. The Hashin-Shtrikman upper bound for the bulk modulus is given by the dashed black line.
Figure 2: Two-dimensional optimisation results for Example 1: maximum bulk modulus. For the starting structure (a), (b) and (c) show the final structures for the Hilbertian projection method and SLP respectively, while (d) and (e) show the respective iteration histories. The Hashin-Shtrikman upper bound for the bulk modulus is given by the dashed black line.
The final structure obtained with the Hilbertian projection method matches other classical results (Sec. 2.10.3, Bendsoe and Sigmund, 2004).
### Example 2: Maximum bulk modulus with isotropy
In Example 2 we consider the same problem setup as Example 1 with the addition of macroscopic isotropy constraints. This ensures that the resulting homogenised stiffness tensor is invariant under rotation.
In two dimensions the optimisation problem is given by
\[\begin{split}\min_{\Omega\in\mathcal{U}_{\mathrm{ad}}}& -\bar{\kappa}(\Omega)\\ \mathrm{s.t.}&\mathrm{Vol}(\Omega)=1/2,\\ & C_{i}(\Omega)=0,\ i=1,\ldots,6,\\ & a(\boldsymbol{u},\boldsymbol{v})=l(\boldsymbol{v}),\ \forall \boldsymbol{v}\in V.\end{split} \tag{43}\]
The constraints \(C_{i}(\Omega)\) are given by
\[\sqrt{4\bar{\kappa}^{2}+8\bar{\mu}^{2}}C_{1} =\bar{C}_{1111}-\bar{\kappa}-\bar{\mu}, \tag{44}\] \[\sqrt{4\bar{\kappa}^{2}+8\bar{\mu}^{2}}C_{2} =\bar{C}_{2222}-\bar{\kappa}-\bar{\mu},\] (45) \[\sqrt{4\bar{\kappa}^{2}+8\bar{\mu}^{2}}C_{3} =\sqrt{2}(\bar{C}_{1122}-\bar{\kappa}+\bar{\mu}),\] (46) \[\sqrt{4\bar{\kappa}^{2}+8\bar{\mu}^{2}}C_{4} =2\bar{C}_{1133},\] (47) \[\sqrt{4\bar{\kappa}^{2}+8\bar{\mu}^{2}}C_{5} =2\bar{C}_{2233},\] (48) \[\sqrt{4\bar{\kappa}^{2}+8\bar{\mu}^{2}}C_{6} =2(\bar{C}_{3333}-\bar{\mu}), \tag{49}\]
where \(\bar{\mu}\) is the isotropic shear modulus given by
\[\bar{\mu}=\frac{1}{8}(\bar{C}_{1111}+\bar{C}_{2222})-\frac{1}{4}\bar{C}_{1122 }+\frac{1}{2}\bar{C}_{3333}. \tag{50}\]
Analogous expressions appear in three dimensions and the number of isotropy constraints increases from 6 to 21 (Challis, 2009). It should be noted that the term \(\sqrt{4\bar{\kappa}^{2}+8\bar{\mu}^{2}}\) appears as a normalisation coefficient and is considered constant for the purpose of the shape differentiation. Furthermore, owing to the symmetry of the isotropy constraints the extended constraint shape sensitivities have a nullity of two. Our method handles these with no special treatment.
To visualise the behaviour of the isotropy constraints in the iteration history, we define the effective anisotropy \(\bar{\mathcal{A}}\) to be the square root of the sum of squares of the violations of these constraints. That is,
\[\bar{\mathcal{A}}(\Omega)=\sqrt{\sum C_{i}(\Omega)^{2}}. \tag{51}\]
For the two-dimensional problem we implement both the full set of isotropy constraints as well as the single constraint \(\bar{\mathcal{A}}=0\). We find that the SLP method struggles with the full set of constraints. For this reason we instead use \(\bar{\mathcal{A}}=0\) as the isotropy constraint for the SLP method. For the Hilbertian projection method we find that the full set of constraints is more effective. This matches previous literature (e.g., Challis et al, 2008; Challis, 2009).
Figure 4 shows the two-dimensional results for the Hilbertian projection method with the full set of constraints and single anisotropy measure constraint. We omit the starting structure as this is the same as previously (Fig. 2a). Figure 5 shows the three-dimensional results for the Hilbertian projection method and SLP. Table 3 shows a summary of the results for this example.
In two dimensions the Hilbertian projection method performs well for the full set of constraints and the single anisotropy measure constraint. The number of iterations is markedly lower when using the full set of constraints (78 vs 220). This is unsurprising because including the individual symmetry constraints enables the projection method to improve each constraint separately (Challis et al, 2008; Challis, 2009). The final optimisation values with the full set or single constraint are very similar, being 99.68% and 99.69% of the HS bound, while the final structures are geometrically identical under a periodic shift of half the cell edge length along each coordinate direction. The resulting structures match other classical results (Sec. 2.10.3, Bendsoe and Sigmund, 2004). In contrast, the SLP method does not manage to reduce the measure of anisotropy to zero and reaches the maximum number of iterations. However, the final structure obtained with SLP (Fig. 4c) is very close to those obtained with the Hilbertian projection method (Fig. 4a and 4b). We suspect that SLP fails to converge due to the difficulty associated with choosing the trust region constraints.
For the three-dimensional results, we find that the Hilbertian projection method with the full
Figure 4: Two-dimensional optimisation results for Example 2: maximum bulk modulus with isotropy. (a), (b), and (c) show the final structures for the Hilbertian projection method with the full set of constraints, single anisotropy constraint, and SLP with the single anisotropy constraint respectively, while (d), (e), and (f) show the respective iteration histories. The Hashin-Shtrikman upper bound for the bulk modulus is given by the dashed black line. Due to very little change between iteration 600 and 1000, the upper bound on the \(x\)-axis in (f) has been reduced to 700.
Figure 5: Three-dimensional optimisation results for Example 2: maximum bulk modulus with isotropy. (a) and (b) show the final structures for the Hilbertian projection method and SLP respectively, while (c) and (d) show the respective iteration histories. The Hashin-Shtrikman upper bound for the bulk modulus is given by the dashed black line.
set of constraints converges in 53 iterations to 99.12% of the HS bound. The final structure again matches known results from the literature (Sec. 2.10.3, Bendsoe and Sigmund, 2004). For SLP, the optimisation algorithm fails to converge in 1000 iterations and the final structure is clearly not optimal.
These results demonstrate that the Hilbertian projection method is able to handle a large number of constraints (22 in three dimensions) without requiring any parameter tuning of \(\alpha_{\min}\) or \(\lambda\). In addition, the optimisation histories for the Hilbertian projection method with the full set of isotropy constraints (Figs. 4d and 5c) are smooth and converge fairly quickly.
### Example 3: Auxetic materials
In this example we consider two-dimensional minimum volume auxetic materials with a Poisson's ratio of \(-0.5\). For the problem set up, we consider a bounding domain \(D=[0,1]^{2}\) that contains a solid phase and void phase. As previously, the solid phase is constructed from an isotropic material with \(E=1\) and \(\nu=0.3\).
To obtain an effective Poisson's ratio \(\bar{\nu}=-0.5\), we require that \(\bar{C}_{1111}=\bar{C}_{2222}\) and prescribe a value to \(\bar{C}_{1111}\) and \(\bar{C}_{1122}\) so that the effective Poisson's ratio \(\bar{\nu}\) given by
\[\bar{\nu}=\frac{\bar{C}_{1122}}{\bar{C}_{1111}} \tag{52}\]
gives the required \(\bar{\nu}=-0.5\).
It may be noted that this is similar to the approach taken by Vogiatzis et al (2017). However, they instead minimise a weighted sum of \(\bar{C}_{ijkl}\) and prescribed value subject to a volume constraint.
For the purpose of this example we choose \(\bar{C}_{1111}=0.1\), which results in \(\bar{C}_{1122}=-0.05\). The resulting optimisation problem is then
\[\min_{\Omega\in\mathcal{U}_{\mathrm{ad}}} \mathrm{Vol}(\Omega)\] (53) s.t. \[\bar{C}_{1111}(\Omega)=0.1,\] \[\bar{C}_{2222}(\Omega)=0.1,\] \[\bar{C}_{1122}(\Omega)=-0.05,\] \[\bar{C}_{1112}(\Omega)=0,\] \[\bar{C}_{2212}(\Omega)=0,\] \[a(\mathbf{u},\mathbf{v})=l(\mathbf{v}),\ \forall\mathbf{v}\in V.\]
For this example we use \(\gamma_{\max}=0.05\) and \(\alpha_{\min}^{2}=0.5\). This means that the optimiser favours improvement of the constraints over the objective and does not evolve the design too quickly, which avoids disconnecting the solid phase in the first few iterations.
Figure 6 shows the starting structure (Fig. 6a) and optimisation results (Fig. 6b and 6c) for this problem using the Hilbertian projection method. We use a periodic starting structure with sixteen equally spaced holes. The method converges in 61 iterations with a final volume of 0.3159 and Poisson's ratio of -0.4998. In contrast, SLP fails because the optimiser prioritises the objective, leading to a disconnected solid phase, and the algorithm is unable to recover.
### Example 4: Multi-phase materials
For our final example we consider two-dimensional multi-phase maximum bulk modulus problems with and without isotropy constraints. We consider a bounding domain \(D=[0,1]^{2}\) that contains two solid phases and void. The solid phase contained in \(\Omega_{2}\) is an isotropic material with \(E=1\) and \(\nu=0.3\), while \(\Omega_{3}\) contains isotropic material with \(E=1/2\) and \(\nu=0.3\). The void phase is contained in \(\Omega_{1}\) and \(\Omega_{4}\). As previously, most void phase is removed from the mesh (see Sec. 4) while
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c} Method & \(d\) & Iso. Const. Type & Fig. & \(\bar{\kappa}\) & HS bound & \(\bar{\mathcal{A}}\) & Vol & Iters. \\ \hline Proj. & 2 & Full set & 4a/4d & 0.1854 & 0.1860 & 0.0001 & 0.5000 & 78 \\ Proj. & 2 & Aniso. Meas. & 4b/4e & 0.1854 & 0.1860 & 0.0001 & 0.5000 & 220 \\ SLP & 2 & Aniso. Meas. & 4c/4f & 0.1856 & 0.1860 & 0.0363 & 0.5010 & 1000* \\ Proj. & 3 & Full set & 5a/5c & 0.2287 & 0.2308 & 0.0001 & 0.5000 & 53 \\ SLP & 3 & Aniso. Meas. & 5b/5d & 0.1295 & 0.2308 & 0.1348 & 0.5007 & 1000* \\ \end{tabular}
\end{table}
Table 3: Summary of optimisation results for Example 2: maximum bulk modulus with isotropy. The asterisk denotes failure to converge.
any material close to the interface is specified as weak material with \(E=10^{-3}\) and \(\nu=0.3\).
We consider two optimisation problems. The first is maximum bulk modulus subject to volume constraints on \(\Omega_{2}\) and \(\Omega_{3}\) given by
\[\begin{split}\min_{\mathcal{D}_{1},\mathcal{D}_{2}\in\mathcal{U} _{\text{ad}}}&-\bar{\kappa}(\mathcal{D}_{1},\mathcal{D}_{2})\\ \text{s.t.}&\text{Vol}_{\Omega_{2}}(\mathcal{D}_{1 },\mathcal{D}_{2})=1/4,\\ &\text{Vol}_{\Omega_{3}}(\mathcal{D}_{1},\mathcal{D}_{2})=1/4, \\ & a(\boldsymbol{u},\boldsymbol{v})=l(\boldsymbol{v}),\ \forall \boldsymbol{v}\in V.\end{split} \tag{54}\]
The second is maximum bulk modulus subject to macroscopic isotropy constraints and volume constraints on \(\Omega_{2}\) and \(\Omega_{3}\) given by
\[\begin{split}\min_{\mathcal{D}_{1},\mathcal{D}_{2}\in\mathcal{U} _{\text{ad}}}&-\bar{\kappa}(\mathcal{D}_{1},\mathcal{D}_{2})\\ \text{s.t.}&\text{Vol}_{\Omega_{2}}(\mathcal{D}_{1 },\mathcal{D}_{2})=1/4,\\ &\text{Vol}_{\Omega_{3}}(\mathcal{D}_{1},\mathcal{D}_{2})=1/4,\\ & C_{i}(\mathcal{D}_{1},\mathcal{D}_{2})=0,\ i=1,\ldots,6,\\ & a(\boldsymbol{u},\boldsymbol{v})=l(\boldsymbol{v}),\ \forall \boldsymbol{v}\in V,\end{split} \tag{55}\]
where the constraints \(C_{i}\) are as in Example 2. For SLP we use the single anisotropy constraint and for the Hilbertian projection method we use the full set of isotropy constraints. For these examples we use \(\gamma_{\text{max}}=0.05\) so that the optimiser does not evolve the designs too quickly. It should be noted that for the case of only volume constraints the extended constraint shape sensitivities have a nullity of one (in \(\boldsymbol{\theta}_{1}\)) and zero (in \(\boldsymbol{\theta}_{2}\)) owing to the structure of the shape derivatives for the volume constraints. For the case of volume and isotropy constraints the extended constraint shape sensitivities have a nullity of three (in \(\boldsymbol{\theta}_{1}\)) and two (in \(\boldsymbol{\theta}_{2}\)) due to the underlying symmetry of the shape derivatives for the volume and isotropy constraints. Our method handles these with no special treatment.
We initialise with two overlapping level set functions that give a starting structure completely comprised of the less stiff material (\(\Omega_{3}\)) and the void phase. Regions of stiffer material (\(\Omega_{2}\)) are readily generated during the optimisation via independent evolution of the two level set functions. We use a starting structure of four equally spaced holes for the optimisation problem without isotropy constraints and nine equally spaced holes for the problem with isotropy constraints.
Figures 7 and 8 show the optimisation results and history without and with isotropy constraints, respectively. We denote the stiff and less stiff material phase by blue and green, respectively, while the smooth interface is given by the dark green overlap. We include the Hashin-Shtrikman-Walpole (HSW) upper bound (Walpole, 1966) for this problem as the black dashed line in these figures. Table 4 shows a summary of the results.
For the case of no isotropy constraints, both the Hilbertian projection method and SLP converge to roughly 96% of the HSW bound, while the resulting structures (Fig. 7b and 7c) are geometrically similar apart from the thin interface that is present in the results for the Hilbertian projection method. The difference between our results and the theoretical upper bound is likely due to the use of the approximate formula for
Figure 6: Optimisation results for Example 3: auxetic materials. For the starting structure (a), (b) shows the final structure while (c) shows the iteration history for the Hilbertian projection method. The desired Poisson’s ratio of -0.5 is given by the dashed black line.
Figure 8: Optimisation results for Example 4: maximum bulk modulus multi-phase materials with isotropy constraints. For the starting structure (a), (b) and (c) show the final structures for the Hilbertian projection method and SLP respectively, while (d) and (e) show the respective iteration histories. The Hashin-Shtrikman-Walpole upper bound for the bulk modulus is given by the dashed black line. Note that the volume constraint for each phase is \(0.25\).
Figure 7: Optimisation results for Example 4: maximum bulk modulus multi-phase materials without isotropy constraints. For the starting structure (a), (b) and (c) show the final structures for the Hilbertian projection method and SLP respectively, while (d) and (e) show the respective iteration histories. The Hashin-Shtrikman-Walpole upper bound for the bulk modulus is given by the dashed black line. Note that the volume constraint for each phase is \(0.25\).
the shape derivative. Indeed, results from Allaire et al (2014) showed that the approximate formula yields slightly less optimal results than the true or Jacobian-free counterpart. The iteration history for the Hilbertian projection method (Fig. 7d) is fairly smooth, while the history for SLP (Fig. 7e) moves rapidly at the beginning of the optimisation to satisfy the volume constraints and increase the objective. As previously noted, this is likely due to the trust region constraints that, in this case, need to be chosen to be more conservative.
With the addition of isotropy constraints, we again find that the Hilbertian projection method works well without significant parameter tuning. In 148 iterations the optimisation algorithm is able to satisfy all constraints and obtain a final objective value that is 93.44% of the HSW bound. In contrast, SLP does not converge within 1000 iterations.
## 6 Conclusion
In this paper we have presented a Hilbertian extension of the projection method for constrained level set-based topology optimisation. At its core, the method relies on the Hilbertian extension-regularisation method in which a set of identification problems are solved over a Hilbert space \(H\) on \(D\) with inner product \(\langle\cdot,\cdot\rangle_{H}\). This procedure naturally extends shape sensitivities onto the bounding domain \(D\) and enriches them with the regularity of \(H\). For a constrained optimisation problem the projection method framework aims for the best first-order improvement of the objective in addition to first-order improvement of the constraints. These requirements for the projection method may then be reposed in the Hilbertian framework in terms of \(H\) and \(\langle\cdot,\cdot\rangle_{H}\). We satisfy these reposed requirements by defining the normal velocity of the level set function as a linear combination of the orthogonal projection operator applied to the extended objective shape sensitivity and basis functions for the extended constraint shape sensitivities. Owing to the Hilbertian extension-regularisation of shape sensitivities, the chosen normal velocity is already extended onto the bounding domain \(D\) and endowed with the regularity of \(H\).
To demonstrate the Hilbertian projection method for constrained level set-based topology optimisation we have solved several example microstructure optimisation problems with multiple constraints. We showed that the Hilbertian projection method successfully handled all of these optimisation problems with little-to-no tuning of the parameter \(\alpha_{\min}\) that controls the balance between improving the objective and constraints. The Hilbertian projection method also naturally handles linearly dependent constraint shape sensitivities. Such linear dependencies often appear in microstructure optimisation and multi-phase optimisation problems.
We found that our method performs well when compared to a Hilbertian sequential linear programming (SLP) method (Allaire et al, 2021). For problems only involving volume constraint(s), both methods converged to appropriate optimised microstructures. However, SLP did not successfully solve some of the more complex example optimisation problems. These results demonstrate the capacity of the Hilbertian projection method and likely other projection/null space methods (e.g., Barbarosie et al, 2020; Feppon et al, 2020) for solving constrained level set-based topology optimisation problems.
Applying multiple constraints is challenging in level set-based topology optimisation owing to the reliance on implicitly defined domains. Alongside other recent work (Barbarosie et al, 2020; Feppon et al, 2020; Allaire et al, 2021), our proposed Hilbertian projection method makes significant progress towards improving the capacity of conventional level set-based methods for constrained
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c} Method & Iso. Const. Type & Fig. & \(\bar{\kappa}\) & HSW bound & \(\bar{\mathcal{A}}\) & Vol\({}_{\Omega_{2}}\) & Vol\({}_{\Omega_{3}}\) & Iters. \\ \hline Proj. & N.A. & 7b/7d & 0.1469 & 0.1524 & N.A. & 0.2501 & 0.2500 & 61 \\ SLP & N.A. & 7c/7e & 0.1468 & 0.1524 & N.A. & 0.2500 & 0.2501 & 221 \\ Proj. & Full set & 8b/8d & 0.1424 & 0.1524 & 0.0001 & 0.2500 & 0.2500 & 148 \\ SLP & Aniso. Meas. & 8c/8e & 0.1254 & 0.1524 & 0.0236 & 0.2506 & 0.2497 & 1000* \\ \end{tabular}
\end{table}
Table 4: Summary of optimisation results for Example 4: two-dimensional maximum bulk modulus multi-phase materials. The asterisk denotes failure to converge.
topology optimisation. Furthermore, due to its generality, the Hilbertian projection method is not confined to microstructure optimisation. It may be applied to any topology optimisation problem to be solved in a level-set framework, including macroscopic or multi-physics problems. Inequality constraints could likely be incorporated into the method using slack variables. In addition, the method is not confined to Eulerian level set methods and can be readily applied to Lagrangian or body-fitted level set methods (e.g., Allaire et al., 2014). These extensions could be considered in future work.
Acknowledgments. This work was supported by the Australian Research Council through the Discovery Grant scheme (DP220102759). Computational resources used in this work were provided by the eResearch Office, Queensland University of Technology. The first author is supported by a QUT Postgraduate Research Award and a Supervisor Top-Up Scholarship. The authors would also like to thank the anonymous reviewers for their constructive comments that have resulted in improvements to the manuscript.
Conflict of interest. The authors have no competing interests to declare that are relevant to the content of this article.
Replication of Results. As this work is a part of a new project, we do not provide the source code. However, we do provide Algorithm 1 to help readers reproduce results. In addition, we provide all parameter values along with an iteration history for all problems. Interested readers can contact the authors for further information.
## Appendix A Proof of Lemma 2
We proceed via Cea's method (Cea, 1986): suppose we define the Lagrangian \(\mathcal{L}\) to be
\[\begin{split}\mathcal{L}&(\Omega,\tilde{\mathbf{u}}^{(ij)},\tilde{\mathbf{u}}^{(kl)})\\ &=\int_{\Omega}C_{pqrs}(\varepsilon_{pq}(\tilde{\mathbf{u}}^{(ij)})+\bar{\varepsilon}_{pq}^{(ij)})\bar{\varepsilon}_{rs}^{(kl)}\ \mathrm{d}\Omega\\ &+\int_{\Omega}C_{pqrs}(\varepsilon_{pq}(\tilde{\mathbf{u}}^{(ij)})+\bar{\varepsilon}_{pq}^{(ij)})\varepsilon_{rs}(\tilde{\mathbf{u}}^{(kl)})\ \mathrm{d}\Omega.\end{split}\] (A1)
We do not include auxiliary fields as it turns out that the problem is self-adjoint.
Under a variation \(\delta\tilde{\mathbf{u}}^{(ij)}\) of \(\tilde{\mathbf{u}}^{(ij)}\) with \(ij\neq kl\), the corresponding variation of \(\mathcal{L}\) is
\[\begin{split}\frac{\delta\mathcal{L}}{\delta\tilde{\mathbf{u}}^{(ij) }}&=\int_{\Omega}C_{pqrs}\delta\tilde{u}_{p,q}^{(ij)}\bar{ \varepsilon}_{rs}^{(kl)}\ \mathrm{d}\Omega\\ &\qquad\qquad+\int_{\Omega}C_{pqrs}\delta\tilde{u}_{p,q}^{(ij)} \varepsilon_{rs}(\tilde{\mathbf{u}}^{(kl)})\ \mathrm{d}\Omega\\ &=\int_{\Omega}C_{pqrs}(\bar{\varepsilon}_{rs}^{(kl)}+ \varepsilon_{rs}(\tilde{\mathbf{u}}^{(kl)}))\delta\tilde{u}_{p,q}^{(ij)}\ \mathrm{d}\Omega\\ &=\int_{\Omega}\sigma_{pq}^{(kl)}\delta\tilde{u}_{p,q}^{(ij)}\ \mathrm{d}\Omega\\ &=-\int_{\Omega}\sigma_{pq,q}^{(kl)}\delta\tilde{u}_{p}^{(ij)} \ \mathrm{d}\Omega+\int_{\partial\Omega}\sigma_{pq}^{(kl)}n_{q}\delta\tilde{u}_{p}^ {(ij)}\ \mathrm{d}\Gamma\end{split}\]
where symmetry of material coefficients has been used along with integration by parts. Requiring \(\mathcal{L}\) to be stationary gives the state equations for stress under loading \(\bar{\varepsilon}_{rs}^{kl}\). In particular, allowing arbitrary \(\delta\tilde{u}_{p}^{(ij)}\) within \(\Omega\) gives \(\sigma_{pq,q}^{(kl)}=0\) in \(\Omega\) and allowing arbitrary \(\delta\tilde{u}_{p}^{(ij)}\) on \(\partial\Omega\) gives \(\sigma_{pq}^{(kl)}n_{q}=0\) on \(\partial\Omega\).
Next, under a variation \(\delta\tilde{\mathbf{u}}^{(kl)}\) of \(\tilde{\mathbf{u}}^{(kl)}\) with \(ij\neq kl\), the corresponding variation of \(\mathcal{L}\) is
\[\begin{split}&\frac{\delta\mathcal{L}}{\delta\tilde{\mathbf{u}}^{(kl) }}=\int_{\Omega}C_{pqrs}(\varepsilon_{pq}(\tilde{\mathbf{u}}^{(ij)})+\bar{ \varepsilon}_{pq}^{(ij)})\delta\tilde{u}_{r,s}^{(kl)}\ \mathrm{d}\Omega\\ &=\int_{\Omega}\sigma_{rs}^{(ij)}\delta\tilde{u}_{r,s}^{(kl)}\ \mathrm{d} \Omega\\ &=-\int_{\Omega}\sigma_{rs,s}^{(ij)}\delta\tilde{u}_{r}^{(kl)}\ \mathrm{d} \Omega+\int_{\partial\Omega}\sigma_{rs}^{(ij)}n_{s}\delta\tilde{u}_{r}^{(kl)} \ \mathrm{d}\Gamma\end{split}\]
where symmetry of material coefficients has been used along with integration by parts. Requiring \(\mathcal{L}\) to be stationary gives the state equations for stress under loading \(\bar{\varepsilon}_{pq}^{(ij)}\). It should be noted that when \(ij=kl\), we can apply the product rule which results in the state equations for stress under a constant macroscopic strain \(\bar{\varepsilon}_{pq}^{(ij)}\).
Together, the above results give the state equations for the fields \(\tilde{\mathbf{u}}^{(ij)}\) and \(\tilde{\mathbf{u}}^{(kl)}\), as required.
We also require that the Lagrangian \(\mathcal{L}\) equals the objective at the solution to the state equations. Indeed, at the solution to the state equations we obtain
\[\mathcal{L}(\Omega)=\bar{C}_{ijkl}(\Omega),\] (A2)
as required.
The shape derivative of \(\mathcal{L}\) at fixed \(\tilde{\mathbf{u}}^{(ij)}\) and \(\tilde{\mathbf{u}}^{(kl)}\) can then be calculated using Lemma 1 to be
\[\begin{split} C^{\prime}(\Omega)(\mathbf{\theta})&=\mathcal{ L}^{\prime}(\Omega)(\mathbf{\theta})|_{\tilde{\mathbf{u}}^{(ij)},\tilde{\mathbf{u}}^{(kl)}}\\ &=\int_{\partial\Omega}C_{pqrs}(\varepsilon_{pq}(\tilde{\mathbf{u}}^ {(ij)})+\bar{\varepsilon}^{(ij)}_{pq})\\ &\quad\times(\varepsilon_{rs}(\tilde{\mathbf{u}}^{(kl)})+\bar{ \varepsilon}^{(kl)}_{rs})\ \mathbf{\theta}\cdot\mathbf{n}\ \mathrm{d}\Gamma.\end{split}\] (A3)
## Appendix B Proof of Lemma 3
Similarly to Appendix A, suppose we define the Lagrangian \(\mathcal{L}\) to be
\[\begin{split}\mathcal{L}(\mathcal{D}_{1}&,\mathcal{ D}_{2},\tilde{\mathbf{u}}^{(ij)},\tilde{\mathbf{u}}^{(kl)})\\ &=\int_{D}C_{pqrs}(\varepsilon_{pq}(\tilde{\mathbf{u}}^{(ij)})+\bar{ \varepsilon}^{(ij)}_{pq})\bar{\varepsilon}^{(kl)}_{rs}\ \mathrm{d}\Omega\\ &+\int_{D}C_{pqrs}(\varepsilon_{pq}(\tilde{\mathbf{u}}^{(ij)})+\bar{ \varepsilon}^{(ij)}_{pq})\varepsilon_{rs}(\tilde{\mathbf{u}}^{(kl)})\ \mathrm{d}\Omega.\end{split}\] (B4)
where \(C_{pqrs}=C_{pqrs}(d_{\mathcal{D}_{1}},d_{\mathcal{D}_{2}})\). As previously, stationarity of \(\mathcal{L}\) under variations \(\delta\tilde{\mathbf{u}}^{(ij)}\) and \(\delta\tilde{\mathbf{u}}^{(kl)}\) retrieves the state equations and, at the solution to the state equations, \(\mathcal{L}(\Omega)=\bar{C}_{ijkl}(\Omega)\). The given Lagrangian therefore satisfies the requirements for Cea's method.
Using differentiability of the signed distance function \(d_{\mathcal{D}_{i}}\)(Lemma 2.4, Proposition 2.5, Allaire et al, 2014) and the chain rule, we have
\[\begin{split}\bar{C}^{\prime}_{ijkl}&(\mathcal{D}_{ 1},\mathcal{D}_{2})(\mathbf{\theta}_{1})\\ &=\int_{D}d^{\prime}_{\mathcal{D}_{1}}(\mathbf{\theta}_{1})\frac{ \partial C_{pqrs}}{\partial d_{\mathcal{D}_{1}}}(\varepsilon_{pq}(\tilde{\bm {u}}^{(ij)})+\bar{\varepsilon}^{(ij)}_{pq})\\ &\quad\quad\quad\quad\times(\varepsilon_{rs}(\tilde{\mathbf{u}}^{(kl )})+\bar{\varepsilon}^{(kl)}_{rs})\ \mathrm{d}\Omega\end{split}\] (B5)
where \(d^{\prime}_{\mathcal{D}_{i}}\) is the shape derivative of the signed distance function for \(\mathbf{x}\in D\setminus\Sigma\) given by
\[d^{\prime}_{\mathcal{D}_{1}}(\mathbf{x})=-\mathbf{\theta}(p_{\partial\mathcal{D}_{1}} (\mathbf{x}))\cdot\mathbf{n}(p_{\partial\mathcal{D}_{1}}(\mathbf{x}))\] (B6)
where \(p_{\partial\mathcal{D}_{1}}(\mathbf{x})\) is the projection of a point \(x\) onto the boundary \(\partial\mathcal{D}_{1}\) and \(\Sigma\) is the set of points in the skeleton of \(\partial\mathcal{D}_{1}\)(Definition 2.3, Allaire et al, 2014).
Using this and the Jacobian-free coarea formula (Corollary 2.13, Equation 2.15, Allaire et al, 2014) results in
\[\begin{split}&\bar{C}^{\prime}_{ijkl}(\mathcal{D}_{1},\mathcal{D}_{ 2})(\mathbf{\theta}_{1})\\ &=-\int_{\partial\mathcal{D}_{1}}\mathbf{\theta}_{1}\cdot\mathbf{n}\int_{ \operatorname{ray}_{\partial\mathcal{D}_{1}}(\mathbf{x})\cap D}H^{\prime}_{\eta}(d _{\mathcal{D}_{1}})\frac{\partial C_{pqrs}}{\partial g}\\ &\quad\times(\varepsilon_{pq}(\tilde{\mathbf{u}}^{(ij)})+\bar{ \varepsilon}^{(ij)}_{pq})(\varepsilon_{rs}(\tilde{\mathbf{u}}^{(kl)})+\bar{ \varepsilon}^{(kl)}_{rs})\ \mathrm{d}z\ \mathrm{d}\Gamma\end{split}\] (B7)
where \(g(x)=H_{\eta}(x)\).
Finally, the support of \(H^{\prime}_{\eta}(x)\) is \(|x|<2\eta\), so the integral over \(\operatorname{ray}_{\partial\mathcal{D}_{1}}(\mathbf{x})\cap D\) is restricted to a tubular region about \(\partial\mathcal{D}_{1}\)(Allaire et al, 2014). Therefore, for small \(\eta\), we may assume that
\[\varepsilon_{pq}(\tilde{\mathbf{u}}^{(ij)})(\mathbf{z})\approx\varepsilon_{pq}(\tilde{ \mathbf{u}}^{(ij)})(\mathbf{y})\] (B8)
and
\[d_{\mathcal{D}_{2}}(\mathbf{z})\approx d_{\mathcal{D}_{2}}(\mathbf{y})\] (B9)
where \(\mathbf{z}\in\operatorname{ray}_{\partial\mathcal{D}_{1}}(\mathbf{x})\cap D\) and \(\mathbf{y}\in\partial\mathcal{D}_{1}\). In addition, the derivative \(\frac{\partial C_{pqrs}}{\partial g}(H_{\eta}(d_{\mathcal{D}_{1}}))\) is independent of \(d_{\mathcal{D}_{1}}\) by Equation 23. Equation B7 can therefore be written as
\[\begin{split}&\bar{C}^{\prime}_{ijkl}(\mathcal{D}_{1},\mathcal{D}_{ 2})(\mathbf{\theta}_{1})\\ &\quad\approx-\int_{\partial\mathcal{D}_{1}}\mathbf{\theta}_{1}\cdot \mathbf{n}\ \frac{\partial C_{pqrs}}{\partial g}\left(\varepsilon_{pq}(\tilde{\mathbf{u}}^{(ij)})+ \bar{\varepsilon}^{(ij)}_{pq}\right)\\ &\quad\quad\quad\quad\times\left(\varepsilon_{rs}(\tilde{\mathbf{u}}^{( kl)})+\bar{\varepsilon}^{(kl)}_{rs}\right)\\ &\quad\quad\quad\quad\times\left[\int_{\operatorname{ray}_{ \partial\mathcal{D}_{1}}(\mathbf{x})\cap D}H^{\prime}_{\eta}(d_{\mathcal{D}_{1}})\ \mathrm{d}z\right]\ \mathrm{d}\Gamma\end{split}\] (B10)
Finally, it can be shown using elementary vector calculus that
\[\int_{\operatorname{ray}_{\partial\mathcal{D}_{1}}(\mathbf{x})\cap D}H^{\prime}_{ \eta}(d_{\mathcal{D}_{1}}(z))\ \mathrm{d}z=1,\] (B11)
which completes this portion of the proof.
For Equation 25, (Corollary 2.8, Allaire et al, 2014) and the chain rule gives
\[\begin{split}&\mathrm{Vol}^{\prime}_{\Omega_{1}}(\mathcal{D}_{1}, \mathcal{D}_{2})(\mathbf{\theta}_{1})\\ &=\int_{D}d^{\prime}_{\mathcal{D}_{1}}(\mathbf{\theta}_{1})H^{\prime}_{ \eta}(d_{\mathcal{D}_{1}})H_{\eta}(d_{\mathcal{D}_{2}})\ \mathrm{d}\Omega.\end{split}\] (B12)
As previously, shape differentiability of \(d_{\mathcal{D}_{i}}\) along with the Jacobian-free coarea formula gives
\[\mathrm{Vol}^{\prime}_{\Omega_{1}}(\mathcal{D}_{1},\mathcal{D}_{2})( \boldsymbol{\theta}_{1})=\int_{\partial\mathcal{D}_{1}}\boldsymbol{\theta}_{1} \cdot\boldsymbol{n}\] \[\times\left[\int_{\mathrm{ray}_{\partial\mathcal{D}_{1}}( \boldsymbol{x})\cap D}H^{\prime}_{\eta}(d_{\mathcal{D}_{1}})H_{\eta}(d_{ \mathcal{D}_{2}})\ \mathrm{d}s\right]\ \mathrm{d}\Gamma.\] (B13)
The prior approximations then give the result
\[\mathrm{Vol}^{\prime}_{\Omega_{1}}(\mathcal{D}_{1},\mathcal{D}_{2})( \boldsymbol{\theta}_{1})\approx\int_{\partial\mathcal{D}_{1}}\boldsymbol{ \theta}_{1}\cdot\boldsymbol{n}\ H_{\eta}(d_{\mathcal{D}_{2}})\mathrm{d}\Gamma.\] (B14)
which concludes the proof.
|
2310.16616 | Context Does Matter: End-to-end Panoptic Narrative Grounding with
Deformable Attention Refined Matching Network | Panoptic Narrative Grounding (PNG) is an emerging visual grounding task that
aims to segment visual objects in images based on dense narrative captions. The
current state-of-the-art methods first refine the representation of phrase by
aggregating the most similar $k$ image pixels, and then match the refined text
representations with the pixels of the image feature map to generate
segmentation results. However, simply aggregating sampled image features
ignores the contextual information, which can lead to phrase-to-pixel
mis-match. In this paper, we propose a novel learning framework called
Deformable Attention Refined Matching Network (DRMN), whose main idea is to
bring deformable attention in the iterative process of feature learning to
incorporate essential context information of different scales of pixels. DRMN
iteratively re-encodes pixels with the deformable attention network after
updating the feature representation of the top-$k$ most similar pixels. As
such, DRMN can lead to accurate yet discriminative pixel representations,
purify the top-$k$ most similar pixels, and consequently alleviate the
phrase-to-pixel mis-match substantially. Experimental results show that our
novel design significantly improves the matching results between text phrases
and image pixels. Concretely, DRMN achieves new state-of-the-art performance on
the PNG benchmark with an average recall improvement 3.5%. The codes are
available in: https://github.com/JaMesLiMers/DRMN. | Yiming Lin, Xiao-Bo Jin, Qiufeng Wang, Kaizhu Huang | 2023-10-25T13:12:39Z | http://arxiv.org/abs/2310.16616v1 | Context Does Matter: End-to-end Panoptic Narrative Grounding with Deformable Attention Refined Matching Network
###### Abstract
Panoptic Narrative Grounding (PNG) is an emerging visual grounding task that aims to segment visual objects in images based on dense narrative captions. The current state-of-the-art methods first refine the phrase representations by aggregating the most similar \(k\) image pixels, and then match the refined text representations with the pixels of the image feature map to generate segmentation results. However, simply aggregating sampled image features ignores the contextual information, which can lead to phrase-to-pixel mis-match. In this paper, we propose a novel learning framework called Deformable Attention Refined Matching Network (DRMN), whose main idea is to bring deformable attention in the iterative process of feature learning to incorporate essential context information of different scales of pixels. DRMN iteratively re-encodes pixels with the deformable attention network after updating the feature representation of the top-\(k\) most similar pixels. As such, DRMN can lead to accurate yet discriminative pixel representations, purify the top-\(k\) most similar pixels, and consequently alleviate the phrase-to-pixel mis-match substantially. Experimental results show that our novel design significantly improves the matching results between text phrases and image pixels. Concretely, DRMN achieves new state-of-the-art performance on the PNG benchmark with an average recall improvement of 3.5%. The codes are available in: [https://github.com/JaMesLiMers/DRMN](https://github.com/JaMesLiMers/DRMN).
Visual Grounding, Panoptic Narrative Grounding, One-stage Method
## I Introduction
Panoptic Narrative Grounding (PNG) [1], an emerging visual grounding task, has recently drawn great attention in data mining and computer vision, with applications including grounded context recognition [2], visual question answering [3], and visual-language model pre-training [4]. Given an image and its associated dense narrative caption, the goal of PNG is to segment the things and stuff regions corresponding to the visual objects mentioned in the caption (see illustration in Fig. 1). In contrast to other related tasks, PNG extends the grounding range from the bounding boxes of foreground classes (called "things") to segmentation masks covering both foreground and background classes (called "things" and "stuff"), thus defining the finest-grained alignment between multiple noun phrases and segments. A detailed comparison between PNG and other related vision-based tasks can be seen in Sect. II.
In general, there are two families of methods for PNG. The first type typically exploits a two-stage pipeline [1], which performs matching by computing an affinity matrix between object proposals (extracted by off-the-shelf models) and noun phrases. As such, the object proposal model, i.e., the off-the-shelf model, limits the performance ceiling. On the other hand, one-stage or end-to-end methods [5, 6] alleviate this problem by directly generating a response map between all noun phrases and image pixels. To better fuse information from different modalities, Ding et al. [5] propose a language-compatible pixel aggregation (LCPA) module to aggregate the most compatible image features into the noun phrases. Namely, taking each noun phrase as a query feature, LCPA samples the top-\(k\) most compatible image features, which are then used as key and value features. Finally, multi-head cross-modal attention is adopted to aggregate the visual features.
Despite its promising performance, LCPA simply aggregates the sampled image features without taking the contextual information into account, which could lead to serious phrase-to-pixel mis-match. Concretely, LCPA tends to push the phrase feature towards the center of the top-\(k\) sampled image features. This strategy works well when the top-\(k\) sampled features are all highly similar to the target visual object.
Fig. 1: Illustration of the PNG problem: Given an image (left) and corresponding caption (middle), the goal is to generate a panoptic segmentation (right) based on all visual objects contained in the caption (i.e., labeling each object and its associated segmented region with the same color).
However, such a strategy may also inevitably introduce unrelated image pixels. As illustrated by the hard example in Fig. 2, pixels of an unrelated object (for "feet") are mixed with pixels of the related object (for "ball") and dominate the top-\(k\) sampled features with high similarity, thus inducing a serious mis-match. To alleviate this problem, we argue that relevant context information is crucial for differentiating and purifying the top-\(k\) sampled points. Essentially, integrating context information would enable more accurate and discriminative pixel representations, since related pixels enjoy similar context whilst unrelated ones tend to have distinctive context. In other words, integrating context information could sharpen the representations of pixels from different visual objects, thereby providing the potential to improve segmentation performance.
Motivated by the above observations and inspired by the recent object detection method [7], we design a novel deformable attention mechanism to iteratively extract essential pixel contextual information from multi-scale feature maps, resulting in a simple yet effective end-to-end model, called Deformable Attention Refined Matching Network (DRMN). An overview of its framework is shown in Fig. 3. Similar to the Transformer, DRMN is a multi-scale encoding-decoding method that offers a full range of context information at different scales, including both global and local information. For feature extraction, we employ deformable attention to encode image features at different levels, which generates a cross-fused multi-scale word-pixel matching matrix to obtain initial word-pixel matching results. In addition, in the feature aggregation stage, we further incorporate word embedding representations into pixel encoding representations. Specifically, we follow insights from the DETR decoder [8] to refine the sampled features: for each word vector, our model queries its nearest \(k\) pixels and applies an attention mechanism to encode them after updating the vector representations of these pixels with the word vector.
Our contributions are four-fold:
* From the perspective of multimodal information fusion, we design a deformable attention model with multi-scale encoding and decoding functions in the aggregation process of pixel features to better encode the context information around the pixel, effectively alleviating the phrase-to-pixel mis-match problem.
* From the perspective of model structure, unlike the existing DETR-based models [8, 9] that directly leverage multimodal transformers for highly intertwined feature aggregation, our novel design inherits insights from DETR feature aggregation while offering a sparser and more interpretable way of exploiting DETR for vision-based tasks.
* From an algorithmic perspective, we simplify the multi-round pixel feature refinement process into an iterative process of two subproblems: the fuzzy K-means clustering subproblem and the multi-objective assignment subproblem, the latter of which can be efficiently solved
Fig. 2: Insight of our proposed method. The upper part illustrates the limitation of LCPA with a hard example. We introduce the essential context information in the multi-scale feature map as a cue to refine the sampled top-\(k\) image features. By sharpening the representations of points from different visual objects, our method can purify the sampled points by filtering out irrelevant visual objects, which further enhances the final segmentation result.
by online gradient descent.
* From an experimental point of view, the results of multiple categories and the overall results on the public PNG benchmark show the superiority of our method compared to previous methods, where the average recall rate is 3.5% higher than the second-ranked method.
## II Related Work
In this section, we overview PNG in contrast to different related vision-based tasks. Overall, Table I shows the comparison granularities of related vision-based tasks, among which the PNG task provides the most fine-grained alignment between different types of nouns and segmentation.
### _Visual Grounding with Bounding Box Regression_
The goal of Referent Expression Comprehension (REC) task is to predict the corresponding bounding box in an image for a given referring expression. Current methods can be categorized into two-stage and one-stage approaches. Two-stage methods [13] first propose bounding box proposals in the image, then match the proposal-referring expression pairs. Inspired by object detection techniques [14], the one-stage methods [15] directly generate results based on the input textual information without explicit matching. Recently, some methods have explored multi-modal pre-training models [16] in REC [17], taking architectures similar to BERT, to obtain joint representations of images and texts.
The phrase grounding task aims to find the corresponding bounding box in an image for each of multiple noun phrases mentioned in an input caption. Early methods [11, 18] adopted representation learning, which first projects region proposal and phrase embeddings into a common subspace and then learns the semantic similarity between them. In recent years, researchers have explored various methods [19, 20, 21] for fusing and learning multi-modal features. It is worth mentioning that recent large-scale visual-language pre-training models [22, 23] have adopted a weakly supervised phrase grounding loss [24] to align image-noun phrase pairs.
Recently, some methods [8, 9, 25] have modified transformer-based object detection frameworks to address the aforementioned bounding box regression problems. TransVG [25] first proposed a pure transformer framework for visual grounding tasks. Furthermore, some methods [8, 9] drew inspiration from the DETR object detection framework. MDETR [8] employed a transformer encoder-decoder structure, where the transformer simultaneously extracted features from both the image and text in the encoder, and introduced QA-specific queries in the decoder for decoding visual grounding-related tasks. Dynamic MDETR [9] adopted the idea of deformable attention in the decoder to reduce computation. It is worth noting that our approach differs from the aforementioned methods. Instead of using a transformer to simultaneously encode image and text information, we handle the multi-modal feature interaction in the decoder through top-\(k\) sampling. In particular, in the decoder, we consider the features of the top-\(k\) image positions as object queries, and design a deformable attention mechanism to extract object-relevant features, which are then aggregated into the textual features.
### _Visual Grounding with Segmentation_
The task of Referent Expression Segmentation (RES) is to generate a segmentation map of the referred object according to the input referring expression. The first proposed method [12] on this task is a one-stage model that first concatenated textual features and global image feature, then decoded the segmentation mask through deconvolution layers. Recently, inspired by multi-modal transformers, various fine-grained modeling approaches [26, 27] have been proposed to facilitate interactions between different modalities. For example, SHNET [28] concatenated textual features with different levels of image features as joint input of the transformer, then adopted language features to guide the information exchange between different levels of image features. The LAVT model [27] developed the PWAM module, which directly used attention to expand textual information to the size of the image feature map for pixel-word feature fusion.
PNG aims to segment corresponding things or stuff in an image based on the multiple noun phrases mentioned in the image caption. This task was initially proposed with a two-stage method by Gonzalez et al. [1], along with a dataset. They extracted segmentation proposals from off-the-shelf models which were matched with the extracted noun phrase features. Later, some work explored the one-stage paradigm. For example, PPMN [5] achieved feature fusion between different modalities through a sampling strategy end-to-end. EPNG [6] further optimized the inference speed and achieved real-time segmentation effects while sacrificing little accuracy.
## III Main Method
In this section, we first introduce the process of feature extraction for images and text (III-A) together with the deformable layer used to encode the multi-scale image features (III-B). Then, we describe how initial segmentation results are generated (III-C). Subsequently, we present our proposed Multi-round Visual-Language Aggregation Module, which selectively aggregates image features into textual features to enhance the model's performance (III-D). Finally, we detail the loss function and introduce the training process of the model (III-E). The overall workflow of our
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline \begin{tabular}{c} **Grounding** \\ **Task** \\ \end{tabular} & \begin{tabular}{c} **Language** \\ _Granularity_ \\ \end{tabular} & \begin{tabular}{c} **Visual** \\ _Granularity_ \\ \end{tabular} &
\begin{tabular}{c} **Semantic** \\ _Granularity_ \\ \end{tabular} \\ \hline REC [10] & Short phrase & Bounding box & Things \\ \hline PG [11] & Noun phrase & Bounding box & Mainly Things \\ \hline RES [12] & Short phrase & Segmentation & Things \\ \hline PNG [1] & Noun phrase & Segmentation & Things + Stuff \\ \hline \end{tabular}
\end{table} TABLE I: Comparison of different granularities of related vision-based tasks. Considering the typical segmentation categories in computer vision tasks between things (countable objects) and stuff (amorphous regions of similar texture), the datasets of the other tasks mainly focus on things categories.
model is shown in Fig. 3 and the pseudo code of the algorithm is shown in Alg. 1.
### _Feature Extraction_
In the feature extraction stage, we can employ off-the-shelf methods to extract features from visual and linguistic modalities.
For the text modality, we leverage BERT to extract features for each word in the image caption. Specifically, we focus on extracting features from the noun phrase part \(T\) of the caption, \(\mathcal{G}=\text{BERT}(T)\in R^{n\times d}\), where \(n\) represents the maximum number of words over all input noun phrases and \(d\) denotes the dimensionality of the textual feature embedding representation.
As for image modality, given any image \(I\), we use ResNet as the backbone to extract multi-scale feature pyramid \(S=\{F_{2},F_{3},F_{4},F_{5}\}\) such as
\[F_{l}=\text{flatten}(\text{ResNet}(I,l)),\quad l=2,3,4,5. \tag{1}\]
Here \(I\) is an RGB image with height \(h\) and width \(w\) and the \(l\)-th scale output will be a matrix of size \(\frac{hw}{4^{l}}\times c\), where \(c\) is the number of channels in the output map.
In the subsequent feature extraction stage, we first normalize the \(x\) and \(y\) coordinates of all points in the feature map of each scale to the range \([0,1]\). We then grid them to obtain the reference point matrix of the same size, \(p_{l}=\text{flatten}(\text{grid}(M_{l}))\in R^{\frac{hw}{4^{l}}\times 2}\), where the size of the tensor \(M_{l}\) is \(\frac{h}{2^{l}}\times\frac{w}{2^{l}}\times 2\) and its elements are \(M_{l}(i,j)=(i,j)\).
Meanwhile, we add the feature maps of multiple scales obtained by the feature pyramid and their position codes and straighten them to get the matrix of \(\frac{hw}{4^{l}}\times c\)
\[\hat{F}_{l}=F_{l}+\text{pos}(F_{l}),\quad l=2,3,4,5, \tag{2}\]
where \(\text{pos}(\cdot)\) represents the positional encoding function. We take these feature representations and concatenate them row by row into a matrix as the input of the DeformLayer function in the initial stage
\[\hat{F}=\text{catrow}(\{\hat{F}_{l}\}). \tag{3}\]
### _Deformable Layer_
In order to better integrate feature information at different scales, we use multiple deformable attention layers to aggregate information from various levels of the feature pyramid according to the position and characteristics of the input feature points, where the input of deformable attention is the query features \(Q=\hat{F}\), the reference positions \(P=\{p_{2},p_{3},p_{4},p_{5}\}\) corresponding to the feature point in the image and multi-scale feature map \(S=\{F_{2},F_{3},F_{4},F_{5}\}\).
Similar to Transformer, on multiple feature maps of the pyramid, each feature point will be re-expressed as a linear combination of other feature points. The difference is that its value \(V\) is based on the feature map with a bilinear sampling operation instead of itself. Moreover, both the sampling offset
Fig. 3: Overview of our model. We integrate essential image context information in feature extraction and multi-round feature aggregation phases with deformable attention. First, we utilize BERT to encode textual features and employ deformable attention to encode multi-scale image feature maps. Furthermore, we generate initial image-text matching results based on textual and image features. Finally, in the multi-round feature aggregation, we aggregate the top-\(k\) image features into text feature based on the matching results. The model utilizes deformable attention to refine sampled image features further, then aggregates the refined features into textual features through a cross-attention mechanism to generate improved matching results.
and the self-attention correlation coefficient depend on the query \(Q\), specifically described as follows
\[\Delta p_{l} = \hat{F}W_{l}^{p} \tag{4}\] \[V_{l} = \phi_{\text{blin}}(F_{l},p_{l}+\Delta p_{l})\] (5) \[V = \text{catrow}(\{V_{l}\})\] (6) \[\hat{V} = \text{softmax}(\hat{F}W^{a})VW^{v}, \tag{7}\]
where \(l=2,3,4,5\) means transformation on multiple scales, \(W_{l}^{p}\), \(W^{v}\) and \(W^{a}\) represent the linear mapping to be learned, \(\phi_{\text{blin}}(F_{l},p_{l}+\Delta p_{l})\) indicates that pixels are sampled by bilinear interpolation on the position \(p_{l}+\Delta p_{l}\) of the feature map \(F_{l}\). Similarly, multiple heads are introduced to obtain image feature representations with multiple attentions, and these representations are concatenated and linearly mapped to obtain a multi-scale refined representation \(\mathcal{V}\).
Subsequently, we can construct a deformable coding layer with deformable attention representation
\[\mathcal{F}=\text{FFN}(\text{norm}(\mathcal{V}+\text{dropout}(\mathcal{V}))), \tag{8}\]
where FFN, norm and dropout indicate feedforward network layer, normalization layer, and dropout layer respectively.
For convenience, we represent the above whole process as the following function
\[\mathcal{F}=\text{DeformLayer}(\hat{F},\{F_{l}\},\{p_{l}\}). \tag{9}\]
### _Image and Text Matching_
Below we describe how to match text embedding \(\mathcal{G}\) with multiple feature maps \(\mathcal{F}_{l}\) of different scales to obtain multiple similarity matrices.
In the initial stage, we concatenate the multi-scale feature maps by row as the input of the DeformLayer function to get their attention representations, and then restore them into multiple feature maps \(\mathcal{F}_{l}\) (lines 8-12 in Alg. 1). Then, using the third layer as a benchmark, all the other layers are down-sampled or up-sampled to the same size as the feature map of the third layer
\[\bar{F}_{l}=\phi_{\text{sampling}}(\text{reshape}(\mathcal{F}_{l}),2^{l-3}), \quad l=2,3,4,5. \tag{10}\]
In this way, we can fuse the feature map output with the same scale, and convert the fused image output into vectors to facilitate matching with the text vector (line 13 in Alg. 1)
\[\mathbb{F}=\text{vect}\left(\frac{1}{4}\sum_{l=2}^{5}\bar{F}_{l}\right), \tag{11}\]
where \(\text{vect}(\cdot)\) means reshaping a three-dimensional tensor \(\mathbb{F}\in R^{(h/8)\times(w/8)\times c}\) into a two-dimensional matrix \(\mathbb{F}\in R^{(hw/8^{2})\times c}\).
Now, we map the representation \(\mathcal{G}\) of phrases to the feature space of pixels to compute their similarity matrix (line 15 in Alg. 1)
\[\hat{\mathcal{G}}=\mathcal{G}V^{g},\quad H=\text{sigmoid}\left(\hat{ \mathcal{G}}\mathbb{F}^{T}\right), \tag{12}\]
where \(V^{g}\) is a projection matrix and sigmoid is the sigmoid function.
It is worth noting that during the iterative process, we directly use the latest representation \(\hat{F}\) of the pixel to calculate the similarity matrix (line 25 in Alg. 1)
\[H=\text{sigmoid}\left(\hat{\mathcal{G}}\hat{F}^{T}\right). \tag{13}\]
### _Multi-round Feature Aggregation Module_
```
1:Input: Image \(I\) and caption \(T\)
2:\(\mathcal{G}=\text{BERT}(T)\)
3:for\(l=2,3,4,5\)do
4:\(p_{l}=\text{flatten}(\text{grid}(M_{l}))\)
5:\(F_{l}=\text{flatten}(\text{ResNet}(I,l))\)
6:\(\hat{F}_{l}=F_{l}+\text{pos}(F_{l})\)
7:endfor
8:\(\hat{F}=\text{catrow}(\{\hat{F}_{l}\})\)
9:for\(t=1,2,\cdots,T\)do
10:\(\hat{F}=\text{DeformLayer}(\hat{F},\{F_{l}\},\{p_{l}\})\)
11:endfor
12:\(\{\mathcal{F}_{l}\}=\text{splitrow}(\hat{F})\)
13:\(\hat{F}=\text{avg}(\{\mathcal{F}_{l}\})\)
14:\(\hat{\mathcal{G}}=\mathcal{G}V^{g}\)
15:\(H=\text{sigmoid}(\hat{\mathcal{G}}\hat{F}^{T})\)
16:\(\mathcal{H}=[]\)
17:for\(i=1,2,\cdots,I\)do
18:\(S=\text{topk}(H,k)\)
19:for\(j=1,2,\cdots,n\)do
20:\(s=S[j,:]\)
21:\(\hat{F}[s,:]=\hat{F}[s,:]+\text{pos}(\hat{F}[s,:])+\hat{\mathcal{G}}[j,:]\)
22:\(\hat{F}[s,:]=\text{DeformLayer}(\hat{F}[s,:],\{\mathcal{F}_{l}\},\{p_{l}\})\)
23:\(\hat{\mathcal{G}}[j,:]=\text{CrossAttention}(\hat{\mathcal{G}}[j,:],\hat{F}[s,:], \hat{F}[s,:])\)
24:endfor
25:\(H=\text{sigmoid}(\hat{\mathcal{G}}\hat{F}^{T})\)
26:\(\mathcal{H}.\text{append}(\phi_{sampling}(H,2^{-3}))\)
27:endfor
28:return\(\mathcal{H}\)
```
**Algorithm 1** Multi-round Feature Aggregation
Alg. 1 shows the entire process of our multi-round feature aggregation: initially establish the relationship between text and pixels, and then refine these relationships through continuous iteration.
First, we select \(k\) pixels with the highest similarity for each row on the multi-scale similarity matrix \(S=\text{topk}(H,k)\), where \(S\) is a matrix of dimension \(n\times k\).
In the refinement phase (lines 17-27), we first update the embedded representations of the \(k\) nearest image pixels each time with the text representation obtained from the previous iteration (lines 20-21 in Alg. 1). Next, we apply **DeformLayer** again to regenerate the multi-scale representations of the pixels at the top-\(k\) image positions (line 22 in Alg. 1). Notably, we re-use the multi-scale output \(\{\mathcal{F}_{l}\}\) of **DeformLayer** from the initial stage as its input.
Subsequently, we update the representation of noun phrases using the weighted sum of the current top-\(k\) image features
(line 23 in Alg. 1). Below we will describe its implementation in detail.
Given a query \(Q\), a key \(K\) and a value \(V\), through the attention we can get \(Q\)'s updated weighted representation
\[\text{attn}(Q,K,V)=\text{softmax}\left(\frac{QW_{q}(KW_{k})^{T}}{\sqrt{c}}\right) VW_{v}. \tag{14}\]
Here \(W_{q}\), \(W_{k}\) and \(W_{v}\) represent the projection matrices, which project a row vector to the \(c\)-dimensional space.
We treat each row of \(\hat{\mathcal{G}}\) as a query, \(\hat{F}\) as key and value, and split them into \(M\) blocks along the dimension of representation. The multi-head attention representation of \(\hat{\mathcal{G}}[j,:]\) can be computed (\(s=S[j,:]\))
\[G[j,:]=\text{catcol}(\{\text{attn}(\hat{\mathcal{G}}[j,\text{idx}(i)],\hat{F}[s,\text{idx}(i)],\hat{F}[s,\text{idx}(i)])\}).\]
where \(j=1,2,\cdots,n\) and catcol represents the concatenation along the column and \(\text{idx}(i)\) represents the column index set of the \(i\)-th sub-block. Then we sequentially perform addition, dropout, norm, and FFN operations on \(G\) and \(\hat{\mathcal{G}}\)
\[\bar{G} = \text{dropout}(G+\hat{\mathcal{G}}), \tag{15}\] \[\hat{\mathcal{G}} = \text{FFN}(\text{norm}(\hat{\mathcal{G}}+\bar{G})). \tag{16}\]
### _Loss Function_
Once we have a series of predicted values \(\mathcal{H}\) of the correlation coefficient of text and pixel, we can define the optimized loss function based on Binary Cross Entropy (BCE) and Dice loss according to the true value \(Y\)
\[\mathcal{L}(\mathcal{H},Y)=\sum_{i=1}^{I}\lambda_{bce}\mathcal{L}_{bce}( \mathcal{H}_{i},Y)+\lambda_{dice}\mathcal{L}_{dice}(\mathcal{H}_{i},Y), \tag{17}\]
where \(\lambda_{bce}\) and \(\lambda_{dice}\) are the weight coefficients of the loss which are both set to \(1\) in our experiments. Specifically, BCE loss is the average loss of all text-pixel pairs
\[\mathcal{L}_{bce}(\mathcal{H}_{i},Y)=\frac{1}{nhw}\sum_{j=1}^{n}\sum_{k=1}^{ hw}\text{CE}(Y(j,k),\mathcal{H}_{i}(j,k)), \tag{18}\]
where CE is the cross entropy loss.
In general, the goal of BCE loss is to compute a binary classification loss for all pixels, but this loss does not consider the problem of class imbalance. To alleviate this problem, we introduce the Dice loss as an additional loss
\[\mathcal{L}_{dice}(\mathcal{H}_{i},Y)=\frac{1}{n}\sum_{j=1}^{n}\left(1-\frac{ 2\sum_{k=1}^{hw}\mathcal{H}_{i}(j,k)Y(j,k)}{\sum_{k=1}^{hw}\mathcal{H}_{i}(j,k)+Y(j,k)}\right).\]
To provide sufficient intermediate supervision during the encoding stage, we follow the setup of [5], which applies the loss \(\mathcal{L}\) to the predicted values \(H\) of all refinement stages. For inference, we obtain grounding results from the final round's response maps with a threshold of 0.5.
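A minimal PyTorch sketch of the combined objective in Eqs. (17)-(18) and the Dice term is given below, assuming the per-stage predictions are already flattened to shape \((n, hw)\) with values in \((0,1)\); the small `eps` constant is an added numerical-stability term that is not part of the formulas above.

```python
import torch
import torch.nn.functional as F

def grounding_loss(H_stages, Y, w_bce=1.0, w_dice=1.0, eps=1e-6):
    """Eq. (17): sum of BCE (Eq. 18) and Dice losses over all refinement stages.
    H_stages: list of (n, hw) predicted text-pixel correlations in (0, 1).
    Y:        (n, hw) binary ground-truth masks, one row per noun phrase."""
    total = 0.0
    for H in H_stages:
        bce = F.binary_cross_entropy(H, Y)                        # mean over all n*hw pairs
        inter = (H * Y).sum(dim=1)
        dice = (1.0 - 2.0 * inter / (H.sum(dim=1) + Y.sum(dim=1) + eps)).mean()
        total = total + w_bce * bce + w_dice * dice
    return total

H_stages = [torch.rand(3, 64 * 64) for _ in range(2)]   # two refinement stages, 3 phrases
Y = (torch.rand(3, 64 * 64) > 0.5).float()
print(grounding_loss(H_stages, Y))
```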
### _Discussion on Multi-round Feature Aggregation_
In our task, we are given the embedded representations \(t_{j}\) of \(n\) noun phrases, and the goal is to assign all \(m\) pixels \(x_{i}\) in the image to these \(n\) noun phrases. For convenience, we assume that \(x_{i}\) and \(t_{j}\) are located in a common vector space. We then iteratively optimize the representations of the \(x_{i}\)s and \(t_{j}\)s through the objective function \(\frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{n}u_{ij}^{2}\|x_{i}-t_{j}\|^{2}\), where \(u_{ij}\) represents the probability that \(x_{i}\) belongs to \(t_{j}\).
#### Solving \(x_{i}\)'s for known \(t_{j}\)'s
We can define the following loss function
\[\min_{u,x} \mathcal{L}(u,x)=\frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{n}u_{ij}^{2} \|x_{i}-t_{j}\|^{2}, \tag{19}\] \[s.t. \sum_{i=1}^{m}u_{ij}=k,\quad u_{ij}\in\{0,1\},\quad k<n. \tag{20}\]
We add the constraint \(\sum_{i=1}^{m}u_{ij}=k\) and \(k<n\) to avoid trivial solutions. Obviously, if \(m=n\), then \(x_{j}=t_{j}\).
Assume that for each target point \(t_{j}\), its \(k\) closest points are \(x_{r(j,1)},x_{r(j,2)},\cdots,x_{r(j,k)}\). If we fix \(x_{i}\) to find the optimal point of \(u_{ij}\), we have
\[u_{ij}=\begin{cases}1,&i\in\{r(j,1),r(j,2),\cdots,r(j,k)\},\\ 0,&\text{otherwise}.\end{cases} \tag{21}\]
Next, if we fix \(u_{ij}\), we then calculate the gradient of \(\mathcal{L}(u,x)\) with respect to \(x\) to get
\[\frac{\partial\mathcal{L}}{\partial x_{i}}=\sum_{j=1}^{n}u_{ij}^{2}x_{i}-\sum_ {j=1}^{n}u_{ij}^{2}t_{j}. \tag{22}\]
Hence, we get the update formula of \(x_{i}\)
\[x_{i}=x_{i}-\alpha\frac{\partial\mathcal{L}}{\partial x_{i}}=(1-\alpha\sum_{j=1 }^{n}u_{ij}^{2})x_{i}+\sum_{j=1}^{n}\alpha u_{ij}^{2}t_{j}, \tag{23}\]
where \(\alpha\) (\(0<\alpha<1\)) is a step size. Note that the above equation is a batch processing method for \(n\) targets, and its online update method for target \(t_{j}\) can be given as
\[x_{i}=\left(1-\alpha u_{ij}^{2}\right)x_{i}+\alpha u_{ij}^{2}t_{j}, \tag{24}\]
which can be further simplified to
\[x_{i}=\begin{cases}(1-\alpha)x_{i}+\alpha t_{j},\quad x_{i}\in\text{topk}(t_{ j}),\\ x_{i},\quad\text{otherwise}.\end{cases} \tag{25}\]
Here \(x_{i}\in\text{topk}(t_{j})\) means \(i\in\{r(j,1),r(j,2),\cdots,r(j,k)\}\). Note that when \(u_{ij}\) is fixed, the optimization problem is a strictly convex optimization problem in \(x\). Therefore, choosing an appropriate step size ensures that the function value decreases after each gradient descent step.
In line 21 of Alg. 1, we update \(x_{i}\) with the following formula
\[x_{i}=\begin{cases}f(x_{i})+t_{j},\quad x_{i}\in\text{topk}(t_{j}),\\ x_{i},\quad\text{otherwise}.\end{cases} \tag{26}\]
Here \(f(x_{i})\) represents the encoded representation of \(x_{i}\). At the same time, when we solve the nearest \(k\) points from \(t_{j}\), we exploit the predicted correlation coefficient instead of the Euclidean distance.
#### Solving \(t_{j}\)'s for known \(x_{i}\)'s
Drawing the idea of fuzzy K-means, we define the following loss function
\[\min_{u,t} \mathcal{L}(u,t)=\frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{n}u_{ij}^{2} \|x_{i}-t_{j}\|^{2}, \tag{27}\] \[s.t. \sum_{j=1}^{n}u_{ij}=1,\quad i=1,2,\cdots,m, \tag{28}\]
whose Lagrangian function is
\[\mathcal{J}(u,t,\lambda)=\frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{n}u_{ij}^{2}\|x_ {i}-t_{j}\|^{2}+\sum_{i=1}^{m}\lambda_{i}\left(\sum_{j=1}^{n}u_{ij}-1\right).\]
According to KKT, we obtain the optimal \(u_{ij}\) and \(t_{j}\) satisfying
\[u_{ij} = \frac{1/\|x_{i}-t_{j}\|^{2}}{\sum_{k=1}^{n}1/\|x_{i}-t_{k}\|^{2}}, \tag{29}\] \[t_{j} = \frac{\sum_{i=1}^{m}u_{ij}^{2}x_{i}}{\sum_{i=1}^{m}u_{ij}^{2}}. \tag{30}\]
The above updating formulas show that \(t_{j}\) is the weighted average of \(x_{i}\), and the weight of each item is inversely proportional to the distance, or proportional to the similarity.
In line 23 of our algorithm, we apply a multi-head attention mechanism to represent each \(t_{j}\) as an adaptive weighted sum of the top-\(k\) \(x_{i}\), where the weight of \(x_{i}\) with respect to \(t_{j}\) is expressed as a normalized dot product.
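The two alternating steps analysed above can be illustrated with a small NumPy sketch: a hard top-\(k\) assignment and gradient step on the pixel features (Eqs. 21 and 25), followed by fuzzy memberships and weighted means for the phrase features (Eqs. 29-30). The toy data, step size, and number of rounds are arbitrary choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, d, k, alpha = 200, 4, 16, 20, 0.5      # pixels, phrases, dim, top-k, step size
x = rng.normal(size=(m, d))                  # pixel features
t = rng.normal(size=(n, d))                  # noun-phrase features

for _ in range(3):                           # a few refinement rounds
    # Step 1 (fixed t): hard top-k assignment (Eq. 21) and update of x (Eq. 25)
    dist = ((x[:, None, :] - t[None, :, :]) ** 2).sum(-1)       # (m, n) squared distances
    for j in range(n):
        idx = np.argsort(dist[:, j])[:k]                        # k pixels closest to phrase j
        x[idx] = (1.0 - alpha) * x[idx] + alpha * t[j]          # pull them toward t_j

    # Step 2 (fixed x): fuzzy memberships (Eq. 29) and weighted means (Eq. 30)
    dist = ((x[:, None, :] - t[None, :, :]) ** 2).sum(-1) + 1e-12
    u = (1.0 / dist) / (1.0 / dist).sum(axis=1, keepdims=True)
    t = ((u ** 2).T @ x) / (u ** 2).T.sum(axis=1, keepdims=True)

print(x.shape, t.shape)   # representations after three alternating rounds
```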
## IV Experiments
### _Dataset and Evaluation Criteria_
We compare the performance of our proposed method with other methods on the benchmark PNG dataset, which matches noun phrase annotations in the Localized Narrative dataset [29] with panoptic segmentation annotations in the MS COCO dataset [30]. As the only publicly available benchmark for PNG, this dataset contains 726,445 noun phrases matched to segments involving 659,298 unique segments, and it covers 47.5% of the segmentation annotations in the MS COCO panoptic segmentation dataset and 45.1% of the noun phrases in the Localized Narrative dataset. On average, each caption in the dataset has 5.1 noun phrases. The train and validation splits contain 133,103 and 8,533 localized narratives, respectively.
We adopt the average recall as the evaluation metric for model performance, following previous practice. It calculates the recall at different intersection over union (IoU) thresholds between the segmentation result and the ground truth, and then draws a curve of recall against the threshold. The area under the curve is the average recall value. For plural noun phrases, all ground truth annotations are merged into a single segment to compute the IoU.
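For reference, the metric can be computed as in the following sketch, assuming the IoU between each noun phrase and its (possibly merged) ground-truth mask has already been obtained; the uniform threshold grid is an illustrative choice.

```python
import numpy as np

def average_recall(ious, thresholds=np.linspace(0.0, 1.0, 101)):
    """Area under the recall-vs-IoU-threshold curve.
    ious: one IoU value per noun phrase, computed against its ground-truth
    segment (merged over all segments for plural noun phrases)."""
    recalls = np.array([(ious >= th).mean() for th in thresholds])
    return float(recalls.mean())   # mean over a uniform grid approximates the area under the curve

print(average_recall(np.array([0.9, 0.55, 0.3, 0.7])))
```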
### _Implementation Details_
Our backbone configuration is consistent with the PPMN baseline model [5], where we utilize the official ResNet101 model [31] pre-trained (with the 3x schedule) on the MS COCO dataset [30] as the image backbone. For the text input, we use the pre-trained "base-uncased" BERT model [32] to convert each word in the narrative captions into a 768-dimensional vector. The longest caption contains 230 characters, with up to 30 different noun phrases that need to be localized. We do not update the image and text pre-trained backbone models during training.
Furthermore, we only apply image-size augmentation to the input image, which is resized to a resolution between 800 and 1,333 pixels while maintaining the aspect ratio. We implement our proposed model using PyTorch and train it with a batch size of 10 for 20 epochs on three NVIDIA 3090 GPUs. The Adam optimizer is used with a fixed learning rate of \(10^{-4}\). During inference, we obtain segmentation results following the configuration of the two-stage model [1], which averages the matching graphs of all words in each noun phrase.
### _Experimental Results_
To validate the effectiveness of the context information we introduce, we compare the performance of our proposed model with other methods on the PNG dataset. The main results are shown in Table II. We also compare the recall curves of these methods in Fig. 4. It is worth noting that our best model does not update the image and text backbones during training, and the results obtained using the same training strategy are labeled with PPMN\(\dagger\).
Compared to the current state-of-the-art methods on the PNG dataset, our proposed model improves the average recall by \(3.5\%\) on average across the various metrics (ranging from 2.7% to 3.9%). Specifically, our method achieves improvements of 3.5/3.1/3.9/3.6/2.7 points in the overall/things/stuff/singulars/plurals categories, validating the effectiveness of our proposed method.
Fig. 4 depicts recall values for different classes at different IoU thresholds. In Fig. 4(a), when the IoU threshold is larger than 0.3, our method (blue curve) consistently outperforms the baseline model (green curve), showing that the image context information can indeed benefit the segmentation results. Furthermore, in Fig. 4(b), we can see that our method exhibits significant performance gains in object categories compared to the baseline model, even approaching the accuracy of the two-stage method. This further demonstrates that context information can enhance the representation ability of the aggregated text feature, leading to better segmentation results. In Fig. 4(c), we further investigate the detailed performance of our method on object categories (stuff and singular): our method still improves the segmentation results on both categories, which indicates that the essential context information may benefit the results of all object categories.
### _Ablation Studies_
To validate the effectiveness of our proposed method on different components, we conduct ablation experiments on the PNG task and compare the results under different parameter settings.
#### Number of deformable encoder layers
In Table III, we show how the number of deformable encoder layers affects the model's performance. The results show that combining multi-scale image context information can improve segmentation performance, whereas too many encoder layers may lead to performance degradation.
#### Number of rounds for multi-round feature aggregation
We also examine the effect of different numbers of feature-aggregation rounds in Table IV, where the results show that introducing multi-round feature aggregation clearly improves model performance. We also observe that the performance of the singulars and stuff categories gradually improves as the number of stages increases. However, there are some fluctuations in the performance of the plurals category. Since we find some incomplete annotations in the PNG dataset for the plurals category during training (as shown in Fig. 5), we believe it is reasonable for the model to be slightly unstable in testing on this category.
#### Number of sampling points for the multi-round feature aggregation module
We conduct further studies to evaluate our proposed multi-round feature aggregation module by examining the impact of different numbers of sampling points on model performance. The results are reported in Table V. Since the context information covers a large extent of the image during the top-\(k\) image refinement stage, as further shown in Fig. 8, our model can perform well even with a small number of sampling points. This indicates that a small set of context-refined top-\(k\) image features may be enough to cover the object information. We observe that increasing the number of sampling points improves the stuff category more. We attribute this to the masks of the stuff category, which usually cover more space in the image; hence, increasing the number of sampling points may help to preserve the semantics of different parts of the ground truth mask.
TABLE V: Ablation on the number of sampled image points.

| Sampling Points (\(k\)) | overall | singulars | plurals | things | stuff |
|---|---|---|---|---|---|
| 10 | 62.8 | 63.4 | 56.9 | 60.4 | 66.1 |
| 50 | **62.9** | **63.6** | **56.7** | **60.4** | **66.4** |
| 100 | 62.9 | 63.6 | 56.7 | 60.3 | 66.4 |
| 400 | 62.7 | 63.3 | 56.5 | 60.2 | 66.2 |
TABLE II: Results of our method on the panoptic narrative grounding task, compared with state-of-the-art methods.

| Method | overall | singulars | plurals | things | stuff |
|---|---|---|---|---|---|
| PNG [1] | 55.4 | 56.2 | 48.8 | 56.2 | 54.3 |
| PPMN\(\dagger\) [5] | 56.7 | 57.4 | 49.8 | 53.4 | 61.1 |
| EPNG [6] | 58.0 | 58.6 | 52.1 | 54.8 | 62.4 |
| PPMN [5] | 59.4 | 60.0 | 54.0 | 57.2 | 62.5 |
| DRMN (Ours) | **62.9** (+3.5) | **63.6** (+3.6) | **56.7** (+2.7) | **60.3** (+3.1) | **66.4** (+3.9) |
Fig. 4: Average recall curves for the PNG dataset: (a) overall performance compared to other state-of-the-art methods, (b) curves for the things and stuff categories, and (c) curves for singular and plural noun phrases.
TABLE IV: Ablation on the number of rounds of multi-round feature aggregation.

| Number of Rounds | overall | singulars | plurals | things | stuff |
|---|---|---|---|---|---|
| 0 | 61.5 | 62.2 | 55.4 | 58.7 | 65.4 |
| 1 | 62.4 | 62.9 | 57.1 | 60.1 | 65.6 |
| 2 | **62.9** | **63.6** | **56.7** | **60.3** | **66.4** |
| 3 | 62.6 | 63.3 | 55.8 | 60.2 | 65.9 |
### _Qualitative Analysis_
We illustrate the qualitative results of our proposed model for text paragraphs in Fig. 5. It is observed that, compared to the baseline model, our model predicts more complete segmentation results ("doors", "windows" and "refrigerator" in the first row, "person" and "group of people" in the second row), indicating that the refinement of the top-\(k\) sampled image features helps cluster more related pixels. It is worth mentioning that the model is even able to produce masks that are more complete than the ground truth annotation (the "few bowls" result in the first row, the "two persons" result in the third row).
To visualize how the context information benefits segmentation, we show instructive results during the multi-round feature aggregation process in Fig. 6. In the first and third rows, the model gradually filters out irrelevant results in each round of refinement, improving segmentation results despite irrelevant or similar objects in the initial matching. The example in the second row shows how a segmented object goes from a low-response matching result to an almost correct matching during refinement. Compared to the example in Fig. 2, such results validate that context information does matter in alleviating the phrase-to-pixel mismatch and thus improves the performance in PNG.
We further visualize the top-\(k\) image locations most similar to the text during each round of refinement and corresponding weights in the cross-attention mechanism in Fig. 7. As seen in the figure, our proposed method generally puts the weight on the most relevant objects and gradually filters out the impact of irrelevant objects.
To better visualize what context information is introduced by deformable attention during the aggregation stage, we further visualize in Fig. 8 the 50 most important offset points obtained by sampling the top-\(k\) most relevant points in the last round of the deformable attention mechanism for different layers. We find that in the early layers, which contain relatively detailed low-level information, the offset points lie around the target object, which may refine more detailed information for the target object. In the later layers, which contain relatively high-level information, the offset points attend to the general context around the target object. These observations indicate that our proposed refinement method effectively concentrates on both detailed and general context information for the top-\(k\) sampled image points.
## V Conclusion
In this paper, we propose a novel one-stage model named Deformable-Attention Refined Matching Network (DRMN) for the Panoptic Narrative Grounding (PNG) task. Built upon an end-to-end one-stage model architecture, we integrate the essential context information of multi-scale image features
Fig. 5: Qualitative results for Panoptic Narrative Grounding. The segmentation masks in the image correspond one-to-one to the colors mentioned in the text.
Fig. 8: Visualization of the most important offset points in the deformable attention layer, according to the top-\(k\) query of the phrase "few bowls". We visualize the top-50 most important points based on the attention weights for each layer of the multi-scale feature map.
Fig. 6: Refinement results in each stage for specific visual objects. Distracting objects and target objects are highlighted in dashed yellow and red boxes, respectively.
Fig. 7: Attention weights for top-\(k\) image locations. Weights are averaged over all heads in a multi-head cross-attention layer. Lighter colors indicate greater weight and vice versa.
in the multi-modal information fusion module as an additional cue to enhance the feature discriminative ability. Furthermore, we employ a clustering framework to interpret our proposed module and validate our method through experiments on the benchmark PNG dataset. The results demonstrate that our proposed model can achieve new state-of-the-art performance, with a 3.5% improvement on the average recall metric.
|
2310.19525 | The Application of Homotopy Perturbation Method to the Solution of
Non-Linear Partial Differential Equations | In this study, a thorough investigation was conducted into the Homotopy
Perturbation Method (HPM) and its application to solve the Burger and Blasius
equations. The HPM is a mathematical technique that combines aspects of
homotopy and perturbation methods. By introducing an auxiliary parameter, the
complex problems were transformed into a series of simpler equations that could
be solved step by step. The results of the study are significant. The HPM was
found to be effective in solving the Burger and Blasius equations quickly and
accurately. It proved to be a valuable tool for handling challenging
mathematical problems. Consequently, researchers encountering seemingly
impossible math puzzles can consider employing HPM as a potential solution. | Gbenga Onifade Ebenezer | 2023-10-30T13:25:29Z | http://arxiv.org/abs/2310.19525v1 | The Application of Homotopy Perturbation Method to the Solution of Non-Linear Partial Differential Equations
###### Abstract
In this study, a thorough investigation was conducted into the Homotopy Perturbation Method (HPM) and its application to solve the Burger and Blasius equations. The HPM is a mathematical technique that combines aspects of homotopy and perturbation methods. By introducing an auxiliary parameter, the complex problems were transformed into a series of simpler equations that could be solved step by step. The results of the study are significant. The HPM was found to be effective in solving the Burger and Blasius equations quickly and accurately. It proved to be a valuable tool for handling challenging mathematical problems. Consequently, researchers encountering seemingly impossible math puzzles can consider employing HPM as a potential solution.
## 1 Introduction
In 1998, six years after Liao S.J. proposed the early 'Homotopy Analysis Method (HAM)' in his PhD dissertation, Jihuan He published the so-called 'Homotopy Perturbation Method (HPM).' Like the early HAM, HPM is based on constructing a homotopy equation.
\[(1-q)L[\Phi(x;q)-u_{0}]+qN[\phi(x;q)]=0,\] \[x\in\Omega,\quad q\in[0,1] \tag{1}\]
Which is exactly the same as the zeroth order deformation equation. Like the HAM, the \(\phi(x;q)\) is also expanded into a Maclaurin Series.
\[\Phi(x;q)=u_{0}(x)+\sum_{n=1}^{\infty}u_{n}(x)q^{n} \tag{2}\]
and approximation is achieved by setting \(q=1\), say,
\[u(x)=u_{0}(x)+\sum_{n=1}^{\infty}u_{n}(x) \tag{3}\]
The only difference between HPM and the early HAM is that the embedding parameter \(q\in[0,1]\) is treated as a 'small parameter,' allowing the governing equation of \(u_{n}(x)\) to be obtained by substituting (2) into (1) and equating the coefficients of like powers of \(q\).
However, Hayat and Sayid proved in 2007 that, substituting the Maclaurin Series
\[N[\phi(x;q)]=\sum_{n=0}^{\infty}D_{n}N[\phi(x;q)]q^{n}, \tag{4}\]
where \(D_{n}=\frac{1}{n!}\frac{\partial^{n}}{\partial q^{n}}\) evaluated at \(q=0\), and by substituting series (2) into (1) and then equating the coefficients of like powers of \(q\), one obtains:
\[L[u_{n}(x)-X_{n}u_{n-1}(x)]=D_{n-1}N[\phi(x;q)] \tag{5}\]
For \(u_{n}(x)\), which is exactly the same as the high-order deformation equation (5), no matter whether
one expands the embedding parameter or not, one obtains the exact same approximations as the early HAM. Therefore, Sayid and Hayat pointed out that nothing is needed in Dr. He's approach except the new name HPM. Unfortunately, like the early HAM, the so-called HPM cannot guarantee the convergence of approximations, so it is valid only for weakly nonlinear problems with small physical parameters, as reported by many researchers. HPM has been used by Dr. He Ji Huan since he proposed the method in 1999 to solve:
1. Lighthill Equation
2. Duffing Equation
3. Non-Linear Wave Equation
4. Schrodinger Equation
In what follows, we first describe a perturbation technique coupled with the homotopy technique. In topology, two continuous functions from one topological space to another that can be continuously deformed into each other are called "homotopic." Formally, a homotopy between two continuous functions \(f\) and \(g\) from a topological space \(X\) to another topological space \(Y\) is defined to be a continuous function.
\[H:X\times[0,1]\longrightarrow Y\]
This function is defined such that:
\[H(x,0)=f(x)\text{ and }H(x,1)=g(x)\text{ for all }x\in X\text{.}\]
The HPM does not depend on a small parameter in the equation. In the homotopy technique in topology, a homotopy is constructed with an embedding parameter \(P\in[0,1]\), which is considered a small parameter.
## 2 Basic Idea of Homotopy Perturbation Method
Let us consider the nonlinear differential equation:
\[A(u)-F(r)=0\quad r\in\Omega \tag{6}\]
with boundary conditions:
\[B\left(u,\frac{\partial u}{\partial\eta}\right)=0\quad r\in\Gamma \tag{7}\]
Here, \(A\) is a general differential operator, and \(B\) is a boundary operator. \(\Gamma\) is the boundary of the domain \(\Omega\), and \(F(r)\) is a known analytic function. The operator \(A\) can be divided into two parts, \(L\) and \(N\), where \(L\) is linear and \(N\) is nonlinear. Equation (6) can then be written as follows:
\[L(u)+N(u)-F(r)=0 \tag{8}\]
Using the homotopy technique, we construct a homotopy:
\[v(r,p):\Omega\times[0,1]\longrightarrow R\]
which satisfies:
\[H(v,p)=(1-p)[L(v)-L(u_{0})]\\ +p[A(v)-F(r)]=0\quad p\in[0,1] \tag{9}\]
or
\[H(v,p)=L(v)-L(u_{0})+pL(u_{0})\\ +p[N(v)-F(r)]=0 \tag{10}\]
Where \(u_{0}\) is an initial approximation of Equation (6) that satisfies the boundary conditions. Obviously, from Equation (9), we will have:
\[\text{for }p=0,\;H(v,0)=L(v)-L(u_{0})=0 \tag{11}\]
\[\text{for }p=1,\;H(v,1)=A(v)-F(r)=0 \tag{12}\]
The change in the value of \(p\) from zero to unity corresponds to that of \(v(r,p)\) from \(u_{0}(r)\) to \(u(r)\). In topology, this is referred to as deformation, and the terms \(L(v)-L(u_{0})\) and \(A(v)-F(r)\) are called homotopic.
We will initially use the embedding parameter \(p\) as a small parameter and assume that the solution of Equation (9) can be expressed as a power series in \(p\)
\[v=v_{0}+pv_{1}+p^{2}v_{2}+\ldots \tag{13}\]
Setting \(p\) to 1 results in the approximate solution of Equation (8):
\[u=\lim_{p\to 1}v=v_{0}+v_{1}+v_{2}+\ldots \tag{14}\]
The series (14) is generally convergent in most cases; however, the rate of convergence depends on the nonlinear operator \(A(v)\).
## Application of the Homotopy Perturbation Method
### Derivation of Blasius Equation
For two dimensional, steady state, incompressible flow with a zero pressure gradient over a flat plate, the governing equations are simplified.
\[\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}=0 \tag{15}\]
\[u\frac{\partial u}{\partial x}+v\frac{\partial u}{\partial y}=\frac{\partial^{2}u}{\partial y^{2}} \tag{16}\]
subject to boundary conditions:
\[y=0,\;u=0\]
\[y\rightarrow\infty,\;u=u_{\infty},\;\frac{\partial u}{\partial y}=0 \tag{17}\]
To transform (15) and (16) into ordinary differential equations, take the stream function \(\phi\) defined by
\[\phi=\sqrt{v_{x}u_{\infty}}f(\eta) \tag{18}\]
where \(f\) is a dimensionless function of the similarity variable \(\eta\):
\[\eta=\frac{y}{\sqrt{\frac{v_{x}}{u_{\infty}}}} \tag{19}\]
Now,
\[u=\frac{\partial\varphi}{\partial y}=\frac{\partial\varphi}{\partial\eta} \frac{\partial\eta}{\partial y}\]
\[=\sqrt{v_{x}u_{\infty}}f^{\prime}(\eta)\frac{1}{\sqrt{\frac{v_{x}}{u_{\infty }}}}\]
\[=u_{\infty}\frac{\partial f}{\partial\eta} \tag{20}\]
Similarly,
\[v=-\frac{\partial\varphi}{\partial x}=-\left[\frac{\partial\sqrt{v_{x}u_{ \infty}}}{\partial x}f(\eta)+\sqrt{v_{x}u_{\infty}}\frac{\partial}{\partial x} f(\eta)\right]\]
\[=-\left[f(\eta)\frac{1}{2}\sqrt{\frac{vu_{\infty}}{x}}+\sqrt{v_{x}u_{\infty}}\frac{df}{d\eta}\left(\frac{-1}{2}\right)\frac{yx^{-\frac{3}{2}}}{\sqrt{\frac{v}{u_{\infty}}}}\right]\]
\[=-\left[\frac{1}{2}f(\eta)\sqrt{\frac{vu_{\infty}}{x}}-\frac{1}{2}\sqrt{\frac{vu_{\infty}}{x}}\,\eta\frac{df(\eta)}{d\eta}\right]\]
\[=\frac{1}{2}\sqrt{\frac{vu_{\infty}}{x}}\left[\eta\frac{df}{d\eta}-f\right] \tag{21}\]
Now,
\[\frac{\partial u}{\partial x}=u_{\infty}\frac{d^{2}f}{d\eta^{2}}\cdot\frac{\partial\eta}{\partial x}=u_{\infty}\frac{d^{2}f}{d\eta^{2}}\left(-\frac{\eta}{2x}\right)\]
\[=-\frac{u_{\infty}}{2x}\eta\frac{d^{2}f}{d\eta^{2}} \tag{22}\]
Where
\[\frac{\partial u}{\partial x}=\frac{\partial}{\partial x}\left(\frac{\partial \varphi}{\partial y}\right)=\frac{\partial}{\partial\eta}\left(\frac{ \partial\eta}{\partial y}\cdot\frac{\partial\varphi}{\partial\eta}\right) \cdot\frac{\partial\eta}{\partial x}\]
and
\[\frac{\partial u}{\partial y}=\frac{\partial}{\partial y}\left(\frac{\partial \varphi}{\partial y}\right)=\frac{\partial}{\partial\eta}\left(\frac{ \partial\eta}{\partial y}\cdot\frac{\partial\varphi}{\partial\eta}\right) \cdot\frac{\partial\eta}{\partial y}\]
\[\Longrightarrow\frac{\partial u}{\partial y}=u_{\infty}\frac{d^{2}f}{d\eta^{2}}\cdot\frac{1}{\sqrt{\frac{v_{x}}{u_{\infty}}}} \tag{23}\]
\[\frac{\partial^{2}u}{\partial y^{2}}=\frac{\partial}{\partial y}\left(\frac{u_{\infty}}{\sqrt{\frac{v_{x}}{u_{\infty}}}}\cdot\frac{d^{2}f}{d\eta^{2}}\right)\]
\[\frac{\partial^{2}u}{\partial y^{2}}=\frac{u_{\infty}}{\sqrt{\frac{v_{x}}{u_{ \infty}}}}\left(\frac{d^{3}f}{d\eta^{3}}\cdot\frac{1}{\sqrt{\frac{v_{x}}{u_{ \infty}}}}\right)\]
\[\frac{\partial^{2}u}{\partial y^{2}}=\frac{(u_{\infty})^{2}}{v_{x}}\frac{d^{3 }f}{d\eta^{3}} \tag{24}\]
Putting these values in equation (16), we get
\[u\frac{\partial u}{\partial x}+v\frac{\partial u}{\partial y}=\frac{\partial ^{2}u}{\partial y^{2}}\]
\[u_{\infty}\frac{df}{d\eta}\left[-\frac{u_{\infty}}{2x}\eta\cdot\frac{d^{2}f}{d\eta^{2}}\right]+\frac{1}{2}\sqrt{\frac{vu_{\infty}}{x}}\left[\eta\frac{df}{d\eta}-f\right]\cdot\frac{u_{\infty}}{\sqrt{\frac{v_{x}}{u_{\infty}}}}\frac{d^{2}f}{d\eta^{2}}=\frac{v(u_{\infty})^{2}}{v_{x}}\frac{d^{3}f}{d\eta^{3}}\]
\[\Longrightarrow-\frac{(u_{\infty})^{2}}{2x}\eta\frac{df}{d\eta}\cdot\frac{d^{2}f}{d\eta^{2}}+\frac{1}{2}\frac{(u_{\infty})^{2}}{x}\left[\eta\frac{df}{d\eta}-f\right]\frac{d^{2}f}{d\eta^{2}}=\frac{u_{\infty}^{2}}{x}\cdot\frac{d^{3}f}{d\eta^{3}}\]
\[\frac{d^{3}f}{d\eta^{3}}+\frac{1}{2}f\cdot\frac{d^{2}f}{d\eta^{2}}=0 \tag{25}\]
With boundary conditions:
\[\eta=0,\;\;f=0,\;\;f^{\prime}=\frac{df}{d\eta}=0\]
\[\eta\rightarrow\infty,\;\;\frac{df}{d\eta}=1 \tag{26}\]
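The value \(f^{\prime\prime}(0)\approx 0.332057\) used in the next subsection can be checked independently with a standard shooting method. The following sketch is purely illustrative: it assumes the free-stream condition \(f^{\prime}(\eta)\to 1\) as \(\eta\to\infty\), and the integration range and solver tolerances are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def blasius_rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]          # f''' + (1/2) f f'' = 0, Eq. (25)

def residual(fpp0, eta_max=10.0):
    """Integrate from eta = 0 with f(0) = f'(0) = 0 and a guessed f''(0);
    return how far f'(eta_max) is from the free-stream value 1."""
    sol = solve_ivp(blasius_rhs, [0.0, eta_max], [0.0, 0.0, fpp0], rtol=1e-10, atol=1e-12)
    return sol.y[1, -1] - 1.0

fpp0 = brentq(residual, 0.1, 1.0)             # shoot on f''(0)
print(fpp0)                                   # approximately 0.332057
```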
### Solution of Blasius Equation by Homotopy Perturbation Method
So, to get a solution for equation (25) using the perturbation technique, we construct a homotopy
\[v(r,p):\Omega\times[0,1]\to R\]
which satisfies
\[H(v,p)=(1-p)[L(v)-L(u_{0})]+P[A(v)-f(r)]=0,\;\;p\in[0,1],\;\;r\in\Omega\]
or
\[H(v,p)=L(v)-L(u_{0})+PL(u_{0})+P[N(v)-f(r)]\\ =0 \tag{27}\]
where \(u_{0}\) is an initial approximation of equation (26) that satisfies the boundary condition. Now, from equation (25)
\[(1-p)\left(\frac{\partial^{3}f}{\partial\eta^{3}}-\frac{\partial^{3}f_{0}}{\partial\eta^{3}}\right)+p\left(\frac{\partial^{3}f}{\partial\eta^{3}}+\frac{f}{2}\frac{\partial^{2}f}{\partial\eta^{2}}\right)=0 \tag{28}\]
Suppose that the solution of equation (28) is a series given by
\[f=F_{0}+pF_{1}+p^{2}F_{2}+\ldots \tag{29}\]
Substituting (29) into (28), we get
\[\frac{\partial^{3}F_{0}}{\partial\eta^{3}}+p\frac{\partial^{3}F_{ 1}}{\partial\eta^{3}}+p^{2}\frac{\partial^{3}F_{2}}{\partial\eta^{3}}-\frac{ \partial^{3}f_{0}}{\partial\eta^{3}}+p\frac{\partial^{3}f_{0}}{\partial\eta^{ 3}}+\\ p\left[\frac{F_{0}}{2}\left(\frac{\partial^{2}F_{0}}{\partial\eta^{2}}+p \frac{\partial^{2}F_{1}}{\partial\eta^{2}}\right)+p\frac{F_{1}}{2}\left(\frac {\partial^{2}F_{0}}{\partial\eta^{2}}+p\frac{\partial^{2}F_{1}}{\partial\eta ^{2}}\right)\right]+\ldots=0\]
Rearranging the coefficients of the terms with identical powers of \(p\), we have
\[p^{0}:\frac{\partial^{3}F_{0}}{\partial\eta^{3}}-\frac{\partial^{3}f_{0}}{\partial\eta^{3}}=0\] \[p^{1}:\frac{\partial^{3}F_{1}}{\partial\eta^{3}}+\frac{\partial^{3}f_{0}}{\partial\eta^{3}}+\frac{F_{0}}{2}\frac{\partial^{2}F_{0}}{\partial\eta^{2}}=0\] \[p^{2}:\frac{\partial^{3}F_{2}}{\partial\eta^{3}}+\frac{F_{1}}{2}\frac{\partial^{2}F_{0}}{\partial\eta^{2}}+\frac{F_{0}}{2}\frac{\partial^{2}F_{1}}{\partial\eta^{2}}=0\] \[p^{3}:\frac{\partial^{3}F_{3}}{\partial\eta^{3}}+\frac{F_{1}}{2}\frac{\partial^{2}F_{1}}{\partial\eta^{2}}+\frac{F_{2}}{2}\frac{\partial^{2}F_{0}}{\partial\eta^{2}}+\frac{F_{0}}{2}\frac{\partial^{2}F_{2}}{\partial\eta^{2}}=0 \tag{30}\] \[\vdots\]
First, we take \(F_{0}=f_{0}\), and we start iterating by defining \(f_{0}\) as a Taylor series of order two at \(\eta=0\) to make it accurate near \(\eta=0\)
\[F_{0}=f_{0}=\frac{f^{\prime\prime}(0)}{2}\eta^{2}+f^{\prime}(0)\eta+f(0)\]
Let us take \(f^{\prime\prime}(0)=0.332057\), and from the given boundary conditions \(f(0)=0\) and \(f^{\prime}(0)=0\). So,
\[f_{0}=\frac{0.332057}{2}\eta^{2}\]
\[f_{0}=0.1660285\eta^{2}\]
Now, using this value to solve for \(F_{1}\) from (30):
\[\frac{\partial^{3}F_{1}}{\partial\eta^{3}}+\frac{\partial^{3}f_{0}}{\partial\eta^{3}}+\frac{F_{0}}{2}\frac{\partial^{2}F_{0}}{\partial\eta^{2}}=0\] \[\frac{\partial^{3}F_{1}}{\partial\eta^{3}}=-\frac{F_{0}}{2}\frac{\partial^{2}F_{0}}{\partial\eta^{2}}\qquad\left(\text{since }\frac{\partial^{3}f_{0}}{\partial\eta^{3}}=0\right)\] \[=-\frac{0.1660285\,\eta^{2}}{2}\cdot\frac{\partial^{2}}{\partial\eta^{2}}\left(0.1660285\,\eta^{2}\right)\] \[\frac{\partial^{3}F_{1}}{\partial\eta^{3}}=-(0.1660285)^{2}\cdot\eta^{2}\] \[F_{1}=-(0.1660285)^{2}\cdot\frac{\eta^{5}}{3\cdot 4\cdot 5}\] \[\Longrightarrow F_{1}=f_{1}=-0.00045942\cdot\eta^{5}\]
Similarly, from (30), we can easily calculate the values of \(f_{2},f_{3},\ldots\):
\[f_{2}=0.00000249\cdot\eta^{8}\]
\[f_{3}=0.00000001\cdot\eta^{11}\]
Setting \(p=1\), we get
\[f(\eta)=0.1660285\eta^{2}-0.00045942\eta^{5}+\] \[0.00000249\eta^{8}-0.00000001\eta^{11}\]
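The first correction term can be reproduced symbolically. The sympy sketch below integrates the \(p^{1}\) equation three times with zero integration constants (so that \(F_{1}(0)=F_{1}^{\prime}(0)=F_{1}^{\prime\prime}(0)=0\)), using the quoted value \(f^{\prime\prime}(0)=0.332057\); the evaluation point at the end is an arbitrary illustration.

```python
import sympy as sp

eta = sp.symbols('eta', nonnegative=True)
fpp0 = sp.Float('0.332057')                   # quoted value of f''(0)
F0 = fpp0 / 2 * eta**2                        # f_0 = 0.1660285 eta^2

# p^1 equation: F1''' = -(F0/2) F0''  (the f0''' term vanishes for the quadratic f0)
rhs = -(F0 / 2) * sp.diff(F0, eta, 2)
F1 = sp.integrate(sp.integrate(sp.integrate(rhs, eta), eta), eta)
print(F1)                                     # approximately -0.00045942 * eta**5

f_approx = F0 + F1                            # two-term HPM approximation at p = 1
print(sp.diff(f_approx, eta).subs(eta, 1))    # f'(1) from the truncated series
```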
### Solution of Burger's Equation by Homotopy Perturbation Method
To illustrate the modification algorithm of the HPM, consider the following nonlinear partial differential equation with a time derivative of any order:
\[D_{t}^{n}u(x,t)=L(u,u_{x},u_{xx})\\ +N(u,u_{x},u_{xx})+f(x,t),t>0 \tag{31}\]
Where \(L\) is a linear operator, \(N\) is a nonlinear operator, and \(f\) is a known analytic function, subject to the initial conditions:
\[\frac{\partial^{m}}{\partial t^{m}}u(x,0)=h_{m}(x),\ \ m=0,1,2,3,...n-1 \tag{32}\]
In view of the homotopy technique, we can construct the following homotopy:
\[\frac{\partial^{n}u}{\partial t^{n}}-L(u,u_{x},u_{xx})-f(x,t)\\ =p\left[\frac{\partial^{n}u}{\partial t^{n}}+N(u,u_{x},u_{xx})-D_ {t}^{n}u\right] \tag{33}\]
or
\[\frac{\partial^{n}u}{\partial t^{n}}-f(x,t)=p\left[\frac{\partial^{n}u}{\partial t^{n}}+L(u,u_{x},u_{xx})+N(u,u_{x},u_{xx})-D_{t}^{n}u\right] \tag{34}\]
Where \(p\in[0,1]\). The homotopy parameter \(p\) always changes from zero to unity. When \(p\) = 0, (33) becomes the linearized equation:
\[\frac{\partial^{n}u}{\partial t^{n}}=L(u,u_{x},u_{xx})+f(x,t) \tag{35}\]
and (34) becomes the linearized equation:
\[\frac{\partial^{n}u}{\partial t^{n}}=f(x,t) \tag{36}\]
and when \(p\) = 1, (33) and (34) turn out to be the original differential equation (31). The basic assumption is that the solution of (33) or (34) can be written as a power series in \(p\):
\[u=u_{0}+pu_{1}+p^{2}u_{2}+... \tag{37}\]
Finally, we approximate the solution \(u(x,t)\) by:
\[u(x,t)=\sum_{i=0}^{\infty}u_{i}(x,t) \tag{38}\]
Consider the following one-dimensional coupled Burger's equations
\[u_{t}-u_{xx}-2uu_{x}+(uv)_{x}=0, \tag{39}\]
\[v_{t}-v_{xx}-2vv_{x}+(uv)_{x}=0 \tag{40}\]
With the initial conditions:
\[u(x,0)=cosx,\ \ v(x,0)=cosx \tag{41}\]
Making use of (34), the homotopy for (39) and (40) are:
\[\frac{\partial u}{\partial t}=p\Bigg{[}\frac{\partial u}{\partial t}+u_{xx}+2uu_{x}-(uv)_{x}-D_{t}u\Bigg{]} \tag{42}\]
\[\frac{\partial v}{\partial t}=p\Bigg{[}\frac{\partial v}{\partial t}+v_{xx}+2vv_{x}-(uv)_{x}-D_{t}v\Bigg{]} \tag{43}\]
As above, the basic assumption is that the solutions of (39) and (40) can be written as power series in \(p\)
\[u=u_{0}+pu_{1}+p^{2}u_{2}+... \tag{44}\]
\[v=v_{0}+pv_{1}+p^{2}v_{2}+... \tag{45}\]
Therefore substituting (44) and (45) and the initial condition (41) into (42) and (43) respectively and equating the terms with identical powers of \(p\), we can obtain the following set of linear partial differential equations.
\[\frac{\partial u_{0}}{\partial t}=0,\ \ u_{0}(x,0)=cosx\]
\[\frac{\partial v_{0}}{\partial t}=0,\ \ v_{0}(x,0)=cosx\]
\[\frac{\partial u_{1}}{\partial t}=\frac{\partial u_{0}}{\partial t}+(u_{0})_{xx}+2u_{0}(u_{0})_{x}-u_{0}(v_{0})_{x}-v_{0}(u_{0})_{x}-D_{t}u_{0},\ \ u_{1}(x,0)=0\]
\[\frac{\partial v_{1}}{\partial t}=\frac{\partial v_{0}}{\partial t}+(v_{0})_{xx}+2v_{0}(v_{0})_{x}-u_{0}(v_{0})_{x}-v_{0}(u_{0})_{x}-D_{t}v_{0},\ \ v_{1}(x,0)=0\]
\[\frac{\partial u_{2}}{\partial t}=\frac{\partial u_{1}}{\partial t}+(u_{1})_{xx}+2u_{0}(u_{1})_{x}+2u_{1}(u_{0})_{x}-u_{0}(v_{1})_{x}-u_{1}(v_{0})_{x}-v_{1}(u_{0})_{x}-v_{0}(u_{1})_{x}-D_{t}u_{1},\ \ u_{2}(x,0)=0\]
\[\frac{\partial v_{2}}{\partial t}=\frac{\partial v_{1}}{\partial t}+(v_{1})_{xx}+2v_{0}(v_{1})_{x}+2v_{1}(v_{0})_{x}-u_{0}(v_{1})_{x}-u_{1}(v_{0})_{x}-v_{1}(u_{0})_{x}-v_{0}(u_{1})_{x}-D_{t}v_{1},\ \ v_{2}(x,0)=0\]
\[\frac{\partial u_{3}}{\partial t}=\frac{\partial u_{2}}{\partial t}+(u_{2})_{xx}+2u_{0}(u_{2})_{x}+2u_{1}(u_{1})_{x}+2u_{2}(u_{0})_{x}-u_{0}(v_{2})_{x}-u_{1}(v_{1})_{x}-u_{2}(v_{0})_{x}-v_{0}(u_{2})_{x}-v_{1}(u_{1})_{x}-v_{2}(u_{0})_{x}-D_{t}u_{2},\ \ u_{3}(x,0)=0\]
\[\frac{\partial v_{3}}{\partial t}=\frac{\partial v_{2}}{\partial t}+(v_{2})_{xx}+2v_{0}(v_{2})_{x}+2v_{1}(v_{1})_{x}+2v_{2}(v_{0})_{x}-u_{0}(v_{2})_{x}-u_{1}(v_{1})_{x}-u_{2}(v_{0})_{x}-v_{0}(u_{2})_{x}-v_{1}(u_{1})_{x}-v_{2}(u_{0})_{x}-D_{t}v_{2},\ \ v_{3}(x,0)=0\]
And so on. Consequently, the first few components of the homotopy perturbation solution for (39) and (40) are derived as follows:
\[U_{0}(x,t)=\cos x,\] \[V_{0}(x,t)=\cos x,\] \[U_{1}(x,t)=-\cos x\cdot t,\] \[V_{1}(x,t)=-\cos x\cdot t,\] \[U_{2}(x,t)=\cos x\cdot\frac{t^{2}}{2},\] \[V_{2}(x,t)=\cos x\cdot\frac{t^{2}}{2},\] \[U_{3}(x,t)=-\cos x\cdot\frac{t^{3}}{6},\]
and so on. In the same manner, the rest of the components can be obtained. The \(n\)-term approximation for equations (39) and (40) is given by
\[U(x,t)=\sum_{i=0}^{n-1}u_{i}(x,t)=\cos x\Bigg{[}1-t+\frac{t^{2}}{ 2!}\\ -\frac{t^{3}}{3!}+\ldots\Bigg{]} \tag{46}\]
\[V(x,t)=\sum_{i=0}^{n-1}v_{i}(x,t)=\cos x\Bigg{[}1-t+\frac{t^{2}}{ 2!}\\ -\frac{t^{3}}{3!}+\ldots\Bigg{]} \tag{47}\]
In closed form, this gives the solution:
\[U(x,t)=\cos xe^{-t} \tag{48}\] \[V(x,t)=\cos xe^{-t} \tag{49}\]
Which is the exact solution to the one-dimensional coupled Burger's equations (39) and (40).
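The closed-form result (48)-(49) can be verified by direct substitution into Eqs. (39)-(40), and the truncated series (46) can be compared against it. A short sympy sketch of both checks is given below; the truncation order and test point are arbitrary illustrative choices.

```python
import sympy as sp

x, t = sp.symbols('x t')
u = sp.cos(x) * sp.exp(-t)
v = sp.cos(x) * sp.exp(-t)

# Substitute the closed-form solution (48)-(49) into Eqs. (39)-(40)
eq39 = sp.diff(u, t) - sp.diff(u, x, 2) - 2 * u * sp.diff(u, x) + sp.diff(u * v, x)
eq40 = sp.diff(v, t) - sp.diff(v, x, 2) - 2 * v * sp.diff(v, x) + sp.diff(u * v, x)
print(sp.simplify(eq39), sp.simplify(eq40))   # both reduce to 0

# Compare the n-term HPM approximation (46) with the exact solution at a test point
n = 6
approx = sp.cos(x) * sum((-t) ** i / sp.factorial(i) for i in range(n))
err = (approx - u).subs({x: 0, t: sp.Rational(1, 2)})
print(sp.Abs(err).evalf())                    # small truncation error
```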
## 3 Literature Review
The Homotopy Perturbation Method (HPM) has emerged as a powerful tool for solving complex differential equations. Combining perturbation theory and homotopy analysis, HPM offers an effective approach to tackle nonlinear problems. A series of influential references shed light on the application of HPM and its impact.
"Nahfey's Introduction to Perturbation Method Technique" provides a fundamental introduction to perturbation methods, laying the groundwork for understanding HPM principles and techniques.
Liao's "Homotopy Analysis Method in Non-Linear Differential Equations" explores the homotopy analysis method, which serves as a theoretical foundation for HPM.
"He's Homotopy Perturbation Technique" represents a pivotal milestone in HPM development, explaining how to break down complex problems into simpler ones. This work extends HPM's applicability to various nonlinear equations.
In "Application of Homotopy Perturbation Method to Linear and Non-Linear Schrodinger Equations," Mohhamed and Shahwar demonstrate HPM's real-world utility in quantum mechanics, particularly with Schrodinger equations.
Ganji, Babazadeh, Noori, Pirouz, and Janipor, in their research titled "Utilizing the Homotopy Perturbation Method to Address the Non-Linear Blasius
Equation in Boundary Layer Flow over a Flat Plate," employ HPM to demonstrate its effectiveness in the field of fluid dynamics.
Hameda's "Homotopy Perturbation for System of Non-Linear Coupled Equations" explores HPM's application in handling coupled nonlinear systems, highlighting its adaptability to diverse problem domains.
These works collectively provide valuable insights into the Homotopy Perturbation Method, covering its theoretical foundations, practical applications, and its ability to solve a variety of nonlinear equations. The references mentioned are sources of inspiration for further research and contribute to the ongoing development of mathematical problem-solving techniques.
## 4 Conclusion
In this project, a clear and mathematically supported conclusion drawn from the results is that HPM demonstrates rapid convergence towards exact solutions. It is noteworthy that HPM proves to be an effective, simple, and highly accurate tool for handling and solving Blasius and Burger equations, as well as various other types of nonlinear equations in a unified manner. Moreover, HPM distinguishes itself from traditional perturbation methods, which often rely on small parameter assumptions that may lead to non-physical results in many cases. Furthermore, numerical methods tend to yield inaccurate results when the equation exhibits strong time-dependence. In contrast, He's Homotopy Perturbation Method (HPM) completely overcomes these shortcomings, highlighting the convenience and effectiveness of homotopy in resolving such mathematical challenges.
|
2308.03737 | Frequency of the dark matter subhalo collisions and bifurcation sequence
arising formation of dwarf galaxies | The cold dark matter (CDM) model predicts galaxies have 100 times more dark
matter mass than stars. Nevertheless, recent observations report the existence
of dark-matter-deficient galaxies with less dark matter than expected. To solve
this problem, we investigate the physical processes of galaxy formation in
head-on collisions between gas-containing dark matter subhaloes (DMSHs).
Analytical estimation of the collision frequency between DMSHs associated with
a massive host halo indicates that collisions frequently occur within 1/10th of
the virial radius of the host halo, with a collision timescale of about 10 Myr,
and the most frequent relative velocity increases with increasing radius. Using
analytical models and numerical simulations, we show the bifurcation channel of
the formation of dark-matter-dominated and dark-matter-deficient galaxies. In
the case of low-velocity collisions, a dark-matter-dominated galaxy is formed
by the merging of two DMSHs. In the case of moderate-velocity collisions, the
two DMSHs penetrate each other. However the gas medium collides, and star
formation begins as the gas density increases, forming a dwarf galaxy without
dark matter at the collision surface. In the case of high-velocity collisions,
shock-breakout occurs due to the shock waves generated at the collision surface
reaching the gas surface, and no galaxy forms. For example, the simulation
demonstrates that a pair of DMSHs with a mass of 10^9 Msun containing gas of
0.1 solar metallicity forms a dark-matter-deficient galaxy with a stellar mass
of 10^7 Msun for a relative velocity of 200 km/s. | Koki Otaki, Masao Mori | 2023-08-07T17:36:34Z | http://arxiv.org/abs/2308.03737v1 | Frequency of the dark matter subhalo collisions and bifurcation sequence arising formation of dwarf galaxies
###### Abstract
The cold dark matter (CDM) model predicts galaxies have 100 times more dark matter mass than stars. Nevertheless, recent observations report the existence of dark-matter-deficient galaxies with less dark matter than expected. To solve this problem, we investigate the physical processes of galaxy formation in head-on collisions between gas-containing dark matter subhaloes (DMSHs). Analytical estimation of the collision frequency between DMSHs associated with a massive host halo indicates that collisions frequently occur within 1/10th of the virial radius of the host halo, with a collision timescale of about 10 Myr, and the most frequent relative velocity increases with increasing radius. Using analytical models and numerical simulations, we show the bifurcation channel of the formation of dark-matter-dominated and dark-matter-deficient galaxies. In the case of low-velocity collisions, a dark-matter-dominated galaxy is formed by the merging of two DMSHs. In the case of moderate-velocity collisions, the two DMSHs penetrate each other. However the gas medium collides, and star formation begins as the gas density increases, forming a dwarf galaxy without dark matter at the collision surface. In the case of high-velocity collisions, shock-breakout occurs due to the shock waves generated at the collision surface reaching the gas surface, and no galaxy forms. For example, the simulation demonstrates that a pair of DMSHs with a mass of \(10^{9}\,\mathrm{M}_{\odot}\) containing gas of 0.1 solar metallicity forms a dark-matter-deficient galaxy with a stellar mass of \(10^{7}\,\mathrm{M}_{\odot}\) for a relative velocity of \(200\,\mathrm{km}\,\mathrm{s}^{-1}\).
keywords: galaxies: formation - galaxies: evolution - dark matter
## 1 Introduction
Cold dark matter (CDM) drives the hierarchical structure formation in the standard galaxy formation model. In other words, cosmic structures are believed to form in a bottom-up fashion in which small dark matter haloes repeatedly collide and merge, growing into larger systems.
While the CDM model successfully reproduces statistical properties such as the large-scale distribution of galaxies in the universe, some serious inconsistencies exist between theoretical predictions and observations on scales of a few Mpc or less (Moore et al., 1999; Klypin et al., 1999).
The number of satellite galaxies observed around the Milky Way is more than one order of magnitude less than that of dark matter subhaloes (DMSHs) predicted by the CDM model. This discrepancy is known as the missing satellite problem. It implies that there may be a huge number of extremely faint galaxies and dark matter-dominated haloes with little or no stellar component in the Local Group.
From a general perspective, the correlation between stellar components and dark matter haloes has been of profound interest, and numerous studies have hitherto addressed it from both theoretical and observational viewpoints. Almost all studies about the relationship between the stellar mass and the dark matter halo mass in galaxies have shown that the dark matter fraction in galaxies is expected to be more than 90% (e.g., Behroozi et al., 2013).
However, van Dokkum et al. (2018) recently reported that the satellite galaxy NGC1052-DF2, a member of the elliptical galaxy NGC1052 group, has very little dark matter component compared to the theoretical predictions. Its stellar mass is \(2\times 10^{8}\,\mathrm{M}_{\odot}\), whereas its dynamical mass is \(<3.4\times 10^{8}\,\mathrm{M}_{\odot}\) within a radius of 7.6 kpc. This radius is the position of the outermost globular clusters in this galaxy, which is greater than its effective radius of 2.2 kpc. While the derivation of dynamical masses using velocity dispersion of globular clusters indicates serious ambiguity (e.g., Hayashi and Inoue, 2018), the detailed analysis of the Jeans model shows the lack of dark matter in those galaxies (Wasserman et al., 2018).
In addition, NGC1052-DF4 in the NGC1052 group has also been discovered as a galaxy with similar properties (van Dokkum et al., 2019). Its stellar mass is \(1.5\times 10^{8}\,\mathrm{M}_{\odot}\), and its dynamical mass is estimated as \(0.4\times 10^{8}\,\mathrm{M}_{\odot}\) within \(7\,\mathrm{kpc}\) from the galaxy centre. These two galaxies are classified as ultra-diffuse galaxies (UDGs). UDGs are peculiar galaxies with extremely low surface brightness, \(\mu(g,0)>24\,\mathrm{mag\,arcsec^{-2}}\), and large effective radii, \(r_{\mathrm{e}}>1.5\,\mathrm{kpc}\), first discovered in the Coma Cluster (van Dokkum et al., 2015; Koda et al., 2015). It should be noted that the issue of uncertainty in the distance to these galaxies remains under active discussion and has not yet converged (Trujillo et al., 2019; Monelli and Trujillo, 2019). In the latest reports, Danieli et al. (2020) and Shen et al. (2021) claim that the distances of NGC1052-DF2 and NGC1052-DF4 are \(\sim 20\,\mathrm{Mpc}\), derived from Hubble Space Telescope Advanced Camera for Surveys imaging, and that these two galaxies are dark-matter-deficient galaxies.
Furthermore, different observations have also reported the existence of other dark-matter-deficient galaxies. Mancera Pina et al. (2019, 2020) found six H\({}_{\mathrm{I}}\)-rich UDGs which have a high baryon fraction within a radius larger than the effective radius. Mancera Pina et al. (2022) observed AGC 114905, one of the six H\({}_{\mathrm{I}}\)-rich UDGs, at high spatial resolution using the Karl G. Jansky Very Large Array. The H\({}_{\mathrm{I}}\) rotation curve of the galaxy is fitted by the baryon contribution alone up to the observed outermost radius. This galaxy has a stellar mass of \(M_{\star}=(1.3\pm 0.3)\times 10^{8}\,\mathrm{M}_{\odot}\) and an H\({}_{\mathrm{I}}\) mass of \(M_{\mathrm{H}_{\mathrm{I}}}=(9.7\pm 1.4)\times 10^{8}\,\mathrm{M}_{\odot}\). Guo et al. (2020) reported 19 dwarf galaxies that have a high baryon fraction within the H\({}_{\mathrm{I}}\) radius \(r_{\mathrm{H}_{\mathrm{I}}}\), which is defined at an H\({}_{\mathrm{I}}\) surface density of \(1\,\mathrm{M}_{\odot}\,\mathrm{pc^{-2}}\). This radius is larger than the effective radius. Some of these galaxies have large effective radii and may therefore be UDGs. As one example, the galaxy AGC 213086, with \(r_{\mathrm{H}_{\mathrm{I}}}=14.37\pm 1.023\,\mathrm{kpc}\), has a stellar mass of \(5.51^{+4.02}_{-3.23}\times 10^{8}\,\mathrm{M}_{\odot}\), an H\({}_{\mathrm{I}}\) mass of \(2.45^{+0.11}_{-0.09}\times 10^{9}\,\mathrm{M}_{\odot}\) and a dynamical mass of \(6.31^{+0.89}_{-0.77}\times 10^{7}\,\mathrm{M}_{\odot}\). They mentioned that 14 of the 19 galaxies are isolated from any group of galaxies and are located in the field. Thus far, a total of 27 dark-matter-deficient galaxies have already been identified. However, it is still an open question how dark-matter-deficient galaxies form in the dark-matter-dominated universe.
There are several theoretical studies on the formation of dark-matter-deficient galaxies. Ogiya (2018) investigated the formation of dark-matter-deficient galaxies by tidal interaction between a host galaxy and a satellite galaxy using \(N\)-body simulations. Assuming that the dark halo of the satellite galaxy has a cored density profile with a tightly bound and extremely radial orbit, the simulation results show that the effect of tidal stripping successfully reproduces the observed properties of NGC1052-DF2-like galaxies. Similarly, Yang et al. (2020) demonstrate the formation of dark-matter-deficient galaxies driven by tidal interaction within the framework of self-interacting dark matter (SIDM). So far, tidal interaction models have brought some success as a way to explain the formation of dark-matter-deficient galaxies (see also Nusser, 2020). However, Muller et al. (2019) concluded from observations using the Jeanne Rich telescope that NGC1052-DF2 and NGC1052-DF4 show no evidence of tidal interaction. Montes et al. (2021) report that the stellar distribution of NGC1052-DF2 indicates no signatures of tidal distortion, but NGC1052-DF4 appears to have experienced tidal disruption (Montes et al., 2020). Some dark-matter-deficient galaxies inhabit the field and are not bound to a more massive host galaxy. In other words, they seem to be free from tidal stripping.
Recently, it has been pointed out that high-velocity collisions between gas-rich dwarf galaxies are also capable of forming dark-matter-deficient galaxies (Silk, 2019; Shin et al., 2020; Lee et al., 2021; Otaki and Mori, 2022, 2023). Silk (2019) advocates that they are scaled-down versions of Bullet cluster-like events and involve high-velocity collisions of gas-rich dwarf galaxies in high-density environments. Shin et al. (2020) showed that dark-matter-deficient galaxies formed when two dwarf galaxies collide with each other at a relative velocity of \(300\,\mathrm{km\,s^{-1}}\) using self-gravitating hydrodynamics simulations. They investigated the formation of dark-matter-deficient galaxies by running several simulations with various collision parameters, disk angles, mass ratios and gas fractions, as well as relative velocities of the two dwarf galaxies. Furthermore, they utilized the cosmological simulations IllustrisTNG (Naiman et al., 2018; Marinacci et al., 2018; Pillepich et al., 2018; Springel et al., 2018; Nelson et al., 2018, 2019; Pillepich et al., 2019) to search for the occurrence of galaxy collisions that lead to the formation of dark-matter-deficient galaxies. The complexity of the physical phenomena involved in their cosmological simulations makes it difficult to understand the physical conditions for the formation of dark-matter-deficient galaxies, and they conclude that no valid collision events were identified due to the numerical resolution. From the observational point of view, van Dokkum et al. (2022) have reported that the spatial distribution of NGC1052-DF2 and NGC1052-DF4 and their surrounding dwarf galaxies shows traces of collisions between dwarf galaxies (see also van Dokkum et al., 2022; Buzzo et al., 2023).
On the other hand, Otaki et al. (2023) recently analysed the data set of the latest high-resolution cosmological simulation Phi-4096 presented by Ishiyama et al. (2021). They found that sub-galactic dark matter haloes frequently collide with each other at various relative velocities and concluded that galaxy collisions should considerably contribute to the formation of dark-matter-deficient galaxies. In this context, it is essential to evaluate the frequency of collisions between DMSHs associated with more massive galaxies, independent of the method and resolution of numerical simulations, and analytical estimates such as those described in this paper will yield important insights. From observational viewpoints, several recent studies of nearby galaxies have reported the discovery of faint structures indicating interactions between dwarf galaxies (Stierwalt et al., 2015). For example, Paudel et al. (2018) classified 177 dwarf galaxies of \(<10^{10}\,\mathrm{M}_{\odot}\) with features of dwarf-dwarf interactions: interacting pairs, shells and tidal tails. Poulain et al. (2022) reported 12 dwarf merger candidates detected in the H\({}_{\mathrm{I}}\) line. In Chhatkuli et al. (2023), an analysis of recent observational data shows that the formation of blue compact galaxies is linked to galaxy collisions between dwarf galaxies with a burst of star formation. These observational facts allow us to infer that dwarf galaxy collisions are not rare but occur relatively frequently in the nearby universe. Furthermore, taking into account the missing satellite problem mentioned before, it is easy to imagine
that dark matter sub-halo collisions are even more frequent than dwarf galaxy collisions in the dark side of the universe.
So far, theoretical studies have always assumed high-speed galaxy collisions. However, we consider that low-speed collisions, in which two sub-galactic haloes merge into one halo and form a dark-matter-dominated galaxy, are also important. It would be of great interest to understand which physical processes play an essential role in the bifurcation between dark-matter-dominated and dark-matter-deficient galaxies at their formation epoch through galaxy collision simulations under idealised conditions.
These situations motivate us to explore the head-on collision between DMSHs, investigating the physical conditions for forming dark matter-deficient-galaxies and the relationship between formation probability and collision frequency.
In this study, we focus on the head-on collision process between DMSHs and investigate the possibility of the formation of dark-matter-deficient galaxies by our original simulation code assuming a flat \(\Lambda\)CDM cosmology with \(\Omega_{\rm m}\)=\(0.315,\Omega_{\rm b}=0.048,\,h=0.674\) in the Planck Collaboration (2020). This paper is organised as follows. Section 2 assesses the frequency of mutual collisions in subhaloes associated with a more massive dark matter halo having a Navarro-Frenk-White (NFW) density profile Navarro et al. (1996, 1997). Section 3 analyses the physical conditions for bifurcation channels between forming dark-matter-dominated galaxies and dark-matter-deficient galaxies by head-on collisions of such DMSHs using a simple one-dimensional hydrodynamic model. In Section 4, we present our numerical method for DMSH collisions incorporating star formation and supernova feedback in a hybrid three-dimensional hydrodynamic and \(N\)-body model. Subsequently, Section 5 describes the results of the simulations. In Section 6, we summarise the conclusion of this paper and devote a discussion of the limitations of our model and a comparison with previous studies.
## 2 Collision frequency between dark matter subhaloes
We estimate the number of collisions between DMSHs moving within the virial radius of the host halo under dynamical equilibrium.
Here, we assume that the velocity of a DMSH follows the velocity distribution function of the host halo.
The energy of a DMSH moving with velocity \(v\) in the gravitational potential \(\Phi_{\rm NFW}\) of the NFW profile generated by a host halo with mass \(M_{\rm host}\) is
\[E=\frac{1}{2}v^{2}+\Phi_{\rm NFW}(r). \tag{1}\]
Since we consider a DMSH bound to the host galaxy, \(E\) is negative at all times. An NFW potential is given by
\[\Phi_{\rm NFW}(r)=-\frac{GM_{\rm host}}{R_{200,\,\rm host}} \frac{c_{\rm host}}{\ln{(1+c_{\rm host})}-c_{\rm host}/(1+c_{\rm host})}\] \[\times\frac{\ln{(1+x)}}{x}, \tag{2}\]
where
\[x=\frac{r}{r_{\rm s,\,host}},\quad R_{200,\,\rm host}=\left(\frac{3M_{\rm host }}{4\pi\rho_{200}}\right)^{1/3}, \tag{3}\]
\(G\) is the gravitational constant,
\(c_{\rm host}=R_{200,\,\rm host}/r_{\rm s,\,\rm host}\) is the concentration, \(r_{\rm s,\,host}\) is the scale radius of the host halo, and \(\rho_{200}\) is 200 times the critical density of the universe. Studies have demonstrated that the concentration \(c\) correlates tightly with the mass of the host halo \(M_{\rm host}\), the so-called \(c\)-\(M\) relation (e.g., Bullock et al., 2001; Prada et al., 2012; Ishiyama and Ando, 2020; Ishiyama et al., 2021). In the following, the energy and potential are expressed using the positive values \(\mathcal{E}=-E\) and \(\Psi=-\Phi_{\rm NFW}\), respectively.
The distribution function given a spherical density profile can be calculated by Eddington's formula (Binney and Tremaine, 2008),
\[f(\mathcal{E})=\frac{1}{\sqrt{8}\pi^{2}}\left[\frac{1}{\sqrt{\mathcal{E}}}\left(\frac{\mathrm{d}\nu}{\mathrm{d}\Psi}\right)_{\Psi=0}+\int_{0}^{\mathcal{E}}\frac{\mathrm{d}^{2}\nu}{\mathrm{d}\Psi^{2}}\frac{\mathrm{d}\Psi}{\sqrt{\mathcal{E}-\Psi}}\right], \tag{4}\]
where \(\nu\) is the probability density distribution of an NFW profile,
\[\nu(r)=\frac{\rho_{\rm NFW}(r)}{M_{\rm host}}=\frac{g(c_{\rm host})}{4\pi R _{200,\,\rm host}^{3}}\frac{1}{x(1+x)^{2}}, \tag{5}\]
\[g(c)=\frac{c^{3}}{\ln(1+c)-c/(1+c)}. \tag{6}\]
To simplify the motion of the DMSHs, we assume that their distribution follows the distribution function \(f(\mathcal{E})\) calculated from the host halo. It should be noted that this implicitly assumes that the host halo and the system of DMSHs are in a state of perfect relaxation, which is not a trivial assumption. The velocity distribution function of DMSHs at position \(r\) from the centre of a host halo becomes
\[P_{r}(\mathbf{v})=\frac{f(\mathcal{E})}{\nu(r)}, \tag{7}\]
and for a system with an isotropic velocity,
\[P_{r}(v)=4\pi v^{2}\frac{f(\mathcal{E})}{\nu(r)}. \tag{8}\]
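For concreteness, the following is a minimal numerical sketch (not the authors' code) of how the isotropic distribution function of equation (4) can be tabulated for an NFW host halo; the critical density value for \(h=0.674\), the radial grid, and the interpolation choices are assumptions made purely for illustration.

```python
# Minimal sketch: tabulating the isotropic distribution function f(E) of
# Eq. (4) for an NFW host halo.  Units: kpc, km/s, Msun; RHO_CRIT assumes h = 0.674.
import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline
from scipy.integrate import quad

G = 4.30091e-6                      # gravitational constant [kpc (km/s)^2 / Msun]
RHO_CRIT = 2.775e2 * 0.674**2       # critical density [Msun / kpc^3] (assumed value)

def nfw_halo(M_host, c):
    """Return R200 and the helpers Psi(r) = -Phi_NFW (Eq. 2) and nu(r) (Eq. 5)."""
    R200 = (3.0 * M_host / (4.0 * np.pi * 200.0 * RHO_CRIT))**(1.0 / 3.0)
    r_s = R200 / c
    mu_c = np.log(1.0 + c) - c / (1.0 + c)
    def Psi(r):
        x = r / r_s
        return G * M_host / R200 * c / mu_c * np.log(1.0 + x) / x
    def nu(r):
        x = r / r_s
        return (c**3 / mu_c) / (4.0 * np.pi * R200**3) / (x * (1.0 + x)**2)
    return R200, Psi, nu

def eddington_f(M_host, c, n_grid=400):
    """Tabulate f(E) on a grid of relative energies E = Psi - v^2/2 > 0 (Eq. 4)."""
    R200, Psi, nu = nfw_halo(M_host, c)
    r = np.logspace(-4, 3, n_grid) * R200            # radial grid (assumed range)
    psi, dens = Psi(r), nu(r)
    order = np.argsort(psi)                          # Psi decreases with r
    nu_of_psi = InterpolatedUnivariateSpline(psi[order], dens[order], k=3)
    d2nu = nu_of_psi.derivative(2)
    dnu_at_0 = nu_of_psi.derivative(1)(psi[order][0])    # (dnu/dPsi) as Psi -> 0
    E_grid = np.linspace(0.01, 0.99, 200) * psi.max()
    f = []
    for E in E_grid:
        # substitute p = E - t^2 to remove the 1/sqrt(E - p) singularity
        integral, _ = quad(lambda t: 2.0 * d2nu(E - t * t), 0.0, np.sqrt(E))
        f.append((dnu_at_0 / np.sqrt(E) + integral) / (np.sqrt(8.0) * np.pi**2))
    return E_grid, np.array(f)

E, f = eddington_f(1e12, 7.5)       # host of 1e12 Msun with c = 7.5, as in Fig. 1
```

The resulting table of \(f(\mathcal{E})\) can then be interpolated and plugged into equations (7) and (8).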
Next, we solve the two-body problem for the distribution function (Ferrer and Hunter, 2013). The velocities of the two DMSHs are \(\mathbf{v}_{1}\), and \(\mathbf{v}_{2}\). Here, the DMSH is assumed to move following the velocity distribution function of the host halo in the NFW density distribution. The velocity of the centre of mass and relative velocity are denoted as \(\mathbf{v}_{\rm cm}=(\mathbf{v}_{1}+\mathbf{v}_{2})/2\) and \(\mathbf{v}_{\rm rel}=\mathbf{v}_{1}-\mathbf{v}_{2}\), respectively. The velocity distribution function of the two DMSHs can be expressed in terms of the distribution function of the relative velocity and the velocity of the centre of mass:
\[P_{r}(\mathbf{v}_{1}) P_{r}(\mathbf{v}_{2})\mathrm{d}^{3}\mathbf{v}_{1}\mathrm{d}^{3}\mathbf{v}_{2}\] \[= P_{r}(\mathbf{v}_{\rm cm}+\mathbf{v}_{\rm rel}/2)P_{r}(\mathbf{v}_{\rm cm}- \mathbf{v}_{\rm rel}/2)\mathrm{d}^{3}\mathbf{v}_{\rm cm}\mathrm{d}^{3}\mathbf{v}_{\rm rel}. \tag{9}\]
The probability distribution of relative velocity \(P_{r,\rm rel}\) at position \(r\) is integrated only over the velocity of the centre of mass:
\[P_{r,\rm rel}(\mathbf{v}_{\rm rel})=\int P_{r}(\mathbf{v}_{\rm cm}+\mathbf{v}_{\rm rel}/2)P _{r}(\mathbf{v}_{\rm cm}-\mathbf{v}_{\rm rel}/2)\mathrm{d}^{3}\mathbf{v}_{\rm cm}, \tag{10}\]
and for the case of isotropic velocity,
\[P_{r,\rm rel}(v_{\rm rel})=\frac{8\pi^{2}v_{\rm rel}^{2}}{\nu(r)^{2}} \int_{0}^{\infty}{\rm dv}_{\rm cm}v_{\rm cm}^{2}\int_{-1}^{1}{\rm d}z\,f({\cal E }_{1})f({\cal E}_{2}), \tag{11}\] \[{\cal E}_{1}=-\frac{1}{2}(v_{\rm cm}^{2}+v_{\rm rel}^{2}/4+v_{\rm cm }v_{\rm rel}z)+\Psi(r)\] (12) \[{\cal E}_{2}=-\frac{1}{2}(v_{\rm cm}^{2}+v_{\rm rel}^{2}/4-v_{\rm cm }v_{\rm rel}z)+\Psi(r) \tag{13}\]
where \(v_{\rm cm}=|{\bf v}_{\rm cm}|\), \(v_{\rm rel}=|{\bf v}_{\rm rel}|\), and \({\bf v}_{\rm cm}\cdot{\bf v}_{\rm rel}=v_{\rm cm}v_{\rm rel}z\). Fig. 1 shows the probability distribution of relative velocity \(P_{r,\rm rel}\) corresponding to each position \(r\) from \(0.001\,R_{200\,\rm host}\) to \(2\,R_{200,\rm host}\) for \(c=7.5\), which corresponds to the host mass \(M_{\rm host}=10^{12}\,{\rm M}_{\odot}\) for the \(c\)-\(M\) relation (Prada et al., 2012). The horizontal axis corresponds to the relative velocity between two DMSHs, normalised by the circular velocity \(V_{200,\rm host}=\sqrt{GM_{200,\rm host}/R_{200,\rm host}}\).
Here, we define the expected value of relative velocities as
\[E[v_{r,\rm rel}]=\int{\rm d}v_{r,\rm rel}, \tag{14}\] \[{\rm d}v_{r,\rm rel}\equiv v_{\rm rel}P_{r,\rm rel}(v_{\rm rel}) {\rm d}v_{\rm rel}. \tag{15}\]
The element \({\rm d}v_{r,\rm rel}\) is the expected relative velocity between two DMSHs within \((v_{\rm rel},v_{\rm rel}+{\rm d}v_{\rm rel})\) at distance \(r\) from the centre of the host halo.
The next step is to calculate the collision frequency between two DMSHs of the same masses moving within the radius of the host halo. We define a parameter \(\eta\) as the ratio of the virial radius of the host to the virial radius of a DMSH,
\[\eta=\frac{R_{200,\rm sub}}{R_{200,\rm host}}=\left(\frac{M_{\rm sub}}{M_{\rm host }}\right)^{1/3}, \tag{16}\]
where \(M_{\rm sub}\) is the mass of a colliding DMSH. Consider two DMSHs with cross section \(\sigma=\pi r_{\rm s,sub}^{2}=\pi\eta^{2}R_{200,\rm host}^{2}/c_{\rm sub}^{2}\) moving with relative velocity \({\rm d}v_{r,\rm rel}\) inside a volume element \({\rm d}V={\rm d}L^{3}\); the probability of one collision per area \({\rm d}L^{2}\) is \(\sigma/{\rm d}L^{2}\). In the rest frame of one DMSH, the other DMSH makes \({\rm d}v_{r,\rm rel}{\rm d}t/(2{\rm d}L)\) round trips of distance \({\rm d}L\) during a time \({\rm d}t\); thus, the number of collisions of the two DMSHs in a volume \({\rm d}V\) can be expressed as
\[\frac{\sigma}{{\rm d}L^{2}}\cdot\frac{{\rm d}v_{r,\rm rel}{\rm d}t}{2{\rm d}L }=\frac{\sigma{\rm d}v_{r,\rm rel}{\rm d}t}{2{\rm d}V}, \tag{17}\]
where \({\rm d}V=4\pi r^{2}{\rm d}r\) for a spherical host halo. We assume that the number density distribution of DMSHs in the host halo is described by the NFW function,
\[n(r)=N\nu(r)=\frac{Ng(c_{\rm host})}{4\pi R_{200,\rm host}^{3}} \frac{1}{x(1+x)^{2}}, \tag{18}\]
where \(N\) is the total number of DMSHs in the host halo. From the above, the number of collisions in a volume \({\rm d}V\) during a time \({\rm d}t\) is expressed as
\[{\rm d}k=\frac{\sigma{\rm d}v_{r,\rm rel}{\rm d}t}{2{\rm d}V}\cdot(n{\rm d}V)^{2}=\frac{N^{2}\eta^{2}g(c_{\rm host})^{2}}{8R_{200,\rm host}^{2}c_{\rm sub}^{2}c_{\rm host}^{2}}\frac{v_{\rm rel}P_{r,\rm rel}}{(1+x)^{4}}{\rm d}v_{\rm rel}\,{\rm d}t\,{\rm d}r. \tag{19}\]
The collision frequency, which depends on the distance from the centre of the host halo and the relative velocity of the DMSHs, is written by
\[\frac{c_{\rm sub}^{2}}{N^{2}\eta^{2}}\frac{{\rm d}k}{{\rm d}t\,{\rm d}r\,{ \rm d}v_{\rm rel}}=\frac{g(c_{\rm host})^{2}}{8R_{200,\rm host}^{2}c_{\rm host }^{2}}\frac{v_{\rm rel}P_{r,\rm rel}}{(1+x)^{4}}, \tag{20}\]
where it is divided by free parameters of the colliding DMSH.
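A short sketch of how equations (11) and (20) can be evaluated numerically is given below; `f_of_E`, `Psi` and `nu` are assumed callables accepting numpy arrays (for example the Eddington tabulation and NFW helpers of the previous sketch), and the quadrature here is a simple uniform-grid sum rather than the scheme actually used to produce Figs. 1 and 2.

```python
# Minimal sketch: relative-velocity distribution of Eq. (11) and the
# collision-frequency kernel of Eq. (20) on a grid of (r, v_rel).
import numpy as np

def P_rel(r, v_rel, f_of_E, Psi, nu, n_cm=200, n_z=64):
    """Relative-velocity distribution P_{r,rel}(v_rel) at radius r (Eq. 11)."""
    psi = Psi(r)
    v_cm = np.linspace(0.0, np.sqrt(2.0 * psi), n_cm)           # bound orbits only
    z = np.linspace(-1.0, 1.0, n_z)
    V, Z = np.meshgrid(v_cm, z, indexing="ij")
    E1 = psi - 0.5 * (V**2 + v_rel**2 / 4.0 + V * v_rel * Z)    # Eq. (12)
    E2 = psi - 0.5 * (V**2 + v_rel**2 / 4.0 - V * v_rel * Z)    # Eq. (13)
    vals = f_of_E(np.clip(E1, 0.0, None)) * f_of_E(np.clip(E2, 0.0, None))
    integrand = np.where((E1 > 0.0) & (E2 > 0.0), vals, 0.0)
    inner = integrand.sum(axis=1) * (z[1] - z[0])               # integral over z
    outer = (v_cm**2 * inner).sum() * (v_cm[1] - v_cm[0])       # integral over v_cm
    return 8.0 * np.pi**2 * v_rel**2 / nu(r)**2 * outer

def dk_kernel(r, v_rel, f_of_E, Psi, nu, R200, c_host):
    """LHS of Eq. (20): collision frequency per dt, dr, dv_rel in scaled units."""
    x = r * c_host / R200
    g_c = c_host**3 / (np.log(1.0 + c_host) - c_host / (1.0 + c_host))
    return (g_c**2 / (8.0 * R200**2 * c_host**2)
            * v_rel * P_rel(r, v_rel, f_of_E, Psi, nu) / (1.0 + x)**4)
```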
The calculation result of equation (20) is shown in Fig. 2. The four panels correspond to the property of the host halo, (a) \(c=14.8\) for \(M_{\rm host}=10^{8}\,{\rm M}_{\odot}\), (b) \(c=10.5\) for \(M_{\rm host}=10^{10}\,{\rm M}_{\odot}\), (c) \(c=7.5\) for \(M_{\rm host}=10^{12}\,{\rm M}_{\odot}\), and (d) \(c=5.33\) for \(M_{\rm host}=10^{14}\,{\rm M}_{\odot}\), respectively, using the \(c\)-\(M\) relation (Prada et al., 2012). Table 1 lists also the properties of host haloes.
The colours in the diagram correspond to the magnitude of the collision frequency, \(c_{\rm sub}^{2}{\rm d}k/(N^{2}\eta^{2}{\rm d}t{\rm d}r{\rm d}v_{\rm rel})\). The horizontal axis represents the distance from the host centre, normalised by \(R_{200}\). The vertical axis corresponds to the relative velocity between DMSHs, normalised by the circular velocity \(V_{200}\). The upper sub-panel shows the dependence of collision frequency on the radius, integrated over the relative velocity, \(c_{\rm sub}^{2}{\rm d}k/(N^{2}\eta^{2}{\rm d}t{\rm d}r)\). The right sub-panel shows the dependence of collision frequency on the relative velocity between two DMSHs, integrated over the radius, \(c_{\rm sub}^{2}{\rm d}k/(N^{2}\eta^{2}{\rm d}t{\rm d}v_{\rm rel})\). The peak position and the peak relative velocity, at which the collision probabilities \(c_{\rm sub}^{2}{\rm d}k/(N^{2}\eta^{2}{\rm d}t{\rm d}r)\) and \(c_{\rm sub}^{2}{\rm d}k/(N^{2}\eta^{2}{\rm d}t{\rm d}v_{\rm rel})\) reach their maxima, respectively, depend on the concentration \(c\) and are fitted numerically as
\[\frac{r_{\rm peak}}{R_{200,\rm host}} =(9.58\pm 0.01)\times 10^{-3}\left(\frac{c}{7.5}\right)^{-1.00\pm 0.00 }, \tag{21}\] \[\frac{v_{\rm rel,\rm peak}}{V_{200,\rm host}} =(0.384\pm 0.011)\left(\frac{c}{7.5}\right)^{0.766\pm 0.015}\] \[\qquad+(1.26\pm 0.01), \tag{22}\]
for \(2\leq c\leq 30\), respectively.
In addition, we calculate the average relative velocity within the host halo as
\[\langle v_{\rm rel}\rangle=\frac{1}{A}\int v_{\rm rel}\left(\frac{c_{\rm sub}^ {2}}{N^{2}\eta^{2}}\frac{{\rm d}k}{{\rm d}v_{\rm rel}}\right){\rm d}v_{\rm rel}, \tag{23}\]
where \(A\) is a normalisation constant given by
\[A=\frac{c_{\rm sub}^{2}}{N^{2}\eta^{2}}\int\left(\frac{{\rm d}k}{{\rm d}t\,{ \rm d}r\,{\rm d}v_{\rm rel}}\right){\rm d}t\,{\rm d}r\,{\rm d}v_{\rm rel}. \tag{24}\]
It should be noted that the velocity \(\langle v_{\rm rel}\rangle\) is different from the velocity \(E[v_{r,\rm rel}]\) given by equation (14), because \(\langle v_{\rm rel}\rangle\)
Figure 1: Probability distribution of the relative velocity for \(c=7.5\) calculated by equation (11).
is integrated over the whole region of a host halo. As a result, the average relative velocity of the colliding DMSHs is derived by
\[\frac{\langle v_{\rm rel}\rangle}{V_{\rm 200,\,host}} = (0.474\pm 0.018)\left(\frac{c}{7.5}\right)^{0.0694\pm 0.0184} \tag{25}\] \[+\,(1.34\pm 0.02),\]
for \(2\leq c\leq 30\).
Fig. 3 shows the cumulative collision frequency within \(r\) for each concentration of the host halo, which is defined as
\[f_{\rm col}(<r)=\frac{N^{2}\eta^{2}}{c_{\rm sub}^{2}}\int_{0}^{r}\left(\frac{ \mathrm{d}k}{\mathrm{d}t\,\mathrm{d}r^{\prime}}\right)\mathrm{d}r^{\prime}. \tag{26}\]
Fig. 3 indicates most of the collisions occur within \(\sim 0.1\,R_{\rm 200,\,host}\). DMSHs might be tidally disrupted by the gravity of the host halo in the inner region of the peak position \(r_{\rm peak}\sim 0.01\,R_{\rm 200,\,host}\). Here, we define the collision frequency in the outer region and the total collision frequency
Figure 2: Distributions of collision frequency of equation (20) between DMSHs within a host galaxy for (a) \(c=14.8\), (b) \(c=10.5\), (c) \(c=7.50\), and (d) \(c=5.33\), respectively. Top sub-panel in each panel: dependence of collision frequency on the radius of a host halo. Right sub-panel in each panel: dependence of collision frequency on the relative velocity between two DMSHs.
within the host halo as
\[f_{\rm col,\,0.01}\equiv\frac{N^{2}\eta^{2}}{c_{\rm sub}^{2}}\int_{0.01\,R_{200,\,\rm host}}^{R_{200,\,\rm host}}\left(\frac{\mathrm{d}k}{\mathrm{d}t\,\mathrm{d}r^{\prime}}\right)\mathrm{d}r^{\prime},\qquad f_{\rm col}\equiv\frac{N^{2}\eta^{2}}{c_{\rm sub}^{2}}\int_{0}^{R_{200,\,\rm host}}\left(\frac{\mathrm{d}k}{\mathrm{d}t\,\mathrm{d}r^{\prime}}\right)\mathrm{d}r^{\prime}. \tag{27}\]
the density of the shocked clouds for strong adiabatic shocks is given by
\[\rho_{1}=\frac{\gamma+1}{\gamma-1}\rho_{0}, \tag{37}\]
where \(\gamma\) is the specific heat ratio. We have used \(\gamma=5/3\). We assume that the kinetic energy of the system is roughly converted into thermal energy, i.e. \(u_{1}=(v_{\rm rel}/2)^{2}/2\) and \(v_{1}=0\), by energy conservation. Using the Rankine-Hugoniot condition for mass conservation, we derive the shock velocity propagating within each cloud, with initial position at \(\pm r_{\rm s}\), as
\[v_{\rm shock}=\pm\frac{\gamma-1}{2}\frac{v_{\rm rel}}{2}, \tag{38}\]
respectively.
When the shock waves reach their cloud surface, most of the gas is ejected from the system. The shock-crossing time is defined by
\[t_{\rm cross}=\frac{2r_{\rm s}}{v_{\rm rel}/2+v_{\rm shock}}. \tag{39}\]
We compare it with other timescales, such as the cooling time or the free-fall time to estimate the critical relative velocity of shock-breakout. The cooling time in the shocked gaseous medium is
\[t_{\rm cool}=\frac{k_{\rm B}m_{\rm p}\mu_{1}T_{1}}{(\gamma-1)\rho_{1}\Lambda(T _{1},Z)},\quad T_{1}=\frac{(\gamma-1)\mu_{1}m_{\rm p}u_{1}}{k_{\rm B}}, \tag{40}\]
where \(k_{\rm B}\) is the Boltzmann constant, \(m_{\rm p}\) is the proton mass, \(T\) is the temperature, \(\mu\) is the mean molecular weight, and \(\Lambda(T,Z)\) is the cooling function as a function of the temperature and metallicity \(Z\).
Here, we assume \(\Lambda(T,0.1\,{\rm Z}_{\odot})\) in collisional ionization equilibrium (CIE) given by MAPPINGS V (Sutherland & Dopita, 2017; Sutherland et al., 2018).
It is obvious that effective radiative cooling, \(t_{\rm cool}<t_{\rm cross}\), promotes and enhances the formation of galaxies, while the shock-breakout, \(t_{\rm cross}<t_{\rm cool}\), prohibits or suppresses the formation of galaxies. The critical relative velocity of \(t_{\rm cool}=t_{\rm cross}\) is fitted by
\[v_{\rm crit}\simeq 691\left(\frac{M_{\rm sub}}{10^{9}\,{\rm M}_{\odot}} \right)^{0.06}\,{\rm km\,s^{-1}}. \tag{41}\]
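The bisection below is a minimal sketch of how the condition \(t_{\rm cool}=t_{\rm cross}\) can be solved for the critical velocity; the cooling function is a crude analytic placeholder for the MAPPINGS V table, and \(r_{\rm s}\) and \(\rho_{0}\) are illustrative values, so the returned number is indicative only and will not reproduce equation (41) exactly.

```python
# Minimal sketch: solving t_cool = t_cross (Eqs. 37-40) for the critical
# relative velocity by bisection (cgs units).
import numpy as np
from scipy.optimize import brentq

k_B, m_p = 1.380649e-16, 1.6726e-24
gamma, mu = 5.0 / 3.0, 0.6

def lam_cool(T):
    """Placeholder cooling function [erg cm^3 s^-1] standing in for MAPPINGS V."""
    return 1.0e-22 * (T / 1.0e5)**(-0.7) * np.exp(-1.0e4 / T)

def t_cross(v_rel, r_s):
    v_shock = 0.5 * (gamma - 1.0) * v_rel / 2.0                 # Eq. (38)
    return 2.0 * r_s / (v_rel / 2.0 + v_shock)                  # Eq. (39)

def t_cool(v_rel, rho0):
    u1 = (v_rel / 2.0)**2 / 2.0                                 # post-shock specific energy
    T1 = (gamma - 1.0) * mu * m_p * u1 / k_B                    # Eq. (40)
    rho1 = (gamma + 1.0) / (gamma - 1.0) * rho0                 # Eq. (37)
    return k_B * m_p * mu * T1 / ((gamma - 1.0) * rho1 * lam_cool(T1))

r_s, rho0 = 3.086e21, 1.0e-25       # assumed: r_s ~ 1 kpc, rho0 ~ 1e-25 g cm^-3
v_crit = brentq(lambda v: t_cool(v, rho0) - t_cross(v, r_s), 1.0e7, 3.0e8)
print(f"v_crit ~ {v_crit / 1.0e5:.0f} km/s")
```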
Fig. 4 shows the results of the analytical models. The grey region indicates the velocity condition for the formation of dark-matter-dominated galaxies, and the blue region that for the formation of dark-matter-deficient galaxies. In the red region, no galaxy forms because shock-breakout occurs. From these conditions, we derive the critical relative velocities for the bifurcation between the formation of dark-matter-dominated galaxies and dark-matter-deficient galaxies.
## 4 Numerical model
A simple analytical model was used in the previous section to provide physical insight into DMSH collisions and galaxy formation. However, this analytical model contains several assumptions that need to be validated. We, therefore, perform a realistic \(N\)-body/hydrodynamic simulation of DMSH collisions, incorporating star formation and supernova feedback, to reveal the formation processes of dark-matter-deficient galaxies and dark-matter-dominated galaxies.
### Simulation set-up
The simulation adopts the hierarchical tree algorithm for self-gravity and the three-dimensional smoothed particle hydrodynamics (SPH: Lucy, 1977; Gingold & Monaghan, 1977) method for gas dynamics.
The acceleration of a particle at position \(\mathbf{r}_{i}\) in a gravitational field generated by \(N\) particles (dark matter, star and gas particles) is obtained as
\[\frac{{\rm d}\mathbf{v}_{i}}{{\rm d}t}=-\sum_{j=1}^{N}\frac{Gm_{j}(\mathbf{r}_{i}-\mathbf{ r}_{j})}{(r_{ij}^{2}+\epsilon^{2})^{3/2}}, \tag{42}\]
where \(G\) is the gravitational constant, \(\mathbf{v}_{j}\) and \(m_{j}\) are the velocity vector and the mass of a particle at \(\mathbf{r}_{j}\), respectively,
Figure 4: Results of analytical models assuming 0.1 solar metallicity. Grey region: velocity condition satisfied with the formation of the dark-matter-dominated galaxies. Blue region: velocity condition satisfied with the formation of dark-matter-deficient galaxies. Red region: no galaxy form to occur shock-breakout. The right axis indicates the temperature divided by the mean molecular weight \(T/\mu\), which corresponds to the kinetic energy of the relative velocity \(v_{\rm rel}\).
and \(r_{ij}=|\mathbf{r}_{i}-\mathbf{r}_{j}|\). The softening length \(\epsilon\) is a free parameter introduced to avoid numerical divergence.
In the SPH formulation, the gas density of one particle at a position \(\mathbf{r}_{i}\) is given by
\[\rho_{i}=\sum_{j=1}^{N_{\rm neigh}}m_{j}W(r_{ij},h_{i}), \tag{43}\]
where \(W(r,h)\) is the smoothing kernel, \(h\) is the smoothing length, and \(N_{\rm neigh}=200\) is the number of neighbour particles. We adopt the Wendland (1995) \(C^{4}\) function,
\[W(r,h)=\frac{495}{32\pi h^{3}}\begin{cases}(1-q)^{6}(1+6q+\frac{35}{3}q^{2}), &\quad q\leq 1,\\ 0,&\quad q>1,\end{cases} \tag{44}\]
where \(q=r/h\), to avoid the clumping instability (Dehnen and Aly, 2012; Zhu et al., 2015). We basically followed the formulation of SPH introduced by Springel and Hernquist (2002). The smoothing length \(h_{i}\) of each particle is determined by
\[\frac{4\pi}{3}h_{i}^{3}\rho_{i}=\overline{m}N_{\rm neigh}, \tag{45}\]
where \(\overline{m}\) is the average mass of gas particles. These equations (43) and (45) need to be solved implicitly for \(\rho_{i}\) and \(h_{i}\). However, the minimum value of smoothing length \(h\) is set to gravitational softening \(\epsilon\) in order to match the spatial resolution.
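As an illustration, the following sketch iterates equations (43) and (45) to a self-consistent \((\rho_i, h_i)\) with the Wendland \(C^{4}\) kernel; it uses a brute-force neighbour search and toy particle data, not the tree-based machinery of the production code.

```python
# Minimal sketch: the implicit coupling of Eqs. (43) and (45) solved by
# fixed-point iteration with the Wendland C4 kernel of Eq. (44).
import numpy as np

def wendland_c4(r, h):
    q = r / h
    w = np.where(q <= 1.0,
                 (1.0 - q)**6 * (1.0 + 6.0 * q + 35.0 / 3.0 * q**2), 0.0)
    return 495.0 / (32.0 * np.pi * h**3) * w

def density_and_h(pos, mass, n_neigh=200, eps=0.1, n_iter=20):
    """Iterate rho_i and h_i until Eq. (45) is (approximately) satisfied."""
    n = len(mass)
    m_bar = mass.mean()
    h = np.full(n, 1.0)                                 # initial guess [kpc]
    rho = np.zeros(n)
    for _ in range(n_iter):
        for i in range(n):
            r = np.linalg.norm(pos - pos[i], axis=1)
            rho[i] = np.sum(mass * wendland_c4(r, h[i]))          # Eq. (43)
        # update h from Eq. (45): (4 pi / 3) h^3 rho = m_bar * N_neigh
        h_new = (3.0 * m_bar * n_neigh / (4.0 * np.pi * rho))**(1.0 / 3.0)
        h = np.maximum(h_new, eps)                      # floor at the softening length
    return rho, h

rng = np.random.default_rng(0)
pos = rng.normal(size=(500, 3))                         # toy particle positions [kpc]
mass = np.full(500, 1.0e4)                              # toy particle masses [Msun]
rho, h = density_and_h(pos, mass, n_neigh=50)
```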
The momentum equation for a gas particle is given by
\[\frac{\mathrm{d}\mathbf{v}_{i}}{\mathrm{d}t}=-\sum_{j}m_{j}\Big{[} f_{i}\frac{p_{i}}{\rho_{i}^{2}}\nabla_{i}W(r_{ij},h_{i})+f_{j}\frac{p_{ j}}{\rho_{j}^{2}}\nabla_{i}W(r_{ij},h_{j})\] \[+\Pi_{ij}\nabla_{i}\overline{W}_{ij}\Big{]} \tag{46}\]
where \(p_{i}\) is the pressure of the gas particle, \(f_{i}\) is defined by
\[f_{i}=\left(1+\frac{h_{i}}{3\rho_{i}}\frac{\partial\rho_{i}}{\partial h_{i}} \right)^{-1}, \tag{47}\]
and \(\overline{W}_{ij}\) is a symmetrised kernel,
\[\overline{W}_{ij}=\frac{1}{2}\left[W(r_{ij},h_{i})+W(r_{ij},h_{j})\right]. \tag{48}\]
Artificial viscosity \(\Pi_{ij}\) is necessary for the proper handling of shocks. For numerical stability, the entropic function \(A_{i}=p_{i}/\rho_{i}^{\gamma}\) rather than the specific internal energy \(u_{i}\) is used to calculate the thermodynamic evolution of gas particles. The entropy equation is given by
\[\frac{\mathrm{d}A_{i}}{\mathrm{d}t}=\frac{1}{2}\frac{\gamma-1}{\rho_{i}^{ \gamma-1}}\sum_{j}m_{j}\Pi_{ij}\mathbf{v}_{ij}\cdot\nabla_{i}\overline{W}_{ij}, \tag{49}\]
where \(A_{i}\) is conserved in adiabatic flow, but it is generated by artificial viscosity via shocks.
#### 4.1.1 Artificial viscosity
In this paper, we adopt Monaghan's (1997) artificial viscosity \(\widetilde{\Pi}_{ij}\) combined with Balsara's (1995) switch \(F_{i}\). The artificial viscosity is expressed as
\[\Pi_{ij}=\frac{F_{i}+F_{j}}{2}\widetilde{\Pi}_{ij}, \tag{50}\]
where
\[\widetilde{\Pi}_{ij}=\begin{cases}-\alpha\frac{v_{ij}^{\rm sig}w_{ij}}{\rho_{i }+\rho_{j}}&\quad\mathbf{v}_{ij}\cdot\mathbf{r}_{ij}<0,\\ 0&\quad\mathbf{v}_{ij}\cdot\mathbf{r}_{ij}\geq 0,\end{cases} \tag{51}\]
\[v_{ij}^{\rm sig}=c_{\rm s,i}+c_{\rm s,j}-3w_{ij}, \tag{52}\]
\[w_{ij}=\mathbf{v}_{ij}\cdot\mathbf{r}_{ij}/|\mathbf{r}_{ij}|, \tag{53}\]
and
\[F_{i}=\frac{|\nabla\cdot\mathbf{v}_{i}|}{|\nabla\cdot\mathbf{v}_{i}|+|\nabla\times\mathbf{v}_{i}|+0.0001c_{\rm s,i}/h_{i}}. \tag{54}\]
In order to handle strong shock waves generated by the galaxy collisions, we put the parameter \(\alpha=5\), which adjusts the strength of the artificial viscosity.
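A compact sketch of the pairwise viscosity of equations (50)-(54) is shown below; the inputs are plain per-particle quantities, with \(\alpha=5\) as adopted here.

```python
# Minimal sketch: Monaghan (1997) artificial viscosity with the Balsara (1995)
# switch for a single particle pair, Eqs. (50)-(54).
import numpy as np

ALPHA = 5.0

def balsara(div_v, curl_v, c_s, h):
    """Balsara switch F_i, Eq. (54)."""
    return abs(div_v) / (abs(div_v) + np.linalg.norm(curl_v) + 1.0e-4 * c_s / h)

def artificial_viscosity(r_ij, v_ij, rho_i, rho_j, c_i, c_j, F_i, F_j):
    """Pi_ij for a particle pair, Eqs. (50)-(53)."""
    w_ij = np.dot(v_ij, r_ij) / np.linalg.norm(r_ij)             # Eq. (53)
    if w_ij >= 0.0:                                              # receding pair
        return 0.0
    v_sig = c_i + c_j - 3.0 * w_ij                               # Eq. (52)
    pi_tilde = -ALPHA * v_sig * w_ij / (rho_i + rho_j)           # Eq. (51)
    return 0.5 * (F_i + F_j) * pi_tilde                          # Eq. (50)
```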
#### 4.1.2 Radiative cooling
As radiative cooling plays a crucial role in galaxy formation and evolution, a term for energy dissipation due to radiative cooling needs to be added to the entropy equation:
\[\left(\frac{\mathrm{d}A}{\mathrm{d}t}\right)_{\rm cool}=\frac{\gamma-1}{\rho^ {\gamma}}\left(\frac{\mathrm{d}u}{\mathrm{d}t}\right)_{\rm cool} \tag{55}\]
where
\[\left(\frac{\mathrm{d}u}{\mathrm{d}t}\right)_{\rm cool}=-\frac{n^{2}\Lambda(u,Z )}{\rho}=-\frac{\rho\Lambda(u,Z)}{\mu^{2}m_{\rm p}^{2}}. \tag{56}\]
Here, \(n\), \(\mu\) and \(m_{\rm p}\) are the number density of the gas, the mean molecular weight and the proton mass, respectively. \(\Lambda\) is the cooling function of the specific internal energy \(u\) and the metallicity \(Z\). In order to solve the cooling equation (56), we use the Exact Integration (EI) scheme (Townsend, 2009). Integrating from time \(t^{n}\) to \(t^{n+1}=t^{n}+\Delta t\), the equation (56) becomes
\[\int_{u_{i}^{n}}^{u_{i}^{n+1}}\frac{\mu(u)^{2}}{\Lambda(u,Z)}\mathrm{d}u=-\frac{\rho_{i}}{m_{\rm p}^{2}}\Delta t, \tag{57}\]
where we also take into account the time evolution of the mean molecular weight. Then, using the temporal evolution function which is defined by
\[Y(u)=\frac{\Lambda_{\rm ref}}{\mu_{\rm ref}^{2}u_{\rm ref}}\int_{u}^{u_{\rm ref }}\frac{\mu(u)^{2}}{\Lambda(u,Z)}\mathrm{d}u, \tag{58}\]
the cooling equation (56) becomes
\[u_{i}^{n+1}=Y^{-1}\left[Y(u_{i}^{n})+\frac{\Lambda_{\rm ref}}{\mu_{\rm ref}^{2} u_{\rm ref}}\frac{\rho_{i}}{m_{\rm p}^{2}}\Delta t\right]. \tag{59}\]
\(Y(u)\) can be obtained as a table by fitting the cooling function \(\Lambda\) with a piecewise power law. By using this integrated function \(Y(u)\), the time evolution equation can be solved taking into account the temperature dependence of the cooling rate and is not sensitive to the size of the time step.
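The sketch below illustrates this exact-integration update of equations (57)-(59): \(Y(u)\) is tabulated once and inverted at each step. The cooling function and mean molecular weight used here are flat placeholders, not the MAPPINGS V fits used in the simulations.

```python
# Minimal sketch of the exact-integration cooling update (Townsend 2009),
# Eqs. (57)-(59), in cgs units.
import numpy as np

m_p = 1.6726e-24                                         # proton mass [g]

def build_Y_table(lam, mu, u_min, u_max, u_ref, n=2048):
    """Tabulate Y(u) of Eq. (58) on a logarithmic grid of specific energies."""
    u = np.logspace(np.log10(u_min), np.log10(u_max), n)
    integrand = mu(u)**2 / lam(u)
    # cumulative trapezoidal integral from u_min up to each u
    cum = np.concatenate(([0.0],
                          np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(u))))
    cum_ref = np.interp(u_ref, u, cum)
    prefac = lam(u_ref) / (mu(u_ref)**2 * u_ref)         # Lambda_ref / (mu_ref^2 u_ref)
    Y = prefac * (cum_ref - cum)                         # = prefac * int_u^{u_ref} mu^2/Lambda du
    return u, Y, prefac

def cooling_update(u_old, rho, dt, table):
    """Advance one particle's specific internal energy by dt [s], Eq. (59)."""
    u_grid, Y, prefac = table
    y_new = np.interp(u_old, u_grid, Y) + prefac * rho / m_p**2 * dt
    # Y(u) decreases monotonically with u, so invert on the reversed arrays
    return np.interp(y_new, Y[::-1], u_grid[::-1])

# placeholder fits (assumed, for illustration only):
lam = lambda u: 1.0e-22 * np.ones_like(u)                # cooling function [erg cm^3 s^-1]
mu = lambda u: 0.6 * np.ones_like(u)                     # mean molecular weight
table = build_Y_table(lam, mu, u_min=1.0e11, u_max=1.0e17, u_ref=1.0e13)
u_new = cooling_update(u_old=1.0e14, rho=1.0e-24, dt=3.15e13, table=table)
```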
#### 4.1.3 Star formation
We implement algorithms for star formation and the resulting energetic "feedback" from young, massive stars. Galaxy formation simulations usually have insufficient resolution to resolve these processes directly and instead adopt sub-grid physics tuned to match large-scale
observational constraints. The star formation is realised by the conversion of the SPH particle into a stellar particle with an initial mass function (IMF).
In this study, we assume that a stellar particle has a Salpeter (1955) IMF. An SPH particle is converted into a stellar particle when the following conditions are satisfied (Katz, 1992):
\[n_{\rm H}>10\,{\rm cm}^{-3} \tag{60}\] \[r\leq p=1-\exp\left(-\frac{C_{*}\Delta t}{t_{\rm ff}}\right) \tag{61}\]
where \(n_{\rm H}\) is the number density of hydrogen, \(r\) is the random value between 0 and 1, \(C_{*}=1\) is the constant value corresponding to the star formation efficiency and \(t_{\rm ff}\) is the local free-fall time.
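In code, the conversion criterion of equations (60)-(61) amounts to a density threshold plus a stochastic draw, as in the minimal sketch below (illustrative values; cgs units).

```python
# Minimal sketch of the stochastic star-formation conversion, Eqs. (60)-(61).
import numpy as np

G_CGS, M_P = 6.674e-8, 1.6726e-24
C_STAR, X_H = 1.0, 0.76           # star formation efficiency, hydrogen mass fraction (assumed)

def forms_star(rho_gas, dt, rng):
    """Return True if this SPH particle is converted during time step dt [s]."""
    n_H = X_H * rho_gas / M_P
    if n_H <= 10.0:                                    # density threshold, Eq. (60)
        return False
    t_ff = np.sqrt(3.0 * np.pi / (32.0 * G_CGS * rho_gas))
    p = 1.0 - np.exp(-C_STAR * dt / t_ff)              # conversion probability, Eq. (61)
    return rng.random() < p

rng = np.random.default_rng(1)
print(forms_star(rho_gas=5.0e-23, dt=3.15e13, rng=rng))   # ~1 Myr step
```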
#### 4.1.4 Supernova feedback
When a star with a mass of more than 8 M\({}_{\odot}\) reaches the end of its life, it undergoes a supernova explosion, releasing about \(10^{51}\) erg per star into the surrounding gas. This supernova feedback heats up the surrounding gas, leading to a decrease in the star formation rate of the galaxy. Assuming a Salpeter (1955) IMF with the upper mass of 60 M\({}_{\odot}\) and the lower mass of 0.1 M\({}_{\odot}\), the number of massive stars (\(>8\) M\({}_{\odot}\)) in a stellar particle of mass \(m_{*}\) is
\[N_{\rm SN}\simeq 73.1\left(\frac{m_{*}}{10^{4}\,{\rm M}_{\odot}}\right). \tag{62}\]
Of all the stars that undergo supernova explosions, the shortest lifetime is 5.4 Myr (60 M\({}_{\odot}\)) and the longest is 43 Myr (8 M\({}_{\odot}\)). During this period, the feedback energy is released from the star to the gas. If we assume that the energy is equally distributed to neighbour SPH particles of the star particle, the energy received by the SPH particle per time step \(\Delta t\) is
\[\Delta E=\frac{L_{\rm SN}N_{\rm SN}\Delta t}{N_{\rm neigh}} \tag{63}\]
where \(L_{\rm SN}\) is the average energy rate per star during the explosion period. SPH particles that receive energy turn off
Figure 5: Snapshots of dark matter density (top), gas density (middle) and stellar density (bottom) of collision simulation between DMSHs with \(10^{9}\) M\({}_{\odot}\) at the relative velocity of 20 km s\({}^{-1}\). All the colour bars for mass density range from \(10^{-29}\) to \(10^{-21}\) g cm\({}^{-3}\). From left to right, \(t=0\), 285, 570 and 884 Myr, respectively. A dark-matter-dominated galaxy forms in the case of this velocity. The masses of star, gas and dark matter enclosed within the bound radius \(r_{\rm bound}=16.1\) kpc are \(M_{*}=5.19\times 10^{6}\) M\({}_{\odot}\), \(M_{\rm gas}=2.88\times 10^{7}\) M\({}_{\odot}\) and \(M_{\rm DM}=1.16\times 10^{9}\) M\({}_{\odot}\) at \(t=4.7\) Gyr, respectively.
radiative cooling calculations and evolve adiabatically. This technique was first advocated by Mori et al. (1997), and has since been scrutinised and refined by numerous investigations (Gerritsen, 1997; Mori et al., 1999; Thacker & Couchman, 2000, and so on).
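The bookkeeping of equations (62)-(63) can be sketched as follows; the average energy release rate \(L_{\rm SN}\) is derived here from the assumption of \(10^{51}\) erg per supernova spread uniformly over the 5.4-43 Myr lifetime window, which is one plausible reading of the scheme rather than the exact implementation.

```python
# Minimal sketch of the supernova-feedback bookkeeping, Eqs. (62)-(63).
import numpy as np

E_SN = 1.0e51                                    # erg per supernova
T_RELEASE = (43.0 - 5.4) * 3.156e13              # assumed release interval [s]

def n_supernovae(m_star):
    """Number of stars above 8 Msun for a Salpeter IMF, Eq. (62)."""
    return 73.1 * (m_star / 1.0e4)

def feedback_energy_per_neighbour(m_star, dt, n_neigh=200):
    """Energy received by one neighbour SPH particle during dt [s], Eq. (63)."""
    L_SN = E_SN / T_RELEASE                      # average energy rate per star (assumed)
    return L_SN * n_supernovae(m_star) * dt / n_neigh

dE = feedback_energy_per_neighbour(m_star=1.0e4, dt=3.156e13)   # 1 Myr step
```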
#### 4.1.5 Time stepping
The simulation time steps \(\Delta t\) share the same value throughout the system. It is determined by the CFL conditions:
\[\Delta t=\min_{i}(\Delta t_{i,\rm grav},\,\Delta t_{i,\rm hydro}), \tag{64}\] \[\Delta t_{i,\rm grav}=C_{\rm CFL}\sqrt{\frac{\epsilon}{|{\rm d}v_ {i}/{\rm d}t|}},\] (65) \[\Delta t_{i,\rm hydro}=C_{\rm CFL}\frac{h_{i}}{\max_{j}(v_{ij}^{ \rm sig})}, \tag{66}\]
where \(C_{\rm CFL}\) is the CFL constant and we set \(C_{\rm CFL}=0.3\). We adopted the second-order Runge-Kutta method for the time integration.
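A minimal sketch of the global time-step choice of equations (64)-(66):

```python
# Minimal sketch: shared time step from the gravitational and hydrodynamical
# CFL criteria, Eqs. (64)-(66), with C_CFL = 0.3.
import numpy as np

C_CFL, EPS = 0.3, 0.1              # CFL constant and softening length [kpc]

def global_timestep(accel, h, v_sig_max):
    """accel: |dv/dt| per particle; v_sig_max: max signal speed over its neighbours."""
    dt_grav = C_CFL * np.sqrt(EPS / accel)           # Eq. (65)
    dt_hydro = C_CFL * h / v_sig_max                 # Eq. (66)
    return min(dt_grav.min(), dt_hydro.min())        # Eq. (64)
```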
#### 4.1.6 Implementation
This code is parallelized by the Framework for Developing Particle Simulators (FDPS: Iwasawa et al., 2016; Namekata et al., 2018). In FDPS, the code for parallelization is separated from the code for computing interactions and time integrals. It includes functions such as domain decomposition, redistribution of particles, and gathering of particle information for interaction calculation. The FDPS libraries are implemented using OpenMP for intra-node parallelism and MPI for inter-node parallelism. Using these libraries, users can implement parallelized programs by writing sequential code for interaction calculations. The gravitational force is calculated with a tree algorithm (Barnes & Hut, 1986; Barnes, 1990) and the tree-opening angle is 0.7. Our code has already been validated by running various test problems, including the shock-tube test, the Evrard collapse and so on (Otaki & Mori, 2022).
### Initial condition
In order to study the essential processes of galaxy formation in DMSH collisions, we set up an idealised situation for
a head-on collision. Each of the two colliding DMSHs has a total mass of \(M_{\rm sub}=M_{\rm DM}+M_{\rm gas}\), contains no stellar component, and has a dark matter to gas mass ratio of 5.36. Each DMSH is initially centred at
\[(x,\,y,\,z)=(\pm 5,\,0,\,0)\,{\rm kpc}, \tag{67}\]
and the initial bulk velocities of the DMSHs are
\[(v_{x},\,v_{y},\,v_{z})=(\mp v_{\rm rel}/2,\,0,\,0), \tag{68}\]
respectively. The density distribution of the dark matter adopts the NFW profile. The gas is assumed to be in hydrostatic equilibrium in the gravitational potential of the dark matter halo,
\[\rho_{\rm gas}(r)=\rho_{\rm gas,0}\exp\left[-\frac{\mu m_{\rm p}}{k_{\rm B}T_{ \rm vir}}\Phi_{\rm NFW}(r)\right], \tag{69}\]
where \(T_{\rm vir}\) is the virial temperature of DMSH defined as
\[T_{\rm vir}=\frac{c(c^{2}+2c-2(1+c)\ln{(1+c)})}{2((1+c)\ln{(1+c)}-c)^{2}}\frac {GM_{\rm sub}\mu m_{\rm p}}{3k_{\rm B}R_{200}}. \tag{70}\]
To generate the initial conditions of a DMSH, we use MAGI (Miki and Umemura, 2018). After generating the particle distributions with MAGI, we evolved each DMSH adiabatically for several hundred million years as an isolated system to suppress density fluctuations and reach dynamical equilibrium. We run collision simulations of DMSHs with the same mass for three different cases: \(M_{\rm sub}=10^{8}\), \(10^{9}\) and \(10^{10}\,{\rm M}_{\odot}\). The number of \(N\)-body particles and SPH particles is \(\sim 10^{6}\), and all particles have the same mass. We set the gravitational softening length \(\epsilon=0.1\,{\rm kpc}\) as the spatial resolution. The cooling rate for a given metallicity of the gas is calculated by MAPPINGS V (Sutherland and Dopita, 2017; Sutherland et al., 2018) assuming Collisional Ionisation Equilibrium (CIE).
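For reference, a minimal sketch of the hydrostatic gas profile of equations (69)-(70) is given below; the units, the mean molecular weight \(\mu=0.6\) and the example call are assumptions made for illustration, and the central normalisation \(\rho_{\rm gas,0}\) is left free.

```python
# Minimal sketch: isothermal gas at the virial temperature in hydrostatic
# equilibrium within the NFW potential, Eqs. (69)-(70).
# Units: kpc, km/s, Msun, K.
import numpy as np

G = 4.30091e-6                 # kpc (km/s)^2 / Msun
K_B_OVER_MP = 8.254e-3         # k_B / m_p in (km/s)^2 / K
MU = 0.6                       # assumed mean molecular weight

def virial_temperature(M_sub, c, R200):
    """T_vir of Eq. (70)."""
    fac = c * (c**2 + 2.0 * c - 2.0 * (1.0 + c) * np.log(1.0 + c))
    fac /= 2.0 * ((1.0 + c) * np.log(1.0 + c) - c)**2
    return fac * G * M_sub * MU / (3.0 * K_B_OVER_MP * R200)

def gas_density(r, M_sub, c, R200, rho_gas0):
    """rho_gas(r) of Eq. (69) for the NFW potential of Eq. (2)."""
    x = r * c / R200
    mu_c = np.log(1.0 + c) - c / (1.0 + c)
    phi = -G * M_sub / R200 * c / mu_c * np.log(1.0 + x) / x
    T_vir = virial_temperature(M_sub, c, R200)
    return rho_gas0 * np.exp(-MU * phi / (K_B_OVER_MP * T_vir))

# illustrative call: a 1e9 Msun subhalo with c = 10 and R200 = 20 kpc (assumed)
rho = gas_density(r=np.linspace(0.1, 20.0, 5), M_sub=1.0e9, c=10.0,
                  R200=20.0, rho_gas0=1.0e6)
```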
## 5 Results of simulations
We begin by showing the simulation results for the case of the DMSH collision with a mass of \(10^{9}\,{\rm M}_{\odot}\) and a relative velocity of \(20\,{\rm km\,s^{-1}}\). Fig. 5 shows, from top to bottom, the density distribution of dark matter, gas, and star in a thin slice at \(z=0\), and the elapsed times are 0, 285, 570, and 884 Myr from left to right, respectively.
At 285 Myr, the centres of DMSHs collide with each other, compressing the gas and increasing the gas density at the centre. Accordingly, star formation is activated in the central part of the colliding DMSHs. Shock waves are simultaneously generated at the collision surface and propagate upstreams. A high-density gas layer is then formed in the \(x=0\) plane, and a large amount of the gas is ejected along this plane.
At 570 Myr, the DMSHs are gravitationally attracted to each other and merge. The gravitational contraction of the gas component in the centre of the merged DMSHs induces a burst of star formation. Subsequently, massive stars explode as core-collapse supernovae, heating the surrounding gas to a temperature of \(\sim 10^{6}\,{\rm K}\). We can observe there is an
Figure 8: Evolution of the DMSH collision for the relative velocity of \(200\,{\rm km\,s^{-1}}\). Top panel: evolution of the enclosed mass within a half mass radius of a DMSH in the initial condition. The red, blue and green lines are enclosed masses of star, gas and dark matter, respectively. Bottom panel: the history of the overall star formation rate for the relative velocity of \(200\,{\rm km\,s^{-1}}\) in the simulation box.
Figure 7: Radial profile of stellar surface density \(\Sigma_{\rm star}\) in the face-on plane. The black points are stellar surface densities of a simulated galaxy averaged for each distance from the centre of mass for the collision simulation of \(200\,{\rm km\,s^{-1}}\). The solid line represents the Sérsic curve of the effective radius \(r_{\rm e}=0.5\,{\rm kpc}\) and Sérsic index \(n_{\rm Sérsic}=0.8\). The region below the spatial resolution \(\epsilon=0.1\,{\rm kpc}\) is shaded in grey.
expanding superbubble driven by the supernova feedback at the left side of the collision surface in the middle panel. As a result, star formation is partially suppressed. After 500 Myr, the star formation rate is \(\sim 0.001\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\) and the star formation is stable. As predicted by the analytical model, two DMSHs merge and form a normal dark-matter-dominated galaxy having dark matter mass \(M_{\mathrm{DM}}=1.16\times 10^{9}\,\mathrm{M}_{\odot}\), gas mass \(2.88\times 10^{7}\,\mathrm{M}_{\odot}\) and stellar mass \(5.19\times 10^{6}\,\mathrm{M}_{\odot}\) at 4.7 Gyr. These are calculated as the masses enclosed within the bound radius, defined as the maximum radius of the stellar particles binding in the system.
Fig. 6 shows the result of the DMSH collision for a relative velocity of 200 km s\({}^{-1}\). It is the same as Fig. 5, but the elapsed times are 0, 50, 100, and 265 Myr from left to right, respectively. The centres of the DMSHs collide at 50 Myr. Star formation occurs on the collision surface of the high-density gas. At 100 Myr, the dark matter components of the DMSHs penetrate each other. Then, as the gravitational potential becomes shallower on a short timescale, the distribution of the stellar component expands, decreasing its density. The stars that formed before the collision pass through in a similar way to the dark matter. Therefore, the stellar component of the galaxy on the collision surface is formed from the collision of the gas in the DMSHs. At the same time, several gravitationally bound star clusters with masses of about \(10^{5}\,\mathrm{M}_{\odot}\) are formed by the fragmentation of the dense gas layer on the collision surface. After 265 Myr, the dark matter has completely passed through, leaving only a system of gas and stars on the collision surface. As predicted by the analytical model, the dark matter components of the DMSHs penetrate through each other, while the gaseous media collide, and the enhanced gas density induces a burst of star formation on the collision surface. The collided gas then forms a dark-matter-deficient galaxy with a stellar mass of \(1.34\times 10^{7}\,\mathrm{M}_{\odot}\) and a gas mass of \(4.02\times 10^{5}\,\mathrm{M}_{\odot}\) at 3 Gyr.
Fig. 7 shows the stellar surface density of the dark-matter-deficient galaxy. The grey area shows the length be
Figure 9: Density-temperature diagram for overall gas particles in the simulation box for the relative velocity 200 km s\({}^{-1}\). Colour represents the mass of gas particles. The white dashed line is the star formation threshold in this simulation.
low the gravity resolution \(\epsilon\). The black points are stellar surface densities calculated from the star particles projected in the face-on direction. The solid lines are Sérsic fits to the stellar component, with the colour of each line corresponding to each projection plane; the effective radii and Sérsic indices of these fits are (\(r_{\rm e}\), \(n_{\rm Sersic}\)) = (0.5 kpc, 0.8).
The top panel in Fig. 8 shows the evolution of the enclosed masses \(M(<r_{\rm half})\) within the half mass radius \(r_{\rm half}=7.1\) kpc from the origin \((x,y,z)=(0,0,0)\) for the relative velocity of 200 km s\({}^{-1}\). \(r_{\rm half}\) is the half-mass radius of a DMSH in the initial condition. The red, blue and green lines are enclosed masses of star, gas and dark matter, respectively. Since the central collision at 50 Myr, the enclosed mass of dark matter has been decreasing, and stars form from the gas remaining on the collision surface. Beyond 100 Myr, the gas mass decreases while the stellar mass increases. The bottom panel in Fig. 8 shows the star formation history. For a relative velocity of 200 km s\({}^{-1}\), the centres of the DMSHs collide at 50 Myr, at which time the star formation rate peaks. After that, the star formation rate gradually decreases with oscillations due to the alternating enhancement of star formation by radiative cooling and its suppression by heating from supernova feedback. After 300 Myr, the gas density decreases due to the outflow driven by the supernova feedback, and the star formation rate settles to a lower state of about several \(10^{-4}\) M\({}_{\odot}\) yr\({}^{-1}\).
Fig. 9 shows the evolution of the density-temperature diagram for the relative velocity of 200 km s\({}^{-1}\). Colour represents the mass of gas at the specified density and temperature in the simulation box. The dashed line is the gas density threshold for star formation represented by Equation (60). In the initial condition, the gas of the DMSHs is under the virial equilibrium with the virial temperature of \(\sim 10^{4}\) K.
At 50 Myr, the density of the gaseous medium increases due to collisions, and radiative cooling is effective at the centre of the DMSH. The low-density gas (\(<10^{-26}\) g cm\({}^{-3}\)) located on the outskirt of each halo is adiabatically compressed, and its temperature rises to over \(10^{5}\) K.
The dense gas (\(>10^{-23}\) g cm\({}^{-3}\)) has a lower temperature due to radiative cooling and also exceeds the density threshold for star formation conditions. After 100 Myr, the star formation rate is at its highest, and the supernova feedback heats the dense gas in the star-forming regions. After 265 Myr, the outflow driven by the strong feedback blows most of the gas away from the system, eventually giving rise to dark-matter-deficient galaxies.
Figure 10: Same as Figure 5, but relative velocity is 1200 km s\({}^{-1}\). From left to right, \(t=0\), 10, 50, and 200 Myr, respectively.
Fig. 10 shows the result of the DMSH collision for a relative velocity of 1200 km s\({}^{-1}\). It is the same as Fig. 5, but the elapsed times are 0, 10, 50, and 300 Myr from left to right, respectively.
At 10 Myr, just after the central collision, the dark matter components have passed through each other, and a few stars form in the dense gas region on the collision surface at 50 Myr. Finally, after 300 Myr, no galaxy has formed because shock-breakout occurs after the DMSH collision and most of the gas is ejected from the system without forming stars.
Table 2 lists the initial conditions and results of our collision simulations. The effects of supernova feedback on collision-induced galaxy formation are discussed in Section 6.2. As a fiducial feedback model, this paper adopts a model in which gas particles receiving thermal energy from supernova explosions evolve adiabatically. The table provides the properties of the most massive galaxy that formed after the DMSH collision. We define \(r_{\rm bound}\) as the radius within which the stellar component of the galaxy is gravitationally bound, and the table gives the masses enclosed within that radius. As a result of the collision simulations, we classify the collision-induced objects as "normal dwarf", "dark-matter-deficient galaxy (DMDG)", "star cluster" or "no galaxy". We define a galaxy with a dark matter fraction of more than 50% (\(f_{\rm DM}=M_{\rm DM}/M_{\rm tot}>0.5\)) as a dark-matter-dominated galaxy and a galaxy with \(f_{\rm DM}\leq 0.5\) as a dark-matter-deficient galaxy. In particular, we define star clusters as objects that have no dark matter at all and a stellar mass of less than \(10^{6}\) M\({}_{\odot}\).
The results of the collision simulations are summarised in Fig. 11. Filled red circles and blue squares indicate the formation of a dark-matter-deficient galaxy and a dark-matter-dominated galaxy, respectively. Open red circles are the formation of star clusters by the fragmentation at the collision surface. The crosses indicate the result of no galaxy formation at the collision surface. The solid lines are the results of analytical models; the upper line corresponds to the shock-breakout condition and the lower line to the merger condition. The dashed line is the Jeans mass calculated from the temperature \(10^{4}\) K, which is discussed in Section 6.2.
## 6 Summary and Discussion
Based on our analytical and numerical studies of DMSH collisions, galaxy collisions play a significant role in the formation of dark-matter-deficient galaxies. We estimated the distribution of collision frequency in the host galaxy and found that most collisions occur within 0.1 \(R_{\rm 200,\,host}\). The total collision frequency and collision timescale are 74.2 Gyr\({}^{-1}\) and 13.5 Myr, respectively, for collisions between DMSHs with \(10^{9}\) M\({}_{\odot}\) in a host galaxy with \(10^{12}\) M\({}_{\odot}\). We found the critical relative velocities for the bifurcation between the formation of dark-matter-dominated galaxies and dark-matter-deficient galaxies. Higher relative velocities are required to form dark-matter-deficient galaxies in lower metallicity environments. In a head-on collision simulation between two DMSHs with a mass of \(10^{9}\) M\({}_{\odot}\) including a gaseous medium with solar metallicity, a dark-matter-deficient galaxy is formed for a relative velocity of
Figure 11: Results of simulations performed for \(Z=0.1\) Z\({}_{\odot}\). The horizontal axis is the mass of a colliding DMSH, and the dark matter mass to gas mass ratio is 5.36. The vertical axis is the relative velocity. Red filled circles, red open circles, and blue squares indicate the formation of a dark-matter-deficient galaxy, star cluster, and a dark-matter-dominated galaxy, respectively. The crosses indicate the result of no galaxy formation. The upper and lower solid lines are shock-breakout and merger conditions, respectively. The dashed lines are the Jeans criteria for the isothermal collisions of \(10^{4}\) K. These dashed lines, from left to right, correspond to cases for the initial gas mass to Jeans mass ratio \(\beta=1.0\), 0.1, 0.01, respectively.
Figure 12: Comparison between observed galaxies and simulated galaxies of the stellar mass versus the effective radius. The red filled circles, red open circles, and blue squares indicate the dark-matter-deficient galaxies, star cluster, and normal dwarf galaxies in our simulations, respectively. The green circle, square, thin diamonds and thick diamonds are observational results of dark-matter-deficient galaxies reported by van Dokkum et al. (2018), van Dokkum et al. (2019), Mancera Pina et al. (2019, 2020) and Guo et al. (2020), respectively. The downward and upward triangles represent dwarf galaxies in the catalogue of LITTLE THINGS data (Hunter et al., 2012; Oh et al., 2015) and dwarf galaxies in the Local Group (McConnachie, 2012; Battaglia and Nipoti, 2022), respectively. The dashed line indicates the average mass–size relation for dwarf satellites in the local volume given by Carlsten et al. (2021).
\(200\,\mathrm{km\,s^{-1}}\). A discussion of the detailed physical processes in off-centre collisions of DMSHs is left for future study. In the following, we compare our results with previous studies and observations. Then, we discuss some physical processes that are not, or only insufficiently, considered in our model.
### Comparison to the previous studies
There are several theoretical studies on the formation of dark-matter-deficient galaxies. Ogiya (2018) investigated the formation of dark-matter-deficient galaxies by tidal interaction between a host galaxy and a satellite galaxy using _N_-body simulations. In tidal models, tidal tails usually appear on both sides of galaxies, depending on the degree of tidal force. However, observed dark-matter-deficient galaxies such as NGC1052-DF2 do not always show clear tidal tails. In addition, dark-matter-deficient galaxies are found not only as satellites of massive galaxies but also in low-density regions of intergalactic space. These facts indicate that tidal models alone cannot explain all of the dark-matter-deficient galaxies in terms of galaxy formation scenarios. Since the collision model places no restrictions on galaxy morphology, unlike the tidal model, there are more situations in which the collision model can be applied. On the other hand, for dark-matter-deficient galaxies in low-density environments, further studies of collision probabilities in such environments are needed.
Shin et al. (2020) carried out an important study of the formation model of dark-matter-deficient galaxies. Their simulations include off-centre collisions, which we do not take into account in this paper. Our study has clarified the fundamental physical processes of DMSH collisions and derived the critical relative velocities for the bifurcation between the formation of dark-matter-dominated galaxies and dark-matter-deficient galaxies. They attributed the failure to form dark-matter-deficient galaxies in very high-velocity collisions to excessive supersonic turbulence. On the other hand, our simulations show that gas ejection induced by shock-breakout is essential in suppressing the formation of dark-matter-deficient galaxies. A quantitative and detailed comparison of the differences between these claims will be necessary.
Madau et al. (2020) studied a scenario for the formation of globular clusters (GCs) triggered by fast collisions between DMSHs. It is interesting to note that the extrapolation of our analytical model (Fig. 4) to the low-mass side may provide insight into this GC formation model. Furthermore, our simulations also show that the growth of instabilities generated at the collision surface can lead to the fragmentation of high-density regions, resulting in star cluster masses comparable to those of observed globular clusters. It would be fascinating to investigate whether these meet the observational properties of globular clusters; however, this is still difficult due to the numerical resolution of our current simulations. Therefore, future high-resolution calculations are expected.
We show the galaxies formed in our collision simulations and the observed normal dwarf and dark-matter-deficient galaxies in Fig. 12. The red circles and blue squares indicate the dark-matter-deficient galaxies and normal dwarf galaxies in our simulations, respectively. The green circle, square, thin diamonds and thick diamonds are observational results of dark-matter-deficient galaxies reported by van Dokkum et al. (2018), van Dokkum et al. (2019), Mancera Pina et al. (2019, 2020) and Guo et al. (2020), respectively. The downward and upward triangles represent dwarf galaxies in the catalogue of LITTLE THINGS data (Hunter et al., 2012; Oh et al., 2015) and dwarf galaxies in the Local Group (McConnachie, 2012; Battaglia and Nipoti, 2022), respectively. Fig. 12 indicates the stellar mass-size (\(M_{*}\)-\(r_{\mathrm{e}}\)) relation for the galaxies. The dashed line indicates the average mass-size relation for dwarf satellites in the local volume given by Carlsten et al. (2021). UDGs are defined as galaxies with effective radii greater than \(1.5\,\mathrm{kpc}\). The collision simulation between \(10^{10}\,\mathrm{M_{\odot}}\) DMSHs shows the formation of a single dark-matter-deficient galaxy with \(r_{\mathrm{e}}=2.5\,\mathrm{kpc}\) at a relative velocity of \(400\,\mathrm{km\,s^{-1}}\). Therefore, a dark-matter-deficient galaxy formed by such a collision process could be observed as a UDG. On the other hand, dwarf galaxies formed at slower relative velocities show a slight offset towards smaller sizes in the mass-size relation derived from observations. This might be because the supernova feedback, which changes the gravitational potential through galactic outflows, still has only a small effect.
### Effects of supernova feedback
It is well known that subgrid models of supernova feedback have a significant impact on galaxy formation simulations. Supernova feedback heats up the ambient gas and decreases the star formation rate of the galaxy through gas outflows. Recently, various methods have been developed to apply supernova feedback in a more appropriate way (e.g., Dalla Vecchia and Schaye, 2012; Shimizu et al., 2019; Oku et al., 2022). This strong feedback causes a large amount of the interstellar medium to be ejected out of the system, subsequently causing the gravitational potential of the system to become shallower. If this occurs on a timescale sufficiently shorter than the dynamical time of the system, the system expands quickly and achieves a new dynamical equilibrium state. Much work has been done on these fundamental physical processes, with analytical treatments investigated by Dekel and Silk (1986) and demonstrated in the numerical simulations of Mori et al. (1997, 1999). More recently, Di Cintio et al. (2017) analysed its effects on UDGs.
In the case of effective cooling, the temperature of primordial gas can reach \(10^{4}\,\mathrm{K}\) in CIE. We therefore also consider collisions of isothermal gas clouds with an isothermal shock in our analytical model. The set-up is the same as for the adiabatic shock-breakout condition, but an isothermal shock at \(10^{4}\,\mathrm{K}\) is generated at the collision surface. The gas density of the shocked clouds is
\[\rho_{1,\,\mathrm{iso}}=\frac{\mathcal{M}^{2}+\mathcal{M}\sqrt{\mathcal{M}^{2} +4}+2}{2}\rho_{0}, \tag{71}\]
where \(\mathcal{M}=v_{\mathrm{rel}}/(2c_{\mathrm{s,\,iso}})\) is the isothermal Mach number and \(c_{\mathrm{s,\,iso}}=8.2\,\mathrm{km\,s^{-1}}\) is the isothermal sound speed for gas metallicity \(0.1\,\mathrm{Z_{\odot}}\). The Jeans instability criterion in two gas clouds is
\[2\beta M_{\mathrm{gas}}\geq M_{\mathrm{J}}\equiv\sqrt{\frac{\pi^{5}c_{\mathrm{s,\,iso}}^{6}}{36G^{3}\rho_{1,\,\mathrm{iso}}}}, \tag{72}\]
where \(\beta\) is a free parameter. In Fig. 11, the dashed lines indicate the Jeans instability criteria for \(\beta=1.0,\,0.1,\,0.01\)
from left to right, respectively. In collision simulations between subhaloes with masses of \(10^{8}\,\mathrm{M}_{\odot}\), no galaxies formed on the collision surface at relative velocities of 20, 100, and 200 km s\({}^{-1}\), since the gaseous medium in the subhaloes is not Jeans unstable. On the other hand, for relative velocities of 400, 600, and 800 km s\({}^{-1}\), the gas fragments and star formation occurs after the DMSH collisions, inducing the formation of star clusters with less than 1/10 of the initial gas mass.
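The criterion of equations (71)-(72) reduces to a few lines of code; the example below uses the gas mass of a \(10^{9}\,{\rm M}_{\odot}\) subhalo and an assumed pre-shock density, so it is illustrative rather than a reproduction of Fig. 11.

```python
# Minimal sketch of the isothermal-collision Jeans criterion, Eqs. (71)-(72),
# in cgs units; c_s,iso = 8.2 km/s follows the text.
import numpy as np

G_CGS, M_SUN = 6.674e-8, 1.989e33
C_S_ISO = 8.2e5                                   # isothermal sound speed [cm/s]

def shocked_density(rho0, v_rel):
    """Post-shock density for an isothermal shock, Eq. (71)."""
    mach = v_rel / (2.0 * C_S_ISO)
    return 0.5 * (mach**2 + mach * np.sqrt(mach**2 + 4.0) + 2.0) * rho0

def jeans_unstable(M_gas, rho0, v_rel, beta=1.0):
    """Condition 2 beta M_gas >= M_J of Eq. (72); masses in grams, v_rel in cm/s."""
    rho1 = shocked_density(rho0, v_rel)
    M_J = np.sqrt(np.pi**5 * C_S_ISO**6 / (36.0 * G_CGS**3 * rho1))
    return 2.0 * beta * M_gas >= M_J

# gas of a 1e9 Msun subhalo (M_DM/M_gas = 5.36), assumed rho0, v_rel = 200 km/s
print(jeans_unstable(1.6e8 * M_SUN, 1.0e-25, 2.0e7))
```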
Next, we run collision simulations with an alternative feedback model to compare the formation processes and the properties of the collision-induced galaxies. In this model, the SPH particles receive feedback energy from supernovae, but radiative cooling is not turned off. Since the effect of supernova feedback on gas heating is weak, the gas outflow rate is lower than in the previous simulations. Based on this weak feedback model, we simulate collisions between DMSHs with the same masses (\(10^{8}\,\mathrm{M}_{\odot}\) or \(10^{9}\,\mathrm{M}_{\odot}\)) for three relative velocities (low, moderate, and high speed), respectively. All other parameters and initial conditions are the same as in the strong feedback model. We summarise the results of the simulations for this weak feedback model in Table 2 and Fig. 13. Compared to the strong feedback model, the simulated galaxies in the weak feedback model evolve with a higher star formation rate on average, although the maximum star formation rate at the time of collision is comparable. Moreover, the low gas outflow rate due to the weak feedback results in the formation of galaxies and star clusters with larger stellar masses and smaller effective radii. In the collision simulation of \(10^{9}\,\mathrm{M}_{\odot}\) DMSHs with a relative velocity of 200 km s\({}^{-1}\), a DMDG is formed with a stellar mass of \(1.34\times 10^{7}\,\mathrm{M}_{\odot}\) and \(1.99\times 10^{8}\,\mathrm{M}_{\odot}\) and an effective radius of 0.51 kpc and 0.092 kpc in the strong and weak feedback models, respectively. Since the feedback model has a significant effect on the properties of galaxies formed by collisions, it will be very interesting to see how effectively future observations of these galaxies can constrain the influence of supernova feedback on their formation.
### Radiative cooling
Throughout this paper, we use the EI scheme (Townsend, 2009) to calculate the radiative cooling term of the energy equation in the simulations. In the EI scheme, the time evolution of the radiative cooling term can be solved with its full temperature dependence by tabulating a temporal evolution function \(Y(T)\) that integrates the inverse of the cooling rate. In order to follow the thermodynamic evolution of the gas in the DMSH collision simulations, we define the cooling time using the EI scheme instead of the conventional cooling time \(t_{\mathrm{cool,\,conv}}\) given by equation (40). In the following, the temperature is used instead of the specific internal energy. The effective cooling time is defined as
\[t_{\mathrm{cool,\,eff}}(u_{\mathrm{start}}\to u_{\mathrm{end}}) =-\frac{m_{\mathrm{p}}^{2}}{\rho}\int_{u_{\mathrm{start}}}^{u_{ \mathrm{end}}}\frac{\mu(u)^{2}}{\Lambda(u,Z)}\mathrm{d}u, \tag{73}\] \[=Y(u_{\mathrm{end}})\,t_{\mathrm{cool,\,conv}}(u_{\mathrm{start}}). \tag{74}\]
This is the cooling time required for gas with energy \(u_{\mathrm{start}}\) to cool down to \(u_{\mathrm{end}}\).
Fig. 14 illustrates the timescales after a collision between DMSHs with \(10^{9}\,\mathrm{M}_{\odot}\) in the analytical model (3.2) as a function of the relative velocity. The left and right panels correspond for the gas metallicity for \(Z=10^{-1}\,Z_{\odot}\) and \(Z=10^{-3}\,Z_{\odot}\), respectively. The panels display the conventional cooling time \(t_{\mathrm{cool,conv}}\) as solid red lines, while the effective cooling time \(t_{\mathrm{cool,eff}}\) is represented by dashed red lines. The free-fall time
\[t_{\mathrm{ff}}=\sqrt{\frac{3\pi}{32G\rho}}, \tag{75}\]
is depicted as almost horizontal solid lines, and the remaining black solid lines denote the shock-crossing time \(t_{\mathrm{cross}}\). These timescales are represented as a function of the gas temperature after the collision, assuming that the kinetic energy is entirely converted to internal energy. In the case of DMSHs colliding with a velocity of 100 km s\({}^{-1}\) for \(Z=10^{-1}\,Z_{\odot}\), the conventional cooling time at a temperature of \(2.4\times 10^{5}\,\mathrm{K}\) is about 0.02 Myr. In contrast, the effective cooling time for \(T_{\mathrm{end}}=10^{3}\,\mathrm{K}\) is 10 Myr. The conventional definition yields a shorter cooling time than the new definition because of the temperature dependence of the cooling rate. Consequently, the present method is significantly different from the conventional method, and it is highly effective in accurately tracking radiative cooling in the study of galaxy formation.
Another important physical process in galaxy evolution is molecular cooling. This process is effective at low metallicities of \(\sim 10^{-3}\,\mathrm{Z}_{\odot}\). To study the evolution of galaxies without the gravitational potential of dark matter, it would be necessary to solve molecular cooling and non-equilibrium chemistry. However, in this paper, molecular cooling does not come into play since we run collision simulations between DMSHs with metal abundances of \(10^{-1}\,\mathrm{Z}_{\odot}\). For low-temperature gas below \(10^{4}\,\mathrm{K}\), ignoring the effects of dust, cooling by heavy elements is known to be more efficient than molecular hydrogen cooling for gas containing this amount of heavy elements.
Figure 13: Same as Figure 11, but results for the weak feedback model.
### Thermal conduction
The spatial resolution is \(100\,\mathrm{pc}\) for all simulation results in this paper, but a higher resolution is needed to compare the simulations with observed compact dwarf galaxies and globular clusters. Since thermal conduction by electrons is an important physical process in plasmas on such small scales, we use the analytical model to estimate its effect on the physical processes of DMSH collisions.
The time scale of thermal conduction is defined as
\[t_{\mathrm{cond}}=\frac{\rho k_{\mathrm{B}}l^{2}}{(\gamma-1)\mu m_{\mathrm{p} }\kappa}, \tag{76}\]
where \(l\) is the scale length of the temperature gradient. The thermal conductivity for a hydrogen plasma \(\kappa\) is given by Cowie and McKee (1977),
\[\kappa(T)=1.31\,\frac{n_{\mathrm{e}}\lambda k_{\mathrm{B}}^{3/2}T^{1/2}}{m_{ \mathrm{e}}^{1/2}}, \tag{77}\]
where \(m_{\mathrm{e}}\) is the electron mass, \(n_{\mathrm{e}}\) is the electron density, and \(\lambda\) is the mean free path of electrons,
\[\lambda=\frac{3^{3/2}(k_{\mathrm{B}}T_{\mathrm{e}})^{2}}{4\pi^{1/2}n_{\mathrm{ e}}e^{4}\ln\Lambda}\,, \tag{78}\]
where \(T_{\mathrm{e}}\) is the electron temperature, \(e\) is the elementary charge, and the Coulomb logarithm \(\ln\Lambda\) is
\[\ln\Lambda=37.8+\ln\left[\left(\frac{T_{\mathrm{e}}}{10^{8}\,\mathrm{K}} \right)\left(\frac{n_{\mathrm{e}}}{10^{-3}\,\mathrm{cm}^{-3}}\right)^{-1} \right]. \tag{79}\]
We assume the \(T_{\mathrm{e}}=T\) since the equilibrium timescale of electrons and ions is shorter than the cooling time in this model.
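A minimal sketch of the conduction timescale of equations (76)-(79) is given below; the temperature, electron density, and gas density in the example call are assumed post-shock values chosen for illustration.

```python
# Minimal sketch of the electron thermal-conduction timescale, Eqs. (76)-(79),
# in cgs units; l is the temperature-gradient scale length in cm.
import numpy as np

K_B, M_P, M_E, E_CH = 1.381e-16, 1.6726e-24, 9.109e-28, 4.803e-10
GAMMA, MU = 5.0 / 3.0, 0.6

def conduction_time(l, T, n_e, rho):
    lnL = 37.8 + np.log((T / 1.0e8) * (n_e / 1.0e-3)**(-1.0))                       # Eq. (79)
    lam = 3.0**1.5 * (K_B * T)**2 / (4.0 * np.sqrt(np.pi) * n_e * E_CH**4 * lnL)    # Eq. (78)
    kappa = 1.31 * n_e * lam * K_B**1.5 * np.sqrt(T) / np.sqrt(M_E)                 # Eq. (77)
    return rho * K_B * l**2 / ((GAMMA - 1.0) * MU * M_P * kappa)                    # Eq. (76)

kpc = 3.086e21
t_cond = conduction_time(l=0.01 * kpc, T=3.0e5, n_e=1.0e-2, rho=1.0e-26)
print(f"t_cond ~ {t_cond / 3.156e13:.2f} Myr")
```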
Fig. 15 shows the scale length of the temperature gradient \(l\) as a function of the relative velocity \(v_{\mathrm{rel}}\), for the cases in which the cooling time (equation (40)) or the shock-crossing time (equation (39)) equals the thermal conduction time, respectively. The post-collision gas temperature is calculated on the assumption that the kinetic energy of the relative velocity between the subhaloes is converted into the internal energy of a gaseous medium with
Figure 14: Timescales after collision between DMSHs with \(10^{9}\,\mathrm{M}_{\odot}\) in the analytical model. The left panel and right panel are the gas metallicity for \(10^{-1}\,\mathrm{Z}_{\odot}\) and \(10^{-3}\,\mathrm{Z}_{\odot}\), respectively. The temperature divided by the mean molecular weight \(T/\mu\) corresponds to the kinetic energy of the relative velocity \(v_{\mathrm{rel}}\). The blue line is the conventional cooling time \(t_{\mathrm{conv,\,eff}}\), the red lines are the effective cooling time \(t_{\mathrm{cool,\,eff}}\) with the solid line corresponding to the timescale for \(T_{\mathrm{end}}=10^{4}\,\mathrm{K}\), and the dashed line corresponding to the timescale for \(T_{\mathrm{end}}=10^{3}\,\mathrm{K}\). The two grey lines are the shock-crossing time \(t_{\mathrm{cross}}\) and the free-fall time \(t_{\mathrm{ff}}\), respectively.
Figure 15: Scale length of the temperature gradient for the relative velocity of DMSH in our analytical model. The blue and red lines indicate that the thermal conduction timescales equal the cooling time and shock crossing time, respectively. The dotted, solid and dashed lines correspond to the subhalo mass of \(10^{8}\,\mathrm{M}_{\odot}\), \(10^{9}\,\mathrm{M}_{\odot}\) and \(10^{10}\,\mathrm{M}_{\odot}\) assuming \(M_{\mathrm{DM}}/M_{\mathrm{gas}}=5.36\), respectively.
\(0.1\,{\rm Z}_{\odot}\). The dotted, solid and dashed lines correspond to the subhalo mass of \(10^{8}\,{\rm M}_{\odot}\), \(10^{9}\,{\rm M}_{\odot}\) and \(10^{10}\,{\rm M}_{\odot}\) assuming \(M_{\rm DM}/M_{\rm gas}=5.36\), respectively. It is clear that since the scale length of the temperature gradient is smaller than the spatial resolution of the simulation 100 pc, the conduction time is smaller than the cooling time and the shock crossing time. However, the scale length of the temperature gradient can be longer than the numerical resolution when performing high-resolution collision simulations, such as resolving to the several pc, the effective radius of a globular cluster.
The thermal conduction timescale is shorter than the shock crossing time at the velocity \(\sim 200\,{\rm km}\,{\rm s}^{-1}\) relevant to the formation of dark-matter-deficient galaxies. Therefore, thermal conduction should be taken into account in high-resolution simulations of DMSH collisions, since it may affect the gas outflow rate and the formation and evolution of the resulting galaxies.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline
\multicolumn{3}{c}{Initial conditions} & \multicolumn{6}{c}{Properties of the most massive galaxy or subhalo} & Collision-induced objects \\
\(M_{\rm sub}\) [M\({}_{\odot}\)] & \(v_{\rm rel}\) [km s\({}^{-1}\)] & Feedback & \(M_{*}\) [M\({}_{\odot}\)] & \(M_{\rm gas}\) [M\({}_{\odot}\)] & \(M_{\rm DM}\) [M\({}_{\odot}\)] & \(r_{\rm bound}\) [kpc] & \(r_{*}\) [kpc] & \(n\) & Type \\ \hline
 & 10 & S & \(1.02\times 10^{5}\) & \(5.88\times 10^{3}\) & \(1.77\times 10^{6}\) & 0.2 & 0.10 & 0.23 & Normal dwarf \\
 & 10 & W & \(2.47\times 10^{7}\) & \(6.73\times 10^{5}\) & \(3.89\times 10^{7}\) & 1.3 & 0.072 & 1.8 & Normal dwarf \\
 & 20 & S & - & - & - & - & - & - & No galaxy \\
 & 100 & S & - & - & - & - & - & - & No galaxy \\
\(10^{8}\) & 200 & S & - & - & - & - & - & - & No galaxy \\
 & 200 & W & \(1.65\times 10^{7}\) & \(7.94\times 10^{5}\) & 0 & 15.4 & 0.01 & 4.6 & DMDG \\
 & 400 & S & \(3.68\times 10^{3}\) & 0 & 0 & 3.0 & - & - & Star cluster \\
 & 600 & S & \(1.05\times 10^{6}\) & 0 & 0 & 3.8 & 0.81 & 0.3 & Star cluster \\
 & 800 & S & \(1.55\times 10^{4}\) & 0 & 0 & 3.3 & 0.50 & 0.99 & Star cluster \\
 & 800 & W & \(7.23\times 10^{5}\) & \(6.35\times 10^{5}\) & 0 & 9.4 & 0.071 & 3.11 & Star cluster \\ \hline
 & 20 & S & \(5.19\times 10^{6}\) & \(2.88\times 10^{7}\) & \(1.16\times 10^{9}\) & 16.1 & 0.37 & 6.0 & Normal dwarf \\
 & 20 & W & \(2.23\times 10^{5}\) & \(5.17\times 10^{6}\) & \(4.07\times 10^{8}\) & 2.4 & 0.16 & 2.2 & Normal dwarf \\
\(10^{9}\) & 200 & S & \(1.34\times 10^{7}\) & \(4.02\times 10^{5}\) & 0 & 17.7 & 0.51 & 0.83 & DMDG \\
 & 200 & W & \(1.99\times 10^{9}\) & \(9.16\times 10^{6}\) & 0 & 22.6 & 0.092 & 1.4 & DMDG \\
 & 1200 & S & \(5.48\times 10^{6}\) & \(1.48\times 10^{6}\) & 0 & 21.6 & - & - & No galaxy \\ \hline
 & 100 & S & \(2.08\times 10^{8}\) & \(3.58\times 10^{6}\) & \(3.29\times 10^{9}\) & 51.2 & 0.80 & 11 & Normal dwarf \\
\(10^{10}\) & 400 & S & \(2.08\times 10^{8}\) & \(6.09\times 10^{6}\) & 0 & 51.1 & 2.5 & 8.9 & DMDG \\
 & 1200 & S & \(8.30\times 10^{6}\) & 0 & 0 & 22.6 & 0.15 & 4.1 & DMDG \\ \hline
\end{tabular}
\end{table}
Table 2: Results of the collision simulations: total subhalo mass \(M_{\rm sub}\) and relative velocity \(v_{\rm rel}\) of the initial condition, feedback model (Strong or Weak), enclosed stellar mass \(M_{*}\), gas mass \(M_{\rm gas}\) and dark matter mass \(M_{\rm DM}\) within the bound radius \(r_{\rm bound}\), effective radius \(r_{*}\) and Sérsic index \(n\) of the most massive galaxy formed via the collision, and the type of collision-induced object (normal dwarf galaxies with \(M_{\rm DM}/M_{\rm tot}>0.5\), DMDGs with \(M_{\rm DM}/M_{\rm tot}\leq 0.5\) and \(M_{*}>10^{6}\,{\rm M}_{\odot}\), and star clusters with \(M_{\rm DM}/M_{\rm tot}\leq 0.5\) and \(M_{*}\leq 10^{6}\,{\rm M}_{\odot}\)).
## Acknowledgments
We would like to thank the anonymous referee for the useful suggestions. Numerical computations were performed with computational resources provided by the Multidisciplinary Cooperative Research Program in the Center for Computational Sciences, the University of Tsukuba, Oakforest-PACS operated by the Joint Center for Advanced High-Performance Computing (JCAHPC), and the FUJITSU Supercomputer PRIMEHPC FX1000 and FUJITSU Server PRIMERGY GX2570 (Wisteria/BDEC-01) at the Information Technology Center, the University of Tokyo. This work was supported by JSPS KAKENHI Grant Numbers JP22KJ0370, JP1J21J21888, JP20K04022.
## Data Availability
Data related to this work will be shared on reasonable request to the corresponding author.
|
2302.09527 | SanskritShala: A Neural Sanskrit NLP Toolkit with Web-Based Interface
for Pedagogical and Annotation Purposes | We present a neural Sanskrit Natural Language Processing (NLP) toolkit named
SanskritShala (a school of Sanskrit) to facilitate computational linguistic
analyses for several tasks such as word segmentation, morphological tagging,
dependency parsing, and compound type identification. Our systems currently
report state-of-the-art performance on available benchmark datasets for all
tasks. SanskritShala is deployed as a web-based application, which allows a
user to get real-time analysis for the given input. It is built with
easy-to-use interactive data annotation features that allow annotators to
correct the system predictions when it makes mistakes. We publicly release the
source codes of the 4 modules included in the toolkit, 7 word embedding models
that have been trained on publicly available Sanskrit corpora and multiple
annotated datasets such as word similarity, relatedness, categorization,
analogy prediction to assess intrinsic properties of word embeddings. So far as
we know, this is the first neural-based Sanskrit NLP toolkit that has a
web-based interface and a number of NLP modules. We are sure that the people
who are willing to work with Sanskrit will find it useful for pedagogical and
annotative purposes. SanskritShala is available at:
https://cnerg.iitkgp.ac.in/sanskritshala. The demo video of our platform can be
accessed at: https://youtu.be/x0X31Y9k0mw4. | Jivnesh Sandhan, Anshul Agarwal, Laxmidhar Behera, Tushar Sandhan, Pawan Goyal | 2023-02-19T09:58:55Z | http://arxiv.org/abs/2302.09527v2 | SanskritShala: A Neural Sanskrit NLP Toolkit with Web-Based Interface for Pedagogical and Annotation Purposes
###### Abstract
We present a neural Sanskrit Natural Language Processing (NLP) toolkit named SanskritShala1 to facilitate computational linguistic analyses for several tasks such as word segmentation, morphological tagging, dependency parsing, and compound type identification. Our systems currently report state-of-the-art performance on available benchmark datasets for all tasks. SanskritShala is deployed as a web-based application, which allows a user to get real-time analysis for the given input. It is built with easy-to-use interactive data annotation features that allow annotators to correct the system predictions when it makes mistakes. We publicly release the source codes of the 4 modules included in the toolkit, 7 word embedding models that have been trained on publicly available Sanskrit corpora and multiple annotated datasets such as word similarity, relatedness, categorization, analogy prediction to assess intrinsic properties of word embeddings. So far as we know, this is the first neural-based Sanskrit NLP toolkit that has a web-based interface and a number of NLP modules. We are sure that the people who are willing to work with Sanskrit will find it useful for pedagogical and annotative purposes. SanskritShala is available at: [https://cnerg.iitkgp.ac.in/sanskrishala](https://cnerg.iitkgp.ac.in/sanskrishala). The demo video of our platform can be accessed at: [https://youtu.be/x0x31V9k0mw4](https://youtu.be/x0x31V9k0mw4).
Footnote 1: It means ‘a school of Sanskrit’.
## 1 Introduction
Sanskrit is a culture-bearing and knowledge-preserving language of ancient India. Digitization has come a long way, making it easy for people to access ancient Sanskrit manuscripts Goyal et al. (2012); Adiga et al. (2021). However, we find that the utility of these digitized manuscripts is limited due to the user's lack of language expertise and various linguistic phenomena exhibited by the language. This motivates us to investigate how we can utilize natural language technologies to make Sanskrit texts more accessible.
The aim of this research is to create neural-based Sanskrit NLP systems that are accessible through a user-friendly web interface. The Sanskrit language presents a range of challenges for building deep learning solutions, such as the _sandhi_ phenomenon, a rich morphology, frequent compounding, flexible word order, and limited resources Sandhan et al. (2022); Krishna et al. (2021); Sandhan et al. (2021). To overcome these challenges, 4 preliminary tasks were identified as essential for processing Sanskrit texts: word segmentation, morphological tagging, dependency parsing, and compound type identification. The word segmentation task is complicated by the _sandhi_ phenomenon, which transforms the word boundaries Sandhan et al. (2022). The lack of robust morphological analyzers makes it challenging to extract morphological information, which is crucial for dependency parsing. Similarly, dependency information is essential for several downstream tasks such as word order linearisation Krishna et al. (2019) which helps to decode possible interpretation of the poetic composition. Additionally, the ubiquitous nature of compounding in Sanskrit is difficult due to the implicitly encoded semantic relationship between its constituents Sandhan et al. (2022). These 4 tasks can be viewed as a preliminary requirement for developing robust NLP technology for Sanskrit. Thus, we develop novel neural-based linguistically informed architectures for all 4 tasks, reporting state-of-the-art performance on Sanskrit benchmark datasets Sandhan et al. (2022);c,a). We also illustrate the efficacy of our language agnostic proposed systems in multiple low-resource languages.
In this work, we introduce a neural Sanskrit NLP toolkit named SanskritShala2 to assist computational linguistic analyses involving multiple tasks such as word segmentation, morphological tagging, dependency parsing, and compound type identification. SanskritShala is also deployed as a web application that enables users to input text and gain real-time linguistic analysis from our pretrained systems. It is also equipped with user-friendly interactive data annotation capabilities that allow annotators to rectify the system when it makes errors. It provides the following benefits: (1) A user with no prior experience with deep learning can utilise it for educational purposes. (2) It can function as a semi-supervised annotation tool that requires human oversight for erroneous corrections. We publicly release the source code of the 4 modules included in the toolkit, 7 word embedding models that have been trained on publicly available Sanskrit corpora and multiple annotated datasets such as word similarity, relatedness, categorization, analogy prediction to measure the word embeddings' quality. To the best of our knowledge, this is the first neural-based Sanskrit NLP toolkit that contains a variety of NLP modules integrated with a web-based interface.
Summarily, our key contributions are as follows:
* We introduce the first neural Sanskrit NLP toolkit to facilitate automatic linguistic analyses for 4 downstream tasks (SS3).
* We release 7 pretrained Sanskrit embeddings and suit of 4 intrinsic evaluation datasets to measure the word embeddings' quality (SS4).
* We integrate SanskritShala with a user-friendly web-based interface which is helpful for pedagogical purposes and in developing annotated datasets (SS4).
* We publicly release codebase and datasets of all the modules of SanskritShala which currently mark the state-of-the-art results.3
Footnote 3: [https://github.com/Jivnesh/SanskritShala](https://github.com/Jivnesh/SanskritShala)
## 2 Related Work on Sanskrit NLP Tools
Recently, the Sanskrit Computational Linguistics (SCL) field has seen significant growth in building web-based tools to help understand Sanskrit texts. Goyal and Huet (2016) introduced the Sanskrit Heritage Reader (SHR), a lexicon-driven shallow parser that aids in the selection of segmentation solutions. Sansadhani is another web-based tool consisting of various rule-based modules. Recently, Terdalkar and Bhattacharya (2021, 2022) introduced a web-based annotation tool for knowledge-graph construction and a metrical analysis.
In short, tools for NLP can be divided into two groups: rule-based and annotation tools. Rule-based tools have limitations such as not providing a final solution, limited vocabulary coverage, and lacking user-friendly annotation features. Annotation tools, on the other hand, don't have the recommendations of rule-based systems, relying solely on annotators. To address these limitations, a web-based annotation framework called SHR++ (Krishna et al., 2020) was proposed. It combines the strengths of both types of tools by offering all possible solutions from rule-based system SHR for tasks like word segmentation and morphological tagging, allowing annotators to choose the best solution rather than starting from scratch.
Our proposal, SanskritShala, goes a step further by integrating a neural-based NLP toolkit that combines state-of-the-art neural-based pre-trained models with rule-based suggestions through a web-based interface. Each module of SanskritShala is trained to predict the solutions from the exhaustive candidate solution space generated by rule-based systems. Hence, it makes predictions in real time using neural-based models that have already been trained. Thus, a complete solution is shown to the users / annotators, which was not possible in any of the previous attempts.
Further, annotators can easily correct the mispredictions of the system with the help of user-friendly web-based interface. This would significantly reduce the overall cognitive load of annotators. To the best of our knowledge, SanskritShala is the first NLP toolkit available for a range of tasks with a user friendly annotation interface integrated with the neural-based modules.
## 3 A Neural NLP Sanskrit Toolkit
In this section, we describe SanskritShala, which is a neural Sanskrit NLP toolkit designed to aid computational linguistic analysis including various tasks, such as word segmentation, morphological tagging, dependency parsing, and compound type identification. It is also available as a web application that allows users to input text and obtain real-time linguistic analysis from our pretrained algorithms. We elucidate SanskritShala by first elaborating on its key modules.
Word Tokenizer: Earlier _lexicon-driven_ systems for Sanskrit word segmentation (SWS) rely on the Sanskrit Heritage Reader (Goyal and Huet, 2016, SHR), a rule-based system, to obtain the exhaustive solution space for segmentation, followed by diverse approaches to find the most valid solution. However, these systems are rendered ineffective when they encounter out-of-vocabulary words. Later, _data-driven_ systems for SWS were built using the most recent techniques in deep learning, but they cannot utilize the available candidate solution space. To overcome the drawbacks of both lines of modelling, we build a **T**ransformer-based **L**inguistically-**I**nformed **S**anskrit Tokenizer (TransLIST) Sandhan et al. (2022) containing (1) a component that encodes the character-level and word-level potential candidate solutions, which tackles the _sandhi_ scenario typical of SWS and is compatible with a partially available candidate solution space, (2) a novel soft-masked attention for prioritizing a selected set of candidates, and (3) a novel path ranking module to correct mispredictions. Figure 1(a) illustrates the TransLIST architecture, where the candidate solutions obtained from SHR are used as auxiliary information. In terms of the perfect match (PM) evaluation metric, TransLIST surpasses the existing state-of-the-art Hellwig and Nehrdich (2018) by 7.2 absolute points.
**Morphological Tagger:** Sanskrit is a morphologically-rich fusional Indian language with 40,000 possible labels for inflectional morphology Krishna et al. (2020); Gupta et al. (2020), where homonymy and syncretism are predominant Krishna et al. (2018). We train a neural-based
Figure 1: (a) Toy illustration of the TransLIST system.“disobhava”. Translation: “Become a servant.” (b) LemmaTag architecture in which multi-task learning formulation is leveraged to predict morphological tags and lemmas by employing bidirectional RNNs with character-level and word-level representations. (c) Proposed ensembled architecture for dependency parsing (d) Toy example illustrating the context-sensitive multi-task learning system: “aham pita-ambaram dharám” (Translation: “I wear a yellow cloth”) where ‘pita-ambaram’ is a compound having _Tatpurusa_ semantic class according to the context presented.
architecture (Kondratyuk et al., 2018, LemmaTag) on Sanskrit dataset (Krishnan et al., 2020). Figure 1(b) illustrates the system architecture in which multi-task learning formulation is leveraged to predict morphological tags and lemmas by employing bidirectional RNNs with character-level and word-level representations. We find that both tasks help by sharing the encoder, predicting label subcategories, and feeding the tagger output as input to the lemmatizer (Kondratyuk et al., 2018). Currently, our system trained on the Sanskrit dataset stands first on the Hackathon dataset (Krishnan et al., 2020) leaderboard.
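To make the multi-task formulation concrete, the following PyTorch sketch shows the general pattern of a shared encoder with a tag head whose soft prediction is fed into a lemma head. It is a deliberately simplified illustration under several assumptions: character-level representations are omitted, lemmatization is treated as classification over a closed lemma vocabulary (whereas LemmaTag generates lemmas at the character level), and all sizes and names are placeholders.

```python
import torch
import torch.nn as nn

class MultiTaskTagger(nn.Module):
    """Simplified LemmaTag-style sketch: shared BiLSTM encoder, a tag head,
    and a lemma head that also consumes the (soft) tag prediction."""
    def __init__(self, vocab, n_tags, n_lemmas, emb=128, hid=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb, padding_idx=0)
        self.encoder = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.tag_head = nn.Linear(2 * hid, n_tags)
        # the lemma head sees the shared encoding plus the tag distribution
        self.lemma_head = nn.Linear(2 * hid + n_tags, n_lemmas)

    def forward(self, tokens):
        h, _ = self.encoder(self.embed(tokens))          # (B, T, 2*hid)
        tag_logits = self.tag_head(h)                    # (B, T, n_tags)
        tag_probs = tag_logits.softmax(dim=-1)
        lemma_logits = self.lemma_head(torch.cat([h, tag_probs], dim=-1))
        return tag_logits, lemma_logits

# Joint loss over both tasks (toy shapes and random data for illustration)
model = MultiTaskTagger(vocab=5000, n_tags=300, n_lemmas=2000)
tokens = torch.randint(1, 5000, (2, 7))
gold_tags = torch.randint(0, 300, (2, 7))
gold_lemmas = torch.randint(0, 2000, (2, 7))
tag_logits, lemma_logits = model(tokens)
loss = nn.functional.cross_entropy(tag_logits.reshape(-1, 300), gold_tags.reshape(-1)) \
     + nn.functional.cross_entropy(lemma_logits.reshape(-1, 2000), gold_lemmas.reshape(-1))
loss.backward()
```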
Dependency Parser:We focus on low-resource Sanskrit dependency parsing. Numerous strategies are tailored to improve task-specific performance in low-resource scenarios. Although these strategies are well-known to the NLP community, it is not obvious to choose the best-performing ensemble of these methods for a low-resource language of interest, and not much effort has been given to gauging the usefulness of these methods. We investigate 5 low-resource strategies in our ensembled Sanskrit parser (Sandhan et al., 2022): data augmentation, multi-task learning, sequential transfer learning, pretraining, cross/mono-lingual and self-training. Figure 1(c) shows our ensembled system, which supersedes the current state-of-the-art (Krishna et al., 2020) for Sanskrit by 1.2 points absolute gain (Unlabelled Attached Score) and shows on par performance in terms of Labelled Attached Score. Our extensive multi-lingual experimentation on a variety of low-resource languages demonstrates significant improvements for languages that are not covered by pretrained language models.
Sanskrit Compound Type Identifier: SaCTI is a multi-class classification task that identifies semantic relationships between the components of a compound. Prior methods only used the lexical information from the constituents and did not take into account the syntactic and contextual information that is most crucial for SaCTI. The SaCTI task is difficult mainly due to the implicitly encoded, context-dependent semantic relationship between the compound's constituents. Thus, we introduce a novel multi-task learning approach (Sandhan et al., 2022) (Figure 1(d)) which includes contextual information and enhances the complementary syntactic information by employing morphological parsing and dependency parsing as two auxiliary tasks. Our approach outperforms the state-of-the-art by \(7.7\) points (F1-score) absolute gain on the benchmark datasets.
## 4 Sanskrit Resources in SanskritShala
In this section, we describe 7 word embeddings pretrained on Sanskrit corpora and suit of 4 intrinsic tasks datasets to assess the quality of word embeddings, followed by the description of web interface.
Pretrained word embeddings for Sanskrit: There are two types of embedding methods: static and contextualized. Table 1 shows how they are categorized based on the smallest unit of input to the embedding model, such as character, subword, or token level. The paper focuses on two token-level word embeddings: Mikolov et al. (2013, word2vec) and Pennington et al. (2014, GloVe). Word2vec is the foundation for all subsequent embeddings and works on a local context window, while GloVe considers the global context. To address the OOV issue, subword (Wieting et al., 2016; Bojanowski et al., 2017; Heinzerling and Strube, 2018) and character-level (Kim et al., 2016; Jozefowicz et al., 2016) modeling have been proposed. We also explore two contextualized embeddings: ELMo (Peters et al., 2018) and ALBERT (Lan et al., 2020), a lighter version of BERT. We trained these 6 embedding methods on Sanskrit corpora and made the pretrained models publicly available (Sandhan et al., 2023).4 The following section describes our proposed pretraining for low-resource settings.
Footnote 4: [https://github.com/Jivnesh/SanskritShala/tree/master/EvalSan](https://github.com/Jivnesh/SanskritShala/tree/master/EvalSan)
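As an illustration of how static token-level and subword embeddings of this kind can be trained, the following sketch uses gensim; the corpus file, tokenization, query word and hyperparameters are placeholders and do not correspond to the released SanskritShala models.

```python
from gensim.models import Word2Vec, FastText

# Assumed input: a tokenized corpus, one list of tokens per line.
# The file name and pre-processing are placeholders.
corpus = [line.split() for line in open("sanskrit_corpus.txt", encoding="utf-8")]

# Local-context-window embeddings (word2vec, skip-gram).
w2v = Word2Vec(sentences=corpus, vector_size=300, window=5, min_count=2, sg=1, epochs=10)

# Subword-aware embeddings (fastText) help with out-of-vocabulary forms.
ft = FastText(sentences=corpus, vector_size=300, window=5, min_count=2, sg=1, epochs=10)

w2v.save("w2v_sanskrit.model")
ft.save("fasttext_sanskrit.model")

# Nearest neighbours of a query word (the query token is a placeholder).
print(w2v.wv.most_similar("rāma", topn=5))
```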
LCM Pretraining:We propose a supervised pretraining, which automatically leverages morphological information using the pretrained encoders. In a nutshell, LCM integrates word representations from multiple encoders trained on three independent auxiliary tasks into the encoder of the neural dependency parser. LCM follows a pipeline-based approach consisting of two steps: pretraining and integration. Pretraining uses a sequence labelling
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Class** & **Input type** & **Systems** \\ \hline Static & character & charLM \\ \hline & subword & fastText \\ \hline & token & word2vec, gloVe, LCM \\ \hline Contextualized & character & ELMo \\ \hline & subword & ALBERT \\ \hline \end{tabular}
\end{table}
Table 1: Overview of Sanskrit pretrained embeddings.
Figure 4: (a) Dependency parser: Interactive module for the dependency parsing task which directly loads predicted dependency trees from our pretrain model and allows user to correct mispredictions using our interactive interface. (b) Illustration of compound identifier
Figure 3: (a) The candidate solution space generated by SHR for the word segmentation task and the predicted solution by our pretrained model is recommended for the sequence _‘prabhūtanaranägena balenopavivesa ha’_ using a yellow highlight. (b) Morphological Tagger: For each word, we show possible morphological analyses suggested by SHR as well as our system prediction in green if it falls in SHR’s candidate space, otherwise in orange.
Figure 2: The web interface of the SanskritShala. At the bottom right, a rule-based chatbot is added to navigate users on the platform to give users a user-friendly experience.
paradigm and trains encoders for three independent auxiliary tasks. Later, these pretrained encoders are combined with the encoder of the neural parser via a gating mechanism similar to Sato et al. (2017). The LCM consists of three sequence labelling-based auxiliary tasks, namely, predicting the dependency label between a modifier-modified pair (**LT**), the monolithic morphological label (**MT**), and the case attribute of each nominal **(CT)**. We encourage readers to refer Sandhan et al. (2021, LCM) for more details.
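The gating step can be sketched generically as below; this is only an illustrative PyTorch fragment of a sigmoid-gated combination of two encoder states, not the exact formulation of Sato et al. (2017) or of LCM, and it would be applied once per auxiliary encoder (LT, MT, CT).

```python
import torch
import torch.nn as nn

class GatedCombination(nn.Module):
    """Generic gating sketch: combine the parser encoder state with one
    auxiliary (pretrained, frozen) encoder state via a learned sigmoid gate."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, h_parser, h_aux):
        g = torch.sigmoid(self.gate(torch.cat([h_parser, h_aux], dim=-1)))
        return g * h_parser + (1.0 - g) * h_aux

# Toy usage: token representations of shape (batch, seq_len, dim)
combine = GatedCombination(dim=256)
h_parser = torch.randn(2, 10, 256)
h_lt = torch.randn(2, 10, 256)      # e.g. output of the frozen LT-task encoder
fused = combine(h_parser, h_lt)     # same shape, fed on to the parser's scorer
print(fused.shape)
```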
Datasets:The quality of word embedding spaces is evaluated through intrinsic and extrinsic methods. This study focuses on intrinsic evaluation, which involves assessing semantic and syntactic information in the words without testing on NLP applications. It is based on works such as Mikolov et al. (2013) and Baroni et al. (2014). These evaluations require a query inventory containing a query word and a related target word. However, such query inventories are not readily available for Sanskrit. To address this, we annotated query inventories for 4 intrinsic tasks: analogy prediction, synonym detection, relatedness, and concept categorization. The inventories were constructed using resources such as Sanskrit WordNet (Kulkarni, 2017), Amarakosa (Nair and Kulkarni, 2010), and Sanskrit Heritage Reader (Goyal and Huet, 2016; Huet and Goyal, 2013).
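For these intrinsic tasks, relatedness and synonym detection can be scored by cosine similarity and analogy prediction by the usual 3CosAdd rule. The small sketch below assumes vectors trained as in the earlier gensim example; paths, query words and the inventory format are placeholders rather than the released datasets.

```python
import numpy as np
from gensim.models import Word2Vec

# Load vectors trained as in the previous sketch (path is a placeholder).
wv = Word2Vec.load("w2v_sanskrit.model").wv

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Relatedness / synonym detection: score a (query, target) pair.
def relatedness(query, target):
    return cosine(wv[query], wv[target])

# Analogy prediction with the 3CosAdd rule: a : b :: c : ?
def analogy(a, b, c, topn=1):
    return wv.most_similar(positive=[b, c], negative=[a], topn=topn)

# Accuracy over a query inventory of (a, b, c, gold) tuples (placeholder data).
def analogy_accuracy(inventory):
    hits = sum(1 for a, b, c, gold in inventory if analogy(a, b, c)[0][0] == gold)
    return hits / max(len(inventory), 1)
```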
Web Interface:Figure 2 shows our Sanskrit-Shala toolkit that offers interactive web-based predictions for various NLP tasks. The toolkit is built using React framework, which makes it user-friendly and easy to use. One of the tasks it handles is the word segmentation task, which is built on top of the web-based application called SHR++. The SHR++ demonstration is depicted in Figure 3(a). The user inputs a Sanskrit string, which is then sent in real-time to SHR for potential word splits. The system prediction is then obtained from the pretrained word tokenizer. The human annotator is presented with the candidate solution space, with the system prediction highlighted in yellow. The toolkit also features a flask-based application for morphological tagging, which takes user input and scrapes possible morphological tags for each word using SHR. As shown in Figure 3(b), the predictions of the pretrained morphological tagger are displayed in green or orange, depending on whether they are present in the candidate solution of SHR or not. The user can also add a new tag if the actual tag is missing in the SHR solution space or the system's prediction. For the dependency parsing module, we have built a react-based front-end. The user input is passed to the pretrained model to generate a dependency structure. As illustrated in Figure 4(a), the front-end automatically loads the predicted dependency tree and allows the user to make corrections if there are any mispredictions. Additionally, Figure 4(b) shows a flask-based application for the compound type identifier, where users can give input to the system through its web interface. The final annotations can be downloaded after each individual module. We plan to maintain the progress of Sanskrit NLP and offer an overview of available datasets and existing state-of-the-art via the leaderboard for various tasks.
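A minimal sketch of how such a flask-based prediction endpoint can be wired is given below; the route name and the tagger stub are placeholders, not the actual SanskritShala API, and in the deployed system the stub would call the pretrained model together with the SHR candidate scraper.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_tagger(sentence: str):
    """Placeholder for the pretrained morphological tagger."""
    return [{"token": tok, "tag": "UNK"} for tok in sentence.split()]

@app.route("/api/morph", methods=["POST"])
def morph():
    text = request.get_json(force=True).get("text", "")
    return jsonify({"input": text, "analysis": run_tagger(text)})

if __name__ == "__main__":
    app.run(port=5000)
```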
Interactive Chatbot: SanskritShala-bot is a rule-based chatbot that makes it easy to automate simple and repetitive user requests, like answering frequently asked questions and directing users to relevant resources. It is also easier to set up and maintain than more complicated AI-powered chatbots, which makes it a good choice given our limited resources. SanskritShala-bot is especially useful for helping many users quickly and effectively. It helps familiarize users with the platform by providing them with information and guidance on how to use it. It can answer questions about the platform's features, help users find their way around it, and explain step-by-step how to do certain tasks. This makes it easier for users to get started, leading to a better user experience.
## 5 Conclusion
We present the first neural-based Sanskrit NLP toolkit, SanskritShala, which facilitates diverse linguistic analyses for tasks such as word segmentation, morphological tagging, dependency parsing and compound type identification. It is set up as a web-based application to make the toolkit easier to use for teaching and annotating. All the codebase, datasets and web-based applications are publicly available. We also release word embedding models trained on publicly available Sanskrit corpora and various annotated datasets for 4 intrinsic evaluation tasks to assess the intrinsic properties of word embeddings. We strongly believe that our toolkit will benefit people who are willing to work with Sanskrit and will eventually accelerate Sanskrit NLP research.
### Limitations
We plan to extend SanskritShala by integrating more downstream tasks such as Post-OCR correction, named entity recognition, verse recommendation, word order linearisation, and machine translation. Improving the performance of existing tasks would be important. For example, the current dependency parser is very fragile (performance drops by 50%) in the poetry domain.
### Ethics Statement
Our work involves the development of a platform for annotating Sanskrit text. We believe that this platform will be useful for people who are willing to work with Sanskrit for research and educational purposes. We have ensured that our platform is designed ethically and responsibly. We do not foresee any harmful effects of our platform on any community. However, we caution users to use the platform carefully as our pretrained models are not perfect, and errors can occur in the annotation process. All our systems are built using publicly available benchmark datasets, and we have released all our pretrained models and source codes publicly for future research. We are committed to transparency and open access in our work, and we believe that sharing our resources will benefit the wider NLP community. We also acknowledge that NLP research can have potential ethical implications, particularly in areas such as data privacy, bias and discrimination. We are committed to continuing to consider these ethical implications as we develop our platform, and we welcome feedback from the community on how we can improve our ethical practices.
## Acknowledgements
We are thankful to Oliver Hellwig for the DCS dataset and Gerard Huet for the Sanskrit Heritage Engine. We are grateful to Hackathon organizers5 who encouraged us to build the best performing morphological tagger. We thank Amrith Krishna, Uniphore for providing the SHR++ interface as a starting point for the web interface of SanskritShala. We appreciate the assistance of Bishal Santra and Suman Chakraborty, IIT Kharagpur in deploying SanskritShala on the IIT Kharagpur server. We are grateful to Hritik Sharma, IIT Kanpur for helping us build a React-based front-end for SanskritShala. We are thankful to Hrishikesh Terdalkar, IIT Kanpur for helpful discussions on deploying systems. We appreciate the anonymous reviewers' insightful suggestions for enhancing this work. We'd like to say thanks to everyone who helped us make the different neural models for SanskritShala. The work of the first author is supported by the TCS Fellowship under the Project TCS/EE/2011191P.
Footnote 5: [https://sanskritpanini.github.io/index.html](https://sanskritpanini.github.io/index.html)
|
2301.08369 | Eigenvectors of graph Laplacians: a landscape | We review the properties of eigenvectors for the graph Laplacian matrix,
aiming at predicting a specific eigenvalue/vector from the geometry of the
graph. After considering classical graphs for which the spectrum is known, we
focus on eigenvectors that have zero components and extend the pioneering
results of Merris (1998) on graph transformations that preserve a given
eigenvalue $\lambda$ or shift it in a simple way. These transformations enable
us to obtain eigenvalues/vectors combinatorially instead of numerically; in
particular we show that graphs having eigenvalues $\lambda= 1,2,\dots,6$ up to
six vertices can be obtained from a short list of graphs. For the converse
problem of a $\lambda$ subgraph $G$ of a $\lambda$ graph $G"$, we prove results
and conjecture that $G$ and $G"$ are connected by two of the simple
transformations described above. | J. -G. Caputo, A. Knippel | 2023-01-20T00:24:37Z | http://arxiv.org/abs/2301.08369v1 | # Eigenvectors of graph Laplacians: a landscape
###### Abstract
We review the properties of eigenvectors for the graph Laplacian matrix, aiming at predicting a specific eigenvalue/vector from the geometry of the graph. After considering classical graphs for which the spectrum is known, we focus on eigenvectors that have zero components and extend the pioneering results of Merris (1998) on graph transformations that preserve a given eigenvalue \(\lambda\) or shift it in a simple way. These transformations enable us to obtain eigenvalues/vectors combinatorially instead of numerically; in particular we show that graphs having eigenvalues \(\lambda=1,2,\ldots,6\) up to six vertices can be obtained from a short list of graphs. For the converse problem of a \(\lambda\) subgraph \(G\) of a \(\lambda\) graph \(G''\), we prove results and conjecture that \(G\) and \(G''\) are connected by two of the simple transformations described above.
Laboratoire de Mathematiques, INSA de Rouen Normandie,
Normandie Universite
76801 Saint-Etienne du Rouvray, France
E-mail: [email protected], [email protected]
## 1 Introduction
The graph Laplacian is an important operator for both theoretical reasons and applications [1]. As its continuous counterpart, it arises naturally from conservation laws and has many applications in physics and engineering. The graph Laplacian has real eigenvalues and eigenvectors can be chosen orthogonal. This gives rise to a Fourier like description of evolution problems on graphs; an example is the graph wave equation, a natural model for weak miscible flows on a network, see the articles [2], [3]. This simple formalism proved very useful for modeling the electrical grid [4] or describing an epidemic on a geographical network [5]. Finally, a different application of graph Laplacians is spectral clustering in data science, see the review [6].
Almost sixty years ago, Mark Kac [7] asked the question : can one Hear the Shape of a Drum? Otherwise said, does the spectrum of the Laplacian characterize the graph completely? We know now that there are isospectral graphs so that there is no unique characterization. However, one can ask a simpler question: can one predict eigenvalues or eigenvectors from the geometry of the graph? From the literature, this seems very difficult, most of the results are inequalities, see for example the beautiful review by Mohar [8] and the extensive monograph [9].
Many of the results shown by Mohar [8] are inequalities on \(\lambda_{2}\), the first non zero eigenvalue. This eigenvalue is related to the important maximum cut problem in graph theory and also others. Mohar [8] also gives some inequalities on \(\lambda_{n}\), the maximum eigenvalue, in terms of the maximum of the sum of two degrees. Another important inequality concerns the interlacing of the spectra of two graphs with same vertices, differing only by an edge. However, little is known about the bulk of the spectrum, i.e. the eigenvalues between \(\lambda_{2}\) and \(\lambda_{n}\). A very important step in that direction was Merris's pioneering article [10] where he introduced "Laplacian eigenvector principles" that allow to predict how the spectrum of a graph is affected by contracting, adding or deleting edges and/or of coalescing vertices. Also, Das [11] showed that connecting an additional vertex to all vertices of a graph increases all eigenvalues (except 0) by one.
Following these studies, in [12] we characterized graphs which possess eigenvectors of components \(\pm 1\) (bivalent) and \(0,\pm 1\) (trivalent). This is novel because we give exact results, not inequalities. Here, we continue on this direction and focus on eigenvectors that have some zero coordinates, we term these soft nodes; such soft nodes are important because there, no action can be effected on the associated mechanical system [3]. In this article, we use the important properties of graphs with soft nodes, we call these soft-graphs, to highlight eigenvalues/eigenvectors that can be obtained combinatorially (instead of numerically). We first show that eigenvalues of graph Laplacians with weights one are integers or irrationals. Then we present well known classical graphs whose spectrum is known exactly. We describe five graph transformations that preserve a given eigenvalue and two that shift the eigenvalue in a simple way. Among the transformations that preserve an eigenvalue, the link was explicitly introduced in the remarkable article by Merris (_link principle_) [10]. The articulation and the soldering were contained in the same paper and we choose to present elementary versions of these transformations. We find two new transformations that preserve an eigenvalue: the regular expansion and the replacement of a coupling by a square. We also present transformations that shift an eigenvalue in a predictable way: insertion of a soft node, addition of a soft node, insertion of a matching. The first is new, the second and third were found by Das [11] and Merris [10] respectively.
In the last part of the article we enumerate all the small graphs up to six vertices that have a given eigenvalue \(\lambda\) and explain the relations between them using
the transformations discussed previously. It is remarkable that these graphs can all be obtained from a short list of graphs. However, the question is open for bigger graphs. Using the transformations mentioned above, \(\lambda\) soft graphs can be made arbitrarily large. The converse problem of a \(\lambda\) subgraph \(G\) of a \(\lambda\) graph \(G''\) is considered. We show that the matrix coupling the two Laplacians \(L(G)\) and \(L(G^{\prime})\), where \(G^{\prime}=G''-G\), is a graph Laplacian. If the remainder graph \(G^{\prime}\) is \(\lambda\), then it is formed using the articulation or link transformation. It is possible that the remainder graph \(G^{\prime}\) is not \(\lambda\) as long as it shares an eigenvector with \(G\). Then the two may be related by adding one or several soft nodes to \(G^{\prime}\). Finally, an argument shows that if \(G^{\prime}\) is not \(\lambda\) and does not share an eigenvector with \(G\), the problem has no solution. We finish the article by examining the \(\lambda\) soft graphs for \(\lambda=1,2,\ldots,6\) and insist on minimal \(\lambda\) soft graphs as generators of these families, using the transformations above.
The article is organized as follows. Section 2 introduces the main definitions. In section 3 we consider special graphs (chains, cycles, cliques, bipartite graphs) whose Laplacian spectrum is well known. The graph transformations preserving an eigenvalue are presented in section 4. Section 5 introduces graph transformations which shift eigenvalues. Finally section 6 introduces \(\lambda\) soft graphs, discusses \(\lambda\) sub-graphs and presents a classification of graphs up to six vertices.
## 2 The graph Laplacian : notation, definitions and properties
We consider a graph \(G(V,E)\) with a vertex set \(V\) of cardinality \(n\) and edge set \(E\) of cardinal \(m\) where \(n,m\) are finite. The graph is assumed connected with no loops and no multiple edges. The graph Laplacian matrix [9] is the \((n,n)\) matrix \(L(G)\) or \(L\) such that
\[L_{ij}=-1\mbox{ if edge i j exists},0\mbox{ otherwise},\ \ \ \ L_{ii}=m_{i},\mbox{ degree of i}, \tag{1}\]
where the degree of \(i\) is the number of edges connected to vertex \(i\).
The matrix \(L\) is symmetric so that it has real eigenvalues and we can always find a basis of orthogonal eigenvectors. Specifically we arrange the eigenvalues \(\lambda_{i}\) as
\[\lambda_{1}=0\leq\lambda_{2}\leq\cdots\leq\lambda_{n}. \tag{2}\]
We label the associated eigenvectors \(v^{1},v^{2},\ldots,v^{n}\).
We have the following properties
* \(v^{1}=\mathbf{1}\) the vector whose all components are \(1\).
* Let \(v^{i}_{k}\) be the \(k\) component of an eigenvector \(v^{i},\ \ i>1\). An immediate consequence of the \(v^{i}\) being orthogonal to \(v^{1}\) is \(\sum_{k}v^{i}_{k}=0\).
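The definition (1) and the two properties above are easy to check numerically; the following minimal Python/NumPy sketch (with 0-based vertex labels) builds \(L\) for a small chain and verifies them.

```python
import numpy as np

def laplacian(n, edges):
    """Graph Laplacian L = D - A for an unweighted graph on vertices 0..n-1."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] -= 1
        L[j, i] -= 1
        L[i, i] += 1
        L[j, j] += 1
    return L

# Small example: a chain on 4 vertices.
L = laplacian(4, [(0, 1), (1, 2), (2, 3)])
vals, vecs = np.linalg.eigh(L)

print(np.round(vals, 4))                # eigenvalues in increasing order, lambda_1 = 0
print(np.allclose(L @ np.ones(4), 0))   # the all-ones vector affords eigenvalue 0
# every eigenvector for a nonzero eigenvalue has components summing to zero
print([np.isclose(vecs[:, k].sum(), 0) for k in range(1, 4)])
```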
A number of the results we present hold for arbitrary positive edge weights, i.e. \(L_{ij}=-w_{ij}<0\) for \(ij\in E\) and \(L_{ii}=\sum_{j\sim i}w_{ij}\); this is the generalized Laplacian. We will indicate which results as we present them.
**Regular graphs**
The graph Laplacian can be written as
\[L=D-A\]
where \(A\) is the adjacency matrix and \(D\) is the diagonal matrix of the degrees.
We recall the definition of a regular graph.
**Definition 2.1** (Regular graph): _A graph is \(d\)-regular if every vertex has the same degree \(d\)._
For regular graphs \(D=d\mathrm{Id}_{n}\), where \(\mathrm{Id}_{n}\) is the identity matrix of order \(n\). For these graphs, all the properties obtained for \(L\) in the present article carry over to \(A\).
We will use the following definitions.
**Definition 2.2** (Soft node ): _A vertex \(s\) of a graph is a soft node for an eigenvalue \(\lambda\) of the graph Laplacian if there exists an eigenvector \(x\) for this eigenvalue such that \(x_{s}=0\)._
An important result due to Merris [10] is
**Theorem 2.3**: _Let \(G\) be a graph with \(n\) vertices. If \(0\neq\lambda<n\) is an eigenvalue of \(L(G)\) then any eigenvector affording \(\lambda\) has component \(0\) on every vertex of degree \(n-1\)._
**Definition 2.4** (\(k\)-partite graph): _A \(k\)-partite graph is a graph whose vertices can be partitioned into \(k\) different independent sets so that no two vertices within the same set are adjacent._
**Definition 2.5** (cycle): _A cycle is a connected graph where all vertices have degree 2._
**Definition 2.6** (chain): _A chain is a connected graph where two vertices have degree 1 and the other vertices have degree 2._
**Definition 2.7** (clique): _A clique or complete graph \(K_{n}\) is a simple graph where every two vertices are connected._
In the article we sometimes call configuration a vertex valued graph where the values correspond to an eigenvector of the graph Laplacian.
### Eigenvalues are integers or irrationals
We have the following result
**Theorem 2.8**: _If the eigenvalue \(\lambda\) is an integer, then there exist integer eigenvectors._
To see this consider the linear system
\[(L-\lambda I)X=0.\]
It can be solved using Gaussian elimination. Since \(\lambda\) and the entries of \(L\) are integers, the elimination only involves rational arithmetic, so a solution \(X\) with rational entries exists. Multiplying \(X\) by the product of the denominators of its entries, we obtain an eigenvector with integer entries.
We now show that the eigenvalues of a graph Laplacian are either integers or irrationals. We have the following rational root lemma on the roots of polynomials with integer coefficients, see for example [13]
**Lemma 2.9**: _Rational root_
_Consider the polynomial equation_
\[a_{n}x^{n}+a_{n-1}x^{n-1}+\cdots+a_{0}=0\]
_where the coefficients \(a_{i}\) are integers. Then, any rational solution \(x=p/q\), where \(p,q\) are relatively prime is such that \(p\) divides \(a_{0}\) and \(q\) divides \(a_{n}\)._
A consequence of this is
**Theorem 2.10**: _The eigenvalues of a graph Laplacian are either integers or irrationals._
_Proof._ Consider the characteristic equation of the graph Laplacian; it has the form

\[a_{n}x^{n}+a_{n-1}x^{n-1}+\cdots+a_{1}x=0,\qquad a_{1}\neq 0,\]

because the graph is connected so that \(0\) is a simple eigenvalue. A nonzero eigenvalue is therefore a root of \(a_{n}x^{n-1}+a_{n-1}x^{n-2}+\cdots+a_{1}=0\). Assume that the eigenvalue is of the form \(x=p/q\) with \(p,q\) relatively prime integers. Then from the lemma above, \(p\) divides \(a_{1}\) and \(q\) divides \(a_{n}\). Since \(a_{n}=\pm 1\), \(q=1\) so that \(x=p\) is an integer. \(\Box\)
The fact that some graphs have integer spectrum was discussed by Grone and Merris [14]. Many of their results are inequalities for \(\lambda_{2}\) and \(\lambda_{n-1}\). Our results complement their approach.
Special graphs
### Cliques and stars
The clique \(K_{n}\) has eigenvalue \(n\) with multiplicity \(n-1\) and eigenvalue \(0\). The eigenvectors for eigenvalue \(n\) can be chosen as \(v^{k}=e^{1}-e^{k},\ \ k=2,\ldots,n\). To see this note that
\[L=nI_{n}-\mathbf{1},\]
where \(I_{n}\) is the identity matrix of order \(n\) and \(\mathbf{1}\) is the \((n,n)\) matrix where all elements are \(1\).
A star of \(n\) vertices \(S_{n}\) is a tree such that one vertex, say vertex \(1\), is connected to all the others. For a star \(S_{n}\), the eigenvalues and eigenvectors are
* \(\lambda=1\) multiplicity \(n-2\), eigenvector \(e^{2}-e^{k},\ \ k=3,\ldots,n\)
* \(\lambda=n\) multiplicity \(1\), eigenvector \((n-1)e^{1}-\sum_{k=2}^{n}e^{k}\)
* \(\lambda=0\) multiplicity \(1\), eigenvector \(\mathbf{1}\)
### Bipartite and multipartite graphs
Consider a bipartite graph \(K_{n_{1},n_{2}}\). The Laplacian is
\[L=\begin{pmatrix}n_{2}&0&\ldots&0&-1&\ldots&&-1\\ 0&n_{2}&0&\ldots&-1&\ldots&&-1\\ \ldots&\ldots&\ldots&\ldots&\ldots&&\ldots&&\\ 0&\ldots&0&n_{2}&-1&\ldots&&-1\\ -1&\ldots&&-1&n_{1}&0&\ldots&0\\ -1&\ldots&&-1&0&n_{1}&\ldots&0\\ \ldots&&&&\ldots&\ldots&\ldots\\ -1&\ldots&&-1&0&0&\ldots&n_{1}\end{pmatrix}, \tag{3}\]
where the top left bloc has size \(n_{1}\times n_{1}\), and the bottom right bloc \(n_{2}\times n_{2}\). The eigenvalues with their multiplicities denoted as exponents are
\[0^{1},\ \ n_{1}^{n_{2}-1},\ \ n_{2}^{n_{1}-1},\ \ (n_{1}+n_{2})^{1}.\]
Eigenvectors for \(n_{1}\) can be chosen as \(e^{n_{1}+1}-e^{i}\ \ (i=n_{1}+2,\ldots,n_{1}+n_{2})\). The eigenvector for \(n=n_{1}+n_{2}\) is \((1/n_{1},\ldots,1/n_{1},-1/n_{2},\ldots,-1/n_{2})^{T}\).
Similarly, the spectrum of a multipartite graph \(K_{n_{1},n_{2},\ldots n_{p}}\) is
\[0^{1},\ \ (n-n_{1})^{n_{1}-1},\ \ (n-n_{2})^{n_{2}-1},\ldots,\ \ (n-n_{p})^{n_{p}-1},\ \ n^{p-1}.\]
The eigenvectors associated to \(n-n_{1}\) are composed of \(1\) and \(-1\) in two vertices of part \(1\) padded with zeros for the rest.
### Cycles
For a cycle, the Laplacian is a circulant matrix, therefore its spectrum is well-known. The eigenvalues are
\[\mu_{k}=4\sin^{2}\left[\frac{(k-1)\pi}{n}\right],\ \ k=1,\ldots,n. \tag{4}\]
They are associated to the complex eigenvectors \(v^{k}\) whose components are
\[v^{k}_{j}=\exp\left[\frac{i(j-1)(k-1)2\pi}{n}\right]\ \,j=1,\ldots n. \tag{5}\]
The real eigenvectors \(w^{k},\ x^{k}\) are,
\[w^{k}=(0,\ \sin(a_{k}),\ \sin(2a_{k}),\ \ldots,\ \sin((n-1)a_{k}))^{T}, \tag{6}\] \[x^{k}=(1,\ \cos(a_{k}),\ \cos(2a_{k}),\ \ldots,\ \cos((n-1)a_{k}))^{T},\] (7) \[a_{k}=\frac{2(k-1)\pi}{n} \tag{8}\]
Ordering the eigenvalues, we have
\[\lambda_{1}=\mu_{1}=0, \tag{9}\] \[\lambda_{2}=\lambda_{3}=\mu_{2},\] (10) \[\lambda_{2k}=\lambda_{2k+1}=\mu_{k+1},\] (11) \[\ldots \tag{12}\]
For \(n=2p+1\)
\[\lambda_{2p}=\lambda_{2p+1}=\mu_{p+1}\]
For \(n=2p\)
\[\lambda_{2p}=\mu_{p}=4\]
is an eigenvalue of multiplicity 1; an eigenvector is \((1,-1,\ldots,1,-1)^{T}\). In all other cases, the eigenvalues have multiplicity two so that all vertices are soft nodes.
Remark that the maximum number of 0s is \(n/2\). To see this, note that if two adjacent vertices have value 0 then their neighbors in the cycle must have 0 as well and we only have 0s, but the null vector is not an eigenvector. This means that we have at most \(n/2\) 0s. This bound is reached for \(n\) even.
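The closed form (4) can be checked directly against a numerical diagonalization; a short sketch, using the same 0-based indexing as before:

```python
import numpy as np

def cycle_laplacian(n):
    L = 2 * np.eye(n)
    for i in range(n):
        L[i, (i + 1) % n] = L[(i + 1) % n, i] = -1
    return L

n = 7
numeric = np.sort(np.linalg.eigvalsh(cycle_laplacian(n)))
formula = np.sort([4 * np.sin((k - 1) * np.pi / n) ** 2 for k in range(1, n + 1)])
print(np.allclose(numeric, formula))   # True: equation (4) reproduces the spectrum
```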
### Chains
For chains \(C_{n}\), there are only single eigenvalues, they are [15]
\[\lambda_{k}=4\sin^{2}(\frac{\pi(k-1)}{2n})\ \,k=1,\ldots,n. \tag{13}\]
The eigenvector \(v^{k}\) has components
\[v^{k}_{j}=\cos\left(\frac{\pi(k-1)}{n}(j-\frac{1}{2})\right)\ \,j=1,\ldots n. \tag{14}\]
Obviously the cosine is zero if and only if:
\[(k-1)(2j-1)=n(1+2m), \tag{15}\]
where \(m\) is an integer. There is no solution for \(n=2^{\alpha}\), for \(\alpha\) a positive integer. Apart from this case, there is always at least one soft node. If \(n\) is a prime number, the middle vertex \(j=(n+1)/2\) is the only soft node. For \(k\) odd, all vertices \(j\) such that \(2j-1\) divides \(n\) have a zero value, including the middle vertex.
For \(n\) odd, chains and cycles share \((n-1)/2\) eigenvalues and eigenvectors. To see this consider a chain with \(n=2p+1\). All \(k=2q+1\) give a chain eigenvalue \(\lambda_{k}=4\sin^{2}(\frac{\pi q}{2p+1})\) that is also a cycle eigenvalue. The eigenvector components \(v^{q}_{j}\) are such that \(v^{q}_{1}=v^{q}_{2p+1}\).
## 4 Transformations preserving eigenvalues
In this section, we present four main transformations of graphs such that one eigenvalue is preserved. These are the link between two vertices, the articulation, the soldering and the contraction/expansion. The first three transformations are in the literature in a general form; we choose to present them in their most elementary form.
Furthermore, these transformations will all be unary, they act on a single graph. Binary transformations can be reduced to unary transformations for non connected graphs.
Using these transformations we can generate new graphs that have a soft node, starting from minimal graphs having soft nodes.
### Link between two equal vertices
An important theorem due to Merris [10] connects equal component vertices.
**Theorem 4.1**: **Link** _between two vertices : Let \(\lambda\) be an eigenvalue of \(L(G)\) for an eigenvector \(x\). If \(x_{i}=x_{j}\) then \(\lambda\) is an eigenvalue of \(L(G^{\prime})\) for \(x\) where the graph \(G^{\prime}\) is obtained from \(G\) by deleting or adding the edge \(e=ij\)._
This transformation preserves the eigenvalue and eigenvector. It applies to multiple graphs. Fig. 1 shows examples of the transformation.
We have the following corollary of the theorem.
**Theorem 4.2**: _Let \(\lambda\) be an eigenvalue of two graphs \(G_{1}\) and \(G_{2}\) for respective eigenvectors \(x^{1},\ x^{2}\) with two vertices \(i,j\), such that \(x^{1}_{i}\neq 0\) or \(x^{2}_{j}\neq 0\). Then the graph \(G(V_{1}\cup V_{2},E_{1}\cup E_{2}\cup ij)\) affords the eigenvector \(y=x^{2}_{j}\left(\begin{matrix}x^{1}\\ 0\end{matrix}\right)+x^{1}_{i}\left(\begin{matrix}0\\ x^{2}\end{matrix}\right)\) for \(\lambda\)._
This allows to generate many more graphs that have an eigenvalue \(\lambda\).
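As a numerical illustration of Theorem 4.1, the sketch below uses the cycle \(C_{4}\), whose eigenvector \((1,0,-1,0)\) for \(\lambda=2\) has equal components on the two non-adjacent vertices \(1\) and \(3\); adding the chord between them preserves the eigenpair.

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] -= 1; L[j, i] -= 1
        L[i, i] += 1; L[j, j] += 1
    return L

# Cycle C_4: lambda = 2 is afforded by x = (1, 0, -1, 0), so x_1 = x_3.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
x = np.array([1.0, 0.0, -1.0, 0.0])
print(np.allclose(laplacian(4, edges) @ x, 2 * x))        # True

# Adding the edge 1-3 between two equal-component vertices keeps the eigenpair.
L_new = laplacian(4, edges + [(1, 3)])
print(np.allclose(L_new @ x, 2 * x))                      # True: Theorem 4.1
```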
### Articulation
An elementary transformation inspired by Merris's principle of reduction and extension [10] is to add a soft node to an existing soft node. This does not change the eigenvalue. We have the following result.
**Theorem 4.3**: **Articulation (A)** _: Assume a graph \(G(V,E)\) with \(n\) vertices where \(x\) is an eigenvector such that \(x_{i}=0\) for an eigenvalue \(\lambda\). Then, the extension \(x^{\prime}\) of \(x\) such that \(x^{\prime}_{1:n}=x_{1:n}\) and \(x^{\prime}_{n+1}=0\) is an eigenvector for \(\lambda\) for the Laplacian \(L(G^{\prime})\) where \(G^{\prime}(V^{\prime},E^{\prime})\) such that \(V^{\prime}=V\cup(n+1)\) and \(E^{\prime}=E\cup i(n+1)\)._
Figure 1: Example of the transform : link between two equal vertices.
The general case presented by Merris [10] amounts to applying this elementary transformation several times.
The transformation is valid for graphs with arbitrary weights and the extended edges can have arbitrary weights.
Fig. 2 illustrates this property on the two graphs labeled 5.6 and 5.23 in the classification given in [1]. An immediate consequence of this elementary transform is that any soft node can be extended into an arbitrarily large graph of soft nodes while preserving the eigenvalue and extending the eigenvector in a trivial way. Fig. 3 shows two graphs that have the same eigenvalue \(\lambda=1\) and that are connected by the articulation transform.
### Soldering
A consequence of the contraction principle of Merris [10] is that coalescing two soft nodes of a graph leaves invariant the eigenvalue. This is especially important because we can "solder" two graphs at a soft node.
Figure 3: Two graphs connected by the articulation transform.
Figure 2: Example of the articulation property. The large dot corresponds to a soft node.
**Theorem 4.4**: **Soldering** _: Let \(x\) be an eigenvector affording \(\lambda\) for a graph \(G\). Let \(i\) and \(j\) be two soft nodes without common neighbors. Let \(G^{\prime}\) be the graph obtained from \(G\) by contracting \(i\) and \(j\) and \(x^{\prime}\) be the vector obtained from \(x\) by deleting its \(j\)th component. Then \(x^{\prime}\) is an eigenvector of \(L(G^{\prime})\) for \(\lambda\)._
This transformation is valid for graphs with arbitrary weights.
### Regular expansion of a graph
We have the following theorem.
**Theorem 4.5**: _Let \(x\) be an eigenvector of a graph \(G\) for \(\lambda\) and let \(i\) be a vertex connected only to \(p\) soft nodes. Let \(G^{\prime}\) be the graph obtained from \(G\) by replacing \(i\) by a \(d\)-regular graph whose \(k\) vertices are all connected to the \(p\) soft nodes. Then \(\lambda=p\) and an eigenvector \(x^{\prime}\) of \(G^{\prime}\) is formed by assigning to the new vertices, the value \(x^{\prime}_{j}=x_{i}/k\)._
_Proof._ Without loss of generality, we can assume that \(i=n\) and that the \(p\) soft
Figure 4: Examples of the soldering transform.
nodes are \(n-p+1,\ldots,n-1\). We have
\[\begin{pmatrix}\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots\\ \cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots\\ \cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots\\ 0&\cdots&0&-1&\cdots&-1&p\end{pmatrix}\begin{pmatrix}\cdots\\ 0\\ 0\\ x_{n}\end{pmatrix}=\lambda\begin{pmatrix}\cdots\\ 0\\ 0\\ x_{n}\end{pmatrix}\]
The \(n\)th line reads
\[px_{n}=\lambda x_{n}\]
so that \(\lambda=p\). The \(n-1\)th line reads
\[\alpha+(-1)x_{n}=px_{n-1}=0\]
where \(\alpha\) is the sum of the other terms.
Let us detail the eigenvector relation for the Laplacian for \(G^{\prime}\). Consider any new vertex \(j\) linked to the \(p\) soft nodes and to \(d\) new nodes. The corresponding line of the eigenvector relation for the Laplacian for \(G^{\prime}\) reads
\[(d+p)x^{\prime}_{j}+\sum_{i\sim j,i\geq n}(-1)x^{\prime}_{i}=\lambda^{\prime }x^{\prime}_{j}.\]
This implies
\[(d+p-\lambda^{\prime})x^{\prime}_{j}=\sum_{i\sim j,i\geq n}x^{\prime}_{i}.\]
An obvious solution is
\[\lambda^{\prime}=\lambda=p,\ \ \ x^{\prime}_{i}=x^{\prime}_{n}\ \ \forall i\geq n+1.\]
The value \(x^{\prime}_{n}\) is obtained by examining line \(n-1\). We have
\[\alpha+\sum_{i=n}^{n-k-1}(-1)x^{\prime}_{i}=0\]
so that
\[x^{\prime}_{n}=\frac{x_{n}}{k}.\]
In fact, we can get all solutions by satisfying the two conditions
\[\forall j\geq n\ \ dx^{\prime}_{j}=\sum_{i\sim j}x^{\prime}_{i},\ \ x_{n}=\sum_{i\geq n}x^{\prime}_{i}. \tag{16}\]
\(\Box\)
Fig. 5 shows examples of expansion from a single soft node for different values of \(d\). Here the eigenvalue is \(1\). Fig. 6 shows examples of expansion from two soft nodes. The eigenvalue is \(2\). For \(d=2\), the values at the bold edges are such that their sum is equal to 1. For \(d=2\), the values at the
triangle are all equal to \(t\), the same holds for the square with a value \(s\). These values verify \(3t+4s=1\).
Figure 5: Examples of expansion from a single soft node.
Figure 6: Examples of expansion from two soft nodes. For \(d=2\), the values at the triangle are all equal to \(t\), the same holds for the square with a value \(s\). These values verify \(3t+4s=1\).
### Replace coupling by square
We have the following transformation that leaves the eigenvalue unchanged [12].
**Theorem 4.6**: _(Replace an edge by a soft square)_
_Let \(x\) be an eigenvector of the Laplacian of a graph \(G\) for an eigenvalue \(\lambda\). Let \(G^{\prime}\) be the graph obtained from \(G\) by deleting a joint \(ij\) such that \(x_{i}=-x_{j}\) and adding two soft vertices \(k,l\in V(G^{\prime})\) for the extension \(x^{\prime}\) of \(x\) (i.e. \(x^{\prime}_{m}=x_{m}\) for \(m\in V(G)\) and \(x^{\prime}_{k}=x^{\prime}_{l}=0\)) and the four edges \(ik,kj,il,lj\). Then, \(x^{\prime}\) is an eigenvector of the Laplacian of \(G^{\prime}\) for the eigenvalue \(\lambda\)._
This result was proved in [12] for a graph with weights \(1\). Here we generalize it to a graph with arbitrary weights.
_Proof._
The eigenvalue relation at vertex \(i\) reads
\[(d_{i}-\lambda)x_{i}=\sum_{m\sim i,m\neq j}w_{i,m}x_{m}+w_{i,j}x_{j}\]
Since \(x_{i}=-x_{j}\), this implies
\[(d_{i}+w_{i,j}-\lambda)x_{i}=\sum_{m\sim i,m\neq j}w_{i,m}x_{m}.\]
Introducing the two new vertices \(k,l\) such that \(x^{\prime}_{k}=x^{\prime}_{l}=0\) connected to \(i\) by edges of weights \(w_{i,k}=\alpha w_{i,j}\), \(w_{i,l}=(1-\alpha)w_{i,j}\), the relation above leads to
\[(d_{i}+w_{i,k}+w_{i,l}-\lambda)x^{\prime}_{i}=\sum_{m\sim i}w_{i,m}x^{\prime}_ {m}+w_{i,k}x^{\prime}_{k}+w_{i,l}x^{\prime}_{l},\]
which shows that \(x^{\prime}\) is eigenvector of the new graph.
\(\Box\)
See Fig. 7 for an illustration of the theorem.
Figure 7: Replacement of coupling by a square, in both cases the eigenvalue is \(\lambda=2\).
Transversality : change of eigenvalue
Here we present operators that change the eigenvalue of a graph Laplacian in a predictable way. The operators shift the eigenvalue \(\lambda\) to \(\lambda+1\) for the first two and \(\lambda+2\) for the third one. At the end of the section we introduce the eigenvalue of a product graph.
### Inserting soft nodes
**Theorem 5.1**: _Let \(x\) be an eigenvector of a graph \(G\) with weights 1 for \(\lambda\). Assume we can pair the non zero components of \(x\) as \(\{i,j\}\) where \(x_{i}=-x_{j}\) non zero. Let \(G^{\prime}\) be the graph obtained from \(G\) by including \(k\) soft nodes between each pair \(\{i,j\}\). The vector \(x^{\prime}\) so obtained is an eigenvector of the Laplacian of \(G^{\prime}\) for eigenvalue \(\lambda+k\)._
_Proof._ Let \(i,j\in V(G)\) be a pair such that \(x_{i}=-x_{j}\). The eigenvector equation reads
\[d_{i}x_{i}-\sum_{m\sim i}x_{m}=\lambda x_{i}.\]
Introducing \(k\) new vertices \(x^{\prime}_{p}=0,\ \ p=1,\ldots k\) we can write the relation as
\[(d_{i}+k)x^{\prime}_{i}-\sum_{m\sim i}x^{\prime}_{m}=(\lambda+k)x^{\prime}_{i}.\]
This shows that \(x^{\prime}\) is an eigenvector for the new graph. \(\Box\)
Fig. 8 shows an example of the action of inserting a soft node.
Figure 8: Example of the action of inserting a soft node.
When the graph is weighted, the result is still valid. Consider adding only one soft vertex connected to \(i\) by a weight \(w_{i,k}\). The eigenvalue of the new graph is \(\lambda+w_{i,k}\).
This can transform a graph with an integer eigenvalue to a graph with an irrational eigenvalue.
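For illustration only, the following Python sketch checks Theorem 5.1 on the smallest possible example, a single edge with eigenvector \((1,-1)\) and eigenvalue \(2\); the helper function and the chosen graph are not part of the original text.

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] -= 1; L[j, i] -= 1
        L[i, i] += 1; L[j, j] += 1
    return L

x = np.array([1., -1.])                # eigenvector of the single edge for lambda = 2
for k in (1, 2, 3):
    # insert k soft nodes between the pair {0, 1}: each new node is joined to both 0 and 1
    edges = [(0, 1)] + [(0, 2 + p) for p in range(k)] + [(1, 2 + p) for p in range(k)]
    L = laplacian(2 + k, edges)
    x_ext = np.concatenate([x, np.zeros(k)])
    print(k, np.allclose(L @ x_ext, (2 + k) * x_ext))   # True: eigenvalue shifted to 2 + k
```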
### Addition of a soft node
Connecting a soft node to all the vertices of a graph augments all the non zero eigenvalues by \(1\). This result was found by Das [11]. We recover it here and present it for completeness.
**Theorem 5.2**: **Addition of a soft node** _: Let \(G(V,E)\) be a graph affording an eigenvalue \(\lambda\neq 0\) for an eigenvector \(x\). Then the new graph \(G^{\prime}\) obtained by adding a node connected to all the nodes of \(G\) has eigenvalue \(\lambda+1\) for the eigenvector \(x^{\prime}\) obtained by extending \(x\) by a zero component._
See Fig. 9 for examples.
Proof.: Assume \(\lambda\) to be an eigenvalue with eigenvector \(v\) for the Laplacian \(L(G)\) of a graph \(G\) with \(n\) vertices. Now add an extra vertex \(n+1\) connected to all vertices of \(G\) and form \(L(G\cup\{n+1\})\). We have the following identity
\[\begin{pmatrix}L(G)+I_{n}&-\hat{1}_{n}\\ -\hat{1}_{n}^{T}&n\end{pmatrix}\begin{pmatrix}v\\ 0\end{pmatrix}=(\lambda+1)\begin{pmatrix}v\\ 0\end{pmatrix},\]
where \(\hat{1}_{n}\) is the column vector of \(n\) ones; the last row holds because \(\hat{1}_{n}^{T}v=0\) for \(\lambda\neq 0\). This identity is exactly the eigenvalue relation for \(L(G\cup\{n+1\})\),
which proves the statement.
Important examples are the ones formed with the special graphs considered above. There, adding a vertex to a graph with \(n-1\) vertices, one knows explicitly \(n-1\) eigenvectors and eigenvalues of the resulting \(n\)-vertex graph.
Figure 9: Examples of the addition of a soft node.
Theorem 3.2 of Das [11] can be seen as a direct consequence of adding a soft node and an articulation to a graph.
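As a small numerical check of Theorem 5.2 (again only an illustration, not from the original text), the sketch below starts from the chain 3, whose eigenvector \((1,0,-1)\) affords \(\lambda=1\), adds a node connected to all vertices, and recovers the eigenvalue \(2\).

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] -= 1; L[j, i] -= 1
        L[i, i] += 1; L[j, j] += 1
    return L

chain3 = [(0, 1), (1, 2)]
x = np.array([1., 0., -1.])            # eigenvector of the chain 3 for lambda = 1
assert np.allclose(laplacian(3, chain3) @ x, 1.0 * x)

# add a new vertex 3 connected to every vertex of the chain
L_new = laplacian(4, chain3 + [(0, 3), (1, 3), (2, 3)])
x_ext = np.append(x, 0.0)              # the added node is soft (zero component)
print(np.allclose(L_new @ x_ext, 2.0 * x_ext))   # True: eigenvalue shifted by 1
```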
### Inserting a matching
First we define perfect and alternate perfect matchings.
**Definition 5.3** (Perfect matching): _A perfect matching of a graph \(G\) is a matching (i.e., an independent edge set) in which every vertex of the graph is incident to exactly one edge of the matching._
**Definition 5.4** (Alternate perfect matching): _An alternate perfect matching for a vector \(v\) on the nodes of a graph \(G\) is a perfect matching for the nonzero nodes such that edges \(e_{ij}\) of the matching satisfy \(v_{i}=-v_{j}\ \ (\neq 0)\)._
We have the following result [12] inspired by the alternating principle of Merris [10].
**Theorem 5.5** (Add/Delete an alternate perfect matching): _Let \(v\) be an eigenvector of \(L(G)\) affording an eigenvalue \(\lambda\). Let \(G^{\prime}\) be the graph obtained from \(G\) by adding (resp. deleting) an alternate perfect matching for \(v\). Then, \(v\) is an eigenvector of \(L(G^{\prime})\) affording the eigenvalue \(\lambda+2\) (resp. \(\lambda-2\))._
This is a second operator which shifts eigenvalues by \(\pm 2\). Examples are given in Fig. 10.
### Cartesian product
The Cartesian product \(G\Box H\) of two graphs \(G=(V,E)\) and \(H=(W,F)\) has vertex set \(V\times W=\{(v,w),\ v\in V,\ w\in W\}\). Two vertices \((v_{1},w_{1})\) and \((v_{2},w_{2})\) are adjacent if either \(v_{1}=v_{2}\) and \(w_{1}w_{2}\in F\), or \(w_{1}=w_{2}\) and \(v_{1}v_{2}\in E\). We have the following result, see Merris [10].
**Theorem 5.6**: _If \(x\) is an eigenvector of \(G\) affording \(\mu\) and \(y\) is an eigenvector of \(H\) affording \(\nu\), then the Kronecker product of \(x\) and \(y\), \(x\otimes y\), is an eigenvector of \(G\Box H\) for the eigenvalue \(\mu+\nu\)._
Fig. 11 illustrates the theorem.
Figure 10: Examples of inserting a matching.
Important examples are the ones formed with the special graphs considered above. There, one knows explicitly the eigenvectors and eigenvalues. For example, the cartesian product \(C_{n}\times C_{m}\) of two chains \(C_{n}\) and \(C_{m}\) with \(n\) and \(m\) nodes respectively has eigenvalues
\[\lambda_{i,j}=\lambda_{i}+\lambda_{j},\]
where \(\lambda_{i}\) (resp. \(\lambda_{j}\)) is an eigenvalue for \(C_{n}\) (resp. \(C_{m}\)). The eigenvectors are
\[v^{i,j}_{p,q}=\cos[\frac{\pi(i-1)}{n}(p-\frac{1}{2})]\cos[\frac{\pi(j-1)}{m}(q-\frac{1}{2})],\]
where \(i,p\in\{1,\ldots,n\},\quad j,q\in\{1,\ldots,m\}\).
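Theorem 5.6 can be rephrased by saying that the Laplacian of \(G\Box H\) is the Kronecker sum \(L(G)\otimes I+I\otimes L(H)\). The Python sketch below, given purely for illustration, verifies the eigenvalue \(\mu+\nu\) for a chain 3 and a cycle 4; the helper function and chosen graphs are not from the original text.

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] -= 1; L[j, i] -= 1
        L[i, i] += 1; L[j, j] += 1
    return L

LG = laplacian(3, [(0, 1), (1, 2)])                       # chain 3
LH = laplacian(4, [(0, 1), (1, 2), (2, 3), (3, 0)])       # cycle 4
L_prod = np.kron(LG, np.eye(4)) + np.kron(np.eye(3), LH)  # Laplacian of the Cartesian product

x = np.array([1., 0., -1.])            # eigenvector of LG for mu = 1
y = np.array([1., -1., 1., -1.])       # eigenvector of LH for nu = 4
xy = np.kron(x, y)                     # Kronecker product of the eigenvectors
print(np.allclose(L_prod @ xy, (1 + 4) * xy))   # True: eigenvalue mu + nu = 5
```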
### Graph complement
We recall the definition of the complement of a graph \(G\).
**Definition 5.7** (Complement of a graph ): _Given a graph \(G(V,E)\) with \(n\) vertices, its complement \(G^{c}\) is the graph \(G^{c}(V,E^{c})\) where \(E^{c}\) is the complement of \(E\) in the set of edges of the complete graph \(K_{n}\)._
We have the following property, see for example [1].
**Theorem 5.8**: _If \(x\) is an eigenvector of a graph \(G\) with \(n\) vertices affording \(\lambda\neq 0\), then \(x\) is an eigenvector of \(G^{c}\) affording \(n-\lambda\)._
An example is shown in Fig. 12. The eigenvalues and eigenvectors are given in table 1.
Figure 11: Cartesian product of two chains 3 (left) and of a cycle 4 and a chain 3 (right).
Often, \(G^{c}\) is not connected. An example where \(G^{c}\) is connected is the cycle 6.
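Since \(L(G^{c})=nI-J-L(G)\) for an unweighted graph on \(n\) vertices (with \(J\) the all-ones matrix), Theorem 5.8 is easily checked numerically. The sketch below, given only as an illustration, does so for the cycle 6 mentioned above.

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] -= 1; L[j, i] -= 1
        L[i, i] += 1; L[j, j] += 1
    return L

n = 6
L = laplacian(n, [(i, (i + 1) % n) for i in range(n)])    # cycle 6
Lc = n * np.eye(n) - np.ones((n, n)) - L                   # Laplacian of the complement
vals, vecs = np.linalg.eigh(L)
for lam, v in zip(vals, vecs.T):
    if lam > 1e-10:                                        # skip the zero eigenvalue
        print(round(lam, 4), np.allclose(Lc @ v, (n - lam) * v))   # True: eigenvalue n - lambda
```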
## 6 \(\lambda\)-soft graphs
### Definitions and properties
We introduce the notions of \(\lambda\), \(\lambda\) soft and \(\lambda\) soft minimal graphs. The transformations of the previous section will enable us to prove the relation between these two types of graphs.
**Definition 6.1**: _A graph \(G\) affording an eigenvector \(X\) for an eigenvalue \(\lambda\) is \(\lambda\)._
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
6.35 & 5.2361 & 5. & 4 & 3 & 0.7639 & 0 \\ \hline
6.101 & 0.7639 & 1 & 2 & 3 & 5.2361 & 0 \\ & & & & & & \\ \hline & 0.51167 & 0.70711 & 0. & 0.18257 & -0.19544 & 0.40825 \\ & -0.31623 & 0. & 0.70711 & -0.36515 & -0.31623 & 0.40825 \\ & -0.31623 & 0. & -0.70711 & -0.36515 & -0.31623 & 0.40825 \\ & 0.51167 & -0.70711 & 0. & 0.18257 & -0.19544 & 0.40825 \\ & -0.51167 & 0. & 0. & 0.73030 & 0.19544 & 0.40825 \\ & 0.12079 & 0. & 0. & -0.36515 & 0.82790 & 0.40825 \\ \hline \end{tabular}
\end{table}
Table 1: _Eigenvalues (top lines) and eigenvectors for the two complementary graphs 6.35 and 6.101 shown in Fig. 12_
Figure 12: Graph 6.35 (left) in the classification [1] and its complement 6.101 (right).
**Definition 6.2**: _A \(\lambda\) graph \(G\) affording an eigenvector \(X\) for the eigenvalue \(\lambda\) is \(\lambda\) soft if one of the entries of \(X\) is zero._
**Definition 6.3**: _A graph \(G\) affording an eigenvector \(X\) for an eigenvalue \(\lambda\) is \(\lambda\) minimal if it is \(\lambda\) and minimal in the sense of inclusion._
Clearly, for a given \(\lambda\), there is at least one \(\lambda\) minimal graph. As an example the 1 soft minimal graph is shown below.
### \(\lambda\) subgraph
In this section, we study the properties of a \(\lambda\) subgraph \(G\) included in a \(\lambda\) graph \(G"(V",E")\). Consider two graphs \(G(V,E)\), with \(n\) vertices, and \(G^{\prime}(V^{\prime},E^{\prime})\), with \(V^{\prime}=V"-V\) and \(n^{\prime}\) vertices, such that \(E\) only connects elements of \(V\) and \(E^{\prime}\) only connects elements of \(V^{\prime}\); both are included in the large graph \(G"\). Assume \(p\) vertices of \(G\) are linked to \(p^{\prime}\) vertices of \(G^{\prime}\). We label the \(p\) vertices of \(G\) as \(n-p+1,\ldots,n\) and the \(p^{\prime}\) vertices of \(G^{\prime}\) as \(1,\ldots,p^{\prime}\). We have
\[LX=\lambda X, \tag{17}\] \[L"X"=\lambda X", \tag{18}\]
where \(L"\) is the graph Laplacian for the large graph \(G"\); \(L"\) can be written as
\[L"=\begin{pmatrix}L&0\\ 0&L^{\prime}\end{pmatrix}+\begin{pmatrix}0&0&0&0\\ 0&a&-b&0\\ 0&-b^{T}&c&0\\ 0&0&0&0\end{pmatrix}.\]
A first result is
**Theorem 6.4**: _The square matrix \(\delta=\begin{pmatrix}a&-b\\ -b^{T}&c\end{pmatrix}\) is a graph Laplacian._
_Proof._ The submatrices \(a,b,c\) have respective sizes \(a(p,p),\ b(p,p^{\prime}),\ c(p^{\prime},p^{\prime}),\ a\) and \(c\) are diagonal and verify
\[a_{ii}=\sum_{j=1}^{p^{\prime}}b_{ij},\ \ c_{ii}=\sum_{j=1}^{p}b_{ji}. \tag{19}\]
In other words
\[a\hat{1}_{p}=b\hat{1}_{p^{\prime}},\ \ \ c\hat{1}_{p^{\prime}}=b^{T}\hat{1}_{p},\]
where \(\hat{1}_{p}\) is the column vector of \(p\) ones. \(\Box\)
At this point, we did not assume any relation between the eigenvectors \(X\) for \(G\) and \(X"\) for \(G"\). We have the following
**Theorem 6.5**: _The eigenvalue relations (17,18) imply either \(X=X"(1:n)\) or \(X(1:n-p)=0\)._
_Proof._ For \(p=1\) and \(\lambda\) a single eigenvalue, \(\mbox{rank}(L-\lambda I)=n-1\) so either \(X=X"(1:n)\) or \(X(1:n-1)=0\).
We admit the result for \(p>1\). \(\Box\)
We can then assume that the eigenvectors of \(L"\) have the form
\[L"\begin{pmatrix}X\\ X^{\prime}\end{pmatrix}=\lambda\begin{pmatrix}X\\ X^{\prime}\end{pmatrix},\]
where \(LX=\lambda X\). Substituting \(L"\), we get
\[\lambda\begin{pmatrix}X\\ X^{\prime}\end{pmatrix}=\begin{pmatrix}L&0\\ 0&L^{\prime}\end{pmatrix}\begin{pmatrix}X\\ X^{\prime}\end{pmatrix}+\begin{pmatrix}0&0&0&0\\ 0&a&-b&0\\ 0&-b^{T}&c&0\\ 0&0&0&0\end{pmatrix}\begin{pmatrix}X\\ X^{\prime}\end{pmatrix}.\]
Using the relation (17) we obtain
\[\begin{pmatrix}0&0\\ 0&a\end{pmatrix}X+\begin{pmatrix}0&0\\ -b&0\end{pmatrix}X^{\prime}=0, \tag{20}\] \[L^{\prime}X^{\prime}-\begin{pmatrix}0&b^{T}\\ 0&0\end{pmatrix}X+\begin{pmatrix}c&0\\ 0&0\end{pmatrix}X^{\prime}=\lambda X^{\prime}. \tag{21}\]
There are \(p\) non trivial equations in the first matrix equation and \(p^{\prime}\) in the second one. Using an array notation (like in Fortran), the system above can be written as
\[aX(n-p+1:n)-bX^{\prime}(1:p^{\prime})=0, \tag{22}\] \[-b^{T}X(n-p+1:n)+cX^{\prime}(1:p^{\prime})+(L^{\prime}X^{\prime} )(1:p^{\prime})=\lambda X^{\prime}(1:p^{\prime}),\] (23) \[(L^{\prime}X^{\prime})(p^{\prime}+1:n^{\prime})=\lambda X^{\prime }(p^{\prime}+1:n^{\prime}), \tag{24}\]
Extracting \(X\) from the first equation, we obtain
\[X(n-p+1:n)=a^{-1}bX^{\prime}(1:p^{\prime}), \tag{25}\]
and substituting in the second equation yields the closed system in \(X^{\prime}\)
\[(-b^{T}a^{-1}b+c)X^{\prime}(1:p^{\prime})+(L^{\prime}X^{\prime}) (1:p^{\prime})=\lambda X^{\prime}(1:p^{\prime}), \tag{26}\] \[(L^{\prime}X^{\prime})(p^{\prime}+1:n^{\prime})=\lambda X^{\prime }(p^{\prime}+1:n^{\prime}), \tag{27}\]
where we used the fact that the matrix \(a\) of the degrees of the connections is invertible by construction.
**Theorem 6.6**: _The matrix_
\[\Delta\equiv-b^{T}a^{-1}b+c,\]
_is a generalized graph Laplacian: it is a Laplacian of a weighted graph. Its entries are rationals and not necessarily integers._
_Proof._ To prove this, note first that \(\Delta\) is obviously symmetric. We have
\[\Delta\hat{1}_{p^{\prime}}=-b^{T}a^{-1}b\hat{1}_{p^{\prime}}+c\hat{1}_{p^{ \prime}}=-b^{T}a^{-1}a\hat{1}_{p}+b^{T}\hat{1}_{p}=0.\]
This shows that each diagonal element of \(\Delta\) is equal to minus the sum of the off-diagonal elements of its row, so that \(\Delta\) is a graph Laplacian. \(\square\) From theorem (2.10), the eigenvalues of \(\Delta\) are integers or irrationals and correspond to eigenvectors with integer or irrational components.
We then write equations (26,27) as
\[(\bar{\Delta}+L^{\prime})X^{\prime}=\lambda X^{\prime}, \tag{28}\]
where
\[\bar{\Delta}=\begin{pmatrix}\Delta&0\\ 0&0\end{pmatrix}\]
This is an eigenvalue relation for the graph Laplacian \((\bar{\Delta}+L^{\prime})\). Four cases occur.
* \(\lambda=0\) then \(X^{\prime}\) is a vector of equal components and \(X\) also.
* \(\lambda\neq 0\) is an eigenvalue of \(L^{\prime}\). Then one has the following **Theorem 6.7**: _Assume a graph \(G"\) is \(\lambda\) for an eigenvector \(X"=(X,X^{\prime})^{T}\) and contains a \(\lambda\) graph \(G\) for the eigenvector \(X\). Consider the graph \(G^{\prime}\) with vertices \(V(G")-V(G)\) and the corresponding edges in \(G"\). If \(G^{\prime}\) is \(\lambda\) then \(G"\) is obtained from \(G\) using the articulation or link transformations._
_Proof._ Since \(\lambda\neq 0\) is an eigenvalue of \(L^{\prime}\), we can choose \(X^{\prime}\) an eigenvector for \(\lambda\) so that \(L^{\prime}X^{\prime}=\lambda X^{\prime}\), then \(\Delta X^{\prime}=0\).
A first possibility is \(X^{\prime}=0\), this corresponds to an articulation between \(G\) and \(G^{\prime}\).
If \(X^{\prime}\neq 0\), \(L^{\prime}X^{\prime}=\lambda X^{\prime}\), implies that \(X^{\prime}\) is not a vector of equal components so that \(X^{\prime}\notin\operatorname{Null}(\Delta)\). The only possibility for \(\Delta X^{\prime}=0\) is \(\Delta=0\) so that
\[c=b^{T}a^{-1}b.\]
The term \((b^{T}a^{-1}b)_{ij}\) is
\[(b^{T}a^{-1}b)_{ij}=\sum_{k=1}^{p}\frac{b_{ki}b_{kj}}{a_{kk}}.\]
Since the matrix \(c\) is diagonal, we have \[\sum_{k=1}^{p}\frac{b_{ki}b_{kj}}{a_{kk}}=0,\ \forall i\neq j.\] Then \(b_{ki}b_{kj}=0\), so that a vertex \(k\) of \(G\) is connected to exactly one vertex of \(G^{\prime}\). Then \(p=p^{\prime}\). This implies \(a_{ii}=c_{ii}=1,\ \forall i\in\{1,\dots,p\}\). The graphs \(G\) and \(G^{\prime}\) are then connected by a number of edges between vertices of the same value. \(\Box\)
3. \(\lambda\neq 0\) is not an eigenvalue of \(L^{\prime}\) and \(L^{\prime}\) and \(\bar{\Delta}\) share a common eigenvector \(X^{\prime}\) for eigenvalues \(\lambda^{\prime}\) and \(\lambda-\lambda^{\prime}>0\). For \(\lambda-\lambda^{\prime}=1\), a possibility is to connect a soft node of \(G\) to \(G^{\prime}\). For \(\lambda-\lambda^{\prime}=p\) integer, a possibility is to connect \(p\) soft nodes of \(G\) to \(G^{\prime}\). We conjecture that there are no other possibilities.
4. \(\lambda\neq 0\) is not an eigenvalue of \(L^{\prime}\) and \(L^{\prime}\) and \(\bar{\Delta}\) have different eigenvectors. Then there is no solution to the eigenvalue problem (28). To see this, assume the eigenvalues and eigenvectors of \(L^{\prime}\) and \(\bar{\Delta}\) are respectively \(\nu_{i},V^{i}\), \(\mu_{i},W^{i}\) so that \[L^{\prime}V^{i}=\nu_{i}V^{i},\ \ \bar{\Delta}W^{i}=\mu_{i}W^{i},\ \ i=1,2,\dots n\] The eigenvectors can be chosen orthonormal and we have \[QV=WQ\] where \(Q=(q_{k}^{j})\) is an orthogonal matrix, \(V\) and \(W\) are the matrices whose columns are respectively \(V^{i}\) and \(W^{i}\). We write \[W^{j}=\sum_{k}q_{k}^{j}V^{k}.\] Assuming \(X^{\prime}\) exists, we can expand it as \(X^{\prime}=\sum_{i}\alpha_{i}V^{i}\) Plugging this expansion intro the relation \((\bar{\Delta}+L^{\prime})X^{\prime}=\lambda X^{\prime}\) yields \[\sum_{i}\left(\alpha_{i}\nu_{i}V^{i}+\alpha_{i}\sum_{j}q_{j}^{i}\mu_{j}\sum_{k }q_{k}^{j}V^{k}\right)=\sum_{i}\lambda\alpha_{i}\nu_{i}V^{i}\] Projecting on a vector \(V^{m}\) we get \[\alpha_{m}\nu_{m}+\alpha_{m}\sum_{j}q_{j}^{m}\mu_{j}q_{m}^{j}=\lambda\alpha_{m }\nu_{m}\] A first solution is \(\alpha_{m}=0,\forall m\) so that \(X^{\prime}=0\), an articulation. If \(\alpha_{m}\neq 0\) then we get the set of linear equations linking the \(\nu_{i}\) to the \(\mu_{i}\). \[\sum_{j}q_{j}^{m}\mu_{j}q_{m}^{j}=(\lambda-1)\nu_{m},\ \ m=1,\dots n\] Since \(Q\) is a general orthogonal matrix, the terms \(q_{j}^{m}\) are irrational in general. Therefore we conjecture that there are no solutions.
### Examples of \(\lambda\) subgraphs
Using simple examples, we illustrate the different scenarios considered above. We first consider theorem (6.7), see Fig. 13.
Consider the configuration on the left of Fig. 13. We have
\[L=\begin{pmatrix}1&0&-1\\ 0&1&-1\\ -1&-1&2\end{pmatrix},\ \ \ \ L^{\prime}=\begin{pmatrix}1&-1&0&0\\ -1&3&-1&-1\\ 0&-1&1&0\\ 0&-1&0&1\end{pmatrix}. \tag{29}\]
Note that \(L\) and \(L^{\prime}\) have \(1\) as eigenvalue. Here \(p=1,p^{\prime}=3\) and
\[a=3,b=(1,1,1)^{T},c=\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix},\]
so that
\[\Delta=\begin{pmatrix}\frac{2}{3}&-\frac{1}{3}&-\frac{1}{3}\\ -\frac{1}{3}&\frac{2}{3}&-\frac{1}{3}\\ -\frac{1}{3}&-\frac{1}{3}&\frac{2}{3}\end{pmatrix}.\]
The matrices \(\bar{\Delta}\) and \(L^{\prime}\) have different eigenvectors for the same eigenvalue \(1\). Choosing \(X^{\prime}\) an eigenvector of \(L^{\prime}\) for the eigenvalue \(1\) yields \(\bar{\Delta}X^{\prime}=0\). The only solution is \(X^{\prime}=0\), this is an articulation.
Figure 13: Two configurations where a graph \(G\) is included in a larger graph \(G\)” for the eigenvalue \(1\).
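For the left configuration, the matrix \(\Delta\) can be reproduced with a few lines of Python; this is only an illustrative check, with the entries of \(a\), \(b\) and \(c\) taken from the values given above.

```python
import numpy as np

a = np.array([[3.]])            # cross-edge degree of the single interface vertex of G
b = np.array([[1., 1., 1.]])    # cross edges, shape (p, p') = (1, 3)
c = np.eye(3)                   # cross-edge degrees of the interface vertices of G'

Delta = -b.T @ np.linalg.inv(a) @ b + c
print(Delta)                                  # 2/3 on the diagonal, -1/3 off the diagonal
print(np.allclose(Delta @ np.ones(3), 0))     # True: rows sum to zero, a (weighted) graph Laplacian
```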
For the configuration on the right of Fig. 13 we have \(p=p^{\prime}=3\).
\[a=\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix},\quad b=\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix},\quad c=\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix},\]
so that \(\Delta=\begin{pmatrix}0&0&0\\ 0&0&0\\ 0&0&0\end{pmatrix}.\) We have
\[LX=1X, \tag{30}\] \[L"(X,X^{\prime})^{T}=1(X,X^{\prime})^{T}, \tag{31}\]
where \(X=(X_{1},X_{2},X_{3})^{T}\). In this configuration, \(X^{\prime}\) is an eigenvector of \(L^{\prime}\) for the eigenvalue \(1\), and we have Link connections between \(G\) and \(G^{\prime}\).
Finally, we show an example of case (iii) where \(G,G"\) are \(2\) soft and \(G^{\prime}\) is \(1\) soft.
We have to solve \((\bar{\Delta}+L^{\prime})X^{\prime}=2X^{\prime}\) where
\[L=\begin{pmatrix}2&-1&0&-1\\ -1&2&-1&0\\ 0&-1&2&-1\\ -1&0&-1&2\end{pmatrix},\,\,\,L^{\prime}=\begin{pmatrix}1&0&-1&0\\ 0&1&-1&0\\ -1&-1&3&-1\\ 0&0&-1&1\end{pmatrix},\,\,\bar{\Delta}=\begin{pmatrix}0.5&-0.5&0&0\\ -0.5&0.5&0&0\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix}.\]
Note that the eigenvector \(X^{\prime}=(1,-1,0,0)^{T}\) is shared by \(L^{\prime}\) and \(\bar{\Delta}\) so that \((\bar{\Delta}+L^{\prime})X^{\prime}=2X^{\prime}\).
The transformations introduced in the two previous sections enable us to link the different members of a given class. To summarize, we have
* Articulation : one can connect any graph \(G_{2}\) to the soft nodes of a given graph \(G_{1}\) and keep the eigenvalue. The new graph \(G_{1}\cup G_{2}\) has soft nodes everywhere in \(G_{2}\).
Figure 14: An example of case (iii) for eigenvalue \(\lambda=2\).
* Link : introducing a link between equal nodes does not change the eigenvalue and eigenvector.
* Contraction of a d-regular graph linked to a soft node. To have minimal graphs in the sense of Link we need to take \(d=0\).
* Soldering : one can connect two graphs by contracting one or several soft nodes of each graph.
In the next subsections we present a classification of small size \(\lambda\) soft graphs for different \(\lambda\)s.
### \(1\)-soft graphs
Fig. 15 shows some of the \(1\)s graphs generated by expansion. Note the variety of possibilities.
Figure 15: \(1_{s}\) graphs: graphs generated by expansion.
Fig. 16 shows some of the \(1_{s}\) graphs generated by articulation.
Fig. 17 shows the 1s graphs with at most 6 vertices. Notice how they are linked by articulation (A), expansion/contraction (C) and links, and can all be obtained from the graph 5.3 (chain 3). The connection \(Ch3\) - 28 is a contraction of two \(Ch3\) chains. Connecting two 3-chains \(Ch3\) with a Link transformation, we obtain a chain 6 \(Ch6\). One can also go from \(Ch6\) to 23 by soldering the two soft nodes.
### \(2\)-soft graphs
Fig. 18 shows some of the 2s graphs generated by expansion of the 5.7 graph.
Figure 17: 1-soft graphs. The soft nodes are in boldface. We only present symmetric expansions so that links are possible.
Similarly Fig. 19 shows some of the 2s graphs generated by articulation from the same graph.
Fig. 20 shows all 2s graphs with at most 6 vertices. Notice how all graphs can be generated from 5.5 and 5.1.
Figure 18: \(2_{s}\) graphs: graphs generated by expansion.
### \(3\)-soft graphs
Fig. 21 shows a \(3\)s graph generated by expansion of graph 5.22.
Figure 21: \(3_{s}\) graphs: graphs generated by expansion.
Figure 20: \(2\)-soft graphs
Fig. 22 shows some 3s graphs generated by articulation on graphs 5.2 and 5.22.
Figure 22: \(3_{s}\) graphs: graphs generated by articulation
Fig. 23 shows all 3s graphs with at most 6 vertices. Notice how they are generated by graphs 5.2, 5.22 and 5.3. Graph 5.20 is the soldering of two graphs 5.2.
### \(4\)-soft graphs
Fig. 24 shows some 4s graphs generated by articulation on the graph 5.3.
Figure 23: 3-soft graphs.
Fig. 25 shows the 4s graphs with at most 6 vertices. Notice how they are generated from graphs 5.5 (2 configurations) and 6.93. The graph 5.7 is included to show its connection to 6.93 (replacing a matching by a square).
### \(5\)-soft graphs
Fig. 26 shows \(5\)s graphs with at most \(6\) vertices. Notice how they stem from graphs 6.70, 5.13 and two configurations of 5.15.
Figure 25: \(4\)-soft graphs.
### \(6\)-soft graphs
Fig. 27 shows \(6\)s graphs with at most \(6\) vertices. Notice how these graphs stem from graphs \(6.9\), \(6.37\), \(6.2\) (two configurations) and \(6.16\).
### x-soft graphs, x non integer
As proven above, the only eigenvalues that are non integer are irrational. For these, there can be soft nodes. Among the 5 node graphs, we found irrational eigenvalues for the chain 5 and the cycle 5. In addition, there are the following cases, collected in Table 2.
Remarks
The graph 5.16 is 3 soft. The graphs 5.21 and 5.24 are not part of an integer soft class. They are shown in Fig. 28 with their soft nodes.
\begin{table}
\begin{tabular}{|c|c|c|} \hline nb. in & eigenvalue & eigenvector \\ classification & & \\ \hline
5.16 & \(\lambda_{2}=3-\sqrt{2}\) & \((-0.27,-0.65,0,0.65,0.27)^{T}\) \\ \hline
5.16 & \(\lambda_{4}=3+\sqrt{2}\) & \((0.65,-0.27,0,0.27,-0.65)^{T}\) \\ \hline
5.21 & \(\lambda_{4}=(7+\sqrt{5})/2\) & \((-0.6,0.6,0.37,0,-0.37)^{T}\) \\ \hline
5.21 & \(\lambda_{5}=(7-\sqrt{5})/2\) & \((-0.37,0.37,-0.6,0,0.6)^{T}\) \\ \hline
5.24 & \(\lambda_{2}=(5-\sqrt{13})/2\) & \((-0.67,-0.2,0.2,0.67,0)^{T}\) \\ \hline
5.24 & \(\lambda_{5}=(5+\sqrt{13})/2\) & \((-0.2,0.67,-0.67,0.2,0)^{T}\) \\ \hline
5.30 (chain 5) & \(\lambda_{4}=(3+\sqrt{5})/2\) & \((-0.6,0.6,0.37,0,-0.37)^{T}\) \\ \hline
5.30 (chain 5) & \(\lambda_{5}=(3-\sqrt{5})/2\) & \((-0.37,0.37,-0.6,0,0.6)^{T}\) \\ \hline \end{tabular}
\end{table}
Table 2: _Non trivial graphs with soft nodes and non integer eigenvalues._
Figure 27: 6-soft graphs.
* Graph 5.16 is a chain 4 with a soft node added.
* Graph 5.21 is obtained from chain 5 (graph 5.30) by inserting a soft node.
### Minimal \(\lambda\) soft graphs
We computed the minimal \(\lambda\) soft graphs for \(\lambda=1,\ldots,6\). These are presented in Fig. 29.
Note that there is a unique minimal \(\lambda\)-soft graph for \(\lambda=1\) and 2. There
Figure 28: The graphs 5.16, 5.21 and 5.24 with their soft node
Figure 29: The minimal \(\lambda\) soft graphs for \(\lambda=1,2,3,4,5\) and 6.
are two minimal 3-soft graphs and 4-soft graphs. There are four minimal 5-soft graphs. The first two are generated by respectively inserting a soft node and adding a soft node to the minimal 4-soft graph. The third and fourth ones are obtained respectively by adding three soft nodes to the 2 clique and adding a soft node to the 4 star.
Three systematic ways to generate minimal \((\lambda+1)\)-soft graphs are (i) inserting a soft node into a \(\lambda\)-soft graph, (ii) adding a soft node to a \(\lambda\)-soft graph and (iii) adding a matching to a \((\lambda-1)\)-soft graph. One can therefore systematically generate minimal 7-soft, 8-soft, ... graphs.
## 7 Conclusion
We reviewed families of graphs whose spectrum is known and presented transformations that preserve an eigenvalue. The link, articulation and soldering were contained in Merris [10] and we found two new transformations : the regular expansion and the replacement of a coupling by a square. We also showed transformations that shift an eigenvalue : insertion of a soft node (+1), addition of a soft node (+1), insertion of a matching (+2). The first is new and the second and third were found by Das [11] and Merris [10] respectively.
From this appears a landscape of graphs formed by families of \(\lambda\)-graphs connected by these transformations. These structures remain to be understood. We presented the connections between small graphs with up to six vertices. Is it possible to obtain all the \(\lambda\) graphs using a series of elementary transformations? Or just part of these?
We partially answered the question of whether one can predict eigenvalues/eigenvectors from the geometry of a graph by examining the situation of a \(\lambda\) subgraph \(G\) of a \(\lambda\) graph \(G"\). We showed that if the remainder graph \(G^{\prime}\) is \(\lambda\), it is an articulation or a link of \(G\). If not, and if \(G\) and \(G^{\prime}\) share an eigenvector, the two may be related by adding one or several soft nodes to \(G^{\prime}\).
A number of the graphs we studied have irrational eigenvalues and we can define \(\lambda\) graphs for these as well because the transformations apply. However we did not find any connection between \(\lambda\) graphs and \(\mu\) graphs if \(\lambda\) is an integer and \(\mu\) an irrational.
|
2310.11179 | A diffusive wetting model for water entry/exit based on the
weakly-compressible SPH method | This paper proposes a diffusive wetting model for the weakly-compressible
smoothed particle hydrodynamics (WCSPH) method to simulate individual water
entry/exit as well as the complete process from water entry to exit. The model
is composed of a physically consistent diffusive wetting equation to describe
the wetting evolution at the fluid-solid interface, a wetting-coupled
identification approach to determine the type of fluid particles by taking into
account the wetting degree of the contacted solid, and a numerical
regularization on the fluid particles at fully wetted fluid-solid interface.
The accuracy, efficiency, and versatility of the present model are validated
through qualitative and quantitative comparisons with experiments, including
the 3-D water entry of a sphere, the 2-D water entry/exit of a cylinder, and
the complete process from water entry to exit of a 2-D cylinder. | Shuoguo Zhang, Yu Fan, Chi Zhang, Nikolaus Adams, Xiangyu Hu | 2023-10-17T11:54:13Z | http://arxiv.org/abs/2310.11179v1 | # A diffusive wetting model for water entry/exit based on the weakly-compressible SPH method
###### Abstract
This paper proposes a diffusive wetting model for the weakly-compressible smoothed particle hydrodynamics (WCSPH) method to simulate individual water entry/exit as well as the complete process from water entry to exit. The model is composed of a physically consistent diffusive wetting equation to describe the wetting evolution at the fluid-solid interface, a wetting-coupled identification approach to determine the type of fluid particles by taking into account the wetting degree of the contacted solid, and a numerical regularization on the fluid particles at fully wetted fluid-solid interface. The accuracy, efficiency, and versatility of the present model are validated through qualitative and quantitative comparisons with experiments, including the 3-D water entry of a sphere, the 2-D water entry/exit of a cylinder, and the complete process from water entry to exit of a 2-D cylinder.
Water entry/exit, Diffusive wetting, Surface wettability, Surface particle identification, Weakly-compressible SPH
## 1 Introduction
Water entry and exit have been studied for decades and are of great significance for marine engineering, naval hydrodynamic applications, and more (Zhang _et al._, 2017\(a\); Watson _et al._, 2021). For water entry, in the classical large-scale hydrodynamics perspective based on Von Karman (1929) and Wagner (1931), the inertial effect dominates the impact on the free surface. Therefore, factors such as gravity, surface wettability, and air-cushion effect can generally be neglected when predicting the hydrodynamic impacting force, object trajectory, and induced flow behavior at the initial stage of high-speed impact (Oliver, 2002_a_). However, as demonstrated by numerous studies (Worthington & Cole, 1897; May, 1951; Cheny & Walters, 1996; Cossali _et al._, 2004; Ogawa _et al._, 2006), this simplification is not valid at the later stage, especially when the impacting velocity is not sufficiently high (Kim & Park, 2019; Yoo _et al._, 2022). To reveal the unforeseen mechanisms in the physics of impact, Duez _et al._ (2007) experimentally investigated the relationship between the splashing behavior and surface wettability and their dependence on impacting velocity. They found that the threshold velocity for air entrainment is determined by the surface wettability (represented by the static contact angle in the experiment). Such a mechanism has been further validated and confirmed by experimental studies, in which wettability is modified using different surface treatments (Gekle _et al._, 2009; Aristoff & Bush, 2009; Gekle & Gordillo, 2010; Ueda & Iguchi, 2012; Zhao _et al._, 2014; Diaz _et al._, 2017; Watson _et al._, 2018; Li _et al._, 2019; Speirs _et al._, 2019; Watson _et al._, 2021). Compared to the extensive literature on water entry, water exit has been much less investigated (Zhu _et al._, 2006). For buoyancy-driven water exit, although no proper theory has been developed (Moshari _et al._, 2014), two typical phenomena have been observed in experiments: flow separation and
free-surface breaking before and after the object breaches the water surface, respectively (Zhu _et al._, 2006; Zhang _et al._, 2017_a_). It is unclear whether these phenomena are also influenced by wettability, as in water entry.
Despite the well-established correlation between the splashing behavior and surface wettability in experiments, the difficulties in modeling surface wettability mean that it is rarely dealt with in practical numerical simulations of water entry (Yoo _et al._, 2022). Firstly, because surface wettability is generally governed by many different physical characteristics (e.g., surface tension, viscous resistance, surface roughness...), the high complexity and expensive cost of accounting for all relevant characteristics render the direct numerical simulation (DNS) impractical, which is similar to the dilemma of turbulence simulation. In particular, while certain physical characteristics, such as viscosity, can be quantified, small-scale but non-negligible characteristics, such as surface roughness, are not feasible to resolve even in large-scale numerical simulations. Secondly, although focusing solely on a few dominant physical characteristics offers a cost-efficient way to characterize surface wettability, this limited consideration will still result in significant discrepancies with the experiment. For instance, Yoo _et al._ (2022) employed a DNS model of surface tension to handle the surface wettability but predicted a much lower threshold velocity for cavity formation than that of Duez _et al._ (2007). Furthermore, the dominant characteristics often differ depending on the conditions of water entry, so a model built in this compromised way is often limited to specific cases. Compared to the above-mentioned limitations of water entry models, the main difficulty in modeling water exit is the lack of mature theoretical support (Zhu _et al._, 2006). Therefore, the existing water exit models are mainly developed with ideal conditions, such as the inviscid and irrotational flow (Korobkin, 2013). Furthermore, as Oliver (2002_b_) points out, "...the leading order outer problem is linearly stable if and only if the turnover curve is advancing, i.e., the time reversal of
the entry problem is linearly unstable.", simply treating water exit as a reversed entry problem, i.e., mechanically applying the water entry model to water exit, is also ill-posed. Existing numerical simulations of water exit in the literature (Moyo & Greenhow, 2000; Zhu _et al._, 2006; Liu _et al._, 2014; Zhang _et al._, 2017\(a\); Lyu _et al._, 2021) are not able to accurately reproduce the flow separation and spontaneous free-surface breaking observed in the experiments. Some researchers have also tried using larger numerical viscosities (Sun _et al._, 2015; Zhang _et al._, 2017\(a\); Lyu _et al._, 2021), but the apparent qualitative deviation of the simulations from the experiments has not been effectively reduced. Furthermore, all these open issues in modeling water entry/exit make it currently impossible to simulate the complete process from water entry to exit effectively in one model. Although some attempts have been made in the literature (Sun _et al._, 2015; Lyu _et al._, 2021; De Rosis & Tafuni, 2022), the state-of-the-art simulations fail to capture not only the typical phenomena of subsequent water exit, but also the hydrodynamic behaviors of water entry at low-speed impacts.
In this paper, we propose a diffusive wetting model for the WCSPH method to simulate individual water entry/exit as well as the complete process from water entry to exit. Through a diffusive wetting equation, this model utilizes the wetting rate, i.e., the diffusion coefficient, to comprehensively characterize the surface wettability without introducing complex physical characteristics. The resulting progress variable of solid particles quantitatively expresses the physical wetting degree of the solid. Together with a wetting-coupled particle identification and a numerical regularization approach, this model enables the manifestation of the effect of wetting on hydrodynamic behaviors in the numerical simulation. Moreover, by considering the solid surface in the water exit as the result of diffusive wetting, the proposed model is not only valid for the water exit separately but also for both water entry and exit as a complete process.
The remainder of this paper is organized as follows. First, Section 2 briefly overviews the Riemann-based WCSPH method and introduces the coupling between rigid-body and SPH fluid dynamics. In Section 3, the proposed diffusive wetting model is detailed. The accuracy, efficiency, and versatility of the present model are qualitatively and quantitatively validated with several benchmark tests in Sections 4 and 5, including the 3-D water entry of a sphere, the 2-D water entry/exit of a cylinder, and the complete process from water entry to exit of a 2-D cylinder. Finally, concluding remarks are given in Section 6. The code accompanying this work is implemented in the open-source SPH library (SPHinXsys) (Zhang _et al._, 2021_b_) and is available at [https://www.sphinxsys.org](https://www.sphinxsys.org).
## 2 WCSPH method
### Governing equations
Within the Lagrangian framework, the governing equations for an incompressible flow, which is assumed to be isothermal, consist of the continuity and momentum-conservation equations of
\[\frac{d\rho}{dt}=-\rho\nabla\cdot\mathbf{v}, \tag{1}\]
and
\[\frac{d\mathbf{v}}{dt}=-\frac{1}{\rho}\nabla p+\nu\nabla^{2}\mathbf{v}+\mathbf{ g}, \tag{2}\]
where \(\rho\) is the density, \(t\) the time, \(\mathbf{v}\) the velocity, \(p\) the pressure, \(\nu\) the kinematic viscosity and \(\mathbf{g}\) the gravitational acceleration.
With the weakly-compressible assumption, the system of Eq. 1 and Eq. 2 is closed by an artificial isothermal equation of state (EoS), which estimates the pressure from the density as
\[p=c_{0}^{2}(\rho-\rho_{0}), \tag{3}\]
where \(c_{0}\) denotes the artificial speed of sound and \(\rho_{0}\) the initial reference density. To restrict the density variation to within about 1% (Morris _et al._ 1997), an artificial sound speed \(c_{0}=10U_{max}\) is utilized, with \(U_{max}\) indicating the maximum anticipated flow speed.
### Riemann-based WCSPH method
To address the numerical spurious pressure fluctuations in the free-surface flow with violent impact, both the continuity and momentum-conservation equations of Eq.(1) and Eq.(2) are discretized by using the Riemann-based WCSPH method (Vila 1999), in respect to particle \(i\), as following
\[\frac{d\rho_{i}}{dt}=2\rho_{i}\sum_{j}\frac{m_{j}}{\rho_{j}}(\mathbf{v}_{i}- \mathbf{v}^{*})\cdot\nabla W_{ij}, \tag{4}\]
and
\[\frac{d\mathbf{v}_{i}}{dt}=-2\sum_{j}m_{j}(\frac{P^{*}}{\rho_{i}\rho_{j}}) \nabla W_{ij}+2\sum_{j}m_{j}\frac{\eta\mathbf{v}_{ij}}{\rho_{i}\rho_{j}r_{ij} }\frac{\partial W_{ij}}{\partial r_{ij}}+\mathbf{g}, \tag{5}\]
where \(m\) is the mass of particle, \(\eta\) the dynamic viscosity, and subscript \(j\) the neighbor particles. Also, \(\nabla W_{ij}\) denotes the gradient of the kernel function \(W(|\mathbf{r}_{ij}|,h)\), with \(\mathbf{r}_{ij}=\mathbf{r}_{i}-\mathbf{r}_{j}\) and \(h\) the smooth length. Furthermore, \(\mathbf{v}^{*}=U^{*}\mathbf{e}_{ij}+(\overline{\mathbf{v}}_{ij}-\overline{U} \mathbf{e}_{ij})\), where \(\mathbf{e}_{ij}=\mathbf{r}_{ij}/r_{ij}\), \(\mathbf{v}_{ij}=\mathbf{v}_{i}-\mathbf{v}_{j}\) and \(\overline{\mathbf{v}}_{ij}=(\mathbf{v}_{i}+\mathbf{v}_{j})/2\) are the relative and average velocities between particles \(i\) and \(j\), respectively.
Herein, the Riemann solutions \(U^{*}\) and \(P^{*}\) of the inter-particle one-dimensional Riemann problem constructed along the unit vector \(-\mathbf{e}_{ij}\) pointing from particles \(i\) to \(j\) are
given by
\[\begin{cases} U^{*}=\overline{U}+\frac{P_{L}-P_{R}}{2\overline{\rho}c_{0}}\\ P^{*}=\overline{P}+\frac{1}{2}\overline{\rho}c_{0}(U_{L}-U_{R})\\ (\rho_{L},U_{L},P_{L})=(\rho_{i},-\mathbf{v}_{i}\cdot\mathbf{e}_{ij},p_{i})\\ (\rho_{R},U_{R},P_{R})=(\rho_{j},-\mathbf{v}_{j}\cdot\mathbf{e}_{ij},p_{j})\\ \end{cases}, \tag{6}\]
where \(\overline{U}=(U_{L}+U_{R})/2\), \(\overline{P}=(P_{L}+P_{R})/2\), and \(\overline{\rho}=(\rho_{L}+\rho_{R})/2\) are inter-particle averages, \(L\) and \(R\) the initial left and right states of the Riemann problem. The utilization of the original intermediate pressure \(P^{*}\) in Eq.(6) may lead to an excessive dissipation. To mitigate this issue, a supplementary low dissipation Riemann solver (Zhang _et al._, 2017_c_), which incorporates a modification on \(P^{*}\) while maintaining the intermediate velocity \(U^{*}\) in Eq.(6) unconstrained, reads
\[P^{*}=\overline{P}+\frac{1}{2}\beta\overline{\rho}(U_{L}-U_{R}), \tag{7}\]
where \(\beta=min\big{(}3\max(U_{L}-U_{R},0),c_{0}\big{)}\), representing the limiter, is employed in this work.
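For concreteness, a minimal Python sketch of the inter-particle Riemann solution of Eqs. (6) and (7) is given below; it is only an illustration of the formulas and not the SPHinXsys implementation, and the example values are arbitrary.

```python
def riemann_star(rho_L, u_L, p_L, rho_R, u_R, p_R, c0):
    """Linearised inter-particle Riemann solution with the low-dissipation limiter:
    U* from Eq. (6) and P* from Eq. (7)."""
    rho_bar = 0.5 * (rho_L + rho_R)
    u_star = 0.5 * (u_L + u_R) + 0.5 * (p_L - p_R) / (rho_bar * c0)
    beta = min(3.0 * max(u_L - u_R, 0.0), c0)            # dissipation limiter
    p_star = 0.5 * (p_L + p_R) + 0.5 * beta * rho_bar * (u_L - u_R)
    return u_star, p_star

# approaching particle pair (U_L > U_R): the limiter activates and raises the interface pressure
print(riemann_star(1000.0, 0.5, 0.0, 1000.0, -0.5, 0.0, c0=10.0))
```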
Furthermore, to tackle the issue of accumulated density error during long-term simulations (Zhang _et al._, 2021_b_) and ensure the numerical stability in free-surface flows, a density reinitialization method proposed by Rezavand _et al._ (2022) is employed, which reinitializes the density field prior to each update in the discretized continuity equation of Eq.(4), as expressed in Eq.(8). Such a scheme has proven effective in mitigating the aforementioned density error and improving the overall accuracy of the numerical scheme.
\[\rho_{i}=\rho_{0}\frac{\sum W_{ij}}{\sum W_{ij}^{0}}+\max(0,(\rho_{i}-\rho_{0} \frac{\sum W_{ij}}{\sum W_{ij}^{0}}))\frac{\rho_{0}}{\rho_{i}}, \tag{8}\]
where the superscript \(0\) represents the reference value in the initial configuration. Note
that the assumption of smooth pressure distribution on free-surface particles is applied here due to the weakly compressible assumption.
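A per-particle sketch of the density reinitialization of Eq. (8) is shown below for illustration; `sumW` and `sumW0` stand for the kernel summations \(\sum W_{ij}\) in the current and initial configurations, and all names are assumptions rather than identifiers from the library.

```python
import numpy as np

def reinitialize_density(rho, rho0, sumW, sumW0):
    """Density reinitialization of Eq. (8): the kernel summation rho0*sumW/sumW0 is kept
    near the free surface, while any excess above it is rescaled by rho0/rho."""
    rho_sum = rho0 * sumW / sumW0
    return rho_sum + np.maximum(0.0, rho - rho_sum) * rho0 / rho
```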
### Coupling rigid-body and SPH fluid dynamics
In practical scenarios, the motion of an object in water entry/exit cannot be simply described as an ideal rotation-free linear motion along the vertical direction, particularly in the later phases of falling and rising. Hence, the present model investigates water entry/exit under practical conditions by allowing a rigid solid body to freely fall and rise without any additional artificial constraints, i.e., 3 degrees of freedom (DOF) in the 2-D case and 6 DOF in the 3-D case. To accurately model the interaction between fluid and solid, the coupling of the rigid-body dynamics (Sherman _et al._, 2011) and the SPH fluid dynamics is employed herein.
In detail, the SPH solver first computes the total force \(F\) exerted upon the solid object, which comprises the fluid pressure force \(F_{total}^{f:p}\), the fluid viscous force \(F_{total}^{f:\nu}\), and the gravity \(G\)
\[F=F_{total}^{f:p}+F_{total}^{f:\nu}+G, \tag{9}\]
where the three terms in the right hand of Eq.(9) are respectively defined as
\[\begin{cases}F_{total}^{f:p}=\sum_{i}f_{i}^{f:p}=-2\sum_{i}\sum_{j}V_{i}V_{j} \frac{p_{j}\rho_{i}^{d}+p_{i}^{d}\rho_{j}}{\rho_{j}+\rho_{i}^{d}}\nabla W_{ij }\\ \\ F_{total}^{f:\nu}=\sum_{i}f_{i}^{f:\nu}=2\sum_{i}\sum_{j}\nu V_{i}V_{j}\frac{ \mathbf{v}_{i}^{d}-\mathbf{v}_{j}}{r_{ij}}\frac{\partial W_{ij}}{\partial r_{ ij}}\\ \\ G=\sum_{i}m_{i}\mathbf{g}\end{cases}, \tag{10}\]
where the subscripts \(i\) and \(j\) in the present subsection specifically denote solid and fluid particles, respectively. The no-slip boundary condition is imposed at the fluid-structure interface. Following the fluid-solid coupling scheme in Ref. (Zhang _et al._, 2021_a_), the
imaginary pressure \(p_{i}^{d}\), density \(\rho_{i}^{d}\) and velocity \(\mathbf{v}_{i}^{d}\) in Eq. (2.10) are approximated as
\[\begin{cases}p_{i}^{d}=p_{j}+\rho_{j}r_{ij}max(0,(\mathbf{g}-\frac{d\mathbf{v}_{i }}{dt})\cdot\frac{\mathbf{r}_{ij}}{r_{ij}})\\ \rho_{i}^{d}=\rho_{0}(\frac{p_{i}^{d}}{\rho_{0}c_{0}^{2}}+1)\\ \mathbf{v}_{i}^{d}=2\mathbf{v}_{i}-\mathbf{v}_{j}\end{cases}. \tag{2.11}\]
Then, the torque \(\tau\) acting on the center of mass \(\mathbf{r}_{cm}\) of the falling object is evaluated as
\[\tau=\sum_{i}(\mathbf{r}_{i}-\mathbf{r}_{cm})\times(f_{i}^{f:p}+f_{i}^{f:\nu} +m_{i}\mathbf{g}). \tag{2.12}\]
With the force \(F\) and torque \(\tau\) in hand, the rigid-body dynamics is obtained by solving the Newton-Euler equation
\[\begin{pmatrix}F\\ \tau\end{pmatrix}=\begin{pmatrix}M\mathbf{I}&0\\ 0&\mathbf{I}_{cm}\end{pmatrix}\begin{pmatrix}a_{cm}\\ \alpha\end{pmatrix}+\begin{pmatrix}0\\ \omega\times\mathbf{I}_{cm}\omega\end{pmatrix}, \tag{2.13}\]
where \(M=\sum_{i}m_{i}\) is the mass of the solid object, \(\mathbf{I}\) the identity matrix, \(\mathbf{I}_{cm}\) the moment of inertia about the center of mass, \(a_{cm}\) the acceleration of the center of mass, \(\alpha\) the angular acceleration, and \(\omega\) the angular velocity. All these kinematic values computed by the rigid-body dynamics are subsequently transmitted to the SPH solver to iteratively update the physical quantities of the solid particles, such as position and velocity (Zhang _et al._, 2021_c_).
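The rigid-body update of Eq. (2.13) amounts to solving for the two accelerations; the following Python sketch makes this explicit with illustrative values that are not taken from the paper.

```python
import numpy as np

def newton_euler(F, tau, M, I_cm, omega):
    """Solve Eq. (2.13): translational acceleration of the centre of mass and
    angular acceleration, given total force, torque, mass, inertia tensor about
    the centre of mass and current angular velocity."""
    a_cm = F / M
    alpha = np.linalg.solve(I_cm, tau - np.cross(omega, I_cm @ omega))
    return a_cm, alpha

# illustrative values only
F = np.array([0.0, 0.0, -9.81 * 2.0])            # e.g. gravity on a 2 kg body
tau = np.array([0.0, 0.1, 0.0])
I_cm = np.diag([0.01, 0.02, 0.015])
print(newton_euler(F, tau, M=2.0, I_cm=I_cm, omega=np.array([0.0, 0.0, 1.0])))
```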
## 3 Diffusive wetting model
### Diffusive wetting equation
Different from the already wetted surface of a solid object in the typical water exit, the wetting of solid-fluid interface in water entry evolves dynamically. This evolution includes
the wetting spreading on the solid surface and the wetting progressing at the solid-fluid interface, which can be considered as a diffusive process before moisture saturation. Consequently, the fully wetted solid surface in water exit can be regarded as the final state of the diffusive wetting process. Additionally, the wetting rate typically varies with the surface wettability in practice, making it comprehensively characterize the wetting process.
Referring to Fick's second law of diffusion (Fick 1855), a diffusive wetting equation without chemical reactions is proposed as a coarse-grained model here to describe this wetting behavior as
\[\frac{\partial\varphi}{\partial t}=\gamma\nabla^{2}\varphi, \tag{1}\]
where the moisture concentration \(\varphi=\varphi(x,t)\) is a function of location \(x\) and time \(t\), and the diffusive wetting coefficient \(\gamma\) represents the physical wetting rate with the unit of \(m^{2}/s\). Due to the lack of relevant experimental data, the value of \(\gamma\) for each case herein is estimated by numerical experiment.
In general, in Eq.(1), \(\varphi\) represents the absolute moisture, defined as the mass of water per unit volume of the solid, with the unit of \(kg/m^{3}\). However, in the present SPH model, where a homogeneous solid without any fluid particles penetrated is considered, it is not an easy task to directly measure the moisture content in the unit volume of the solid and predict the concentration based on absolute moisture. To conveniently quantify the concentration, the relative moisture \(\varphi^{*}=\varphi/\varphi_{\infty}\) expressed as a percentage is referred to, where \(\varphi_{\infty}\) is the saturated absolute moisture, and then the Eq.(1) is rewritten as
\[\frac{\partial\varphi^{*}}{\partial t}=\gamma\nabla^{2}\varphi^{*}, \tag{2}\]
where \(\varphi^{*}\in[0,1]\) represents different wetting degrees, for example, \(\varphi^{*}\)=1 denotes the
fully wetted state and \(\varphi^{*}\)=0 represents the dry state. Then the modified diffusive wetting equation Eq.(3.2) could be discretized by the SPH method as (Cleary, 1998; Tang _et al._, 2023)
\[\frac{d\varphi_{i}^{*}}{dt}=2\gamma\sum_{j}\frac{m_{j}\varphi_{ij}^{*}}{\rho_{j} r_{ij}}\frac{\partial W_{ij}}{\partial r_{ij}}, \tag{3.3}\]
where \(\varphi_{i}^{*}\) is the relative moisture of the solid particle, and \(\varphi_{ij}^{*}=\varphi_{i}^{*}-\varphi_{j}^{*}\), where \(\varphi_{j}^{*}\equiv 1\), the difference between the solid particle and its neighbouring fluid particle.
Note that the present model only captures the wetting evolution occurring on the outermost-layer solid particles to assess the wetting degree of the solid object, and this evolution is solely contributed by the surrounding fluid particles. Note also that the present SPH model employs a cut-off radius of \(R=2h=2.6dx\), where \(dx\) represents the initial particle spacing. This implies that two layers of surface solid particles are actually involved in the diffusive wetting, as shown in Figure 1. In practice, though, the relative moisture of the outermost-layer solid particles will increase more rapidly due to the contribution from more neighboring fluid particles, unlike the slower increase in the relative moisture of the second-layer solid particles. This ensures the feasibility of using the relative moisture of the outermost-layer solid particles as the determinant for assessment, irrespective of the relative moisture of the second-layer solid particles.
Furthermore, the microscopic physical thickness of the solid surface undergoing diffusive wetting should remain unchanged across different resolutions. However, the corresponding numerical thickness, i.e., \(dx\), as shown in Figure 1, will decrease as the resolution increases. To ensure physical consistency with Eq. (3.2), a mapping from physical to numerical distances is introduced. This mapping induces a modified numerical scheme using the chain rule, as illustrated in Figure 1. Thus the discretized diffusive wetting
equation Eq. (3.3) becomes
\[\begin{cases}\frac{d\varphi_{i}^{*}}{dt}=2\gamma^{*}\sum_{j}\frac{m_{j}\varphi_{ ij}^{*}}{\rho_{j}r_{ij}}\frac{\partial W_{ij}}{\partial r_{ij}}\\ \frac{\gamma}{\gamma^{*}}=\frac{1}{(dx)^{2}}\end{cases}, \tag{3.4}\]
where the value \(1\) is dimensional with the unit of \(m^{2}\).
By employing this mapping rule, the resolution independence is achieved, which enables the physically consistent diffusive wetting process for arbitrary resolutions. Figure 2 illustrates the dynamic wetting results of the solid object in typical water entry scenarios, where different diffusive wetting coefficients \(\gamma=\gamma^{*}/(dx)^{2}\) are applied in Eq.(3.4). As the flow progresses, the adjacent dry solid particles get wetted, causing a gradual increase in the relative moisture. Among the three wetting conditions, a larger diffusive wetting coefficient leads to a higher overall relative moisture of the solid surface at the same instant.
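For illustration, a minimal Python sketch of the update defined by Eqs. (3.3)-(3.4) is given below. It uses a brute-force neighbour search, and the kernel-derivative function, cutoff radius and all variable names are assumptions; it is not the implementation used for the simulations reported here.

```python
import numpy as np

def wetting_increment(phi_s, solid_pos, fluid_pos, fluid_mass, fluid_rho,
                      gamma, dx, dW_dr, cutoff):
    """d(phi*)/dt of surface solid particles, Eqs. (3.3)-(3.4); brute-force neighbours."""
    gamma_star = gamma * dx**2            # mapping rule gamma* = gamma * (dx)^2 of Eq. (3.4)
    dphi_dt = np.zeros_like(phi_s)
    for i, xi in enumerate(solid_pos):
        for j, xj in enumerate(fluid_pos):
            r = np.linalg.norm(xi - xj)
            if r < 1e-12 or r > cutoff:   # keep only neighbours inside the support radius
                continue
            phi_ij = phi_s[i] - 1.0       # fluid particles are saturated: phi*_j = 1
            dphi_dt[i] += 2.0 * gamma_star * fluid_mass[j] * phi_ij \
                          / (fluid_rho[j] * r) * dW_dr(r)
    return dphi_dt

# explicit Euler step of the relative moisture, clipped to the admissible range [0, 1]:
# phi_s = np.clip(phi_s + dt * wetting_increment(...), 0.0, 1.0)
```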
### Treatments on various wetting states
In physics, when the fluid comes into contact with the solid surface, the imbalance between adhesion and cohesion acting upon the contacted water molecules will initiate the
Figure 1: The mapping rule from numerical to physical distances in the diffusive wetting.
process of wetting and redistribution. This process continues until the solid is fully wetted, at which point the force imbalance eventually disappears, together with the redistributed near-surface water molecules. In the present coarse-grained SPH model, this molecular redistribution process is mimicked by different levels of numerical regularization of the SPH particles.
Currently, there are two mainstream numerical regularization algorithms in SPH, i.e., the particle shifting technique (PST) (Lind _et al._, 2012; Skillen _et al._, 2013; Khayyer _et al._, 2017) and the transport-velocity formulation (TVF) (Adami _et al._, 2013; Zhang _et al._, 2017_b_), applied to regularize the SPH particle distribution. Herein, the TVF scheme is utilized, and the particle advection velocity \(\widetilde{\mathbf{v}}\) is expressed as follows
\[\widetilde{\mathbf{v}}_{i}(t+\delta t)=\mathbf{v}_{i}(t)+\delta t \left(\frac{\widetilde{d}\mathbf{v}_{i}}{dt}-p_{max}\sum_{j}\frac{2m_{j}}{\rho _{i}\rho_{j}}\frac{\partial W_{ij}}{\partial r_{ij}}\mathbf{e_{ij}}\right). \tag{3.5}\]
Here, the global background pressure \(p_{max}\) is chosen as \(p_{max}=\alpha\rho_{0}\mathbf{v}_{max}^{2}\) with the empirical coefficient \(\alpha=7.0\), where \(\mathbf{v}_{max}\) is the maximum particle velocity at each advection time step. Note that the numerical regularization can effectively eliminate the unphysical voids induced by the tensile instability in the SPH method, which guarantees that physically real negative pressure can still act properly.
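A corresponding sketch of the transport-velocity correction of Eq. (3.5) is given below, again with a brute-force neighbour loop and assumed function and variable names; it only illustrates how the background-pressure term modifies the advection velocity.

```python
import numpy as np

def transport_velocity(v, dv_dt, pos, mass, rho, rho0, dt, dW_dr, cutoff, alpha=7.0):
    """Advection velocity of Eq. (3.5) for the regularized (inner) fluid particles."""
    v_max = np.max(np.linalg.norm(v, axis=1))
    p_max = alpha * rho0 * v_max**2                     # global background pressure
    v_tilde = v + dt * dv_dt                            # momentum part of the update
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            rij = pos[i] - pos[j]
            r = np.linalg.norm(rij)
            if r > cutoff:
                continue
            e_ij = rij / r
            v_tilde[i] -= dt * p_max * 2.0 * mass[j] / (rho[i] * rho[j]) * dW_dr(r) * e_ij
    return v_tilde
```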
Since in free-surface flow the numerical regularization is only carried out for inner fluid particles away from the free surface, the implementation depends on particle identification, which classifies fluid particles into inner and free-surface particles. If one mimics the free-surface particles with the water molecules at the solid-fluid interface before wetting, and the inner fluid particles with the water molecules near the fully wetted solid surface, the above implementation of numerical regularization can be used together with the diffusive wetting model. Specifically, the numerical regularization is only carried out on fluid particles near the fully wetted solid surface, which relies on the free-surface identification algorithm detailed in the next section.
### The coupling of particle identification and diffusive wetting
To identify whether a fluid particle is near a fully wetted solid surface, the present model primarily adopts the spatio-temporal free-surface identification approach (Zhang _et al._, 2023). Note that, since a relationship between the particle identification rule and surface wettability is not provided in the original algorithm, a free-surface particle is immediately identified as an inner one once it comes into contact with the solid surface.
In order to take the surface wettability into account, a wetting-coupling mechanism is introduced here into the original identification approach. It utilizes the relative moisture \(\varphi^{*}\) of adjacent solid particles as an additional criterion for particle identification. In brief, apart from satisfying the position divergence threshold required by the original identification, the transformation from a free-surface to an inner particle must also meet an additional condition, viz. being in contact with at least one fully wetted solid particle. The corresponding algorithm is summarized in Algorithm 1.
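Since Algorithm 1 is not reproduced here, the following schematic sketch indicates how the wetting criterion can be coupled with the original spatio-temporal identification; the representation of the position-divergence test and all names are assumptions.

```python
def identify_inner_particles(is_free_surface, passes_divergence_test,
                             solid_phi, neighbour_solids):
    """Wetting-coupled free-surface identification (schematic version of Algorithm 1).

    is_free_surface        : list[bool], current classification of fluid particles
    passes_divergence_test : list[bool], result of the original spatio-temporal criterion
    solid_phi              : list[float], relative moisture phi* of solid particles
    neighbour_solids       : list[list[int]], indices of solid neighbours per fluid particle
    """
    new_free_surface = list(is_free_surface)
    for i, free in enumerate(is_free_surface):
        if not free:
            continue                      # inner particles keep their type
        touches_wetted_solid = any(solid_phi[s] >= 1.0 for s in neighbour_solids[i])
        # transform to an inner particle only if BOTH criteria are met
        if passes_divergence_test[i] and touches_wetted_solid:
            new_free_surface[i] = False
    return new_free_surface
```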
Since the finite wetting rate in Eq. (3.4) leads to a different delay before a solid particle becomes fully wetted, the transformation of free-surface particles into inner ones is also delayed accordingly. Figure 2 depicts the particle identification at the same instant, delayed by the various surface wettabilities. Subsequently, if the numerical regularization is carried out by the TVF scheme on the transformed fluid particles shown in Figure 2, different hydrodynamic behaviors are obtained, as shown in the right panels of Figure 3. In comparison, if the TVF scheme is implemented based on the original particle identification approach, the hydrodynamic behaviors are independent of surface wettability, as shown in the left panels of Figure 3.
Note that, for a typical water exit problem, the submerged cylinder is already fully
wetted with \(\varphi^{*}=1\). Therefore, all the fluid particles near the solid surface are identified as inner ones. Also note that, the present identification approach specifically allows for the modeling of a complete process from water entry to exit, as will be shown in Sec. 4.3, where the particle identification is fully coupled with the dynamical diffusive wetting through the entire process.
## 4 Qualitative validations
### 3-D water entry of a sphere
The 3-D water entry of a freely falling sphere (Duez _et al._, 2007) is simulated to qualitatively validate the ability of the diffusive wetting model to generate various splash patterns according to the surface wettability. Figure 4 briefly depicts the schematic, where the sphere has a radius of \(D=0.02m\), an initial relative moisture of \(\varphi^{*}=0\), and a density equivalent to that of glass, i.e., \(2500kg/m^{3}\). The sphere is released at various heights
Figure 2: The delay effect of the wetting-coupled spatio-temporal identification approach under three diffusive wetting conditions. Here, the TVF scheme (Adami _et al._, 2013; Zhang _et al._, 2017_b_) is not applied. A half-buoyant cylinder with the diameter \(D=0.11m\) is released from \(0.3m\) above the free surface. The time instants from top to bottom are \(t=0.23s\), \(0.25s\) and \(0.27s\). The uniform particle spacing is \(dx=D/25\). The water dynamic viscosity \(\mu\) is \(8.90\times 10^{-4}\mathrm{Pa}\cdot s\). Fluid particle type: red free-surface particles and blue inner particles.
above the free surface, resulting in different impact speeds of \(u_{impet}=1.4m/s\), \(5m/s\), and \(9m/s\). The artificial sound speed \(c_{0}\) is defined as \(10u_{impet}\). A cuboid fluid domain with dimensions of length \(L=3D\), width \(W=3D\), and height \(H=3.5D\) is chosen. The dynamic viscosity of water \(\mu\) is \(8.90\times 10^{-4}Pa\cdot s\), and its density is \(1000kg/m^{3}\). The gravity acceleration is \(g=9.81m/s^{2}\). In all cases, an initial uniform particle spacing of \(dx=D/40\) is adopted. Additionally, to conveniently observe the presence or absence of
air entrainment, i.e. cavity formation as the water surface closes above the top surface of the sphere, the mid-surface of the fluid domain is clipped, as shown in Figure 4.
Referring to the air entrainment observed during the splashing processes by Duez _et al._ (2007), as shown in Fig. 5, we choose 4 wetting rates for the 7 tested points. These rates correspond to 4 qualitatively defined static contact angles representing the super-hydrophobic, hydrophobic, hydrophilic, and super-hydrophilic wetting properties of the solid surface, i.e., \(\gamma=\gamma^{*}/(dx)^{2}=0\), \(25m^{2}/s\), \(75m^{2}/s\), and \(\infty\). Figure 6 gives the air entrainment obtained from the numerical simulations corresponding to the experimental setups shown in Fig. 5. It is observed that the air entrainment predicted by the simulations agrees well with the experimental observations. Specifically, one can find that a super-hydrophobic sphere produces a splash with air entrainment, i.e. a cavity (which eventually collapses under the increased ambient pressure), at all impact velocities. However, for a less hydrophobic sphere, there is less or no air entrainment at the same impact speed. With the same wetting properties, the splash becomes more evident with a larger volume of air entrainment as the impact speed increases. When the sphere is hydrophilic,
Figure 4: Schematic of the 3D water entry of a sphere with the clipped mid-surface.
with the corresponding static contact angle less than \(90^{o}\), a much higher impact speed is required to produce air entrainment. Therefore, if the impact speed is moderate, the ascending splash follows the sphere and quickly accumulates at the pole without air entrainment, as shown in Figure 6.
### 2-D water exit of a cylinder
Following the classical water exit experiment conducted by Greenhow & Lin (1983), the 2-D water exit of a cylinder is considered herein to validate the model's ability to capture flow separation and free-surface breaking. The schematic of the problem is shown in Figure 7, where a neutrally buoyant cylinder with a diameter of \(D=0.11m\) is initially located below the free surface at a distance of \(1.5D\). The submerged cylinder is wetted with an initial relative moisture of \(\varphi^{*}=1\). The dimensions of the water tank are \(5D\) in height and \(10D\) in width. The water dynamic viscosity \(\mu\) and density are \(8.90\times 10^{-4}Pa\cdot s\) and \(1000kg/m^{3}\), respectively. The artificial sound speed \(c_{0}\) is calculated
Figure 5: Experimental results of air entrainment as a function of the sphere’s threshold velocity \(U^{*}\) and static contact angle \(\theta_{0}\), reproduced from (Duez _et al._, 2007). No air entrainment occurs at the configuration point below the red dotted line, while air cavities of different volumes form above the threshold velocity. Among the 12 configuration points chosen for the present validation, the 7 solid circles represent the ones that are actually tested, while the remaining points, indicated by hollow circles, can be inferred without further investigation.
as \(20\sqrt{5gD}\), where \(g=9.81m/s^{2}\) is the gravity. In the experiment, the cylinder is extracted from the water by a constant force equal to its weight, whereas in the present simulation it rises by its buoyancy.
Figure 6: The numerical verification of the air entrainment prediction of Duez _et al._ (2007). Note that the 7 snapshots corresponding to the 7 simulation setups are arranged according to Fig. 5. Note that, for the sphere with a super-hydrophobic surface, a small volume of air entrainment is generated even at the lowest impact speed. Also note that, for the super-hydrophilic surface, only a small volume of air entrainment is generated at the highest impact speed.
To account for this, the gravity in the numerical simulation acts only on the fluid, not on the freely rising cylinder. The initial uniform particle spacing is set to \(dx=D/100\).
Figure 8 shows a quite good qualitative comparison between the experimental and numerical results at different time instants. During the initial phase (from \(t=0.185s\) to \(0.253s\)), the water above the cylinder is lifted along with the cylinder, resulting in a rapidly downward-moving and thinning water layer. Concurrently, a low-pressure region gradually forms on the side of the cylinder, with the area and magnitude of the low-pressure region increasing as the cylinder moves upwards (Greenhow 1988), as shown in the left panel of Figure 9. When the cylinder is about to leave the free surface, this phenomenon leads to a pressure inversion across the free surface (Greenhow & Lin 1983), causing Rayleigh-Taylor instability (Baker _et al._ 1987) and spontaneous free-surface breaking near the intersection of the free and cylinder surfaces, also known as "waterfall breaking" (Greenhow & Lin 1983). However, this negative pressure will cause unphysical voids to appear in the SPH simulation beforehand, so the subsequent spontaneous free-surface breaking has not been successfully captured with efficient treatments in most SPH simulations of water exit (Buruchenko & Canelas 2017; Zhang _et al._ 2017\(a\); Lyu _et al._ 2021). As can be clearly seen, at the time instant \(t=0.270s\), the free-surface
Figure 7: Schematic of the 2-D water exit of a cylinder.
breaking is realized by the diffusive wetting model without introducing unphysical voids. This can be attributed to the wetting-coupled spatio-temporal identification approach and the particle regularization from the TVF method (Adami _et al._, 2013; Zhang _et al._, 2017_b_). Furthermore, in the right panel of Figure 9, successful capture of flow separation before the free-surface breaking is evident. At approximately \(110^{\circ}\) on the rear side of the cylinder, the flow direction of the outermost particles deviates significantly from the mean flow and cylinder surface.
In the experiment, when the free surface momentarily breaks, the thin-layer water in the wake behind the cylinder breaks into droplets (Colicchio & Lugni, 2009). This remarkable phenomenon is also well reproduced in the present simulation, as shown by the pronounced scattered falling droplets in the right panels of Figure 8. In the following phase (from \(t=0.270s\) to \(0.343s\)), consistent with the flow behaviors in the experimental snapshots, the lifted water layer continuously moves downwards along the sides of the cylinder but separates from the bulk water due to insufficient downflow velocity. Furthermore, it is also important to highlight that as the cylinder breaches the free surface in the experiment, the region of low-pressure wake beneath the cylinder pulls a section of the free surface downward, creating a depression around the cylinder (Truscott _et al._, 2016). This depression persists throughout the subsequent phase as well, a phenomenon also evident in the present simulation. Hence, the successful reproduction of the complete water exit process, especially the typical flow separation and spontaneous free-surface breaking, demonstrates the capability of the present diffusive wetting model to investigate water exit.
(Figure 8 row labels: \(t^{*}=0.195s\) and \(t=0.253s\); \(t^{*}=0.205s\) and \(t=0.270s\).)
Figure 8: The qualitative comparison of experimental (Greenhow & Lin 1983) (left panel) and numerical (right panel) water exit. Note that the discrepancies between experimental and simulated instants may be due to uncertainties. The particles are colored with the magnitude of velocity.
### The complete process from water entry to exit of a 2-D cylinder
Since the capacity of the present diffusive wetting model in simulating water entry/exit separately has already been well confirmed through the aforementioned cases, its potential to simulate the combined processes is further validated herein.
Here, we consider the model described in Section 4.2, with all parameters kept unchanged except that the cylinder is half-buoyant. To obtain an impact speed of \(u_{impet}=2.89m/s\), the cylinder is first lifted \(0.48m\) above the free surface and then falls freely, as shown in Figure 10. Three cases with different wetting conditions are considered. In the first case, the cylinder is already wetted (\(\varphi^{*}=1\)) before impact. In the second case, the cylinder is initially dry with \(\varphi^{*}=0\), and the wetting process is controlled by a finite rate (\(\gamma=\gamma^{*}/(dx)^{2}=0.27m^{2}/s\)) so that the surface wetting is delayed during the entry and complete during the exit. In the last case, the cylinder surface is super-hydrophobic so that it remains dry during the entire process.
Figure 11 presents the snapshots with non-wetted fluid particles indicated for all the three cases. When the cylinder is already wetted before the impact, as shown in the left panel, the fluid particles near the solid surface are immediately identified as inner ones and subjected to numerical regularization. Like the super-hydrophilic sphere as shown
Figure 9: Negative pressure (left panel) and flow separation (right panel). The time instant of the pressure contour snapshot is \(t=0.224s\), while that of the flow separation snapshot is \(t=0.263s\), before the occurrence of free-surface breaking at \(t=0.270s\) in Figure 8.
in Figure 5, the cylinder is quickly submerged after the impact without generating much splashing. After the cylinder descends to a significant depth, the buoyancy force eventually overtakes the weight and inertia, stops the cylinder at about the time instant \(t=0.702s\), and raises it up again. Under the acceleration of the buoyancy force, the cylinder later leaps out of the water surface and reaches a considerable pop-out height (the maximum value above the free surface). Note that, even with the presence of an agitated water surface after impact, the phenomenon of "waterfall breaking" remains evident, which is in good agreement with the water exit described in Section 4.2.
In contrast, when the cylinder is initially dry, as shown in the middle panel, the wetting process is delayed during water entry due to the finite wetting rate. Such delay results in a gradual transformation of the near-solid-surface fluid particles into inner ones and hence a delayed imposition of the numerical regularization, which produces a cavity with two almost symmetric and vigorous jets. During this process, the cylinder remains half-submerged before the retreating flows from both sides cover the cylinder surface. Note that the maximum descent depth of the cylinder is less than that in the previous case,
Figure 10: Schematic of 2-D water entry and exit of a cylinder.
attributed to the greater energy dissipation resulting from the jets and splashes. This also explains the diminished leaping velocity and the notable reduction of the pop-up height (Truscott _et al._, 2016) when the cylinder breaches the water surface again. Furthermore, during the water exit phase, since the cylinder surface is already fully wetted, "waterfall breaking" very similar to the previous case is observed.
For the last case with the super-hydrophobic cylinder, as shown in the right panel, all hydrodynamic behaviors during water entry, including the maximum descent depth, closely resemble those of the second case, which aligns with the prediction of Duez _et al._ (2007). However, due to the non-wetted surface, the near-solid-surface fluid particles are consistently identified as non-wetted, and no particle regularization is imposed. Consequently, the subsequent "waterfall breaking", as seen in the previous two cases, does not occur; instead, it is replaced by the formation of two cavities on both sides of the cylinder. Note that the adopted wetting-coupled spatio-temporal particle identification approach in the present model ensures that these cavities during the water exit are not unphysical voids. This phenomenon is similar to the cavitation observed in hydrodynamics. Interestingly, as the water is further lifted by the rising cylinder, a unique thin layer of water resembling a hat persists on top of it. The presence of this hat-like layer increases the hydrodynamic resistance, leading to a quicker reduction in the upward velocity of the cylinder compared to the previous two cases. As a result, the cylinder does not exhibit a distinct leap out of the water but drains the water layer gradually.
## 5 Quantitative validations
### 2-D water entry of a cylinder
In order to increase the reliability of the diffusive wetting model in practical application, a 2-D water entry of a freely falling cylinder is modeled and then quantitatively compared
Figure 11: The complete process from water entry to exit in three different wetting conditions. The wetted cylinder (left panel), the dry cylinder with a certain hydrophilicity \(\gamma=\gamma^{*}/(dx)^{2}=0.27m^{2}/s\) (middle panel) and the super-hydrophobic cylinder (right panel).
with the experiment (Colicchio & Lugni 2009). Referring to the experimental setup, the diameter and density of the stainless steel circular cylinder are given as \(0.3m\) and \(620kg/m^{3}\) respectively, while other geometrical and physical parameters are the same as those in Section 4.3. The poorly hydrophilic surface is initially dry and assigned a diffusive wetting rate of \(\gamma=\gamma^{*}/(dx)^{2}=0.17m^{2}/s\).
The left panel of Figure 12 shows the time trace of the vertical position of the cylinder center throughout the entire process, from water entry to exit. In the early stage of water entry (approximately \(t<0.2s\)), the time trace exhibits high repeatability with small run-to-run deviations, which agrees well with the experiment. During the later phase of descent (approximately \(0.2s<t<0.41s\)), the time traces under different resolutions show a slight divergence, but they remain well within the range defined by the standard deviation error bars of the experimental data (Colicchio & Lugni 2009) and show a convergent tendency. Moreover, in the present simulation with a finite and small tank size, the water wave propagation caused by the splash during water entry will be blocked by the side walls, resulting in an elevation of the free surface. Hence, compared to the experimental time trace in subsequent water exit (approximately \(0.41s<t<1.1s\)), the increased water pressure above the cylinder will slow down its ascent in the numerical simulation. In the next subsection 5.2 about water exit, the initially immersed cylinder rises up in a calm water tank without the influence of any violent wave propagation, and this deviation will be eliminated, which verifies the rationality of the above explanation well.
In the right panel of Figure 12, the unsteady hydrodynamic force (Truscott _et al._, 2012) acting on the cylinder induces oscillations of its vertical velocity throughout the entire process. Even so, during the water entry stage, the continuous line representing the mean experimental velocity approximates the fitted curve of the numerical oscillating velocity. The vertical velocity during the water exit stage is lower than the experimental value, which aligns with the above explanation of the time trace of the vertical position during water exit.
### 2-D water exit of a cylinder
As the same circular cylinder is used for both experimental water entry and exit (Colicchio & Lugni 2009), the cylinder of Section 5.1 is submerged and fully wetted with its center at a depth of \(0.46m\) from the free surface, and is pushed upwards by the buoyancy force. Figure 13 depicts the time evolution of the vertical position of the cylinder obtained with 4 particle resolutions, to demonstrate the convergence analysis, and the comparison with results from the literature. It is observed that, during the initial rising phase, the present results are in good agreement with those of the experiments and previous simulations. However, when the cylinder approaches the water surface, large deviations become apparent, which may be attributed to the different abilities to handle the "waterfall breaking" and flow separation discussed in previous sections. In particular, the results obtained by a previous SPH simulation (Buruchenko & Canelas 2017) and the Level-set method (Colicchio & Lugni 2009) show a significantly smaller pop-up height compared to the experiment, and a significantly smaller increasing slope. In contrast, the
Figure 12: Comparison of the vertical position (left panel) and velocity (right panel) of the cylinder center with the experiment (Colicchio & Lugni 2009). The experimental data are plotted with the standard deviation error bars.
present results and those obtained by the VOF method (Moshari _et al._, 2014) exhibit much closer increasing slope and pop-up height compared to the experiment.
In previous studies, a sphere with a lower density than water typically vibrates during its water exit ascent (Newton, 1687; Schmidt, 1920; Schmiedel, 1928; Preukschat, 1962; G. Kuwabara & Kono, 1983; Veldhuis _et al._, 2004), and its ascent is confined to a single vertical plane (Horowitz & Williamson, 2008). For the 2-D cylinder with a density of \(620kg/m^{3}\) in the present simulation, which corresponds to the circular cylinder in the experiment, Figure 14 illustrates its trajectory during the ascent. The nearly vertical ascent trajectory demonstrates that the rising of the circular cylinder is also confined to a single vertical plane. To further verify the presence of similar vibrations, the measured vertical position data are differentiated in time. The left panel of Figure 15 shows the obtained time trace of the vertical velocity, where an apparent periodic oscillation exists in the vertical velocity during the ascent. In the quantitative comparison of the vertical velocity with the literature, other numerical results show some deviations from the experiment during the ascent, but the wave crests of the present oscillation
Figure 13: Convergence analysis (left panel) and comparison (right panel) about the vertical position of cylinder center. Experiment (Colicchio & Lugni, 2009), DualSPHysics (Buruchenko & Canelas, 2017), VOF method (Moshari _et al._, 2014), and Level-set method (Colicchio & Lugni, 2009).
curve always fit closely to the filtered experimental curve until the moment of "waterfall breaking", as shown in the right panel of Figure 15.
## 6 Conclusion
In this study, we propose a diffusive wetting model for water entry/exit based on the WCSPH method, accounting for the influence of surface wettability on hydrodynamics. The model includes the diffusive wetting equation, which describes the wetting evolution
Figure 14: The trajectory of the rising cylinder. Left panel: the time trace of the lateral position of cylinder center. Right panel: the trajectory of cylinder center in \(X-Z\) plane.
Figure 15: Convergence analysis (left panel) and comparison (right panel) about the vertical velocity of cylinder center. Experiment (Colicchio & Lugni, 2009), DualSPHysics (Buruchenko & Canelas, 2017), VOF method (Moshari _et al._, 2014), and Level-set method (Colicchio & Lugni, 2009).
at the fluid-solid interface under different surface wettability conditions. Additionally, we introduce a wetting-coupled spatio-temporal identification approach specifically designed for interfacial fluid particles. Furthermore, we apply particle regularization to the corresponding interfacial fluid particles to handle various wetting states of the solid. The proposed model enables accurate simulation of various splashing behaviors in water entry, owing to the consideration of the effect of surface wettability. It also accurately realizes the flow separation and spontaneous free-surface breaking in water exit. Moreover, the model successfully integrates water entry and exit as a complete process in a single numerical simulation. Qualitative and quantitative comparisons with extensive experiments demonstrate the accuracy, efficiency, and versatility of the proposed model. As future work, we plan to further validate the performance of the model by applying it to more complex scientific and industrial problems.
|
2310.09736 | Domain-Specific Language Model Post-Training for Indonesian Financial
NLP | BERT and IndoBERT have achieved impressive performance in several NLP tasks.
There has been several investigation on its adaption in specialized domains
especially for English language. We focus on financial domain and Indonesian
language, where we perform post-training on pre-trained IndoBERT for financial
domain using a small scale of Indonesian financial corpus. In this paper, we
construct an Indonesian self-supervised financial corpus, Indonesian financial
sentiment analysis dataset, Indonesian financial topic classification dataset,
and release a family of BERT models for financial NLP. We also evaluate the
effectiveness of domain-specific post-training on sentiment analysis and topic
classification tasks. Our findings indicate that the post-training increases
the effectiveness of a language model when it is fine-tuned to domain-specific
downstream tasks. | Ni Putu Intan Maharani, Yoga Yustiawan, Fauzy Caesar Rochim, Ayu Purwarianti | 2023-10-15T05:07:08Z | http://arxiv.org/abs/2310.09736v1 | # Domain-Specific Language Model Post-Training for Indonesian Financial NLP
###### Abstract
BERT and IndoBERT have achieved impressive performance in several NLP tasks. There has been several investigation on its adaption in specialized domains especially for English language. We focus on financial domain and Indonesian language, where we perform post-training on pre-trained IndoBERT for financial domain using a small scale of Indonesian financial corpus. In this paper, we construct an Indonesian self-supervised financial corpus, Indonesian financial sentiment analysis dataset, Indonesian financial topic classification dataset, and release a family of BERT models for financial NLP. We also evaluate the effectiveness of domain-specific post-training on sentiment analysis and topic classification tasks. Our findings indicate that the post-training increases the effectiveness of a language model when it is fine-tuned to domain-specific downstream tasks.
domain-specific language model, post-trained language model, financial NLP, sentiment analysis, topic classification
## I Introduction
Data is driving finance nowadays, and the most important data may be found in textual form, such as documents, texts, websites, forums, and other places1. Moreover, banking institutions in Indonesia are increasingly employing textual data and NLP techniques to build a financial infrastructure that supports more data-driven and intelligent decisions.
Footnote 1: [https://www.analyticssteps.com/blogs/6-applications-nlp-finance](https://www.analyticssteps.com/blogs/6-applications-nlp-finance)
In natural language processing (NLP), pre-training large neural language models on unlabeled text in either a single language or multiple languages has proven to be a successful method for transfer learning. One of the notable examples is Bidirectional Encoder Representations from Transformers (BERT), which has become a standard benchmark for training NLP models for various downstream tasks. Another example is IndoBERT, the implementation of BERT specific to the Indonesian language, which also performs well as a building block for training task-specific NLP models for Indonesian [1]. However, those pre-training works focus on the general domain, in which the unlabeled text data are collected from Web domains, newswire, Wikipedia, and BookCorpus [1, 2]. Previous studies have explored specific domains such as finance [3, 4, 5], but their focus is on implementations for the English language.
To fill the gap, we perform continual pre-training, or post-training, on IndoBERT (the BERT implementation for Indonesian) with an Indonesian self-supervised financial corpus, including financial news articles and corporate financial reports. We also evaluate its performance by applying the post-trained domain-specific models to a sentiment analysis task, which is strongly domain-dependent [4], and to a financial topic classification task. This paper reports our investigation into domain-specific post-training for the financial domain in the Indonesian language and makes the following contributions: (1) An Indonesian self-supervised financial corpus, consisting of texts from financial news articles and corporate financial reports. (2) An Indonesian financial sentiment analysis dataset, consisting of texts from news article titles that refer to a specific financial institution together with their sentiment labels (IndoFinSent), and a translated version of Financial Phrasebank [6]. (3) An Indonesian financial topic classification dataset, consisting of texts from Twitter that are related to the financial domain (i.e., a translated version of the Twitter Financial News dataset2). (4) A systematic evaluation of using domain-specific post-trained models for sentiment analysis and topic classification tasks in the financial domain.
Footnote 2: [https://huggingface.co/datasets/zeroshot/twitter-financial-news-topic](https://huggingface.co/datasets/zeroshot/twitter-financial-news-topic)
## II Related Works
Recently, self-supervised pre-training of contextual language models on large general-domain corpora, such as ELMo [7], ULM-Fit [8], XLNet [9], GPT [10], BERT [2], and IndoBERT [1], has significantly improved performance on various natural language processing downstream tasks, including sentence classification, token classification, and question answering. IndoBERT, as the foundation of this research, is an implementation of BERT for the Indonesian language. IndoBERT has the same model architecture as BERT, i.e., a multi-layer bidirectional Transformer encoder [2]. However, IndoBERT is pre-trained using the Indo4B dataset, which consists of around 4B words and around 250M sentences covering both formal and colloquial Indonesian texts [1].
Moreover, it has also been shown that pre-training a language model using domain-specific corpora can further improve downstream task performance compared to fine-tuning a generic language model. Previous studies have explored specific domains such as the biomedical domain [11, 12, 13], the scientific domain [14], the legal domain [15], and the financial domain [3, 4, 5]. They use domain-specific corpora to pre-train the language model (mainly BERT) and evaluate its effectiveness on various downstream tasks. Two pre-training paradigms are used in those previous studies: (1) pre-training from scratch using domain-specific corpora; and (2) continual pre-training (post-training) from a pre-trained generic language model using domain-specific corpora. In this paper, we perform continual pre-training (post-training) from pre-trained IndoBERT using our constructed financial corpus in Indonesian.
## III Indonesian Self-Supervised Financial-Domain Corpus
### _Corpus Construction_
Our corpus is primarily based on 875 financial news articles from CNBC and Bisnis.com and on corporate financial reports from the three largest financial institutions in Indonesia. For the financial news articles, we use BeautifulSoup to scrape relevant news article contents from CNBC, especially the 'Market' and 'My Money' categories, and from Bisnis.com, especially the '_Perbankan_', '_Asuransi_', 'Multifinance', 'Personal Finance', '_Moneter_', '_Bisnis Syariah_', and 'Fintech' categories. For the corporate financial reports, we transform the PDF files provided on each corporation's investor relations site into text files using the online tool PDF2Go3. We tokenize the texts by sentence in this case.
Footnote 3: [https://www.pdf2go.com/pdf-to-text](https://www.pdf2go.com/pdf-to-text)
To ensure a clean corpus, we remove irrelevant texts such as (inline) advertisements and links to related articles. We also remove multiple white spaces and special characters. A random selection was reviewed to ensure high quality.
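A minimal Python sketch of this cleaning step is shown below; the CSS selectors used to drop advertisements and related-article blocks are illustrative assumptions, since the exact scraping rules are not reported.

```python
import re
from bs4 import BeautifulSoup

def clean_article(html: str) -> str:
    """Strip inline ads, related-article links, extra whitespace and special characters."""
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup.select("script, style, .advertisement, .related-article"):
        tag.decompose()                                   # drop non-content blocks
    text = soup.get_text(separator=" ")
    text = re.sub(r"https?://\S+", " ", text)             # remove leftover links
    text = re.sub(r"[^0-9A-Za-z.,;:()\-%\s]", " ", text)  # remove special characters
    return re.sub(r"\s+", " ", text).strip()              # collapse multiple white spaces
```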
### _Corpus Statistics_
The statistics of our financial-domain corpus can be found in Table I. As mentioned, the corpus is collected from online news platforms and corporate financial reports. Whereas the pre-training of IndoBERT used 23.43 GB of text data [1], in this work we collect a small-scale domain-specific corpus with a total size of 4.85 MB and 647,586 words45.
Footnote 4: [https://huggingface.co/datasets/intann/financial_news_id_v1.0](https://huggingface.co/datasets/intann/financial_news_id_v1.0)
Footnote 5: [https://huggingface.co/datasets/intann/indonesian_financial_statements](https://huggingface.co/datasets/intann/indonesian_financial_statements)
## IV Indonesian Financial Downstream Tasks Dataset
### _Financial Sentiment Analysis_
#### IV-A1 Corpus Construction
For the financial sentiment analysis task, a labeled dataset is required for training and evaluating the resulting models. We use Financial Phrasebank [6], a financial sentiment analysis dataset with three sentiment classes, namely, negative (0), neutral (1), and positive (2). We then translate the dataset into Indonesian using Google Translate6.
Footnote 6: [https://huggingface.co/datasets/intann/indonesian_financial_phrasebank](https://huggingface.co/datasets/intann/indonesian_financial_phrasebank)
Moreover, we also construct a financial sentiment analysis dataset (IndoFinSent)7 by scraping financial news article titles that refer to a financial institution in Indonesia from the following sources: Liputan6, CNBC, Detik, Sindo, Jawapos, and Metrotv News. Each sentence (news title) was manually labeled by 2 annotators and cross-checked by these annotators. In sum, we collect and label 2,274 entries in our dataset. The annotation took about 38 hours.
Footnote 7: [https://huggingface.co/datasets/intann/IndoFinSent](https://huggingface.co/datasets/intann/IndoFinSent)
#### IV-A2 Corpus Statistics
The statistics of the dataset are shown in Table II. For the translated version of Financial Phrasebank [6]8, we use the "all agree" subset consisting of 2,264 financial-domain sentences with their respective sentiment labels. Most of the sentences have neutral labels (61.44%), followed by positive labels (25.18%) and negative labels (13.38%). IndoFinSent is dominated by positive-label entries (52.77%), followed by neutral and negative entries at 26.25% and 20.98%, respectively. Examples of the dataset contents can be seen in Table III.
Footnote 8: [https://huggingface.co/datasets/franoshot/twitter-financial-news-topic](https://huggingface.co/datasets/franoshot/twitter-financial-news-topic)
### _Financial Topic Classification_
#### IV-B1 Corpus Construction
In addition to the sentiment analysis task in the financial domain, we also perform a topic classification task in the domain. We use the Twitter Financial News dataset9 annotated with 20 topics, namely, Analyst Update, Fed & Central Banks, Company & Product News, Treasuries & Corporate Debt, Dividend, Earnings, Energy & Oil, Financials, Currencies, General News & Opinion, Gold & Metals & Materials, IPO, Legal & Regulation, M&A & Investments, Macro, Markets, Politics, Personnel Change, Stock Commentary, and Stock Movement. This dataset was published in English with a total of 21,105 documents. We then translate the dataset into Indonesian using Google Translate. We clean the dataset by removing links in each document. We also create the train (56.34%), validation (19.51%), and test (24.15%) splits for the translated dataset.
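The construction can be illustrated with the sketch below; the paper only states that Google Translate was used, so the `deep_translator` wrapper, the pooling of splits, and the split fractions (which only approximate the reported percentages) are assumptions.

```python
import re
from datasets import load_dataset, concatenate_datasets
from deep_translator import GoogleTranslator

raw = load_dataset("zeroshot/twitter-financial-news-topic")
translator = GoogleTranslator(source="en", target="id")

def to_indonesian(example):
    text = re.sub(r"https?://\S+", "", example["text"]).strip()   # remove links
    example["text"] = translator.translate(text)
    return example

translated = raw.map(to_indonesian)
full = concatenate_datasets([translated[s] for s in translated])  # pool all examples
split = full.train_test_split(test_size=0.2415, seed=42)          # ~24% test
rest = split["train"].train_test_split(test_size=0.258, seed=42)  # ~19.5% of total as validation
train_ds, val_ds, test_ds = rest["train"], rest["test"], split["test"]
```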
#### IV-B2 Corpus Statistics
As can be seen from Table IV, the dataset is dominated by the Company & Product News topic with a total of 4,397 (20.83%) entries, followed by Stock Commentary, Macro, and General News & Opinion with 2,646 (12.54%), 2,237 (10.60%), and 1,891 (8.96%) entries, respectively. The presence of other topics in the dataset ranges from 0.27% to 5.82%. Examples of the dataset contents can also be seen in Table III.
## V Methodology
In this paper, we use pre-trained IndoBERT models as the starting point of our experiment as shown in Table V.
### _Baselines_
As a baseline, we directly fine-tune generic IndoBERT for sentiment analysis and topic classification (single sentence classification) tasks. The models are fine-tuned for 2 epochs with a learning rate of 2e-5 and a weight decay of 0.01 using the translated Financial Phrasebank dataset on GPU T4. We also fine-tuned the models for different training data sizes to compare post-training effectiveness given smaller training data.
### _Domain-Specific Post-Training_
The overview of domain-specific post-training and fine-tuning is shown in Figure 1. We perform post-training, or continual pre-training, from pre-trained IndoBERT (base and large architectures) using the constructed self-supervised financial corpus. All models are post-trained on GPU T4. The training takes approximately 35 and 55 minutes for the base and large architectures, respectively. We use a batch size of 8 and a learning rate of 2e-5 for both architectures. Both base and large models are post-trained using the masked language modeling loss for 20 epochs with a weight decay of 0.01. For the large models, we first train them for 10 epochs and then continue for another 10 epochs due to limited resources. We set the MLM probability to 0.15.
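For illustration, a minimal sketch of this post-training step with the Hugging Face `transformers` library is shown below; the IndoBERT checkpoint name and the local corpus file are assumptions, while the hyper-parameters follow the values stated above.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

checkpoint = "indobenchmark/indobert-base-p1"        # assumed public IndoBERT base checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# assumed plain-text dump of the financial corpus, one sentence per line
corpus = load_dataset("text", data_files={"train": "indonesian_financial_corpus.txt"})
tokenized = corpus["train"].map(
    lambda b: tokenizer(b["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)
args = TrainingArguments(
    output_dir="indobert-financial-posttrained",
    num_train_epochs=20,
    per_device_train_batch_size=8,
    learning_rate=2e-5,
    weight_decay=0.01,
)
Trainer(model=model, args=args, data_collator=collator, train_dataset=tokenized).train()
```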
#### V-B1 Sentiment Analysis Fine-Tuning
Similar to the baseline models, we also perform sentiment analysis fine-tuning with the translated Financial Phrasebank dataset on GPU T4. We perform the fine-tuning using from 10% to 100% of the training data. This was done to evaluate the effectiveness of domain-specific post-training in transfer learning, especially when only a limited amount of annotated data is available. The models are fine-tuned for 2 epochs with a learning rate of 2e-5 and a weight decay of 0.01.
Furthermore, we also perform another sentiment analysis fine-tuning of the post-trained models using IndoFinSent, which is specific to one of the biggest financial institutions in Indonesia. The post-trained models, in both base and large architectures, were fine-tuned with a learning rate of 2e-5, a batch size of 16, 2 epochs, and a weight decay of 0.01.
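A minimal sketch of this fine-tuning step is shown below; the post-trained checkpoint path, the CSV file names, and the column names are assumptions, while the three-class label set and the hyper-parameters follow the description above.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

ckpt = "indobert-financial-posttrained"              # path of the post-trained model (assumed)
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt, num_labels=3)

# assumed CSV exports of IndoFinSent with "text" and "label" (0/1/2) columns
data = load_dataset("csv", data_files={"train": "indofinsent_train.csv",
                                       "test": "indofinsent_test.csv"})
encoded = data.map(lambda b: tokenizer(b["text"], truncation=True, max_length=128),
                   batched=True)

args = TrainingArguments(
    output_dir="indobert-finsent",
    num_train_epochs=2,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    weight_decay=0.01,
)
Trainer(model=model, args=args,
        train_dataset=encoded["train"], eval_dataset=encoded["test"]).train()
```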
#### V-B2 Topic Classification Fine-Tuning
In addition, we perform topic classification fine-tuning with the translated dataset10. Similar to the previous task, we also perform the fine-tuning using from 10% to 100% of the training data. The models (base only) are fine-tuned for 2 epochs with a learning rate of 2e-5 and a weight decay of 0.01. We do not fine-tune the large models for this task due to limited hardware resources.
Footnote 10: [https://huggingface.co/datasets/intanm/indonesian-financial-topic-classification-dataset](https://huggingface.co/datasets/intanm/indonesian-financial-topic-classification-dataset)
## VI Results and Analysis
In this section, we show the results of the post-trained IndoBERT for the financial domain and analyze the performance of our models on the sentiment analysis and topic classification downstream tasks. The code and post-trained models are available at [https://github.com/intanq/indonesian-financial-domain-lm](https://github.com/intanq/indonesian-financial-domain-lm).
### _Effectiveness of Domain-Specific Post-Training and Its Impact on Sentiment Analysis Downstream Task_
We can see in Table VI that the domain-specific post-trained IndoBERT (base architecture) outperforms the baseline IndoBERT base for most of the training data percentages on the financial sentiment analysis task. The same holds for the post-trained IndoBERT (large architecture) (see Table VII); however, the baseline model (IndoBERT large) still outperforms the post-trained ones at some training data percentages, such as 40%, 50%, and 60%. Nevertheless, this shows the effectiveness of domain-specific post-training of a pre-trained generic contextual language model for a domain-specific downstream task, especially when a smaller amount of data is available for task-specific fine-tuning.
Interestingly, the post-trained base models have significant margins over the baseline models in terms of the F1 score, especially when using only 30% of the training data, where the margin is 26%. Meanwhile, the post-trained large models have a smaller margin compared to the baseline IndoBERT large. As we can see in Table VII, the fine-tuning performance of the baseline large model itself is already high due to its larger number of parameters, which gives it the capacity to capture more linguistic features. From this analysis, we can see that the base model benefits more from the domain-specific post-training, enabling it to capture more relevant financial language patterns and nuances and, therefore, increasing the effectiveness on the financial sentiment analysis downstream task by a bigger margin.
### _Inference using IndoFinSent Dataset_
As mentioned in Section IV, we also construct a financial sentiment analysis dataset in Indonesian that is specific to one of the biggest financial institutions in Indonesia (IndoFinSent). The justification for creating a dataset in the target language, rather than only using a translated version of a dataset from a high-resource language (i.e., English), is to preserve language nuances, cultural references, idiomatic expressions, and wordplay that are specific to the Indonesian language. Moreover, by not relying solely on a translated dataset, we avoid translation errors or loss of meaning that can lead to incorrect or misleading results. We use these data to perform another fine-tuning for the sentiment analysis task on our domain-specific post-trained models.
From Table VIII, we can see that fine-tuning the post-trained IndoBERT using our constructed data results in similar performance. This is an interesting finding, since the fine-tuning performance of the large models shown in Table VII, which use the translated Financial Phrasebank data, generally outperforms that of the fine-tuned post-trained base models.
### _Effectiveness of Domain-Specific Post-Training on Topic Classification Downstream Task_
As we can see in Table IX, most of the models post-trained on the financial news articles dataset outperform the IndoBERT baseline. It can also be observed that at lower training data percentages, the post-trained models outperform the baseline. This shows the effectiveness of domain-specific post-training on the topic classification task when given a smaller amount of annotated data for fine-tuning. However, this does not apply to the models post-trained on corporate financial reports or on the combination (i.e., financial news articles and corporate financial reports), where the baselines still outperform most of the models. Although the margins between the post-trained models and the baselines are quite close for the outperforming ones (i.e., ranging from 0% to 5%), the results still show that domain-specific post-training does impact the effectiveness of a contextual language model on domain-specific downstream tasks.
### _Post-Training Corpus: Financial News Articles vs Financial Corporate Reports_
There are two types of financial texts that we use for domain-specific post-training. The language styles of the texts are mixed and formal for the financial news articles and the corporate financial reports, respectively. This is done to enhance the diversity of the corpus in terms of language styles.
Furthermore, it can be noticed that the models post-trained on the financial news articles data generally outperform the other models. This is caused by the similarity in language style between the post-training corpus and the downstream task datasets (i.e., sentiment analysis and topic classification), which mainly come from the news and social media domains. Thus, the results might be different if the data used for sentiment analysis or topic classification fine-tuning had a different language style or were gathered from domains other than news.
## VII Conclusion
We perform contextual language model post-training for the financial domain using the masked language modeling training objective. Financial news articles and corporate financial reports are used as the unlabeled corpus for the post-training. In our experiment scenario, we post-train the IndoBERT model (both base and large architectures) using either financial news articles, corporate financial reports, or a combination of both. The post-trained models were evaluated by fine-tuning them on the financial sentiment analysis and topic classification downstream tasks using the annotated datasets with 10 different training sizes. This aims to show the effectiveness of domain-specific post-training, especially when only a limited amount of annotated data is available.
The experiments reveal interesting findings. Overall, the post-training of IndoBERT models for the financial domain helps to improve performance on the mentioned downstream tasks. For the sentiment analysis task, the IndoBERT base model benefits more from this domain-specific post-training. However, it is observed that the domain-specific post-training of the IndoBERT large model only improves its performance on the downstream task by a small margin. This is because its larger number of parameters already allows the large model to capture more linguistic features. In addition, topic classification fine-tuning of the post-trained models also shows the effectiveness of the post-training.
## VIII Future Works
For future work, from-scratch pre-training can be performed for various language models; thus, a larger amount of unlabeled Indonesian financial text data will be required. This also needs to be supported by sufficient GPU resources. Moreover, regarding the construction of Indonesian financial sentiment analysis datasets, data can also be gathered from domains other than news, such as online forums, financial reports, customer inquiries, and more.
## Acknowledgment
We would like to thank Vina Alvionita for participating in the construction of the IndoFinSent sentiment analysis dataset. We also would like to thank our other colleagues from the Digital Banking Development & Operation Division, PT Bank Rakyat Indonesia (Persero) Tbk, for the constructive feedback that enhanced this work.
|
2308.13524 | Numerical Simulations Unveil Superradiant Coherence in a Lattice of
Charged Quantum Oscillators | A system of ${N_{osc}}$ charged oscillators interacting with the
electromagnetic field, spatially confined in a 3D lattice of sub-wavelength
dimension, can condense into a superradiant coherent state if appropriate
density and frequency conditions are met. In this state, the common frequency
$\omega$ of the oscillators and the plasma frequency $\omega_p$ of the charges
are combined into a frequency $\omega'=\sqrt{\omega^2+\omega_p^2}$ that is
off-shell with respect to the wavelength of the photon modes involved,
preventing them from propagating outside the material. Unlike other atomic
cavity systems, the frequency $\omega$ in this case is not determined by the
cavity itself but is defined by the periodic electrostatic potential that
confines the charged particles in the lattice. Additionally, the
electromagnetic modes involved have wave vectors distributed in all spatial
directions, resulting in a significant increase in coupling. The analytical
study of this system can be carried out in the limit of large ${N_{osc}}$ by
searching for an approximation of the ground state via suitable coherent trial
states. Alternatively, numerical simulations can be employed for smaller
${N_{osc}}$. In the numerical approach, it is possible to go beyond the
Rotating Wave Approximation (RWA) and introduce a dissipation term for the
photon modes. This dissipation term can account for the ohmic quench in a metal
and also consider photon losses at the boundary of the material. By utilizing
numerical solutions and Monte Carlo simulations, the presence of condensation
has been confirmed, and an energy gap of a few electron volts (eV) per particle
has been observed in typical metal crystals with protons bound to tetrahedral
or octahedral sites. | L. Gamberale, G. Modanese | 2023-08-01T18:23:31Z | http://arxiv.org/abs/2308.13524v1 | # Numerical simulations Unveil Superradiant Coherence in a Lattice of Charged Quantum Oscillators
###### Abstract
A system of \(N_{osc}\) charged oscillators interacting with the electromagnetic field, spatially confined in a 3D lattice of sub-wavelength dimension, can condense into a superradiant coherent state if appropriate density and frequency conditions are met. In this state, the common frequency \(\omega\) of the oscillators and the plasma frequency \(\omega_{p}\) of the charges are combined into a frequency \(\omega^{\prime}=\sqrt{\omega^{2}+\omega_{p}^{2}}\) that is off-shell with respect to the wavelength of the photon modes involved, preventing them from propagating outside the material. Unlike other atomic cavity systems, the frequency \(\omega\) in this case is not determined by the cavity itself but is defined by the periodic electrostatic potential that confines the charged particles in the lattice. Additionally, the electromagnetic modes involved have wave vectors distributed in all spatial directions, resulting in a significant increase in coupling. The analytical study of this system can be carried out in the limit of large \(N_{osc}\) by searching for an approximation of the ground state via suitable coherent trial states. Alternatively, numerical simulations can be employed for smaller \(N_{osc}\). In the numerical approach, it is possible to go beyond the Rotating Wave Approximation (RWA) and introduce a dissipation term for the photon modes. This dissipation term can account for the ohmic quench in a metal and also consider photon losses at the boundary of the material. By utilizing numerical solutions and Monte Carlo simulations, the presence of condensation has been confirmed, and an energy gap of a few electron volts (eV) per particle has been observed in typical metal crystals with protons bound to tetrahedral or octahedral sites.
## I Introduction
The possibility to obtain and manipulate at the nanoscale level coherent and collective quantum optical effects like superradiance and coherent population trapping has been the subject of extensive research over the last two decades [1; 2; 3; 4].
Superradiance, or the Dicke effect, occurs when an ensemble of molecules or quantum oscillators confined in a sub-wavelength region emits and absorbs coherent radiation cooperatively. The coherence among the emitters is mediated by the electric radiation field. In certain systems, the field-matter coupling can be further enhanced by the presence of surface plasmons
(plasmonic Dicke effect). A brief review of recent results in this field is given in Sect. II.
In this work we study an idealized system of \(N_{osc}\) charges oscillating in a lattice, which undergoes a superradiant transition above a certain threshold density. The dynamics of this system involves bulk plasmons. It has been investigated analytically in our previous work [5], where we proved in the large \(N_{osc}\) limit the existence of an energy gap for a certain set of quantum states, coherent both in the matter and field sectors. Here we perform numerical calculations with a small number of oscillators and we obtain a confirmation of the analytical results, plus some additional insights. In particular, we test the validity of the RWA approximation used in the analytic approach, and we also test the effect of an ohmic quenching term, which is needed if one wants to apply the model to a metal.
In Sect. III we write the Hamiltonian of the model and we show the main steps leading to the linearization of the electromagnetic interaction through suitable canonical transformations of the photon field operators. This section summarizes some results of [5] in a self-consistent way.
An important characteristic of the model, which makes it suitable for certain applications discussed in [5], is that each oscillator is confined within its electrostatic "cage", limiting its oscillation amplitude and causing it to vibrate at the frequency \(\omega\), which is the same for all oscillators. In the total Hamiltonian, the frequency \(\omega\) is combined with the plasma frequency \(\omega_{p}\) of the oscillating charges to give a dressed frequency \(\omega^{\prime}\), while the photon momentum \(\mathbf{k}\) is unchanged. This implies that there exist no states in which the e.m. energy generated in the material can propagate to vacuum.
Another crucial feature is that the coherent e.m. modes which arise and oscillate with a fixed phase relation to the matter oscillators have wave vectors pointing in all possible space directions; this enhances the matter-field coupling and makes possible the formation of the energy gap.
In Sect. IV we describe the numerical calculation, performed with QuTiP by reducing the Hilbert space of the system to a suitable finite-dimensional vector space and computing the ground state energy \(E_{0}\) either directly, as an eigenvalue of \(H\), or, in the dissipative case, via a Monte Carlo simulation. By plotting, as functions of the coupling \(\varepsilon=\omega_{p}/\omega^{\prime}\), the energy \(E_{0}\) and the expectation and correlation values of the matter and photon fields, a transition to a coherent state is clearly identified.
Finally, Sect. V contains our conclusions and outlook.
## II Plasmonic Dicke effect and superradiance with microcavities and nanoparticles
In this Section we briefly review recent theoretical and experimental results concerning the plasmonic Dicke effect, i.e., coherent superradiance of quantum emitters enhanced by interaction with surface plasmons. This effect occurs in systems confined on a sub-wavelength scale and displays clear analogies to the phenomenon we are going to analyze in this work.
Recent advances in the manipulation of cooperative e.m. emission processes at the nanoscale have been summarized by Azzam et al. [6] and earlier by Bordo [7] with reference to an analytical model of _spaser_ (surface plasmon amplification by stimulated emission of radiation), a term originally introduced by Bergman and Stockman [8; 9]. The resonator of a spaser can consist of a metal nanoparticle with size smaller than the wavelength of the involved radiation. The active medium can be for example a semiconductor crystal.
The surface plasmons of the metal, which in usual applications play the role of concentrating into a small volume the external radiation incident on the metal, in a spaser have the effect of enhancing the coupling among the quantum emitters.
In simpler terms, a spaser requires the confinement of an ensemble of active molecules to a sub-wavelength scale. These molecules emit coherent radiation in a cooperative manner [10; 11; 12; 13; 14; 15; 16], and the emission rates are proportional to the number of molecules.
A related effect occurs for molecules located near a metal nanoparticle; in this case the e.m. coupling between molecules is amplified by surface plasmons excited at the nanoparticle surface. Pustovit et al. [17; 18] analyze a system of quantum dipoles close to a metal nanoparticle. They find that the coherent emission of photons by the dipoles is affected by two competing processes: enhancement by resonant energy transfer from excited dipoles to surface plasmons, and quenching by optically inactive excitations in the metal. Simulations indicate that under certain conditions the plasmonic Dicke effect survives non-radiative losses in the metal (see also [19]).
From another point of view, spasers represent an evolution of devices, described in [20], for coupling quantum dots through their radiation field by embedding them into semiconductor cavities [21; 22]; strong coupling between single quantum dots and cavity modes was demonstrated in [23; 24].
Finally, Greenberg and Gauthier [25] have experimentally demonstrated a collective superradiant instability in a cold atomic vapor pumped by weak optical fields. This results in the emission of multi-mode optical fields in the absence of an optical cavity. The phenomenon is well described by a theoretical model.
## III The general model
### Matter Hamiltonian
We briefly recall here the model proposed and described in detail in [5]. The system we are considering is a cubic lattice hosting charged particles of mass \(m\) (see Fig. 1), bound at each vertex by a potential which for small oscillations is harmonic, with elastic coefficient \(K_{el}\). The corresponding frequency is denoted by \(\omega=\sqrt{K_{el}/m}\). Physical realizations of this idealized system include crystals or metals "loaded" with light positive ions, which are typically confined in electrostatic "cages" corresponding to tetrahedral holes in the crystal structure [26].
Figure 1: Simplified illustration of the main elements of the model. The crystal lattice has side \(d\). At each vertex a charged mass \(m\) is bound by an elastic potential with corresponding frequency \(\omega\). Coordinates \({\bf x}_{1}\), \({\bf x}_{2}\) etc. denote the fixed positions of the vertices. Coordinates \({\mathbf{\xi}}_{1}\), \({\mathbf{\xi}}_{2}\) etc. denote the displacements of the masses (their amplitude is exaggerated in the figure). The light blue ripples in the background represent plasma oscillations of the system with frequency \(\omega_{p}\). The effective quadratic Hamiltonian has for each mass \(i\) a term proportional to \(\frac{1}{2}m\omega^{2}{\mathbf{\xi}}_{i}^{2}\) and one proportional to \(\frac{1}{2}m\omega_{p}^{2}{\mathbf{\xi}}_{i}^{2}\), which amount to a single oscillator with (dressed) frequency \(\omega^{\prime}=\sqrt{\omega^{2}+\omega_{p}^{2}}\). After inclusion of the e.m. field and diagonalization of the total Hamiltonian, it turns out that the e.m. modes resonant with the oscillators, one of which is symbolically represented by the green wave, have wavelength \(\lambda=2\pi c/\omega\). We suppose \(\lambda\) to be such that \(d\ll\lambda\).
We consider a finite volume \(V\) with \(N_{osc}\) positively charged particles with mass \(m\) embedded in a neutralizing negative electron density distributed in space (the _jellium crystal_, see Appendix A in [5]). Due to the simultaneous presence of the single-particle electrostatic cages and collective plasma oscillations with frequency \(\omega_{p}=\sqrt{\frac{e^{2}N_{osc}}{mV}}\), the Hamiltonian of the \(N_{osc}\) charges oscillating about their equilibrium positions can be written in second quantization as
\[H_{osc}=\omega^{\prime}\sum_{n=1}^{N_{osc}}\left[\mathbf{a}_{n}^{ \dagger}(t)\mathbf{a}_{n}(t)+\frac{3}{2}\right],\qquad\text{where}\qquad\omega^{ \prime}=\sqrt{\omega^{2}+\omega_{p}^{2}} \tag{1}\]
Here and in the following we use units in which \(\hbar=c=1\).
### Free e.m. field
The Hamiltonian of the quantized e.m. field coupled to the system comprises in principle modes of all frequencies, momentum and polarization directions, but the modes which can interact with the oscillators and can therefore be excited are possibly those with frequencies \(\omega\), \(\omega_{p}\) or \(\omega^{\prime}\).
It turns out from the analysis presented in reference [5] that, after applying an appropriate canonical transformation to the creation and destruction operators of the electromagnetic field, the modes with \(|\mathbf{k}|=\omega\) undergo a frequency renormalization to the value \(\omega^{\prime}\), while simultaneously the \(\mathbf{A}^{2}\)-term becomes reabsorbed in the re-definition of the field operators and is no longer present in the resulting Hamiltonian. Consequently, we are left with a simplified Hamiltonian \(H_{tot}\) containing only linear terms in the field operators.
Let us start by writing the pure photon part as
\[H_{phot}=\omega\sum_{p,\mathbf{\hat{k}}}\left[b_{p,\mathbf{k}}^{ \dagger}(t)b_{p,\mathbf{k}}(t)+\frac{1}{2}\right]\qquad\text{with}\qquad| \mathbf{k}|=\omega \tag{2}\]
In the given expression, the symbol \(\sum_{p,\mathbf{\hat{k}}}\) represents the summation over two photon polarizations, denoted by \(p\), and the integral over the unit vector \(\mathbf{\hat{k}}=\mathbf{k}/\omega\). This integral, in turn, is equivalent to the solid angle component of a three-dimensional integral over wave vectors \(\mathbf{k}\). The explicit expression would read as follows:
\[\sum_{p,\mathbf{\hat{k}}}\rightarrow\sum_{p=1}^{2}\int d\Omega_{ \mathbf{\hat{k}}}=\sum_{p=1}^{2}\int_{0}^{2\pi}d\phi_{\mathbf{\hat{k}}}\int_ {0}^{\pi}d\theta_{\mathbf{\hat{k}}}\sin\theta_{\mathbf{\hat{k}}}. \tag{3}\]
The vector potential can be expressed in the same notation as
\[\mathbf{A}(\mathbf{x},t)=\frac{1}{\sqrt{2\omega V}}\sum_{p,\mathbf{\hat{k}}}\left[ b_{p,\mathbf{k}}(t)e^{i\mathbf{kx}}\mathbf{\varepsilon}_{p,\mathbf{k}}+c.c.\right] \tag{4}\]
where \(\mathbf{\varepsilon}_{p,\mathbf{k}}\) are polarization vectors. The time dependence takes, in interaction representation, the usual form \(b_{p,\mathbf{k}}(t)=b_{p,\mathbf{k}}e^{-i\omega t}\). For simplicity we will omit the time dependence of the operators in the following.
We assume that the finite volume \(V\) we are considering is such that \(V\ll\lambda^{3}\), where \(\lambda=\frac{2\pi}{\omega}\). In concrete physical systems, the frequency \(\omega\) is typically on the order of the largest frequency of optical phonons, around 0.1 eV. It follows that the parameter \(N_{osc}\) can be quite large, reaching values on the order of \(10^{9}\) or even higher. In [5] we presented analytical variational calculations applicable in the regime of large \(N_{osc}\). However, in the current study, we will focus on providing numerical solutions for small values of \(N_{osc}\).
### Matter-field interaction
The field-matter interaction terms are obtained as usual through the gauge-invariant extension of the momentum \(\mathbf{p}\rightarrow\mathbf{p}+e\mathbf{A}\) in the matter Hamiltonian. This gives two terms, which in [5] are written together, while here we list them separately. The dipole interaction term has the form
\[H_{dip}=\sum_{n=1}^{N_{osc}}\frac{e}{m}\mathbf{p}_{n}\mathbf{A}(\mathbf{x}_{n}+\bm {\xi}_{n},t) \tag{5}\]
where \(\mathbf{p}_{n}\) is the momentum of the \(n\)-th oscillator (expressible through the operators \(\mathbf{a}_{n}\), \(\mathbf{a}_{n}^{\dagger}\)). Furthermore, there is a "diamagnetic" term with the square of the vector potential
\[H_{\mathbf{A}^{2}}=\sum_{n=1}^{N_{osc}}\frac{e^{2}}{2m}\mathbf{A}^{2}(\mathbf{ x}_{n}+\mathbf{\xi}_{n},t) \tag{6}\]
The total Hamiltonian is
\[H_{tot}=H_{osc}+H_{phot}+H_{dip}+H_{\mathbf{A}^{2}} \tag{7}\]
In the argument of \(\mathbf{A}\), \(\mathbf{x}_{n}\) denotes the equilibrium position of the \(n\)-th oscillator, and \(\mathbf{\xi}_{n}\) denotes the displacement with respect to this equilibrium position. In the following, however, the space dependence of \(\mathbf{A}\) will be neglected ("dipole approximation"), since we assume that the size \(V^{1/3}\) of the system is much smaller than \(\lambda\).
With a linear transformation of the photon operators of the form
\[c_{p,{\bf k}}=\frac{1}{2\sqrt{\omega\omega^{\prime}}}\left[(\omega^{\prime}+ \omega)b_{p,{\bf k}}+(\omega^{\prime}-\omega)b_{p,{\bf k}}^{\dagger}\right] \tag{8}\]
the total Hamiltonian can be simply rewritten in terms of the dressed operators \(c\) and \(c^{\dagger}\) as
\[H_{tot}=H_{osc}+\omega^{\prime}\sum_{p,\hat{\bf k}}\left(c_{p,{\bf k}}^{\dagger }c_{p,{\bf k}}+\frac{1}{2}\right)+\frac{i\omega_{p}}{2\sqrt{N_{osc}}}\sum_{n=1} ^{N_{osc}}\sum_{p,\hat{\bf k}}\left[\mathbf{a}_{n}^{\dagger}-\mathbf{a}_{n}\right][c_{ p,{\bf k}}\mathbf{\varepsilon}_{p,{\bf k}}+c.c.] \tag{9}\]
The vector potential can be expressed, in terms of the new operators, as
\[{\bf A}({\bf x},t)=\frac{1}{\sqrt{2\omega^{\prime}V}}\sum_{p,{\bf k}}\left[c_{ p,{\bf k}}e^{i{\bf k}{\bf x}}\mathbf{\varepsilon}_{p,{\bf k}}+c.c.\right] \tag{10}\]
which is the strict analogue of (4); however, it is interesting to note that the dispersion relation of the dressed photons created and destroyed by \(c^{\dagger}\) and \(c\) is \(\omega^{\prime}=\sqrt{k^{2}+\omega^{2}}\), different from the vacuum dispersion relation \(k=\omega\).
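As a consistency check, the transformation (8) is canonical: using \([b_{p,\mathbf{k}},b_{p,\mathbf{k}}^{\dagger}]=1\), one finds

\[[c_{p,\mathbf{k}},c_{p,\mathbf{k}}^{\dagger}]=\frac{1}{4\omega\omega^{\prime}}\left[(\omega^{\prime}+\omega)^{2}-(\omega^{\prime}-\omega)^{2}\right]=\frac{4\omega\omega^{\prime}}{4\omega\omega^{\prime}}=1,\]

so the operators \(c\), \(c^{\dagger}\) describe properly normalized dressed photon modes.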
It is clear from (8) that the transformation from the operators \(b\), \(b^{\dagger}\) to \(c\), \(c^{\dagger}\) is well defined only if \(\omega\neq 0\). Therefore the harmonic potential which bounds all charges to the lattice sites with the same \(\omega\) is an essential element of the model. A similar operator transformation is employed in cavity QED [27]. In that context, \(\omega\) is defined by the optical cavity that contains the system. For linear cavities, the direction of \({\bf k}\) is also fixed. Here, there are many directions of \({\bf k}\) contributing to the interaction and in the next subsection we will apply a geometrical projection operation plus a summation over \(\hat{\bf k}\) to simplify the algebraic manipulation of the photon operators.
### Projected and \(k\)-summed photon operators
It is convenient to rewrite the Hamiltonian using operators of creation and destruction of the photons-plasmons which are projected along the 3 space directions defined by the unit vectors \(\hat{\bf e}_{i}\) (\(i=1,2,3\)) and summed over the possible directions \(\hat{\bf k}\) of momentum.
We shall call these operators \(C_{i}^{\dagger}\) and \(C_{i}\) (\(i=1,2,3\)); they are defined by linear combinations of the operators \(c_{p,{\bf k}}\) with coefficients given by scalar products between the unit vectors \(\hat{\bf e}_{i}\) and the polarization vectors \(\mathbf{\varepsilon}_{p,{\bf k}}\):
\[C_{i}=\sqrt{\frac{3}{8\pi}}\sum_{p,\hat{\bf k}}(\hat{\bf e}_{i}\mathbf{\varepsilon }_{p,{\bf k}})c_{p,{\bf k}}. \tag{11}\]
Thanks to the normalization factor \(\sqrt{\frac{3}{8\pi}}\), the states created and destroyed by these operators turn out to be correctly normalized, and the operators themselves satisfy the commutation relations
\[[C_{i},C_{j}^{\dagger}]=\delta_{ij};\qquad[C_{i},C_{j}]=[C_{i}^{\dagger},C_{j}^{ \dagger}]=0 \tag{12}\]
Using the operators \(C_{i}\), \(C_{i}^{\dagger}\) we can rewrite the vector potential and the Hamiltonian (9) (in dipole approximation) as follows:
\[\mathbf{A}=\frac{1}{\sqrt{2\omega^{\prime}V}}\sum_{i=1}^{3}(C_{i}+C_{i}^{ \dagger})\mathbf{\hat{e}}_{i} \tag{13}\]
\[H_{tot}=H_{osc}+\omega^{\prime}\sum_{i=1}^{3}\left(C_{i}^{\dagger}C_{i}+\frac {1}{2}\right)+\frac{i\omega_{p}}{2\sqrt{N_{osc}}}\sqrt{\frac{3}{8\pi}}\sum_{ n=1}^{N_{osc}}\sum_{i=1}^{3}\left[a_{n,i}^{\dagger}C_{i}-a_{n,i}C_{i}^{\dagger}+a_{n,i}^ {\dagger}C_{i}^{\dagger}-a_{n,i}C_{i}\right] \tag{14}\]
where \(a_{n,i}\) denotes the \(i\)-th spatial component of \(\mathbf{a}_{n}\).
### Summary of analytical results
In our work [5] we have studied analytically the properties of the Hamiltonian (14) in the RWA approximation, i.e., disregarding the terms of the form \(aC\) or \(a^{\dagger}C^{\dagger}\), and in the limit of large \(N_{osc}\). In particular, we have shown that a threshold value of the coupling \(\varepsilon=\omega_{p}/\omega^{\prime}\) exists, above which the ground state of the system is not the usual perturbative ground state where all oscillators have excitation number zero. For this purpose we have defined a set of trial quantum states that take the form of coherent states, both in the degrees of freedom of the charged oscillators and in those of the field oscillator.
The minimum energy is achieved when the charged oscillators all have the same phase, and the field oscillator has a phase which differs from it by \(\pi/2\). More precisely, above the threshold value of \(\varepsilon\) the Hamiltonian is a quadratic form with negative modes proportional to \(|\alpha|^{2}\) (the squared amplitude of the coherent trial states of the charges), so that the system becomes unstable when \(|\alpha|>0\), as we shall see later in more detail.
Following a heuristic physical argument we imposed a limiting value \(\alpha_{max}\) to the coherent oscillation amplitude \(\alpha\), by assuming that each charge is confined in one of the lattice cells. In this way it is possible to compute the energy per particle \(E_{0}\) of the ground state as a function of \(\varepsilon\), up to second order in the Brillouin-Wigner approximation. The resulting
expression is
\[E_{0}^{(1)}= \omega^{\prime}|\alpha_{max}|^{2}\left(1-\frac{2\pi}{3}\varepsilon^{ 2}\right)\quad\text{first order} \tag{15a}\] \[E_{0}^{(2)}= \omega^{\prime}|\alpha_{max}|^{2}\left(1-\frac{8\pi}{3}\varepsilon^ {2}\right)\quad\text{second order}. \tag{15b}\]
The corresponding critical coupling is \(\varepsilon_{crit}^{(1)}=\sqrt{\frac{3}{2\pi}}\simeq 0.69\) to first order perturbation theory and \(\varepsilon_{crit}^{(2)}=\sqrt{\frac{3}{8\pi}}\simeq 0.35\) to second order.
Considering the case of protons located in the lattice cells of a metal, by substituting the appropriate values into Eq. (15), the energy gap per particle \(|E_{0}|\) turns out to be of the order of a few eV, much larger than the average thermal excitation energy, implying that such states are thermally stable. We recall that the plasma frequency \(\omega_{p}\) depends on the density of the oscillating charges and the bare oscillation frequency \(\omega\) depends on the electrostatic forces in the crystal; both are of the order of \(10^{13}\) Hz. In Appendix A of [5] a model of the electrostatic force called _jellium crystal_ has been developed, where the frequencies \(\omega\), \(\omega_{p}\) and \(\omega^{\prime}\) have been computed. In the case of a metal with lattice spacing \(d=2.5\) Å loaded with protons at each lattice site we find \(\omega=0.35\) eV, \(\omega_{p}=0.22\) eV, \(\omega^{\prime}=0.41\) eV, \(\varepsilon=0.52\).
## IV Numerical analysis
In order to reach a better understanding of the model and to check numerically its validity in a finite-dimensional case, we have implemented a reduced version in QuTiP, the Python toolbox widely used for quantum mechanics and especially for quantum optics [28; 29].
### Setting up the Hamiltonian with QuTiP
We have written a second-quantized unperturbed Hamiltonian \(H_{0}\) which describes one harmonic oscillator with destruction/creation operators C, C.dag() representing one of the photon modes of (14), plus a small number \(N_{osc}\) of "material" oscillators with destruction/creation operators a1, a2, ... and a1.dag(), a2.dag(), ..., having the same frequency as the photons. The energy calculations are in units of \(\omega^{\prime}\). We then added a coupling term with a coefficient given by the 3D enhancement factor \(\sqrt{\frac{8\pi}{3}}\), as already discussed, and by the coupling constant \(0<\varepsilon<1\), whose value defines below-threshold and above-threshold conditions for the condensation to the coherent state.
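As an illustration, a minimal QuTiP sketch of this construction for \(N_{osc}=3\) material oscillators and one photon mode is shown below; the truncation levels and the explicit factorization of the coupling \(\varepsilon\) in the interaction term are illustrative choices, not necessarily those used to produce the figures.

```python
import numpy as np
from qutip import destroy, qeye, tensor

N_osc, N_exc, N_phot = 3, 6, 18    # material oscillators and truncation levels (illustrative)
eps = 0.7                          # coupling epsilon = omega_p / omega'; energies in units of omega'

# Annihilation operators on the tensor-product space (osc_1, ..., osc_Nosc, photon)
a = [tensor([destroy(N_exc) if j == i else qeye(N_exc) for j in range(N_osc)] + [qeye(N_phot)])
     for i in range(N_osc)]
C = tensor([qeye(N_exc)] * N_osc + [destroy(N_phot)])

# Unperturbed Hamiltonian of the material oscillators and the photon mode
H0 = sum(ai.dag() * ai for ai in a) + C.dag() * C

# Full (non-RWA) interaction with the 3D enhancement factor sqrt(8*pi/3)
g = eps * np.sqrt(8 * np.pi / 3) / np.sqrt(N_osc)
Hint = g * (1j / 2) * (C + C.dag()) * sum(ai.dag() - ai for ai in a)

# Confining "cage" term that bounds the oscillation amplitude (Sect. IV.2)
H8 = sum(ai.dag() ** 4 * ai ** 4 for ai in a)

H = H0 + Hint + H8
E0, psi0 = H.groundstate()   # ground state energy and state
print(E0)
```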
After defining the total Hamiltonian, one can directly obtain with QuTiP the energy of the ground state (and also that of the excited states) as a function of the coupling \(\varepsilon\). However, there is also a dependence on the dimension of the finite vector space which approximates the full Hilbert space in describing the states of the various oscillators. We denote with \(N_{exc}\) the highest excitation level of each of the particle oscillators, and with \(N_{phot}\) the highest level of the photon oscillator. They can be varied independently, but physically it is reasonable to assume first that \(N_{phot}\) is equal to \(N_{exc}\cdot N_{osc}\), since we expect that each energy level of each oscillator emits and absorbs a photon.
By examining the computation of the ground state energy shown in Fig. 2, which displays the energy gap as a function of \(N_{exc}\), it becomes evident that as \(\varepsilon\) is tuned to a value higher than the threshold \(\varepsilon_{crit}=0.35\), the negative energy gap grows without bound as \(N_{exc}\) increases.
When the phase transition occurs, the average occupation number of all oscillators (particles and photons) reaches a level of approximately 50% of the maximum possible excitation fixed by the chosen finite space dimension \(N_{exc}\). This clearly shows that all accessible levels are occupied, while for a correct description of the system with a finite-dimensional Hilbert space of states one usually requires that the upper levels are essentially unexcited, so that physical quantities are independent of \(N_{exc}\).
This lack of a lower bound for the energy is consistent with the analytical model [5], where the Hamiltonian in conditions above threshold allows for coherent trial states that depend on a "classical" amplitude \(\alpha\) of the particle oscillators. In that case, \(\alpha\) is not bounded, and the energy gap is proportional to \(|\alpha|^{2}\) (see Sect. IV.2). The correlation between the numerical outcome and the selected dimension of the vector space raises concerns regarding the accuracy of the calculation. In order to tackle this problem, we will introduce an additional component into the Hamiltonian. This supplementary component has a minimal impact on the energy per particle when \(\alpha\) is small, but it effectively limits the extent of charge oscillations. By employing this approach, higher modes of excitation will remain unexcited, leading to computational results that become unaffected by the selected dimensionality once a critical threshold is exceeded. This matter will be discussed further in subsection IV.2.
We stress that the coupling constant \(\varepsilon\) arises from the presence of a large number of oscillators at high density within the system. In the scenario where the physical system consists of only a small number of oscillating charges distributed in the volume \(V\), the coupling
with the electromagnetic field would be minute, incapable of inducing any form of phase transition. In the present simulation, due to the limited computation power, we examine the behavior of a _restricted_ number of oscillators, investigating their interaction with the electromagnetic field and the corresponding state of minimum energy. The presence of the large number of oscillators is incorporated in the parameter \(\varepsilon>0\) which, depending on the plasma frequency \(\omega_{p}\), reflects a high density of charged oscillators.
Fig. 3 shows the result of the numerical calculation of the ground state energy for various values of \(\varepsilon\), together with the average occupation numbers of the photon and matter fields. The numerical solutions have been obtained both with and without the RWA approximation and show that the transition to the coherent state is present in both cases,
Figure 2: Dependence on the vector space dimension \(N_{exc}\) with \(\varepsilon=1\) of (1) the average excitation number of one harmonic oscillator confined with a term \((a^{\dagger})^{4}a^{4}\); (2) the average excitation number of a photon oscillator coupled with it; (3) the ground state energy of the system. Note the convergence to finite values when \(N_{exc}\) is approximately greater than 35.
although with different threshold values for the coupling \(\varepsilon\). More specifically, below threshold the energy of the ground state is very close to zero, while above threshold it becomes negative and proportional to the number of material oscillators (see Table 1).
It is intriguing to observe that, while in the case of the RWA the system's energy is rigorously zero below the threshold \(\varepsilon_{crit}=0.69\), as analytically calculated in [5] (Fig. 3, orange line), in the absence of the RWA approximation, the energy is slightly negative even
Figure 3: Dependence on the adimensional coupling \(\varepsilon\) of the ground state energy (“energy gap”) and of various correlations and occupation numbers, showing the occurrence of condensation to a coherent state at \(\varepsilon\simeq 0.35\). The system comprises 3 material oscillators (\(N_{osc}=3\)). The dimensions of the vector spaces used for the numerical solutions are such to meet the stabilization condition (compare Fig. 2). The energy gap in the RWA approximation is also displayed for comparison. The correlations and occupation numbers shown can be explicitly written respectively as \(\langle a_{1}^{\dagger}a_{2}+a_{1}a_{2}^{\dagger}\rangle_{0}\) (correlation between two material oscillators), \(\langle a_{1}^{\dagger}C+a_{1}C^{\dagger}\rangle_{0}\) (correlation between one material oscillator and the photon oscillator), \(\langle a_{1}^{\dagger}a_{1}\rangle_{0}\) (occupation number of one material oscillator) and \(\langle C^{\dagger}C\rangle_{0}\) (occupation number of the photon oscillator).
for values of \(\varepsilon<\varepsilon_{crit}\) (Fig. 3, blue line). This phenomenon has been discussed and termed as _weak coherence_ in [30].
As a final note, we observe that the diagonalization of the Hamiltonian in the RWA in QuTiP shows a threshold for symmetry breaking at \(\varepsilon_{crit}=\varepsilon_{crit}^{(1)}\) instead of the more accurate value \(\varepsilon_{crit}^{(2)}\). The reason behind this phenomenon can be attributed to the conservation of the total number of photons and oscillator excitation levels in the interaction term within the RWA. However, the full interaction term does not exhibit this conservation. As a consequence, the second-order contribution within the RWA accurately captures the exchange of photons between _different_ oscillators. Considering that the calculation involves a very small number of oscillators, the second-order term becomes insignificant. However, this argument does not hold true for the full interaction term, as it mixes states where the sum of the number of photons and excitation levels remains constant. Consequently, the second-order term becomes significant even with a few oscillators, as observed in Fig. 3.
Figure 4: Result of a Monte Carlo simulation with dissipative term. Three material oscillators, \(N_{phot}=60\). Dissipation operator 0.4*C. See the caption of Fig. 3 for the full expressions of the correlations and occupation numbers. Note the negative correlation between the material oscillator and the photon oscillator.
### Limiting the vector space dimension with oscillator "cages"
A physically meaningful model requires the implementation in the total Hamiltonian of a reasonable limitation mechanism for the amplitude of the oscillators. In [5] we introduced for this purpose electrostatic "cages" which set an upper limit on the coherent oscillation amplitude \(\alpha\). It is therefore supposed that the oscillating charges bound on each lattice site cannot move to another cell of the lattice. A possible way of implementing this condition here numerically is to add to the Hamiltonian, for each harmonic oscillator, a term of the form \((a^{\dagger})^{n}a^{n}\), with \(n\) sufficiently large (e.g., \(n=4\)). This term acts in practice as a high potential barrier, leaving the lowest energy levels of the oscillator unchanged but causing a large increase in the energy of the states with large excitation number. One can check, for example, that with \(n=4\) the energy levels are (apart from the offset \(\frac{1}{2}\)) equal to \((0,1,2,3,28,125,366,...)\), while the corresponding expectation values of the operator \(\hat{x}^{2}\) are the same as for a pure harmonic oscillator, namely \((\frac{1}{2},\frac{3}{2},\frac{5}{2},\frac{7}{2},\frac{9}{2},...)\), implying a strong spatial
Figure 5: Result of a Monte Carlo simulation with greater dissipation than in Fig. 4. Dissipation operator 0.6*C. Note that the negative correlation between material oscillator and photon oscillator increases, in absolute value, when dissipation increases, while the ground state energy remains the same, asymptotically \(\simeq-11\), as from the spectrum without dissipation.
confinement.
With this modification, \(H\) is bounded from below and the excitation level of the harmonic oscillators converges to a finite value as the size of the basis used increases. It is important to note that when the amplitude of the material oscillators is limited by a term of the type \((a^{\dagger})^{n}a^{n}\), the amplitude of the photon oscillators is also limited, without the need for a corresponding term. See the example in Fig. 2.
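This level structure is easy to verify numerically for a single confined oscillator; the short sketch below (which assumes the dimensionless convention \(\hat{x}=(a+a^{\dagger})/\sqrt{2}\)) reproduces the quoted eigenvalues and \(\langle\hat{x}^{2}\rangle\) values.

```python
import numpy as np
from qutip import destroy, expect

N = 40                                    # truncation, large enough for the lowest levels
a = destroy(N)
H = a.dag() * a + a.dag() ** 4 * a ** 4   # harmonic term plus the n = 4 "cage", offset 1/2 dropped

evals, ekets = H.eigenstates()
x2 = 0.5 * (a + a.dag()) ** 2             # dimensionless x^2 (assumed convention)

print(np.round(evals[:7], 3))                        # 0, 1, 2, 3, 28, 125, 366
print([round(expect(x2, k), 3) for k in ekets[:5]])  # 0.5, 1.5, 2.5, 3.5, 4.5
```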
An alternative approach for limiting the amplitude of oscillations involves considering that the binding potential wells, responsible for confining each material particle within the lattice, possess finite depth instead of being infinitely deep. Beyond the bound states, there exists a continuous spectrum characteristic of free particles or Bloch quantum states. It is evident from a physical standpoint that excitations towards these free states do not contribute to coherence as their frequencies are not in resonance with the photon field. Hence, the states relevant for superradiance are, at least in terms of order of magnitude, roughly equal in number to the bound states. However, they are actually fewer due to the non-uniform energy spacing of states in close proximity to the continuum spectrum. The precise estimation of the number of states involved in the coherence process can be accomplished with reasonable accuracy, but we defer this calculation to a future study.
Table 1 presents an illustrative set of computed ground state energy values, showcasing the dependence on the number of material oscillators, with a fixed coupling strength of \(\varepsilon=0.7\). The data were obtained with the method H.groundstate, where, e.g., for \(N_{osc}=2\) we have H=H0+Hint+H8, with
H0 = a1.dag()*a1 + a2.dag()*a2 + C.dag()*C,
Hint = sqrt(8*np.pi/3)/sqrt(Nosc)*(1j/2)*(C + C.dag())*(a1.dag()-a1+a2.dag()-a2),
H8 = a1.dag()**4 * a1**4 + a2.dag()**4 * a2**4.
The second row of Table 1 displays the chosen space dimension of the photon field \(N_{phot}\) used in the numerical calculation, according to the "stabilization" criterion illustrated in Fig. 2, but with a stabilization value \(N_{phot}=20\cdot N_{osc}\), suitable for coupling \(\varepsilon=0.7\). The value of the energy for \(N_{osc}=5\) has been computed with a Monte Carlo algorithm including a dissipation term, as described in Sect. IV.3 (see also Figs. 4, 5), since the multiple tensor products needed for the description of the state lead to a dimension of the Hamiltonian matrix which would require allocating 156 GiB for an array with shape (102400, 102400)
and data type complex128.
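For reference, the quoted memory requirement follows directly from the matrix size: a dense array of that shape and type occupies

\[102400^{2}\times 16\ \mathrm{bytes}\approx 1.68\times 10^{11}\ \mathrm{bytes}\simeq 156\ \mathrm{GiB}.\]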
In the numerical solution of certain coherent systems, symmetry properties allow one to reduce the dimension of the vector space; see e.g. [31]. In our case, this does not appear to be possible, and the \(N_{osc}\) harmonic oscillators must be left free to evolve independently, as in other calculations concerning quantum synchronization ([32] and refs.).
### Monte Carlo simulations in the dissipative case
As mentioned in Sect. II in relation to the plasmonic Dicke effect, it is generally important to check how the ohmic quench influences the onset of superradiance in a system, or the condensation of the system to a coherent state. In our case, this check allows us to apply the model not only to a dielectric crystal, but also to a metal. Moreover, dissipation can occur at the external boundary of the active material, because the dressed photon frequency \(\omega^{\prime}\) is greater than the frequency of photons with the same momentum in vacuum, and therefore some photons can escape the material if they can release the excess energy at the surface. Following [17], a dissipation term is added with the form
\[L_{dis}=\omega^{\prime}\Gamma C \tag{16}\]
where \(\Gamma\) is the inverse of the decay constant in units of \(\omega^{\prime}\). A numerical method implemented in QuTiP for simulating the dissipative evolution of a quantum system is the Lindblad master equation. However, when the dimension of the vector space of the states grows, the Lindblad solver called mesolve tends to fail due to a loss of convergence or to insufficient memory.
An alternative to mesolve is the Monte Carlo solver mcsolve, which exhibits more favorable scalability with dimensionality. However, it comes at the cost of introducing statistical errors and requiring long simulation runs. Figures 4 and 5 illustrate the outcomes of Monte
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \hline \(N_{osc}\) & 1 & 2 & 3 & 4 & 5 \\ \hline \(N_{phot}\) & 20 & 40 & 60 & 80 & 100 \\ \(E_{0}\) & -3.78 & -7.56 & -11.3 & -15.1 & -18 \(\pm 1\) \\ \hline \end{tabular}
\end{table}
Table 1: Ground state energy \(E_{0}\) in \(\hbar\omega\) units (third row) in dependence on the number of material oscillators \(N_{osc}\), for coupling strength \(\varepsilon=0.7\). The second row gives the maximum excitation level of the photon field used in the numerical calculation. The value of the energy for \(N_{osc}=5\) has been computed with a Monte Carlo algorithm including a dissipation term.
Carlo simulations for the scenario involving three charged oscillators (\(N_{osc}=3\)) in addition to the photon oscillator. Two different dissipation levels are specified in the captions, corresponding to above-threshold conditions (\(\varepsilon=0.7\)). The presence of coherent condensation is indicated by a negative energy gap, which, within the margin of error, matches the gap computed in Fig. 3 using the groundstate() method. Notably, this outcome remains independent of the dissipation level. The interpretation is as follows: the dissipation term introduces an imaginary component, and once symmetry breaking occurs, it is counterbalanced by an opposite-signed imaginary component originating from the interaction term. This interaction term incorporates a phase difference between the matter and field, deviating from the analytically calculated phase shift of \(\frac{\pi}{2}\) between the oscillators and the field by an amount
\[\sqrt{\frac{8\pi}{3}}\epsilon|\mathcal{A}\alpha_{1}|\sin\theta=-\frac{1}{2} \langle a_{1}^{\dagger}C+a_{1}C^{\dagger}\rangle_{0}, \tag{17}\]
where \(\frac{\pi}{2}+\theta\) represents the phase difference between the field and the oscillator. The term \(\frac{1}{2}\langle a_{1}^{\dagger}C+a_{1}C^{\dagger}\rangle_{0}\) denotes the matter-field correlation, which is zero in the absence of dissipation due to the \(\pi/2\) phase difference between the matter and field oscillators (Fig. 3, red line). However, it becomes negative in the presence of dissipation (Fig. 4, red line) and becomes more pronounced with increasing \(\Gamma\) (Fig. 5, red line).
Also essentially independent of the dissipation level are the average occupation numbers of the oscillators and the correlations between charged oscillators, such as \(\langle a_{1}^{\dagger}a_{2}+a_{1}a_{2}^{\dagger}\rangle_{0}\).
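For reference, a dissipative run of this kind can be set up along the following lines, reusing the operators H, C and a from the sketch in Sect. IV.1; the initial state, time grid and number of trajectories are illustrative assumptions.

```python
import numpy as np
from qutip import basis, tensor, mcsolve

# Vacuum initial state for all material oscillators and the photon mode (illustrative choice)
psi0 = tensor([basis(N_exc, 0)] * N_osc + [basis(N_phot, 0)])

c_ops = [0.4 * C]   # dissipation operator as in Fig. 4 (use 0.6*C for Fig. 5)
e_ops = [H,                                       # energy
         a[0].dag() * a[0],                       # occupation of one material oscillator
         C.dag() * C,                             # photon occupation
         a[0].dag() * a[1] + a[0] * a[1].dag(),   # matter-matter correlation
         a[0].dag() * C + a[0] * C.dag()]         # matter-field correlation

tlist = np.linspace(0.0, 50.0, 501)               # time in units of 1/omega'
result = mcsolve(H, psi0, tlist, c_ops=c_ops, e_ops=e_ops, ntraj=250)

# Long-time averages over the trajectories estimate the quantities plotted in Figs. 4-5
print([float(np.mean(ex[-100:])) for ex in result.expect])
```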
## V Conclusions
In conclusion, we have successfully demonstrated, through a numerical calculation, the occurrence of a phase transition in a system comprising positively charged oscillators immersed in a neutralizing negatively charged medium and interacting with the electromagnetic field. This phase transition occurs when a density threshold of the oscillators is reached. Our previous work [5] analytically identified this phase transition using the RWA. The current study not only confirms the validity of the analytical result but also extends its applicability by considering the case without RWA.
Furthermore, we have investigated the impact of the finite dimension of the vector space employed in the numerical calculation. We found that this finite dimension has no bearing on the manifestation of the phase transition. To ensure the boundedness of the matter field's
amplitude throughout the numerical simulation, we have introduced a positive definite term into the Hamiltonian. This additional term has a negligible impact on the occurrence of the phase transition itself, while simultaneously constraining the maximum oscillation amplitude to a value that remains independent of the dimensionality of the vector space.
It is noteworthy that a crystal characterized by positively charged ions bound to lattice sites and oscillating harmonically at the same frequency exhibits captivating resemblances to systems commonly encountered in cavity quantum electrodynamics (CQED). The emergence of long-range coupling among the ions stems from their interaction with a photon field, where the energy-momentum relationship of the photons is modified due to the presence of plasma oscillations. Through the utilization of a canonical operator transformation, the system's Hamiltonian can be diagonalized, facilitating the identification of conditions that give rise to the phenomenon of superradiance. Unlike systems typically associated with the Dicke effect, such as ensembles of spins or two-level molecules, this crystal system more closely resembles a set of quantum oscillators, with their states spanning a larger vector space in general.
Furthermore, we have demonstrated that condensation to the coherent state persists even in the presence of strong dissipation. Interestingly, we find that the energy gap between the coherent ground state and the perturbative ground state of the oscillators plus field remains unaffected by dissipation. However, the correlation between the charged oscillators and the photon oscillator does exhibit a dependence on the level of dissipation.
Future investigations will address the following key aspects: (1) A more realistic characterization of the potential wells that confine the ions to lattice sites, offering a more comprehensive understanding of the system. (2) An in-depth analysis of the coherence properties of the numerically obtained ground state in comparison to the coherent trial states employed in previous analytical calculations, which utilized the RWA. While the energy gap in both cases is of similar magnitude, the coherence properties of the trial states appear to be stronger. (3) Investigation of the excited states of the system and their associated coherence properties, providing an understanding of the system's behavior beyond the ground state.
(4) A scattering theory of the coherent states.
|
2304.13829 | Controlled density transport using Perron Frobenius generators | We consider the problem of the transport of a density of states from an
initial state distribution to a desired final state distribution through a
dynamical system with actuation. In particular, we consider the case where the
control signal is a function of time, but not space; that is, the same
actuation is applied at every point in the state space. This is motivated by
several problems in fluid mechanics, such as mixing and manipulation of a
collection of particles by a global control input such as a uniform magnetic
field, as well as by more general control problems where a density function
describes an uncertainty distribution or a distribution of agents in a
multi-agent system. We formulate this problem using the generators of the
Perron-Frobenius operator associated with the drift and control vector fields
of the system. By considering finite-dimensional approximations of these
operators, the density transport problem can be expressed as a control problem
for a bilinear system in a high-dimensional, lifted state. With this system, we
frame the density control problem as a problem of driving moments of the
density function to the moments of a desired density function, where the
moments of the density can be expressed as an output which is linear in the
lifted state. This output tracking problem for the lifted bilinear system is
then solved using differential dynamic programming, an iterative trajectory
optimization scheme. | Jake Buzhardt, Phanindra Tallapragada | 2023-04-26T21:12:45Z | http://arxiv.org/abs/2304.13829v2 | # Controlled density transport using Perron Frobenius generators
###### Abstract
We consider the problem of the transport of a density of states from an initial state distribution to a desired final state distribution through a dynamical system with actuation. In particular, we consider the case where the control signal is a function of time, but not space; that is, the same actuation is applied at every point in the state space. This is motivated by several problems in fluid mechanics, such as mixing and manipulation of a collection of particles by a global control input such as a uniform magnetic field, as well as by more general control problems where a density function describes an uncertainty distribution or a distribution of agents in a multi-agent system. We formulate this problem using the generators of the Perron-Frobenius operator associated with the drift and control vector fields of the system. By considering finite-dimensional approximations of these operators, the density transport problem can be expressed as a control problem for a bilinear system in a high-dimensional, lifted state. With this system, we frame the density control problem as a problem of driving moments of the density function to the moments of a desired density function, where the moments of the density can be expressed as an output which is linear in the lifted state. This output tracking problem for the lifted bilinear system is then solved using differential dynamic programming, an iterative trajectory optimization scheme.
## I Introduction
In this paper, we consider the problem of controlled density transport, where given an initial distribution of states specified by a density function, we seek to determine a control sequence to drive this initial distribution to a desired final distribution. We consider the case where a common control signal is applied to the entire distribution of states. This differs from the usual formulation of swarm control and optimal transport problems, where typically each agent can select a control input independently, making the control signal a function of the states and time. This problem of density transport is motivated by problems of manipulation of a large collection of non-interacting agents using a uniform control signal [1]. Such problems have found applications recently in micro-fluidics, where it has been shown that collections of micro-particles can be manipulated through a fluid using a uniform magnetic field for targeted drug delivery [2] or to generate a pumping effect to transport fluid particles [3]. Simultaneously, the transport of density has relevance to the propagation of an uncertainty distribution arising due to uncertainty in the initial state or of a model parameter through an otherwise deterministic control system (see, e.g. [4, 5, 6]). We formulate and solve this problem using an operator theoretic approach, specifically using the generator of the Perron-Frobenius operator.
In recent years the operator theoretic approach to dynamical systems has gained significant research attention, both from a dynamical systems perspective [7, 8], as well as in application areas such as control systems [9, 10] and fluid mechanics [11, 12]. A dynamical system can be framed in terms of such an operator either by considering the evolution of observable functions of the state using the Koopman operator or by considering the evolution of densities of states using the Perron-Frobenius operator [13]. The interest in these approaches is primarily due to the fact that these operators allow for a linear, although typically infinite dimensional representation of a nonlinear system. The linearity of these operators is useful from an analytical perspective, as it allows for the use of linear systems techniques such as the analysis of eigenvalues and eigenfunctions, but also from a computational perspective, as in many cases a useful approximation for these operators can be found by considering a finite dimensional approximation in which the operator is represented as a matrix acting on coordinates corresponding to a finite set of a set of dictionary functions [14, 15, 16].
In applications in control systems, much of the recent work has been on developing methods involving the Koopman operator [9, 10], as the transformation to a space of observable functions can be viewed as a nonlinear change of coordinates which maps the system to a higher dimensional space where the dynamics are (approximately) linear [17]. This makes the numerical approximation of the operator particularly amenable to linear control methods, such as the linear quadratic regulator (LQR) and model predictive control (MPC) [17, 18, 19]. On the other hand, the Perron-Frobenius operator propagates densities of states forward in time along trajectories of the system, which can have multiple interpretations in the controlled setting. For example, the Perron-Frobenius operator and the Liouville equation, the related PDE formulation, have been used to determine controls for agents in an ensemble or swarm formulation [20, 21, 22]. It should be noted that such formulations are closely related to optimal transport problems which also involve driving an initial distribution to a desired final distribution (see, e.g., [23, 24, 22]). Formulations involving the Perron-Frobenius operator have also been used in the context of fluid flows to study the transport of distributions of fluid particles and to detect invariant or almost invariant sets [25, 26].
Our approach involves first obtaining a finite dimensional approximation of the Perron-Frobenius generators associated with the drift and control vector fields of the system, which allow us to represent the density transport dynamics as a
bilinear system in a lifted state. With this system, we frame the density control problem as a problem of driving moments of the density function to the moments of a desired density function, where the moments of the density can be expressed as a output which is linear in the lifted state. This output tracking problem for the lifted bilinear system is then solved using differential dynamic programming (DDP), an iterative trajectory optimization scheme.
## II Preliminaries
Consider first the autonomous dynamical system on a measure space \((\mathbb{X}\subset\mathbb{R}^{n},\mathcal{A},\mu)\) with a \(\sigma\)-algebra \(\mathcal{A}\) on \(\mathbb{X}\) and \(\mu\) a measure on \((\mathbb{X},\mathcal{A})\),
\[\dot{x}=f(x) \tag{1}\]
and denote the associated time-\(t\) flow from an initial state \(x_{0}\) as \(\Phi^{t}(x_{0})\), where \(x\in\mathbb{X}\) is the state. The Perron-Frobenius operator \(\mathcal{P}^{t}:L^{1}(\mathbb{X})\mapsto L^{1}(\mathbb{X})\) associated with the flow map \(\Phi^{t}\) is defined as
\[\int_{\mathbb{A}}\left[\mathcal{P}^{t}\,\rho\right](x)dx=\int_{(\Phi^{t})^{-1} (\mathbb{A})}\rho(x)\,dx \tag{2}\]
for any \(\mathbb{A}\in\mathcal{A}\), assuming that the relevant measure \(\mu\) is absolutely continuous with respect to the Lebesgue measure and can thus be expressed in terms of a density \(\rho\) (i.e., \(d\mu(x)=\mu(dx)=\rho(x)dx\)). It can be shown that the family of these operators \(\{\mathcal{P}^{t}\}_{t\geq 0}\) form a semigroup, (see [13]). The generator of this semigroup is known as the Liouville operator, denoted \(\mathcal{L}\), or Perron-Frobenius generator and expresses the deformation of the density \(\rho\) under infinitesimal action of the operator \(\mathcal{P}^{t}\)[13, 27]. That is,
\[\frac{d\rho}{dt}=\mathcal{L}\rho=-\nabla_{x}\cdot(\rho f) \tag{3}\]
Alternatively, the action of the generator can be written in terms of the Perron-Frobenius operator as
\[\mathcal{L}\rho=\lim_{t\to 0}\frac{\mathcal{P}^{t}\rho-\rho}{t}=\lim_{t\to 0} \left(\frac{\mathcal{P}^{t}-\mathcal{I}}{t}\right)\rho \tag{4}\]
where \(\mathcal{I}\) is the identity operator.
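As a simple one-dimensional illustration of Eqs. 3-4, consider the vector field \(f(x)=-x\), whose flow is \(\Phi^{t}(x_{0})=x_{0}e^{-t}\). The generator acts as

\[\mathcal{L}\rho=-\partial_{x}\left(-x\rho\right)=\rho+x\,\partial_{x}\rho,\]

and one can check that the pushforward density \([\mathcal{P}^{t}\rho](x)=e^{t}\rho(xe^{t})\) indeed satisfies \(\frac{d}{dt}\mathcal{P}^{t}\rho=\mathcal{L}\,\mathcal{P}^{t}\rho\): trajectories contract toward the origin, so the density concentrates there while its total mass is preserved.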
**Lemma 1**: _Suppose the Liouville operator associated with a vector field \(f_{1}:\mathbb{X}\mapsto\mathbb{R}^{n}\) is denoted by \(\mathcal{L}_{1}\) and the Liouville operator associated with the vector field \(f_{2}:\mathbb{X}\mapsto\mathbb{R}^{n}\) by \(\mathcal{L}_{2}\), then the Liouville operator associated with the vector field \(f(x)=f_{1}(x)+f_{2}(x)\), is \(\mathcal{L}=\mathcal{L}_{1}+\mathcal{L}_{2}\)._
The proof is a direct consequence of Eq. 3. Suppose \(f(x)=f_{1}(x)+f_{2}(x)\). Then \(\mathcal{L}\rho=-\nabla_{x}\cdot(\rho(f_{1}+f_{2}))=-\nabla_{x}\cdot(\rho f_{1 })-\nabla_{x}\cdot(\rho f_{2})=(\mathcal{L}_{1}+\mathcal{L}_{2})\rho\).
The Koopman operator \(\mathcal{K}^{t}:L^{\infty}(\mathbb{X})\mapsto L^{\infty}(\mathbb{X})\) propagates observable functions forward in time along trajectories of the system and is defined as
\[[\mathcal{K}^{t}h](x)=[h\circ\Phi^{t}](x) \tag{5}\]
where \(h(x)\) is an observable. The Koopman and Perron-Frobenius operators are adjoint to one another,
\[\int_{\mathbb{X}}[\mathcal{K}^{t}h](x)\rho(x)dx=\int_{\mathbb{X}}h(x)[ \mathcal{P}^{t}\rho](x)dx\,. \tag{6}\]
## III Numerical approximation of the Perron-Frobenius operator and generator
Traditionally, the most common way of approximating the Perron-Frobenius operator is by a set-oriented approach known as Ulam's method, in which a domain of interest is discretized into cells and then the operator is computed by simulating a large number of trajectories over a short time and by approximating the operator as the Markov transition matrix which contains the transition probabilities between the cells. It is well known that this can be viewed as a Galerkin projection of the Perron Frobenius operator onto the function space spanned by indicator functions corresponding to the discrete cells [16]. In recent works involving numerical approximation of the Koopman operator, one of the most common approaches is that of extended dynamic mode decomposition (EDMD) [15], in which the operator is computed by solving a least squares problem, which can also be viewed as a Galerkin projection of the operator onto a function space spanned by a predefined set of basis functions [15, 16]. By exploiting the duality of the Perron Frobenius and Koopman operators, it has been shown that methods typically used for one operator can be used to compute the other [16]. Based on this idea, recent works have developed variations of EDMD for the computation of the Perron-Frobenius operator [16, 28, 29]. In this work, we also implement EDMD for the computation of the Perron-Frobenius operator, which we outline below, largely following [16].
We begin by selecting a dictionary \(\mathbb{D}\) of \(k\) scalar-valued basis functions, \(\mathbb{D}=\{\psi_{1},\psi_{2},\dots,\psi_{k}\}\), where \(\psi_{i}:\mathbb{X}\mapsto\mathbb{R}\) for \(i=1,\dots,k\), and denote by \(\mathbb{V}\) the function space spanned by the elements of \(\mathbb{D}\). We then collect trajectory data with fixed timestep, \(\Delta t\), arranged into snapshot matrices as
\[X =\begin{bmatrix}x_{1}&,&\cdots&,&x_{m}\end{bmatrix} \tag{7}\] \[Y =\begin{bmatrix}x_{1}^{+}&,&\cdots&,&x_{m}^{+}\end{bmatrix} \tag{8}\]
where the subscript \(i=1,\dots,m\) is a measurement index and \(x_{i}^{+}=\Phi^{\Delta t}(x_{i})\).
We then approximate the observable function \(h\) and density \(\rho\) in Eq. 6 by their projections onto \(\mathbb{V}\), the space spanned by the dictionary \(\mathbb{D}=\{\psi_{1},\psi_{2},\dots,\psi_{k}\}\).
\[h(x) \approx\hat{h}^{T}\Psi(x) \tag{9}\] \[\rho(x) \approx\Psi^{T}(x)\hat{\rho} \tag{10}\]
where \(\hat{h}\), \(\hat{\rho}\in\mathbb{R}^{k}\) are column vectors containing the projection coefficients and \(\Psi:\mathbb{X}\mapsto\mathbb{R}^{k}\) is a column-vector valued function where the elements are given by \([\Psi(x)]_{i}=\psi_{i}(x)\). Substituting these expansions into Eq. (6), we have
\[\int_{\mathbb{X}}\mathcal{K}^{\Delta t}[\hat{h}^{T}\Psi]\Psi^{T}\hat{\rho}\,dx= \int_{\mathbb{X}}\hat{h}^{T}\Psi\mathcal{P}^{\Delta t}[\Psi^{T}\hat{\rho}]\,dx\,. \tag{11}\]
Then replacing \([\mathcal{K}^{\Delta t}\Psi](x)=\Psi(x^{+})\) and assuming that \(\mathcal{P}^{\Delta t}\) can be approximated by a matrix \(P\) operating on the coordinates \(\hat{\rho}\), and evaluating this on the collected data, we have
\[\Psi(x_{i}^{+})\Psi^{T}(x_{i})=\Psi(x_{i})\Psi^{T}(x_{i})P+e_{i} \tag{12}\]
for \(i=1,\ldots,m\), where \(e_{i}\) is the residual error arising due to the matrix approximation of \(\mathcal{P}^{\Delta t}\). This can then be posed as a least-squares problem for the matrix \(P\)
\[\min_{P}\|\Psi_{Y}\Psi_{X}^{T}-\Psi_{X}\Psi_{X}^{T}P\|_{2}^{2} \tag{13}\]
where \(\Psi_{X}\), \(\Psi_{Y}\in\mathbb{R}^{k\times m}\) are matrices whose columns contain \(\Psi\) evaluated on the columns of \(X\) and \(Y\), respectively. The analytical solution of this least squares problem is
\[P=\left(\Psi_{X}\Psi_{X}^{T}\right)^{\dagger}\Psi_{Y}\Psi_{X}^{T} \tag{14}\]
where \((\cdot)^{\dagger}\) is the Moore-Penrose pseudoinverse.
Given this matrix approximation of the operator, \(P\), if the timestep \(\Delta t\) chosen in the data collection is sufficiently small, the corresponding matrix approximation \(L\) of the Perron-Frobenius generator can be obtained from the limit definition of the generator in Eq. 4 as
\[L\approx\frac{P-I_{k}}{\Delta t} \tag{15}\]
where \(I_{k}\) is the \(k\times k\) identity matrix. Just as the matrix operator \(P\) approximates the propagation of a density function \(\rho\) by advancing the projection coordinates \(\hat{\rho}\) forward for a finite time, the approximation of the generator allows us to approximate the infinitesimal action of the operator \(\mathcal{P}^{t}\) through the time derivative of the projection coordinates
\[\frac{d\hat{\rho}}{dt}=L\hat{\rho}\,. \tag{16}\]
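As an illustration, a minimal NumPy sketch of this procedure — building a Gaussian RBF dictionary, solving the least-squares problem of Eqs. 13-14 for \(P\), and forming the finite-difference generator of Eq. 15 — might look as follows. The function names and the use of a plain pseudoinverse are illustrative choices, not part of the original formulation.

```python
import numpy as np

def rbf_dictionary(centers, s):
    """Return a map from states (n, d) to a feature matrix (k, n) built from
    Gaussian radial basis functions with centers (k, d) and width s (Eq. 21)."""
    def psi(x):
        x = np.atleast_2d(x)                                        # (n, d)
        d2 = ((x[None, :, :] - centers[:, None, :]) ** 2).sum(-1)   # (k, n)
        return np.exp(-d2 / (2.0 * s ** 2))
    return psi

def edmd_perron_frobenius(X, Y, psi, dt):
    """Estimate the matrix P approximating the Perron-Frobenius operator (Eq. 14)
    and its generator L (Eq. 15) from snapshot pairs X, Y of shape (m, d)."""
    Psi_X = psi(X)                       # (k, m)
    Psi_Y = psi(Y)                       # (k, m)
    G = Psi_X @ Psi_X.T                  # Gram matrix
    A = Psi_Y @ Psi_X.T                  # cross matrix
    P = np.linalg.pinv(G) @ A            # least-squares solution of Eq. 13
    L = (P - np.eye(P.shape[0])) / dt    # finite-difference generator, Eq. 15
    return P, L
```

In practice a regularised solve (e.g. ridge regression) could replace the pseudoinverse when the Gram matrix is ill-conditioned; the sketch keeps the plain formula of Eq. 14 for clarity.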
### _Extension to controlled systems_
In the context of applying the Koopman operator to control systems, several recent works have noted the usefulness of formulating the problem in terms of the Koopman generator, rather than the Koopman operator itself [30, 31, 32, 33, 34], which typically results in a lifted system that is bilinear in the control and lifted state. This approach allows for a better approximation of the effects of control, especially for systems in control-affine form
\[\dot{x}=f(x)+\sum_{i=1}^{n_{c}}g_{i}(x)u_{i} \tag{17}\]
as it expresses the effect of the control vector fields \(g_{i}\) in a way that is also dependent on the lifted state. Here we apply a similar approach to the density transport problem, expressed in terms of the Perron-Frobenius generator. As shown in Ref. [32] for the Koopman generator, by the property of the Perron-Frobenius generator given in Lemma 1, if the dynamics are control-affine, then the generators are also control affine, as can be seen by application of Eq. 3. This leads to density transport dynamics of the following form
\[\frac{d}{dt}\rho(x)=(\mathcal{L}_{0}\rho)(x)+\sum_{i=1}^{n_{c}}u_{i}(\mathcal{ B}_{i}\rho)(x) \tag{18}\]
where \(\mathcal{L}_{0}\) is the Perron Frobenius generator associated with the vector field \(f(x)\) and similarly, the \(\mathcal{B}_{i}\) are the Perron Frobenius generators associated with the control vector fields \(g_{i}(x)\). Therefore, given the finite dimensional approximation of these generators, we can approximate the density transport dynamics as
\[\frac{d\hat{\rho}}{dt}=L_{0}\hat{\rho}+\sum_{i=1}^{n_{c}}u_{i}B_{i}\hat{\rho} \tag{19}\]
where the matrices \(L_{0}\) and \(B_{i}\) are the matrix approximations of the operators in Eq. 18.
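Assuming the matrices \(L_{0}\) and \(B_{i}\) in Eq. 19 are available (by Lemma 1 they can be obtained, for example, by applying the construction above to data generated with zero control and with each control input held at a constant unit value, and subtracting), the lifted bilinear dynamics can be integrated with a simple explicit scheme. The forward-Euler step in the sketch below is purely illustrative.

```python
import numpy as np

def simulate_density_coords(rho0_hat, L0, B_list, u_traj, dt):
    """Integrate the bilinear lifted dynamics of Eq. 19,
        d(rho_hat)/dt = L0 rho_hat + sum_i u_i B_i rho_hat,
    with a forward-Euler step.  u_traj has shape (H, n_c)."""
    rho_hat = np.asarray(rho0_hat, dtype=float).copy()
    history = [rho_hat.copy()]
    for u in u_traj:
        A = L0 + sum(ui * Bi for ui, Bi in zip(u, B_list))
        rho_hat = rho_hat + dt * (A @ rho_hat)   # a matrix exponential would be more accurate
        history.append(rho_hat.copy())
    return np.array(history)
```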
A similar approximation in terms of the Perron-Frobenius generator has been implemented with control [35, 36] in order to exploit a convex formulation of an optimal control problem in terms of densities.
### _Propagation of moments_
In order to make the problem of controlling a density function using a finite dimensional control input well-posed, we formulate the problem as a control problem of a finite number of outputs. In particular, we will describe the density function, in terms of a finite number of its moments. For a scalar \(x\), recall that the \(i^{\text{th}}\) raw moment, \(m_{i}\) is defined as \(m_{i}=\int_{\mathbb{X}}x^{i}\rho(x)dx\) and the \(i^{\text{th}}\) central moment, \(\mu_{i}\) about the mean \(m_{1}\) is \(\mu_{i}=\int_{\mathbb{X}}(x-m_{1})^{i}\rho(x)dx\).
Given a projection of the density as in Eq. 10, the mean is approximated as
\[m_{1}^{i}=\int x_{i}\rho(x)dx=\hat{\rho}^{T}\int x_{i}\Psi(x)dx \tag{20}\]
which simply indicates that the mean of the density can be written as a summation of the means of the dictionary functions, weighted by the projection coefficients. For higher moments, if the central moment is considered, it will be polynomial in the projection coefficients due to its dependence on the mean, whereas the raw moments remain linear in the projection coefficients. For this reason, we choose to work with the raw moments, as the central moments can also be expressed in terms of the raw moments.
In the numerical examples shown later, the dictionary \(\mathbb{D}\) consists of Gaussian radial basis functions of the form
\[\psi_{l}(x)=\exp\left(-\frac{(x-c_{l})^{T}(x-c_{l})}{2s^{2}}\right) \tag{21}\]
where \(c_{l}\) is the center of the \(l^{\text{th}}\) basis function, and \(s\) is a scaling parameter affecting the spread. For this dictionary, the mean is
\[m_{1}^{i}=2\pi s^{2}\hat{\rho}^{T}c^{i} \tag{22}\]
where the superscript \(i\) refers to the coordinate index of the state vector, not exponentiation, and \(c^{i}\) is a column vector containing the \(i^{\text{th}}\) coordinate of the basis function centers. Similarly, the second raw moment can be written as
\[m_{2}^{ij}=\int x^{i}x^{j}\rho(x)dx=\hat{\rho}^{T}\int x^{i}x^{j}\Psi(x)dx \tag{23}\]
where the last integral reduces to
\[\int x^{i}x^{j}\psi_{l}(x)dx=\begin{cases}2\pi s^{2}(s^{2}+(c_{l}^{i})^{2})&i=j \\ 2\pi s^{2}c_{l}^{i}c_{l}^{j}&i\neq j\end{cases}\]
for a given basis function \(\psi_{l}(x)\) where, again, superscripts \(i\) and \(j\) are coordinate indices.
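Since the first and second raw moments are linear in \(\hat{\rho}\), they can be collected into a single output matrix \(C\) with \(y=C\hat{\rho}\), which is used in the control formulation of Sec. IV. The following sketch assembles \(C\) for a two-dimensional state and the Gaussian RBF dictionary above; the ordering of the moment rows is an illustrative convention.

```python
import numpy as np

def moment_output_matrix(centers, s):
    """Assemble C with y = C rho_hat, stacking the first raw moments (Eq. 22)
    and the second raw moments (Eq. 23) for a 2-D state.
    centers has shape (k, 2); s is the RBF width."""
    pref = 2.0 * np.pi * s ** 2
    rows = []
    # first raw moments m1^i, i = 0, 1
    for i in range(2):
        rows.append(pref * centers[:, i])
    # second raw moments m2^{ij} for (i, j) in {(0,0), (0,1), (1,1)}
    for i, j in [(0, 0), (0, 1), (1, 1)]:
        if i == j:
            rows.append(pref * (s ** 2 + centers[:, i] ** 2))
        else:
            rows.append(pref * centers[:, i] * centers[:, j])
    return np.vstack(rows)        # shape (5, k)
```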
### _Numerical example_
To illustrate the ability of the proposed framework to propagate density functions forward in time, we consider the propagation of an initial density for a forced Duffing oscillator system, given by
\[\frac{d}{dt}\begin{pmatrix}x_{1}\\ x_{2}\end{pmatrix}=\begin{pmatrix}x_{2}\\ x_{1}-x_{1}^{3}+u\end{pmatrix} \tag{24}\]
where \(u\) is the control input. For the purpose of this simulation, we set \(u(t)=\sin(4\pi t)\) and the prediction results are shown in Figs. 1-2. For the generator calculation, a dictionary of Gaussian radial basis functions, as shown in Eq. 21, is used where the centers lie on an evenly spaced \(30\times 30\) grid ranging from \(-2.5\) to \(2.5\) in \(x_{1}\) and \(x_{2}\). The operators are approximated using data collected from short-time trajectories with \(\Delta t=0.005\) for a \(50\times 50\) grid of initial conditions on the same region. The predicted moment is compared to the sample moment obtained from \(1000\) trajectories from initial conditions sampled according to the initial density. We see that the moment propagation of the proposed method remains accurate for approximately 3 seconds, which motivates the use of this method in a control formulation, as detailed in the following sections. Also shown for comparison in Fig. 2 is a linear prediction, which is computed by propagating the initial Gaussian through a linearization of Eq. 24, where the linearization is re-computed at each timestep about the predicted mean, as is commonly done in the a priori prediction step of an extended Kalman filter.
## IV Control formulation
We have shown in Sec. III-A and III-B that the problem of steering a density \(\rho\) to a desired density can be expressed as an output tracking problem on a lifted, bilinear system given by Eq. 19, where the projection coefficients \(\hat{\rho}\) can be interpreted as the lifted state. Then, if the raw moments are taken to be the relevant output, the output, \(y\) is linear in the lifted state, \(y=C\hat{\rho}\), where the elements of the output matrix \(C\) are given by rewriting Eqs. 20, 23 in matrix form.
For the optimal output tracking problem, we consider a discrete time optimal control problem
\[\min_{u_{1},u_{2},\ldots,u_{H-1}}\sum_{t=1}^{H-1}l(\hat{\rho}_{t}, u_{t})+l_{H}(\hat{\rho}_{H}) \tag{25a}\] \[\mathrm{s.t.}\qquad\hat{\rho}_{t+1}=F(\hat{\rho}_{t},u_{t})\] (25b) \[y_{t}=C\hat{\rho}_{t} \tag{25c}\]
where \(H\) is the number of timesteps in the time horizon and Eq. 25b represents the discrete time version of Eq. 19.
In particular, for output tracking, we consider a quadratic cost of the form
\[l(\hat{\rho}_{t},u_{t})=(y_{t}-y_{t}^{\mathrm{ref}})^{T}S(y_{t}-y_{t}^{ \mathrm{ref}})+u_{t}^{T}Ru_{t} \tag{26}\]
where \(S\) and \(R\) are weighting matrices which define the relative penalty on tracking error and control effort, respectively. Since the output \(y\) is linear in the lifted state \(\hat{\rho}_{t}\), this cost can be rewritten as a quadratic cost in terms of \(\hat{\rho}_{t}\), with an added linear term.
Fig. 1: Moment propagation of the proposed method for a Duffing oscillator with sinusoidal forcing. Red points show trajectories from initial conditions sampled from the initial density, \(\rho(x(0))\sim\mathcal{N}([-0.5;1],0.05I)\). Red circle and red ellipse show the sample mean and \(2\sigma\) sample covariance ellipse, respectively. The black circle and black ellipse are the predicted mean and \(2\sigma\) covariance ellipse.
Fig. 2: Moment propagation for a forced Duffing oscillator. Left, top: First raw moment (mean). Left, bottom: sinusoidal control signal. Right: 2nd raw moment. The proposed method is labelled PF prediction.
It is well known that for optimal control problems on bilinear systems with quadratic cost, an effective way of solving the problem is by iteratively linearizing and solving a finite-time linear quadratic regulator (LQR) problem about a nominal trajectory, utilizing the Riccati formulation of that problem [37]. For this reason, we solve the optimal control problem using differential dynamic programming (DDP) [38, 39], which is closely related to the method of iterative LQR. We briefly recount the primary steps of this algorithm below.
DDP computes a locally optimal control around a nominal trajectory by minimizing a quadratic approximation of the value function along this trajectory, and then doing this iteratively about the new trajectories obtained by applying the locally optimal control. First define the value function \(V(\hat{\rho}_{t},t)\) at time \(t\) as,
\[V(\hat{\rho}_{t},t)=\min_{u_{t}}[l(\hat{\rho}_{t},u_{t})+V(\hat{\rho}_{t+1},t+1)] \tag{27}\]
which expresses the optimal cost-to-go from \(\hat{\rho}_{t}\), where \(V(\hat{\rho}_{H},H)=l_{f}(\hat{\rho}_{H})\). Denote by \(Q(\delta\hat{\rho},\delta u)\) the change in the value function due to applying change in control input \(\delta u\) about the nominal trajectory and consider its quadratic approximation
\[\begin{split} Q(\delta\hat{\rho},\delta u)& \approx Q_{\hat{\rho}}\delta\hat{\rho}+Q_{u}^{T}\delta u+\delta\hat{ \rho}^{T}Q_{\hat{\rho}u}\delta u\\ &\quad+\frac{1}{2}\delta\hat{\rho}^{T}Q_{\hat{\rho}\hat{\rho}} \delta\hat{\rho}+\frac{1}{2}\delta u^{T}Q_{uu}\delta u\end{split} \tag{28}\]
where these derivatives are given by
\[\begin{split} Q_{\hat{\rho}}&=l_{\hat{\rho}}+F_{ \hat{\rho}}^{T}V_{\hat{\rho}}^{\prime}\\ Q_{u}&=l_{u}+F_{u}^{T}V_{\hat{\rho}}^{\prime}\\ Q_{\hat{\rho}\hat{\rho}}&=l_{\hat{\rho}\hat{\rho} }+F_{\hat{\rho}}^{T}V_{\hat{\rho}\hat{\rho}}^{\prime}F_{\hat{\rho}}+V_{\hat{ \rho}}^{\prime}\cdot F_{\hat{\rho}\hat{\rho}}\\ Q_{uu}&=l_{uu}+F_{u}^{T}V_{\hat{\rho}\hat{\rho}}^ {\prime}F_{u}+V_{\hat{\rho}}^{\prime}\cdot F_{uu}\\ Q_{\hat{\rho}u}&=l_{\hat{\rho}u}+F_{\hat{\rho}}^ {T}V_{\hat{\rho}\hat{\rho}}^{\prime}F_{u}+V_{\hat{\rho}}^{\prime}\cdot F_{\hat{ \rho}u}\end{split}\]
where the notation \((\cdot)^{\prime}\) indicates the next time step. The algorithm proceeds by computing these derivatives by recursing backward in time along the nominal trajectory from the end of the horizon. At each iteration, the control policy is improved by optimizing this quadratic expansion with respect to \(\delta u\)
\[\delta u^{*}=\arg\min_{\delta u}Q(\delta\hat{\rho},\delta u)=-Q_{uu}^{-1}\left( Q_{u}+Q_{u\hat{\rho}}\delta\hat{\rho}\right) \tag{29}\]
This can be seen as providing a descent direction in the space of control policies. An updated nominal control is then computed by a linesearch over a stepsize parameter \(\alpha\) to update the policy, that is
\[u_{\text{new}}=u-\alpha Q_{uu}^{-1}Q_{u}-Q_{uu}^{-1}Q_{u\hat{\rho}}\delta\hat{\rho}\]
and this new control is applied to obtain a new nominal trajectory, and this procedure is iterated until the relative change in cost falls to less than a specified tolerance. For full details of the algorithm, the reader should refer to Refs. [38, 39].
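To make the backward-forward structure of this procedure concrete, the sketch below implements an iLQR-style variant of the recursion for the discrete-time bilinear lifted system. The second-order dynamics terms (\(V^{\prime}_{\hat{\rho}}\cdot F_{\hat{\rho}\hat{\rho}}\), etc.) are dropped, and the linesearch is replaced by a fixed step size, so this is a simplified illustration rather than the full DDP algorithm of Refs. [38, 39]; all variable names are illustrative.

```python
import numpy as np

def ilqr_bilinear(rho0, L0, B_list, C, y_ref, S, R, S_T, dt, H, n_iter=50, alpha=0.5):
    """iLQR-style sketch for rho_{t+1} = (I + dt*(L0 + sum_i u_i B_i)) rho_t,
    output y = C rho, stage cost (y - y_ref)^T S (y - y_ref) + u^T R u and a
    quadratic terminal cost weighted by S_T.  y_ref has shape (H, p)."""
    k, n_c = rho0.size, len(B_list)
    Ik = np.eye(k)

    def step(rho, u):
        A = L0 + sum(u[i] * B_list[i] for i in range(n_c))
        return rho + dt * (A @ rho)

    u_traj = np.zeros((H - 1, n_c))
    for _ in range(n_iter):
        # forward rollout of the nominal trajectory
        rho_traj = [rho0]
        for t in range(H - 1):
            rho_traj.append(step(rho_traj[-1], u_traj[t]))
        # backward pass: quadratic expansion of the value function
        e_T = C @ rho_traj[-1] - y_ref[-1]
        V_r, V_rr = 2 * C.T @ S_T @ e_T, 2 * C.T @ S_T @ C
        k_ff, K_fb = [], []
        for t in reversed(range(H - 1)):
            rho, u = rho_traj[t], u_traj[t]
            A = L0 + sum(u[i] * B_list[i] for i in range(n_c))
            F_r = Ik + dt * A                                      # d rho'/d rho
            F_u = dt * np.stack([Bi @ rho for Bi in B_list], 1)    # d rho'/d u
            e = C @ rho - y_ref[t]
            Q_r = 2 * C.T @ S @ e + F_r.T @ V_r
            Q_u = 2 * R @ u + F_u.T @ V_r
            Q_rr = 2 * C.T @ S @ C + F_r.T @ V_rr @ F_r
            Q_uu = 2 * R + F_u.T @ V_rr @ F_u
            Q_ur = F_u.T @ V_rr @ F_r
            Q_uu_inv = np.linalg.inv(Q_uu)
            k_ff.append(-Q_uu_inv @ Q_u)
            K_fb.append(-Q_uu_inv @ Q_ur)
            V_r = Q_r + K_fb[-1].T @ Q_uu @ k_ff[-1] + K_fb[-1].T @ Q_u + Q_ur.T @ k_ff[-1]
            V_rr = Q_rr + K_fb[-1].T @ Q_uu @ K_fb[-1] + K_fb[-1].T @ Q_ur + Q_ur.T @ K_fb[-1]
        k_ff.reverse(); K_fb.reverse()
        # forward pass with a fixed step size (a linesearch would adapt alpha)
        rho, new_u = rho0, np.zeros_like(u_traj)
        for t in range(H - 1):
            new_u[t] = u_traj[t] + alpha * k_ff[t] + K_fb[t] @ (rho - rho_traj[t])
            rho = step(rho, new_u[t])
        u_traj = new_u
    return u_traj
```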
## V Validation and Results
Here we consider two examples of the density control formulation using Perron Frobenius generators.
### _Example 1: Forced Duffing oscillator_
As a first example, we consider the forced Duffing oscillator of Eq. 24, and we use DDP to determine a control sequence to steer a Gaussian initial density \(\rho(x(0))\sim\mathcal{N}([1;1],0.025I)\) toward the equilibrium point at \((1,0)\) over a time horizon of 2s. The generators are approximated using the same data as described in Sec. III-C and the same timestep of \(0.005\)s is used for DDP. In differential dynamic programming, the in-horizon cost weights on the reference error of the first two moments and control error are all set to unity. The terminal cost on the reference error in the moments is set to \(1000\), and only the first two moments are considered. The target raw second moment is computed from the desired mean, with the desired variance taken to be zero. The results of this computation are shown in Fig. 3-5.
In Fig. 6, we show a comparison of the performance of the proposed controller, labelled 'PF-DDP' with a standard DDP controller, which computes a control on the Duffing system directly (rather than a lifted state), with the initial condition being the mean. That is, the standard DDP controller acts on the mean as if it were a deterministic initial condition of the system. The comparison shown is the mean and one standard deviation region from trajectories from a set of 500 initial conditions sampled from the initial distribution, with each of the respective controllers applied. We see that the proposed controller moves the mean of the distribution closer to the reference, while maintaining nearly the same variance as the standard DDP controller.
Fig. 3: Controlled transport through the Duffing system from an initial density \(\rho(x(0))\sim\mathcal{N}([1;1],0.025I)\) toward the equilibrium point at \((1,0)\) (green circle). Red circle and red ellipse show the sample mean and 2\(\sigma\) sample covariance ellipse, respectively. The black circle and black ellipse are the predicted mean and \(2\sigma\) covariance ellipse.
### _Example 2: Rotor-driven Stokes flow_
As a second example, we consider the problem of steering a distribution of fluid particles in a Stokes flow, where the flow is produced by two micro-rotors. The micro-rotors are modelled as rotlets, the Stokes flow singularity associated with a point torque in the fluid. For a collection of \(N_{r}\) rotlets, this flow is given by
\[\frac{d\mathbf{x}}{dt}=\sum_{i=1}^{N_{r}}\frac{T_{i}\times(\mathbf{x}-\bar{ \mathbf{x}}_{i})}{\|\mathbf{x}-\bar{\mathbf{x}}_{i}\|^{3}} \tag{30}\]
where \(\bar{\mathbf{x}}_{i}\) and \(T_{i}\) denote the location and torque of the \(i\)-th rotor, respectively. We consider the case where there are two such rotors lying in the \(x_{1}\)-\(x_{2}\) plane, located at \(\bar{\mathbf{x}}_{\mathbf{1}}=(-1,0)\) and \(\bar{\mathbf{x}}_{\mathbf{2}}=(1,0)\) and the controls for the problem are taken to be torques \(u_{1}=T_{1}\), \(u_{2}=T_{2}\), where the direction of these torques is taken to be normal to the \(x_{1}\)-\(x_{2}\) plane (in the positive \(x_{3}\) direction). The streamlines for such a flow are illustrated in Fig. 7 for the case where \(u_{1}=-1\), \(u_{2}=1\).
For this example, the task is to drive a Gaussian initial density \(\rho(x(0))\sim\mathcal{N}([1;1],0.025I)\) toward a target mean at \((-1,-1)\) over a time horizon of 2s. Again, we consider a timestep of \(\Delta t=0.005\)s for both the computation of the generators and for DDP. The in-horizon cost weights for the mean error are set to 2, while the weights for the second moment error and control effort are unity. The terminal cost weight on the mean error is 1000 and the terminal cost weight on the second moment error is 500. Results from this computation are shown in Figs. 4, 8, 9.
Fig. 4: Controlled transport of fluid particles, driven by two rotlets, or micro-rotors, in a Stokes flow from an initial density \(\rho(x(0))\sim\mathcal{N}([1;1],0.025I)\) toward a target mean (green circle) at \((-1,-1)\). The rotors are located at \((-1,0)\) and \((1,0)\), as indicated by the black circle-cross. Red circle and red ellipse show the sample mean and 2\(\sigma\) sample covariance ellipse, respectively. The black circle and black ellipse are the predicted mean and \(2\sigma\) covariance ellipse. Gray streamlines indicate the flow field produced by the rotlets at the instant shown.
Fig. 5: Control of raw moments for the forced Duffing system. The black line indicates the target (reference). Left, bottom: control from differential dynamic programming for the generator system. Right: 2nd raw moment. Predicted values are from the Perron-Frobenius generator computation. True values are given by the sample moment.
Fig. 6: Comparison of the proposed Perron-Frobenius generator DDP with standard DDP for the forced Duffing system. Shaded region indicates one standard deviation.
We see in Fig. 9 that DDP yields a control which drives the mean to the target by first giving a significant counter-clockwise torque on the right rotor to drive the density to a point between the rotors, at which point the left rotor is initiated to drive the flow with a clockwise torque, pulling the density mean near to the target.
This example demonstrates an alternative physical interpretation of the density transport problem, where the density represents a distribution of fluid particles. This also demonstrates the effectiveness of the proposed method on a system which is linear in the controls, but in which the control vector fields are nonlinear.
## VI Conclusion
In this work, we have studied the problem of transporting density functions of states through a controlled dynamical system. This problem formulation has applications both in fluid mechanics and in the control of uncertain systems. Our approach is based on approximations of the Perron-Frobenius operator, whereby we show that approximations of this operator and its generator can be used to model the density transport dynamics as a high-dimensional system which is bilinear in the lifted state and the control. We demonstrated this approach on two examples, a forced Duffing system, in which the density can have the interpretation as an uncertainty in the initial state and on a rotor driven Stokes flow, in which the density formulation takes on the fluid mechanics interpretation of describing a distribution of fluid particles. Future work in these areas could include extending the proposed control formulation for use in a constrained model predictive control framework for uncertain systems or by studying the fluid transport by more realistic biological microswimmers or artificial microrobots.
|
2308.05346 | Towards General and Fast Video Derain via Knowledge Distillation | As a common natural weather condition, rain can obscure video frames and thus
affect the performance of the visual system, so video derain receives a lot of
attention. In natural environments, rain has a wide variety of streak types,
which increases the difficulty of the rain removal task. In this paper, we
propose a Rain Review-based General video derain Network via knowledge
distillation (named RRGNet) that handles different rain streak types with one
pre-training weight. Specifically, we design a frame grouping-based
encoder-decoder network that makes full use of the temporal information of the
video. Further, we use the old task model to guide the current model in
learning new rain streak types while avoiding forgetting. To consolidate the
network's ability to derain, we design a rain review module to play back data
from old tasks for the current model. The experimental results show that our
developed general method achieves the best results in terms of running speed
and derain effect. | Defang Cai, Pan Mu, Sixian Chan, Zhanpeng Shao, Cong Bai | 2023-08-10T05:27:43Z | http://arxiv.org/abs/2308.05346v1 | # Towards General and Fast Video Derain via Knowledge Distillation
###### Abstract
As a common natural weather condition, rain can obscure video frames and thus affect the performance of the visual system, so video derain receives a lot of attention. In natural environments, rain has a wide variety of streak types, which increases the difficulty of the rain removal task. In this paper, we propose a Rain Review-based General video derain Network via knowledge distillation (named RRGNet) that handles different rain streak types with one pre-training weight. Specifically, we design a frame grouping-based encoder-decoder network that makes full use of the temporal information of the video. Further, we use the old task model to guide the current model in learning new rain streak types while avoiding forgetting. To consolidate the network's ability to derain, we design a rain review module to play back data from old tasks for the current model. The experimental results show that our developed general method achieves the best results in terms of running speed and derain effect.
Video derain, knowledge distillation, deep learning.
## I Introduction
Rain is one of the most common types of natural weather. In rainy conditions, rainfall can prevent photographic equipment from clearly capturing the background environment and degrade subsequent vision techniques. Removing the effects of rain from captured footage has become a widely researched problem, and many effective deep learning methods already exist. However, these methods require different pre-training weights for different rain streak types to work optimally, making it challenging to apply them to real-world environments.
The derain task is divided into image derain and video derain depending on the input. Recently, many effective approaches have been proposed in the field of image derain, such as adaptive filtering [3], sparse coding [4], dictionary learning [5], data-driven methods [6, 7], and deep learning methods [8, 9, 10, 11, 12, 13]. Compared to image derain, video derain requires better use of temporal information, which poses additional challenges. Some early video derain studies attempted to complete derain tasks based on physical priors, such as frequency-domain methods [14], low-rank structure methods [15, 16], sparse matrix methods [17, 18], the blurred Gaussian model [19], tensor structures [20, 21], and rain streak priors [22]. All these traditional methods require the input to satisfy certain physical assumptions, and they struggle to handle complex natural rain streaks.
In recent years, along with the development of deep learning, data-based methods have been receiving attention from many researchers. The most common deep learning derain method is to separate the rain layers using convolution, e.g. [1, 23], where Zhang \(et\,al.\)[1] takes advantage of deep residual networks and LSTM convolution to effectively combine spatial and temporal features. Mu \(et\,al.\)[23] proposes a dynamic model based on a NAS search structure to address the shortcomings of the CNN approach. To make better use of the temporal information in the video, some methods align the video frames before deraining, e.g. [24, 25, 26, 27], where Su \(et\,al\). [26] and Yang \(et\,al\). [24] complete the frame alignment using optical flow. Yan \(et\,al\). [25] instead uses deformable convolution because of the instability of lighting in rainy conditions. Some researchers try to combine deep learning methods and model-driven methods, such as [2, 28, 29]. Yang \(et\,al\). [2] combines adversarial learning and physical priors to design a two-stage progressive network that handles rain accumulation.
Fig. 1: The proposed method is compared with various SOTA video derain methods on a real-world video. It is clear that our method achieves the best rain removal results.
However, none of these methods can handle all types of rain streaks using one pre-trained model. This is because all of the above deep learning methods suffer from the problem of catastrophic forgetting: once a model has partially memorized rain streak knowledge, learning a completely new type of rain streak causes it to immediately forget how the previous derain task was handled. This makes it necessary to prepare different pre-training weights for different rain streaks, which greatly limits the application of derain networks in real-world settings.
To mitigate this problem, we develop a Rain Review-based General video derain Network via knowledge distillation (named RRGNet) that handles different rain streak types with one pre-training weight. To make better use of the temporal information of the video, we propose a frame grouping based encoder-decoder network which can extract rain streak information at different frame rates. We also design a feature and response distillation module which effectively preserves the model's memory of old tasks. To further retain past knowledge, we design a rain review module to generate rain streaks of old tasks; these rain streaks help the model review knowledge of old tasks. Our contribution is summarised as follows:
* We propose a general video derain network via feature and response distillation that handles different rain streak types with one pre-training weight.
* We design a simple and effective review module that converts the extracted residuals into old task rain streaks. The review module helps the derain network review the old task without viewing the old task data.
* A large number of experiments demonstrate that the proposed method outperforms other SOTA methods in terms of rain removal effect and running speed, while using only a single model.
## II Proposed Method
### _The Overall Framework_
In this work, our goal is to remove different types of rain streaks using one pre-trained model. To this end, we propose a solution based on knowledge distillation. Fig. 2 illustrates the training flow of our proposed method. During the training phase, our approach consists of a student net (current network) \(N^{j}\) and a teacher net (previous stage network) \(N^{j-1}\); during the evaluation phase, the student network \(N^{j}\) completes the derain task alone. Their input is a video consisting of multiple consecutive frames. We divide the videos into batches, each batch consisting of 5 frames (\(\{\mathbf{X}_{t-2},\mathbf{X}_{t-1},\mathbf{X}_{t},\mathbf{X}_{t+1},\mathbf{X}_{t+2}\}\)), where \(\mathbf{X}_{t}\) is the frame to be derained and the other frames provide temporal information for the model. The final output of the model is the rain-free background \(\mathbf{B}\). The review network \(\mathcal{R}_{e}\) takes as input the residuals extracted by the derain network and outputs a rain map of the old task.
### _Frame Grouping Module_
The temporal information is crucial for video deraining. This is because a real rainy video contains various types of rain streaks, while the background information remains constant. Leveraging this common background information can enhance the model's capability to extract rain streaks. Moreover, the running speed is a crucial factor to make the rain removal method feasible in practice. However, how to effectively and efficiently utilize the temporal information poses a challenging issue in video deraining.
Fig. 2: Training flow chart of our method. It should be noted that during the evaluation phase the derain network \(\mathcal{N}^{j}\) does all the rain removal work alone.
To improve the model's ability to perceive video information, we propose the Frame Grouping Module (FGM). In contrast to traditional U-Net architectures, we use two encoders to extract different temporal information depending on the frame rate. This can be expressed as:
\[\begin{array}{l}\mathcal{F}_{1}=\Psi_{1}(\mathbf{X}_{t-1},\mathbf{X}_{t}, \mathbf{X}_{t+1}),\\ \mathcal{F}_{2}=\Psi_{2}(\mathbf{X}_{t-2},\mathbf{X}_{t},\mathbf{X}_{t+2}),\\ \mathcal{F}=\mathcal{F}_{1}+\mathcal{F}_{2},\end{array} \tag{1}\]
where \(\Psi_{1}\) and \(\Psi_{2}\) denote the two encoders and \(\mathcal{F}\) represents the high-dimensional features they extract. This allows our network to quickly discover the differences between the central frame and the other frames, thus improving the quality of the high-dimensional features.
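A minimal PyTorch sketch of Eq. 1 is given below; the layer types and channel sizes are illustrative assumptions, since the exact encoder architecture is not specified here.

```python
import torch
import torch.nn as nn

class FrameGroupingModule(nn.Module):
    """Sketch of Eq. 1: two encoders process the centre frame grouped with its
    immediate neighbours and with frames two steps away; features are summed."""
    def __init__(self, feat_ch=64):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(9, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            )
        self.enc1 = encoder()   # Psi_1 for {X_{t-1}, X_t, X_{t+1}}
        self.enc2 = encoder()   # Psi_2 for {X_{t-2}, X_t, X_{t+2}}

    def forward(self, frames):
        # frames: [X_{t-2}, X_{t-1}, X_t, X_{t+1}, X_{t+2}], each of shape (B, 3, H, W)
        g1 = torch.cat([frames[1], frames[2], frames[3]], dim=1)
        g2 = torch.cat([frames[0], frames[2], frames[4]], dim=1)
        return self.enc1(g1) + self.enc2(g2)    # F = F_1 + F_2
```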
### _Iterative Feature and Response Distillation_
Most existing derain methods require the use of different pre-trained models to achieve optimal rain removal when dealing with different rain streak types. To address the above issues, we propose an iterative feature and response distillation training scheme. Given a training sample of \(\mathcal{D}_{all}=\{(\mathbf{X}^{j},\mathbf{Y}^{j})\}_{j=1}^{K}\), where \(\{\mathbf{X}^{1},\mathbf{X}^{2},...,\mathbf{X}^{K}\}\) represents the different types of rain video and \(\{\mathbf{Y}^{1},\mathbf{Y}^{2},...,\mathbf{Y}^{K}\}\) represents the corresponding clean background frame. \(K\) denotes the total number of rain types. The training samples at stage \(j\) can be denoted as \(\mathcal{D}^{j}=\{(\mathbf{X}^{j},\mathbf{Y}^{j})\}\). Once the model can handle the rain streaks in the current dataset, we will use a new type of rain streak as the dataset \(\mathcal{D}^{j+1}\) and start the next training stage.
When \(j=1\), we initialize our derain models by conventional training methods, and the loss function can be:
\[\begin{array}{l}\theta_{t}^{1}=\{\mathbf{X}_{t-2}^{1},\mathbf{X}_{t-1}^{1},\mathbf{X}_{t}^{1},\mathbf{X}_{t+1}^{1},\mathbf{X}_{t+2}^{1}\},\\ \mathcal{L}_{C}=\mathcal{L}(\mathcal{N}^{1}(\theta_{t}^{1}),\mathbf{Y}_{t}^{1}),\end{array} \tag{2}\]
where \((\mathbf{X}_{t-2}^{1},...,\mathbf{X}_{t+2}^{1},\mathbf{Y}_{t}^{1})\) indicates the data in \(\mathcal{D}^{1}\), \(\theta_{t}^{1}\) represents the set of input frames, \(\mathcal{N}^{1}\) denotes the derain model at stage \(1\) and \(\mathcal{L}\) is the loss function which consists of the \(L_{1}\) norm and negative SSIM losses (\(\mathcal{L}=\sigma_{1}*\ell_{ssim}+\sigma_{2}*\ell_{1}\), where \(\ell_{1}\) denotes the \(L_{1}\) norm, \(\ell_{ssim}\) denotes the negative SSIM loss, and \(\sigma_{1}\) and \(\sigma_{2}\) denote the corresponding loss weights).
When \(j>1\), the model inherits the parameters and loss function from the previous stage and starts the next training phase.
To obtain a general video derain model, we utilize response knowledge distillation to constrain the optimization of the current model, i.e.,
\[\mathcal{L}_{RKD}=\mathcal{L}(\mathcal{N}^{j}(\theta_{t}^{j}),\mathcal{N}^{j- 1}(\theta_{t}^{j})). \tag{3}\]
This loss requires the current model to approximate the old model's output and thus retain the knowledge of the old task. To further constrain the optimization of the current model, we introduce feature knowledge distillation with the following loss:
\[\mathcal{L}_{FKD}=\mathcal{L}(\mathcal{F}^{j},\mathcal{F}^{j-1}), \tag{4}\]
where \(\mathcal{F}^{j}\) denotes the features extracted by the \(j\)-stage model encoder. The loss requires the current model's encoder to extract features as close as possible to those extracted by the old task model.
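The two distillation terms can be implemented in a few lines. The sketch below assumes, purely for illustration, that both networks return the recovered background together with the encoder features, and it approximates \(\mathcal{L}\) with an \(L_{1}\) term only (the negative SSIM component is omitted for brevity).

```python
import torch
import torch.nn.functional as F

def distillation_losses(student, teacher, frames):
    """Sketch of the response (Eq. 3) and feature (Eq. 4) distillation terms.
    Both networks are assumed to return (background, features)."""
    with torch.no_grad():
        b_teacher, f_teacher = teacher(frames)
    b_student, f_student = student(frames)
    loss_rkd = F.l1_loss(b_student, b_teacher)   # response knowledge distillation
    loss_fkd = F.l1_loss(f_student, f_teacher)   # feature knowledge distillation
    return loss_rkd, loss_fkd
```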
### _Rain Review Module_
Fig. 4: We validate in our ablation experiments that grouping video frames together for processing helps the model extract rain streak information.
Fig. 3: Comparison with other methods on the RainSynComplex25, RainSynLight25 and NTU datasets.
To avoid catastrophic forgetting problems, we design a simple and effective rain streak generation module called the Rain Review Module (RRM). This module uses the background residuals extracted by the previous model to generate rain streaks of the old task. The module improves the performance of the model on both new and old tasks through data augmentation. To avoid providing unlearned rain knowledge, the review network is trained using the same approach as the derain network. We use the \(j\)-stage training process (\(j>1\)) as an example to show how the review network assists the derain network in recalling old knowledge. The review network takes the residuals extracted by the previous stage network as input and outputs a rain map with information about the old task rain streaks:
\[\mathbf{S}=\mathcal{R}_{e}(\mathbf{R}_{t}^{j-1}), \tag{5}\]
where \(\mathcal{R}_{e}\) denotes the review network, \(\mathbf{R}_{t}^{j-1}\) denotes the residuals extracted by the previous stage network, and \(\mathbf{S}\) denotes the rain streak map generated by the rain review module. Next, we fuse the rain streak map with the background extracted by the previous stage network to obtain a completely new rain map. To enhance the review effect, we apply an affine transformation to augment the rain streak map.
\[\tilde{\mathbf{X}}=\mathcal{A}(\mathbf{S})+\mathbf{B}_{t}^{j-1}, \tag{6}\]
where \(\mathcal{A}\) denotes the affine transformation, \(\mathbf{B}_{t}^{j-1}\) indicates the background map output by the old task network, and \(\tilde{\mathbf{X}}\) denotes the newly synthesized rain map. After that, we let the current network remove the rain streaks in \(\tilde{\mathbf{X}}\):
\[\begin{split}\tilde{\mathbf{B}}_{r}&=\mathcal{N}^{ j}(\mathbf{X}_{t-2},\mathbf{X}_{t-1},\tilde{\mathbf{X}},\mathbf{X}_{t+1}, \mathbf{X}_{t+2}),\\ \mathcal{L}_{R}&=\mathcal{L}(\tilde{\mathbf{B}}_{r},\mathbf{Y}_{t}^{j}),\end{split} \tag{7}\]
where \(\mathcal{N}^{j}\) denotes the current stage network, \(\tilde{\mathbf{B}}_{r}\) represents the background recovered by the current network from the new rain map, and \(\mathbf{Y}_{t}^{j}\) indicates the corresponding ground truth. Through the constraint of the loss \(\mathcal{L}_{R}\), the network recalls the old task data. In addition, by training on the old task data together with the new task data, the model further learns features that are common to different rain streaks, which also helps it learn new rain streak knowledge.
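A sketch of one review step (Eqs. 5-7) is shown below; the tuple-returning interfaces of the networks and the `affine_aug` callable are assumptions made for illustration.

```python
import torch

def review_step(student, teacher, review_net, frames, gt, affine_aug, loss_fn):
    """Sketch of Eqs. 5-7: the previous-stage (teacher) network provides the
    background and residual of the centre frame; the review network turns the
    residual into an old-task rain map, which is augmented and re-composited
    into a new rainy frame that the current (student) network must derain."""
    with torch.no_grad():
        background, residual = teacher(frames)     # B_t^{j-1}, R_t^{j-1}
        rain_map = review_net(residual)            # S   (Eq. 5)
        rainy = affine_aug(rain_map) + background  # X~  (Eq. 6)
    replayed = [frames[0], frames[1], rainy, frames[3], frames[4]]
    b_replay, _ = student(replayed)                # B~_r (Eq. 7)
    return loss_fn(b_replay, gt)                   # L_R
```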
### _Overall Loss_
We show all loss functions in Section II-C and Section II-D. When training the derain network, the total loss is:
\[\mathcal{L}_{\mathcal{N}}=\lambda_{1}\mathcal{L}_{C}+\lambda_{2}\mathcal{L}_{ RKD}+\lambda_{3}\mathcal{L}_{FKD}+\lambda_{4}\mathcal{L}_{R}, \tag{8}\]
where \(\lambda_{1}\) to \(\lambda_{4}\) denotes loss weights. The review network training loss is similar to the drain network, except that it does not have a review module:
\[\begin{split}&\mathcal{L}_{Re}=\lambda_{1}\mathcal{L}(\mathcal{ F}_{r}^{j},\mathcal{F}_{r}^{j-1})+\lambda_{2}\mathcal{L}(\mathbf{S}^{j}, \mathbf{S}^{j-1})+\\ &\lambda_{3}\mathcal{L}(\mathbf{S}^{j},\mathcal{G}(\mathbf{X}_{ t}^{j}-\mathbf{Y}_{t}^{j})),\end{split} \tag{9}\]
where \(\mathcal{F}_{r}^{j}\) denotes the features extracted by the \(j\)-stage review network and \(\mathbf{S}^{j}\) denotes the rain map it produces, \(\mathbf{Y}_{t}^{j}\) indicates the corresponding ground truth and \(\mathcal{G}\) denotes the graying process.
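For completeness, the weighted combination of Eq. 8 reduces to a simple helper; the default weights below are those reported later in Sec. III-A.

```python
def total_derain_loss(loss_c, loss_rkd, loss_fkd, loss_r,
                      lambdas=(0.5, 0.5, 1.0, 1.0)):
    """Weighted sum of Eq. 8, with the lambda weights from Sec. III-A."""
    l1, l2, l3, l4 = lambdas
    return l1 * loss_c + l2 * loss_rkd + l3 * loss_fkd + l4 * loss_r
```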
## III Experimental Results
### _Implementation Details_
We compare the proposed method with state-of-the-art methods on three datasets: RainSynLight25 [28], RainSynComplex25 [28], and NTU-Rain [29]. We use the most widely used metrics, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), as quantitative evaluation metrics for all methods. The evaluation is performed on the luminance channel.
Each network is trained for 160 epochs with a learning rate of 0.001. The Adam optimizer is used, and the batch size is 1. We randomly crop all inputs to 240 \(\times\) 240. The loss weights \(\sigma_{1}\) and \(\sigma_{2}\) are 1.1 and 0.75, respectively, and \(\lambda_{1}\) to \(\lambda_{4}\) are 0.5, 0.5, 1 and 1, respectively.
### _Ablation Study_
**Effectiveness of knowledge distillation.** To confirm whether feature and response distillation maintain the performance of the derain model on the old task, we compare the performance on old tasks of the model trained with knowledge distillation against the base network. The results are shown in Table I. The base network in the table indicates the derain module trained by conventional means, and FRD denotes a derain network trained with feature and response distillation. To ensure fairness, the same training setup is used for each method and the experimental results are compared at the final epoch. It can be seen that the network with knowledge distillation still performs better on the old task than the base network.
Fig. 5: The degradation of the model’s performance on the old tasks (Complex and Light dataset) as the model starts to learn the new dataset (NTU Dataset). The decay of PSNR and SSIM on RainSynComplex25 are presented in (a) and (b), respectively. The results for RainSynLight25 are also presented in (c) and (d), respectively.
**Effectiveness of the rain review module.** To verify the effectiveness of the rain review module, we compare the derain performance with and without the review module. The results are presented in Table I. It can be seen that the review module strongly improves the rain removal performance of the model.
To further illustrate the effect of the feature and response distillation and the rain review modules on the old task, we show the performance degradation of the model on the old tasks (RainSynLight and RainSynComplex) for different settings in Fig. 5. It can be seen that the base network quickly forgets how to handle the old task, while the addition of knowledge distillation effectively maintains the performance of the model on the original task. With the addition of the review module, the model reaches convergence with only a small cost, which demonstrates the effectiveness of our proposed module.
**Effectiveness of frame grouping modules.** To verify whether the model needs to group the input frames, we compare the results with and without grouping; the experimental results are shown in Table II. Grouping yields a result 0.3 dB higher than direct input on the RainSynLight dataset. Evidently, after grouping the inputs, the model can better extract the temporal information between frames and thus better recover the background.
### _Comparison with State-of-the-Art_
We compare our method with state-of-the-art video rain removal methods and single-image rain removal methods, including MPRNet [9], VRGNet [10], FastDeRain [30], JORDER [31], J4R-Net [28], SpacCNN [29], DualFlow [24], TMICS [23], ESTINet [1], RMFD [2] and MFDNN [26].
**Comparing on different datasets.** We show in Table III the derain results for all methods on the three datasets. Our approach uses only one model to deal with different types of rain. Even in this setting, the proposed method still achieves good results, and it can be seen that our method is significantly better than other SOTA methods. Compared to the latest and best method MFDNN, our method achieves gains of 1.32 dB, 0.81 dB, and 2.28 dB in PSNR on the RainSynLight25, RainSynComplex25, and NTU-Rain datasets, respectively. The above results demonstrate that our method is more effective at removing the different types of rain.
Fig. 6: Comparison with other methods on real rainy videos.
**Comparing on real-world video frames.** We further compare the performance of the proposed method and SOTA methods on real videos. The top row and the bottom row are videos from the NTU dataset and "mixkit.co"1, respectively. As shown in Fig. 6, our method retains most of the background information while removing the rain streaks more cleanly.
Footnote 1: mixkit.co
### _Efficiency Analysis_
Table IV shows the running speed of different advanced methods. All methods are based on PyTorch implementations. We test all SOTA methods uniformly on a Linux system with a GeForce RTX 2080 Ti GPU. The test video resolution is \(832\times 512\). As can be seen from Table IV, the number of parameters in our method is comparable to other deep learning methods, but the running speed and derain effect are far superior.
## IV Conclusion
In this work, to avoid catastrophic forgetting, we design a rain review-based general video derain network via knowledge distillation. The method uses an old task model to guide the current model in learning new rain streak knowledge, thus avoiding forgetting. We also design a frame grouping encoder-decoder network that makes full use of the temporal information of the video. Extensive experiments demonstrate that our proposed method outperforms state-of-the-art methods in terms of derain performance and running time.
## Acknowledgment
This work is supported by Natural Science Foundation of China (Grant No. 62202429, U20A20196, 61976191), Zhejiang Provincial Natural Science Foundation of China (Grant No. LY23F020024, LR21F020002 and LY23F020023) and the Hangzhou AI major scientific and technological innovation project (Grant No. 2022AIZD0061).
|
2307.04805 | The Dragon-II simulations -- I. Evolution of single and binary compact
objects in star clusters with up to 1 million stars | We present the first results of the \textsc{Dragon-II} simulations, a suite
of 19 $N$-body simulations of star clusters with up to $10^6$ stars, with up to
$33\%$ of them initially paired in binaries. In this work, we describe the main
evolution of the clusters and their compact objects (COs). All
\textsc{Dragon-II} clusters form in their centre a black hole (BH) subsystem
with a density $10-100$ times larger than the stellar density, with the cluster
core containing $50-80\%$ of the whole BH population. In all models, the BH
average mass steeply decreases as a consequence of BH burning, reaching values
$\langle m_{\rm BH}\rangle < 15$ M$_\odot$ within $10-30$ relaxation times.
Generally, our clusters retain only BHs lighter than $30$ M$_\odot$ over $30$
relaxation times. Looser clusters retain a higher binary fraction, because in
such environments binaries are less likely disrupted by dynamical encounters.
We find that BH-main sequence star binaries have properties similar to recently
observed systems. Double CO binaries (DCOBs) ejected from the cluster exhibit
larger mass ratios and heavier primary masses than ejected binaries hosting a
single CO (SCOBs). Ejected SCOBs have BH masses $m_{\rm BH} = 3-20$ M$_\odot$,
definitely lower than those in DCOBs ($m_{\rm BH} = 10-100$ M$_\odot$). | Manuel Arca Sedda, Albrecht W. H. Kamlah, Rainer Spurzem, Mirek Giersz, Peter Berczik, Sara Rastello, Giuliano Iorio, Michela Mapelli, Massimiliano Gatto, Eva K. Grebel | 2023-07-10T18:00:52Z | http://arxiv.org/abs/2307.04805v1 | The Dragon-II simulations - I. Evolution of single and binary compact objects in star clusters with up to 1 million stars
###### Abstract
We present the first results of the Dragon-II simulations, a suite of 19 \(N\)-body simulations of star clusters with up to \(10^{6}\) stars, with up to \(33\%\) of them initially paired in binaries. In this work, we describe the main evolution of the clusters and their compact objects (COs). All Dragon-II clusters form in their centre a black hole (BH) subsystem with a density \(10-100\) times larger than the stellar density, with the cluster core containing \(50-80\%\) of the whole BH population. In all models, the BH average mass steeply decreases as a consequence of BH burning, reaching values \(\langle m_{\rm BH}\rangle<15\) M\({}_{\odot}\) within \(10-30\) relaxation times. Generally, our clusters retain only BHs lighter than 30 M\({}_{\odot}\) over 30 relaxation times. Looser clusters retain a higher binary fraction, because in such environments binaries are less likely disrupted by dynamical encounters. We find that BH-main sequence star binaries have properties similar to recently observed systems. Double CO binaries (DCOBs) ejected from the cluster exhibit larger mass ratios and heavier primary masses than ejected binaries hosting a single CO (SCOBs). Ejected SCOBs have BH masses \(m_{\rm BH}=3-20\) M\({}_{\odot}\), definitely lower than those in DCOBs (\(m_{\rm BH}=10-100\) M\({}_{\odot}\)).
keywords: methods: numerical - galaxies: star clusters: general - stars: general, black holes
## 1 Introduction
Massive star clusters in the range (\(10^{4}-10^{6}\)) M\({}_{\odot}\), like globular clusters or young massive clusters, represent galactic repositories of stellar compact objects, and are ideal laboratories to study the interplay of stellar evolution and dynamics. Several hundreds of stellar black holes (BHs), neutron stars (NSs), and white dwarfs (WDs) are expected to form in a typical massive cluster.
In the last decade, it became clear that the fraction of BHs that massive clusters can retain is much larger than previously thought, as suggested by numerous theoretical and numerical works (see e.g. Morscher et al., 2013; Wong et al., 2014; Wang et al., 2016; Askar et al., 2018; Pavlik et al., 2018; Arca Sedda et al., 2018; Peuten et al., 2016; Gieles et al., 2021; Kamlah et al., 2022), providing support to the crescent number of observations of stellar BH candidates in Galactic clusters (Strader et al., 2012; Miller-Jones et al., 2015; Giesers et al., 2018; Chomiuk et al., 2013; Bahramian et al., 2017; Giesers et al., 2019).
The progress in stellar evolution of massive stars (Woosley et al., 2007; Wang et al., 2015; Giacobbo & Mapelli, 2018; Belczynski et al., 2010, 2016; Spera et al., 2019; Vink et al., 2021), partly triggered by the discovery of gravitational-wave (GW) emission by merging BH and NS binaries (The LIGO Scientific Collaboration et al., 2021, 2021), has completely changed our understanding of BHs. Stellar models demonstrated that the evolution of single massive stars is significantly
influenced by the possible development of so-called pair instability supernovae (PISN), which causes the complete disruption of stars that develop a He core with a mass of \(M_{\rm He}=64-135\) M\({}_{\odot}\), and pulsational pair instability supernovae (PPISN), a mechanism that leads to an enhanced mass-loss in stars with a He core mass of \(M_{\rm He}=32-64\) M\({}_{\odot}\). This leads to a maximum stellar BH mass in the range \(m_{\rm BH,max}=(40-60)\) M\({}_{\odot}\), depending on the theoretical model adopted and the stellar metallicity. A direct consequence of these two processes is the well-known upper-mass gap of BHs, a region of the mass-spectrum where no remnants are expected (Woosley et al., 2007). The boundaries of the upper-mass gap are highly uncertain and depend on the adopted stellar evolution model and metallicity (Woosley et al., 2007; Wang et al., 2015; Belczynski et al., 2016; Spera and Mapelli, 2017; Vink et al., 2021; Costa et al., 2021; Iorio et al., 2022). Only stars with a zero age main sequence mass beyond \(M_{\rm ZAMS}>(200-250)\) M\({}_{\odot}\) can avoid PISN and, depending on their metallicity, directly collapse to an intermediate-mass BH with little mass loss in the process (see e.g. Spera and Mapelli, 2017). Stellar collisions might lead to the formation of BHs in the upper-mass gap (e.g. Spera et al., 2019), thus suggesting that star clusters could be perfect laboratories to form mass-gap BHs (e.g. Di Carlo et al., 2019; Kremer et al., 2020; Rizzuto et al., 2021; Rastello et al., 2021; Rizzuto et al., 2022; Banerjee, 2022), but it is unclear how the stellar merger frequency depends on the cluster initial properties (Rizzuto et al., 2022) or the stellar conditions at merger (Ballone et al., 2023; Costa et al., 2022).
More generally, the formation of a population of compact objects can significantly affect star cluster dynamics. Massive stars and BHs rapidly sink into the cluster centre via mass segregation, possibly forming a massive subsystem on a core-collapse timescale (e.g. Spitzer, 1987; Breen and Heggie, 2013; Pavlik et al., 2018; Arca Sedda et al., 2018; Leveque et al., 2022), which can contract and determine the onset of runaway stellar collisions if this timescale does not exceed the stellar evolution timescale (e.g. Spitzer, 1987; Portegies Zwart and McMillan, 2002; Portegies Zwart et al., 2004; Fregeau et al., 2004; Breen and Heggie, 2013; Mapelli, 2016; Giersz et al., 2015; Arca Sedda et al., 2018; Vergara et al., 2021, 2022; Maliszewski et al., 2022). The runaway growth of a massive star can be hampered by the formation of tight binaries that supply energy to the cluster core, cause BH ejection, deplete the cluster's BH reservoir, and eventually kick each other out via super-elastic encounters (Breen and Heggie, 2013).
The competing effect of binary energy supply and stellar collisions likely depends on the cluster mass, density, metallicity, the fraction of primordial binaries, the initial mass function and its boundaries, the natal kicks of BHs and NSs, and the compact object mass spectrum. Typically, the exploration of a tiny part of such parameter space is performed with numerical models capable of simultaneously accounting for stellar dynamics and evolution, either via direct \(N\)-body (e.g. Wang et al., 2015; Banerjee, 2018, 2021; Di Carlo et al., 2019; Rastello et al., 2021; Di Carlo et al., 2021; Chattopadhyay et al., 2022) or Monte Carlo techniques (e.g. Rodriguez et al., 2016; Askar et al., 2017; Rodriguez et al., 2019; Kremer et al., 2020; Maliszewski et al., 2022).
Direct \(N\)-body simulations most likely offer the highest level of accuracy in terms of stellar dynamics modelling, but their computational cost has forced the vast majority of works in the literature to focus on star clusters with less than a few \(\times\) 10\({}^{5}\) stars and/or with a relatively small fraction of primordial binaries (Banerjee, 2018, 2021; Di Carlo et al., 2019; Rastello et al., 2021; Di Carlo et al., 2021), with a few notable exceptions. For example, several works have explored the impact of a large primordial binary fraction, up to 100%, on the dynamics of isotropic (e.g. Heggie et al., 2006; Trenti et al., 2007; Pavlik, 2020) and anisotropic (Pavlik and Vesperini, 2022, 2022) low-mass star cluster models, i.e. with \(N<20,000\) and equal-mass stars, and recently in intermediate-mass GCs, i.e. \(N\sim 10^{5}\) (Wang et al., 2022). With regard to simulations tailored to represent massive globular clusters, the DRAGON simulations remain the only ones that exploited \(10^{6}\) particles (Wang et al., 2016).
Since the development of such pioneering simulations, and especially after the discovery of GWs, numerical tools have undergone major upgrades in terms of stellar evolution and the treatment of relativistic binaries.
In this work, we present the Dragon-II simulation database, a suite of 19 direct \(N\)-body simulations performed with the Nbody6++GPU code1 representing star clusters with \(N=(0.12-1)\times 10^{6}\) stars, half-mass radius densities in the \(\rho_{\rm h}=1.3\times 10^{4}-6.9\times 10^{6}\) M\({}_{\odot}\) pc\({}^{-3}\) range, and a fraction \(f_{\rm 2b}=0.10-0.33\) of stars initially paired in primordial binaries. This work, which is the first one of a series, focuses on the evolution of single and binary BHs and compact objects in massive and dense star clusters, paying particular attention to the relation between the BH population (mass, average BH mass, density) and the cluster properties (mass, radius). Our Dragon-II models explore a portion of the parameter space still uncharted by direct \(N\)-body simulations, thus complementing previous works that either rely on Monte Carlo simulations or exploit star cluster models with old stellar evolution recipes or a significantly smaller number of stars.
Footnote 1: [https://github.com/nbody6ppgpu/Nbody6PPGPU-beijing](https://github.com/nbody6ppgpu/Nbody6PPGPU-beijing)
The paper is organised as follows: Section 2 describes the main properties of the Dragon-II clusters and the improvements integrated in the Nbody6++GPU code; Section 3 presents our main results in terms of overall star cluster evolution (Section 3.1), main properties of single and binary compact objects (Sections 3.2 - 3.5), and the possible implementation of \(N\)-body outputs into semi-analytic tools (Section 3.6); whilst Section 4 is devoted to summarise the main outcomes of our work.
## 2 Numerical methods
All the Dragon-II models are carried out exploiting the Nbody6++GPU code (Wang et al., 2015), which represents the current state-of-the-art of direct \(N\)-body codes optimised to exploit GPU-accelerated high-performance supercomputing (Spurzem, 1999; Nitadori and Aarseth, 2012; Wang et al., 2015) altogether with several recently developed codes, like Petar(Wang et al., 2020) or Bifrost(Rantala et al., 2022). Nbody6++GPU belongs to a long-standing family of direct \(N\)-body integrators initiated by Sverre Aarseth and developed for almost 50 years (Aarseth et al., 1974; Spurzem, 1999;
Aarseth 1999, 2003; Aarseth et al. 2008; Nitadori & Aarseth 2012; Wang et al. 2015; Kamlah et al. 2022a).
Nbody6++GPU implements a 4th-order Hermite integrator with individual block-time steps (McMillan 1986; Hut et al. 1995) and sophisticated algorithms for close encounters and few-body dynamics, namely the Kustaanheimo-Stiefel (KS) regularisation (Stiefel & Kustaanheimo 1965), the Ahmad-Cohen (AC) scheme for neighbours (Ahmad & Cohen 1973), and algorithmic chain regularisation (Mikkola & Tanikawa 1999; Mikkola & Merritt 2008), which enables us to closely follow the evolution of binaries with periods \(10^{-10}\) times smaller than the dynamical timescales of star clusters, which typically exceed O(10) Myr.
In the last few years, the code underwent a series of major upgrades related to the treatment of relativistic compact objects (Rizzuto et al. 2021), the implementation of flexible stellar evolution recipes (Kamlah et al. 2022a), and the inclusion of a dedicated treatment for spins (Banerjee et al. 2020; Kamlah et al. 2022a, this work). Here, we expand the possible choices for BH natal spin distribution and implement relativistic recoil for post-merger remnants. In the following, we briefly summarize the features of the code that are most relevant for this work, and discuss the newest upgrades that we implemented into the code and use here for the first time.
### Stellar evolution
Nbody6++GPU implements stellar evolution for single and binary stars via the SSE and BSE routines (Hurley et al. 2000, 2002), which we heavily updated to include up-to-date prescriptions for the evolution of massive stars. We refer the reader to Kamlah et al. (2022a) for a comprehensive discussion about the updated stellar evolution encoded in Nbody6++GPU.
In this work, we adopt the level-B stellar evolution as defined in Kamlah et al. (2022a, see their Table A1). This implies that our models take into account the formation of electron-capture supernovae (ECSNe, following Belczynski et al. 2008), the delayed SN scheme (Fryer et al. 2012), and the development of pair-instability (PISN) and pulsational pair instability supernovae (PPISN) (Belczynski et al. 2016). For the formation of compact objects, we adopt mass loss from Belczynski et al. (2010) with additional metallicity-dependent correction factors taken from Vink et al. (2001) and a dedicated treatment for mass loss of hot and massive H-rich O/B stars (Vink et al. 2001). The adopted stellar evolution models imply that the maximum BH mass attainable by massive stars with zero-age main-sequence mass \(<150\) M\({}_{\odot}\) is \(m_{\rm BH,max}=40.5\) M\({}_{\odot}\) (Belczynski et al. 2016; Banerjee et al. 2020; Kamlah et al. 2022a). The BHs falling in the so-called upper mass-gap can still form via stellar collisions, accretion of stellar material onto stellar BHs, and BH-BH mergers, as we discuss in our companion papers.
Natal kicks for NSs forming via ECSNe, accretion induced collapse (AIC), and merger-induced collapse (MIC) are drawn from a Maxwellian distribution with dispersion 3 km/s (see Gessner & Janka 2018; Kamlah et al. 2022a), whilst for all other NSs we adopt a Maxwellian distribution with dispersion 265 km/s (Hobbs et al. 2005). This latter value is adopted also for BHs, but the kick amplitude is reduced by a factor that accounts for the amount of fallback material (Fryer et al. 2012).
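A schematic of this kick prescription, written as a small Python function, is given below; the remnant labels and the use of the norm of a 3-D Gaussian to realise the Maxwellian are illustrative conventions, not part of the code's actual interface.

```python
import numpy as np

def sample_natal_kick(remnant, fallback_frac=0.0, rng=None):
    """Sketch of the natal-kick prescription of Sec. 2.1: Maxwellian kicks with
    dispersion 3 km/s for NSs from ECSNe/AIC/MIC, 265 km/s otherwise, with BH
    kicks reduced by the fallback fraction (Fryer et al. 2012)."""
    rng = rng or np.random.default_rng()
    sigma = 3.0 if remnant == "NS_ECSN" else 265.0     # km/s
    kick = rng.normal(0.0, sigma, size=3)              # Maxwellian speed = |3-D Gaussian|
    speed = float(np.linalg.norm(kick))
    if remnant == "BH":
        speed *= (1.0 - fallback_frac)                 # fallback-scaled BH kick
    return speed                                       # km/s
```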
For binary stars, we model common envelope evolution via the parametrised \(\alpha_{\rm CE}-\lambda_{\rm CE}\) scheme, according to which it is possible to regulate the fraction of orbital energy injected into the envelope (\(\alpha_{\rm CE}\)) and to scale the binding energy of the envelope by a factor \(\lambda_{\rm CE}\) in a way similar, but not equal, to the one followed by Claeys et al. (2014) (further details about these parameters are discussed in Kamlah et al. 2022a). In this work, we adopt \(\alpha_{\rm CE}=3\)(Giacobbo & Mapelli 2018; Kamlah et al. 2022a).
### Dynamics of compact objects
In particularly dense clusters, stellar interactions can trigger collisions among stars and/or compact objects. The aftermath of such collisions is still a poorly understood process that can crucially affect the formation and evolution of stellar BHs. Whilst the outcome of stellar mergers is better understood, also thanks to recent detailed hydrodynamical simulations coupled with stellar evolution models (Ballone et al. 2023; Costa et al. 2022), it is still unclear how much mass a massive star can accrete onto a stellar BH. Several works have shown that in the case of a star with a mass \(\sim(1-10)\) M\({}_{\odot}\) merging with a stellar BH, there is little accretion as most of the energy is radiated away via jets, although the mechanism is highly uncertain and likely depends on the star structure and evolutionary stage (Guillochon & Ramirez-Ruiz 2013; MacLeod & Ramirez-Ruiz 2015; Cruz-Osorio & Rezzolla 2020; Kremer et al. 2022). Hydrodynamical simulations of star-BH close encounters have shown that up to 70% of the star mass remains bound to the BH, but energy arguments suggest that even a tiny amount of accreted matter, \(O(10^{-3}-10^{-2}\) M\({}_{\odot})\), would suffice to evaporate the accretion disk and halt the BH growth (Kremer et al. 2022). Nonetheless, recent simulations modelling the common envelope phase of a tight star-BH binary have shown that the BH accretes the stellar core and expels the envelope, a process accompanied by a SN-like transient and spin-up of the BH to nearly extreme values regardless of the initial spin (Schroder et al. 2020). In multiple main-sequence star collisions, the merger product is expected to have a compact core and a tenuous envelope with densities as low as \(10^{-10}\) g cm\({}^{-3}\) (Glebbeek et al. 2009). Therefore, if a) most of the merger product mass is in the core (Glebbeek et al. 2009), and b) the core can efficiently feed the BH (Schroder et al. 2020), it is reasonable to assume that a BH would accrete a significant fraction of it.
Given the aforementioned uncertainties, in Nbody6++GPU we parametrise the outcome of star-BH collisions via the fraction of star mass accreted onto the BH, \(f_{c}\)(Banerjee 2021; Rizzuto et al. 2021). Throughout this paper we adopt \(f_{c}=0.5\).
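A minimal sketch of this parametrisation (the function name and return convention are ours):

```python
def star_bh_collision(m_bh, m_star, f_c=0.5):
    """Star-BH collision outcome as parametrised in the text: a fraction f_c of the
    stellar mass is accreted onto the BH, the remainder is assumed to be lost."""
    return m_bh + f_c * m_star, (1.0 - f_c) * m_star

# Example: a 20 Msun BH colliding with a 30 Msun star grows to 35 Msun for f_c = 0.5.
print(star_bh_collision(20.0, 30.0))
```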
Natal spins are another poorly known property of stellar BHs. Nbody6++GPU implements the so-called "Geneva", "MESA", and "Fuller" models (Belczynski et al. 2020; Banerjee 2021; Kamlah et al. 2022a), and four additional choices implemented in this work, namely: zero-spins, uniform spin distribution, Gaussian spin distribution with mean value \(\chi=0.5\) and dispersion \(\sigma_{\chi}=0.2\), and a Maxwellian distribution with dispersion \(\sigma_{\chi}=0.2\).
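A minimal sampler for the four new options (ours; clipping the Gaussian and Maxwellian draws to the physical range \(0\leq\chi\leq 1\) is our assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_natal_spin(model, size=1, rng=rng):
    """Draw dimensionless BH natal spins chi for the four options added in this work."""
    if model == "zero":
        return np.zeros(size)
    if model == "uniform":
        return rng.uniform(0.0, 1.0, size)
    if model == "gaussian":      # mean 0.5, dispersion 0.2, clipped to [0, 1]
        return np.clip(rng.normal(0.5, 0.2, size), 0.0, 1.0)
    if model == "maxwellian":    # modulus of a 3D Gaussian with sigma = 0.2, clipped to [0, 1]
        return np.clip(np.linalg.norm(rng.normal(0.0, 0.2, (size, 3)), axis=1), 0.0, 1.0)
    raise ValueError(f"unknown spin model: {model}")

for model in ("zero", "uniform", "gaussian", "maxwellian"):
    print(model, sample_natal_spin(model, size=3))
```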
Nbody6++GPU also features a treatment for compact binary mergers based on an orbit-averaged formalism (Peters & Mathews 1963; Peters 1964), which enables us to follow the
formation and evolution of in-cluster compact binary mergers, a feature implemented in a number of recent works modelling young star clusters (Di Carlo et al., 2019, 2020, 2021; Rizzuto et al., 2021, 2022; Rastello et al., 2021).
In this work, we present the implementation of three new features of the code: mass and spin of the merger remnant, calculated via numerical relativity fitting formulas (Jimenez-Forteza et al., 2017; Arca Sedda et al., 2020), and the recoil kick imparted by asymmetric GW emission promptly after merging events (Campanelli et al., 2007; Lousto and Zlochower, 2008; Lousto et al., 2012). We follow the implementation depicted in our previous works (e.g. Arca Sedda et al., 2020; Arca-Sedda et al., 2021).
\[\vec{v}_{\rm GW} = v_{m}\,\hat{e}_{\perp,1}+v_{\perp}\left(\cos\xi\,\hat{e}_{\perp,1}+\sin\xi\,\hat{e}_{\perp,2}\right)+v_{\parallel}\,\hat{e}_{\parallel}, \tag{1}\]
\[v_{m} = A\eta^{2}\sqrt{1-4\eta}\,(1+B\eta), \tag{2}\]
\[v_{\perp} = \frac{H\eta^{2}}{1+q_{\rm BBH}}\left(S_{2,\parallel}-q_{\rm BBH}S_{1,\parallel}\right), \tag{3}\]
\[v_{\parallel} = \frac{16\eta^{2}}{1+q_{\rm BBH}}\left[V_{11}+V_{A}\Xi_{\parallel}+V_{B}\Xi_{\parallel}^{2}+V_{C}\Xi_{\parallel}^{3}\right]\left|\vec{S}_{2,\perp}-q_{\rm BBH}\vec{S}_{1,\perp}\right|\cos(\phi_{\Delta}-\phi_{1}). \tag{4}\]
Here \(\eta\equiv q_{\rm BBH}/(1+q_{\rm BBH})^{2}\) is the symmetric mass ratio, \(\vec{\Xi}\equiv 2(\vec{S}_{2}+q_{\rm BBH}^{2}\vec{S}_{1})/(1+q_{\rm BBH})^{2}\), and the subscripts \(\perp\) and \(\parallel\) mark the perpendicular and parallel directions of the BH spin vector (\(\vec{S}\)) with respect to the direction of the binary angular momentum. We adopt \(A=1.2\times 10^{4}\) km s\({}^{-1}\), \(B=-0.93\), \(H=6.9\times 10^{3}\) km s\({}^{-1}\), and \(\xi=145^{\circ}\)(Gonzalez et al., 2007; Lousto and Zlochower, 2008), \(V_{11}=3677.76\) km s\({}^{-1}\), and \(V_{A,B,C}=(2.481,1.793,1.507)\times 10^{3}\) km s\({}^{-1}\). The quantity \(\phi_{\Delta}\) represents the angle between the direction of the infall at merger (which we randomly draw in the binary orbital plane) and the in-plane component of the quantity \(\vec{\Delta}\equiv(M_{a}+M_{b})^{2}(\vec{S}_{b}-q_{\rm BBH}\vec{S}_{a})/(1+q_ {\rm BBH})\), while \(\phi_{1}=0-2\pi\) is the phase of the binary, extracted randomly between the two limiting values.
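A compact transcription of Eqs. (1)-(4) is sketched below (under our assumptions: the \(\vec{S}_i\) are treated as dimensionless spin vectors, the \(z\)-axis is aligned with the orbital angular momentum, and \(\phi_{\Delta}\) and \(\phi_{1}\) are drawn at random as described above); it is an illustrative transcription, not the code's internal routine:

```python
import numpy as np

# Fitting constants quoted in the text (km/s where dimensional).
A, B = 1.2e4, -0.93
H = 6.9e3
XI = np.radians(145.0)
V11, VA, VB, VC = 3677.76, 2481.0, 1793.0, 1507.0

def gw_recoil_kick(q, chi1, chi2, rng=np.random.default_rng()):
    """Recoil velocity (km/s) of a BH-BH merger remnant following Eqs. (1)-(4).
    q = m2/m1 <= 1 is the mass ratio, chi1/chi2 the dimensionless spin vectors of
    primary/secondary, with z along the orbital angular momentum."""
    chi1, chi2 = np.asarray(chi1, float), np.asarray(chi2, float)
    eta = q / (1.0 + q) ** 2                           # symmetric mass ratio
    Xi = 2.0 * (chi2 + q**2 * chi1) / (1.0 + q) ** 2
    # in-plane contribution from the mass asymmetry
    v_m = A * eta**2 * np.sqrt(1.0 - 4.0 * eta) * (1.0 + B * eta)
    # contribution from the spin components parallel to the orbital angular momentum
    v_perp = H * eta**2 / (1.0 + q) * (chi2[2] - q * chi1[2])
    # contribution along the orbital angular momentum from the in-plane spin components
    phi_delta = rng.uniform(0.0, 2.0 * np.pi)
    phi_1 = rng.uniform(0.0, 2.0 * np.pi)
    s_perp = np.linalg.norm((chi2 - q * chi1)[:2])
    v_par = (16.0 * eta**2 / (1.0 + q)
             * (V11 + VA * Xi[2] + VB * Xi[2]**2 + VC * Xi[2]**3)
             * s_perp * np.cos(phi_delta - phi_1))
    vec = np.array([v_m + v_perp * np.cos(XI), v_perp * np.sin(XI), v_par])
    return np.linalg.norm(vec)

# Example: nearly equal-mass merger with misaligned spins of magnitude 0.5.
print(gw_recoil_kick(0.9, [0.5, 0.0, 0.0], [0.0, 0.0, 0.5]))
```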
### Massive star cluster models with up to one million stars
We generate the 19 Dragon-II star clusters with the updated McLuster software (Kupper et al., 2011), as described in Kamlah et al. (2022) and Leveque et al. (2022).
All Dragon-II star clusters are modelled via King (1966) dynamical models with a central dimensionless potential well \(W_{0}=6\), and are characterised by three values of the half-mass radius, \(R_{\rm HM}=0.47,\ 0.80,\ 1.75\) pc, four values of the initial number of stars, \(N=(1.2,\ 3,\ 6,\ 10)\times 10^{5}\), and two values of the primordial binary fraction, as described below. All clusters have the same metallicity \(Z=0.0005\), a value typical of several clusters proposed to host either a dense subsystem of stellar BHs, like NGC3201, or a central intermediate-mass black hole (IMBH), like NGC6254 (see e.g. Askar et al., 2018; Arca Sedda et al., 2018; Weatherford et al., 2020).
All simulations were conducted on the JUWELS BOOSTER supercomputer and the GRACE HPC workstation over a \(\sim 2\) yr timespan. Eventually, the whole Dragon-II database consists of almost 35 TB of data.
Stellar masses are drawn from the Kroupa (2001) initial mass function limited between \(m_{*}=0.08-150\ \rm M_{\odot}\), which implies an initial average stellar mass of \(\langle m_{*}\rangle\simeq 0.59\ \rm M_{\odot}\). The corresponding initial masses and densities of the Dragon-II clusters are \(M_{c}=(0.7-5.9)\times 10^{5}\ \rm M_{\odot}\) and \(\rho_{c}\simeq 1.3\times 10^{4}-6.9\times 10^{6}\ \rm M_{\odot}\ pc^{-3}\), respectively.
All Dragon-II clusters move on a circular orbit at a distance of 13.3 kpc from the centre of a galaxy whose gravitational potential is modelled via a simple Keplerian potential assuming a total galaxy mass of \(M_{g}=1.78\times 10^{11}\ \rm M_{\odot}\). As a consequence, our clusters have initially a tidal radius in the range \(R_{\rm tid}=67-138\) pc and they can all be considered as underfilling systems, thus the gravitational field has a smaller impact on the cluster evolution with respect to internal dynamics, at least at the beginning. Dragon-II clusters would underfill their Roche lobe even in the case of a rather extremely eccentric orbit, e.g. \(e=0.9\).
We assume that a fraction of the total number of stars is initially paired in a _primordial_ binary system. Following the definition used in McLuster, we define the binary fraction as the ratio between the number of binaries and the sum of single stars and binaries, \(f_{b}=n_{b}/(n_{s}+n_{b})\). We set \(f_{b}=0.05-0.2\) depending on the cluster model, as summarized in Table 1. Our simulation grid contains two sets that differ only in \(f_{b}\), thus their comparison could unveil some effects triggered by primordial binary dynamics. Note also that our definition of \(f_{b}\) implies that the number of stars in binaries over the total is \(f_{\rm 2b}=2f_{b}/(1+f_{b})=0.10-0.33\).
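As a quick sanity check of this definition, the quoted values of \(f_{\rm 2b}\) follow directly from \(f_b\):

```python
# Fraction of stars that are members of a binary, f_2b = 2 f_b / (1 + f_b).
for f_b in (0.05, 0.2):
    print(f_b, round(2.0 * f_b / (1.0 + f_b), 2))   # -> 0.1 and 0.33
```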
Binaries are initialised assuming the same mass function of single stars and a uniform mass ratio distribution in the range \(q=0.1-1\) for stars heavier than \(m_{*}>5\ \rm M_{\odot}\) or random pairing for the lighter ones (Kiminki and Kobulnicky, 2012; Sana et al., 2012; Kobulnicky et al., 2014). Following previous works on the same topics, we adopt a thermal distribution of the eccentricity and a semi-major axis distribution flat in logarithmic values, with an upper limit set to 50 AU and a lower limit set by the sum of the stars' radii (Wang et al., 2015; Kamlah et al., 2022).
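The sketch below (ours) samples orbital elements following these prescriptions for primaries above \(5\ \rm M_{\odot}\); lighter primaries, which the text pairs randomly with companions drawn from the IMF, are omitted, and the lower semi-major axis limit (the sum of the stellar radii) is replaced by a fixed placeholder:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_binary_orbits(m1, a_min_au=0.01, a_max_au=50.0, rng=rng):
    """Companion masses, semi-major axes, and eccentricities for primordial binaries
    with primaries heavier than 5 Msun: uniform mass ratio q = 0.1-1, log-flat
    semi-major axes up to 50 AU, and a thermal eccentricity distribution f(e) = 2e."""
    m1 = np.asarray(m1, dtype=float)
    n = m1.size
    q = rng.uniform(0.1, 1.0, n)
    m2 = q * m1
    a = 10.0 ** rng.uniform(np.log10(a_min_au), np.log10(a_max_au), n)
    ecc = np.sqrt(rng.uniform(0.0, 1.0, n))   # thermal distribution: P(<e) = e^2
    return m2, a, ecc

print(sample_binary_orbits([8.0, 30.0, 100.0]))
```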
In the majority of the cases, for each value of \(R_{\rm HM}\) and \(N\) we run two simulations with different random seeds to explore possible dependencies on the randomness of the star distribution. The only exception is the case \(R_{\rm HM}=0.47\) pc and \(N=300\)k stars, which was limited to only one model because of the available computational time.
The simulations are performed until either the average mass of stellar BHs falls below \(\langle m_{\rm BH}\rangle\lesssim 15\ \rm M_{\odot}\), no BHs with a mass above 30 \(\rm M_{\odot}\) are retained in the cluster, or the simulated time exceeds at least one relaxation time (Spitzer, 1987; Binney and Tremaine, 2008; Gatto et al., 2021), which can be expressed in the form (Gatto et al., 2021)
\[T_{\rm rlx}=282\,{\rm Myr}\,\frac{1}{m_{*}\,\ln(\gamma_{n}N)}\sqrt{\frac{M_{c}}{10^{5}\ \rm M_{\odot}}}\left(\frac{R_{\rm HM}}{1\,{\rm pc}}\right)^{3/2}, \tag{5}\]
where \(\gamma_{n}=0.11-0.4\) for a monochromatic mass spectrum (Giersz and Heggie, 1996; Binney and Tremaine, 2008), but it can be as low as \(\gamma_{n}=0.02\) for a multi-mass spectrum (Giersz and Heggie, 1996). These choices result in a physical simulated time ranging between \(T_{\rm sim}\sim 0.1-2.3\) Gyr and lead to an optimal balance between the computational cost of the simulations and the portion of parameter space that can be explored. Table 1 summarizes the main properties of Dragon-II models.
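For reference, Eq. (5) can be evaluated directly; a small sketch follows (taking \(m_{*}\) as the mean stellar mass of \(0.59\ \rm M_{\odot}\) and assuming \(\gamma_{n}=0.02\), so the numbers only roughly reproduce the values quoted in Table 1):

```python
import numpy as np

def t_rlx_myr(m_cluster_msun, r_hm_pc, n_stars, mean_mass=0.59, gamma_n=0.02):
    """Initial relaxation time of Eq. (5), in Myr."""
    return (282.0 / (mean_mass * np.log(gamma_n * n_stars))
            * np.sqrt(m_cluster_msun / 1.0e5) * r_hm_pc ** 1.5)

# Example: the N = 120k, R_HM = 1.75 pc models (M_c ~ 0.7e5 Msun).
print(t_rlx_myr(0.7e5, 1.75, 1.2e5))
```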
As sketched in Figure 1, in comparison to the most recent studies based on \(N\)-body (e.g. Wang et al., 2015; Banerjee, 2018, 2021; Di Carlo et al., 2019; Rastello et al., 2021;
Di Carlo et al., 2021) and Monte Carlo simulations (e.g. Rodriguez et al., 2016; Askar et al., 2017; Rodriguez et al., 2019; Kremer et al., 2020; Maliszewski et al., 2022), the Dragon-II clusters occupy a region of the \(N\)-\(\rho_{\rm h}\) plane mostly populated by Monte Carlo simulation grids. This, coupled with the fact that simulations with \(N>10^{5}\) stars usually adopt a binary fraction \(<20\%\), makes our Dragon-II simulations an unprecedented grid of models that complements, and further expands, the phase space accessible with direct \(N\)-body models.
## 3 Results
### Star cluster evolution
The Dragon-II clusters were originally devised to explore compact object dynamics, compact binary mergers, and intermediate-mass black hole build-up in dense star clusters, thus they are not meant to be representative of any observed cluster. Nonetheless, it is interesting to compare in Figure 2 the time evolution of the modelled mass and half-mass radius with relatively young, i.e. typical ages \(0.1-1\) Gyr, massive star clusters in the Milky Way (MW), the Small (SMC) and Large Magellanic Cloud (LMC), M31 (Portegies Zwart et al., 2010; Gatto et al., 2021), the Henize 2-10 starburst dwarf galaxy (Nguyen et al., 2014), and the M83 galaxy (Ryon et al., 2015). Over the simulated time, our models overlap with observed clusters, thus indicating that the adopted initial conditions lead to numerical models that can represent one possible evolutionary pathway of some observed clusters.
We find that the mass and half-mass radius evolution is well described by the following relations:
\[M_{\rm cl}(t) = M_{\rm cl,0}\left[1+\alpha_{M}\,\frac{t}{T_{\rm rlx}}\right]^{-\beta_{M}}, \tag{6}\]
\[R_{\rm HM}(t) = R_{\rm HM,0}\left[1+\frac{t}{\alpha_{R}\,T_{\rm rlx}}\right]^{\beta_{R}}. \tag{7}\]
The values of the fitting parameters, which are summarised in Table 2, are independent of the initial cluster mass, and weakly depend on the initial value of the half-mass radius. This owes to the fact that the mass-segregation time scales with \(M_{\rm cl}^{1/2}R_{\rm HM}^{3/2}\), thus it is mostly affected by the choice of the half-mass radius.
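A minimal sketch evaluating Eqs. (6)-(7) with representative parameters from Table 2 (the functional forms above and the adopted parameter values reflect our reading of the fits, not an exact reproduction of the simulations):

```python
def cluster_mass(t_myr, m0_msun, t_rlx_myr, alpha_m=0.31, beta_m=0.30):
    """Bound cluster mass from Eq. (6)."""
    return m0_msun * (1.0 + alpha_m * t_myr / t_rlx_myr) ** (-beta_m)

def half_mass_radius(t_myr, r0_pc, t_rlx_myr, alpha_r=0.7, beta_r=0.5):
    """Half-mass radius from Eq. (7)."""
    return r0_pc * (1.0 + t_myr / (alpha_r * t_rlx_myr)) ** beta_r

# Example: an N = 120k, R_HM = 1.75 pc cluster (T_rlx ~ 99 Myr) after ~2.4 Gyr.
print(cluster_mass(2400.0, 0.7e5, 99.0), half_mass_radius(2400.0, 1.75, 99.0))
```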
Figure 3 shows the ratio between the final and initial values of \(R_{\rm HM}\) as a function of the simulated time, normalised to the initial relaxation time. The plot clearly highlights how the cluster expansion depends only on the dynamical age of the cluster, regardless of the initial cluster mass.
\begin{table}
\begin{tabular}{c c c c c c c|c c c|c c c|c c c|c c c} \hline \hline \(N_{*}\) & \(M_{\rm c}\) & \(R_{h}\) & \(f_{b}\) & \(N_{\rm sim}\) & \(T_{\rm rlx}\) & \(T_{\rm seg}\) & \(T_{\rm sim}\) & \(N_{\rm GW,in}\) & \(N_{\rm GW,out}\) & \(M_{\rm max}\) & \(M_{\rm max,fin}\) & \(N_{>30}\) & \(N_{>40}\) \\
10\({}^{3}\) & \(10^{5}\) M\({}_{\odot}\) & pc & & & Myr & Myr & & Myr & & & & & & M\({}_{\odot}\) & & M\({}_{\odot}\) & & \\ \hline
120 & 0.7 & 1.75 & 0.05 & 2 & 99 & 2.1 & 2379 & 2326 & 0 & 2 & 2 & 0 & 64 & 76 & 25 & 34 & 0 & 2 & 0 & 0 \\
300 & 1.8 & 1.75 & 0.05 & 2 & 142 & 2.7 & 1196 & 1422 & 0 & 2 & 2 & 2 & 69 & 77 & 40 & 40 & 13 & 13 & 5 & 1 \\
1000 & 5.9 & 1.75 & 0.05 & 2 & 233 & 3.4 & 207 & 194 & 1 & 1 & 4 & 4 & 81 & 146 & 52 & 70 & 149 & 169 & 72 & 85 \\
120 & 0.7 & 1.75 & 0.2 & 2 & 99 & 2.1 & 1710 & 1540 & 2 & 2 & 0 & 2 & 232 & 81 & 38 & 28 & 2 & 0 & 0 & 0 \\
300 & 1.7 & 1.75 & 0.2 & 2 & 142 & 2.7 & 519 & 793 & 1 & 0 & 7 & 5 & 92 & 77 & 65 & 47 & 26 & 26 & 8 & 14 \\
600 & 3.5 & 1.75 & 0.2 & 2 & 189 & 3.4 & 205 & 126 & 0 & 0 & 2 & 5 & 87 & 144 & 59 & 84 & 95 & 103 & 45 & 65 \\
120 & 0.7 & 0.80 & 0.2 & 2 & 30 & 0.7 & 1154 & 1201 & 4 & 3 & 4 & 2 & 120 & 132 & 21 & 27 & 0 & 0 & 0 & 0 \\
300 & 1.7 & 0.80 & 0.2 & 2 & 44 & 0.8 & 307 & 309 & 1 & 0 & 1 & 0 & 93 & 107 & 40 & 43 & 15 & 11 & 2 & 2 \\
120 & 0.7 & 0.47 & 0.2 & 2 & 14 & 0.3 & 1149 & 530 & 2 & 2 & 3 & 1 & 350 & 92 & 50 & 30 & 1 & 0 & 1 & 0 \\
300 & 1.7 & 0.47 & 0.2 & 1 & 30 & 0.4 & 148 & - & 4 & - & 3 & - & 245 & - & 48 & - & 22 & - & 9 & - \\ \hline \hline \end{tabular}
\end{table}
Table 1: Col. 1-4: initial number of stars, cluster mass, half-mass radius, and primordial binary fraction. Col. 5: number of independent realisations. Col. 6-7: initial relaxation and segregation time. Col. 8: simulated time. Col. 9-10: number of mergers occurring inside and outside the cluster. Col. 11: maximum BH mass during the simulation. Col. 12: maximum BH mass at the end of the simulation. Col. 13-14: number of BHs with a mass \(m_{\rm BH}>30\) M\({}_{\odot}\) or \(>40\) M\({}_{\odot}\) at the last simulation snapshot.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(R_{\rm HM}\) [pc] & \(\alpha_{M}\) & \(\beta_{M}\) & \(\alpha_{R}\) & \(\beta_{R}\) \\ \hline
1.75 & \(0.29-0.33\) & \(0.28-0.31\) & \(0.65-0.81\) & \(0.45-0.59\) \\
0.80 & \(0.18-0.19\) & \(0.34\) & \(0.7\) & \(0.57\) \\
0.47 & \(0.15\) & \(0.35-0.4\) & \(0.78-1.1\) & \(0.44-0.56\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Fitting parameters in Equations 6-7.
Figure 1: Initial density, calculated at the half-mass radius, as a function of number of stars for several grids of direct \(N\)-body (blue points) and Monte Carlo (red boxes) simulations. The Dragon-II cluster database is represented by the green star.
By the end of the simulations, our clusters have typically lost \(\sim 25-50\%\) of their initial mass and their radius has expanded by a factor of \(1.5-10\), thus implying a reduction of the density at the half-mass radius by up to four orders of magnitude and a reduction of the velocity dispersion by a factor of around \(1-1.5\). The drop in density and velocity dispersion crucially affects the rates at which dynamical interactions take place.
A thorough comparison among Dragon-II simulations and the models discussed in the past literature is made hard by the many different assumptions of previous works, like the use of equal-mass stars to represent the cluster, the different binary fraction, the properties of the primordial binary population, the lack of a dedicated treatment to deal with compact binaries, and the use of outdated prescriptions for the evolution of massive stars (\(m_{\rm ZAMS}>50\) M\({}_{\odot}\)).
In order to test the new features of the Nbody6++GPU code, we have carried out an extensive comparison of the evolution of star clusters with 110,000 stars in \(N\)-body and Monte Carlo simulations in our companion paper (Kamlah et al., 2022), where we have shown, among other things, that \(N\)-body models of the same clusters seem to evolve toward sparser configurations compared to Monte Carlo models with large tidal radii simulated with the MOCCA code. This difference is likely due to the different criteria used to identify escapers in the two methods, which can lead to an early removal of escaping stars in MOCCA simulations compared to Nbody6++GPU.
### Stellar and compact object binaries
Mass-segregation of the most massive stars enhances strong dynamical interactions, which can trigger the ejection of the tightest binaries, the ionisation of the loosest ones, and the formation and hardening of new binaries. In the Dragon-II clusters, the processes responsible for the formation and disruption of binaries counterbalance efficiently, determining a slow variation of the overall binary fraction. As shown in Figure 4, the binary fraction decreases by a small amount, down to \(f_{b,fin}\sim 0.16-0.18\) in models starting with \(f_{b}=0.2\) and to \(f_{b,fin}=0.04-0.05\) in models with \(f_{b}=0.05\). Interestingly, this variation in the binary fraction is similar, within the simulation time, to results obtained for lower-\(N\) cluster simulations (see e.g. Heggie et al., 2006). The decrease of the binary fraction is mostly due to the disruption of the softest binaries in the cluster and, for a small fraction (\(<5\%\)), to hard binaries that are ejected in strong dynamical interactions. These binaries have typical semi-major axes broadly distributed in the \(10^{-2}-5\times 10^{2}\) AU range. For the sake of comparison, Figure 5 shows the initial period-mass distribution and mass-ratio of the population of primordial binaries in our models.

Figure 3: Half-mass radius calculated at the end of each simulation, normalised to the initial value, as a function of the total simulated time normalised to the cluster initial relaxation time. The dashed line is a power-law fit of the data, while each point represents one simulation, with the colour map representing the initial cluster mass.

Figure 2: Time evolution of the mass (top panel) and half-mass radius (bottom) of Dragon-II clusters (black lines) compared to observed massive clusters in the Milky Way (MW, blue stars), the two Magellanic Clouds (green squares and red points), the Andromeda galaxy (M31, green squares), the Henize 2-10 starburst galaxy (He2-10, grey diamonds), and the M83 galaxy (light green triangles).
Figure 6 shows the distribution of the ratio between the semi-major axis of ejected binaries and the hard-binary separation, both measured at the moment of the ejection, and the ejection velocity distribution for two different simulations. The plot makes clear that the vast majority of ejected binaries are hard and that this population is dominated mostly by binaries with a mass \(m_{\rm bin}<2\) M\({}_{\odot}\). The velocities of the ejected binaries generally remain in the range of \(1-100\) km s\({}^{-1}\), too small compared to the circular velocity of the Galaxy to permit the identification of these escapers as former cluster members.
The upper panel of Figure 7 shows the variation of the fraction of binaries normalised to the total number of stars in a given mass bin and at a given time. Initially, around \(35-50\%\) of all stars with a mass above \(20\) M\({}_{\odot}\) are binary members, with the maximum percentage achieved for stars heavier than \(100\) M\({}_{\odot}\). However, the population of heavy objects is rapidly depleted (note that \(t/T_{\rm rlx}=0.22\) corresponds in this case to \(t=18.8\) Myr) owing mostly to stellar/binary evolution, which causes a sharp drop in their number. The maximum stellar mass keeps decreasing over time, whilst a small population of binaries with components in the \(5-100\) M\({}_{\odot}\) mass range develops, clearly owing to the formation of binaries with one or two BHs. The mass distribution of objects in binary systems, shown in the lower panel of Figure 7, highlights that the number of binaries with at least one component heavier than \(10\) M\({}_{\odot}\) is relatively small compared to the total number of objects in binaries. Assuming initially \(N=120,000\) stars and \(f_{b}=0.2\), we see that less than \(1,000\) binaries contain a component with a mass \(m_{*}>10\) M\({}_{\odot}\), most of them being former components of a primordial binary.
The progenitors of compact objects, which are the most massive stars and stellar binaries in the cluster, have already sunk into the cluster centre when compact objects form. Therefore, to dissect the properties of compact binaries in Dragon-II clusters, we focus on binaries forming within the cluster half-mass radius, calculated along the cluster evolution.
Figure 8 shows the number of binaries with a WD, NS, or BH as a function of time for all models.
The population of binaries containing at least one WD (dWDs), \(N_{\rm dWD}\), depends on the half-mass radius and binary fraction. At fixed half-mass radius, the number of binaries with a WD significantly decreases at decreasing \(f_{b}\), because most of these binaries are of a primordial origin. In fact, at fixed \(N\) stars and \(R_{\rm HM}\), the number of dWDs is \(4-5\) times larger in models with \(f_{b}=0.2\) compared to those with \(f_{b}=0.05\), thus comparable to the ratio between the initial amount of primordial binaries in one case or the other. At fixed value of \(f_{b}\), instead, the smaller the half-mass radius, the smaller is the number of dWDs. In general, by the end of the simulations we find \(N_{\rm dWD}\simeq 200-700\) dWDs per cluster. The number of binaries with a WD monotonically increases over the simulated time, highlighting the competition between WD depletion via dynamical encounters and the formation of new WDs, mostly via binary stellar evolution (see also Figure 4 in Kamlah et al. 2022a).

Figure 4: Binary fraction as a function of time normalised to the relaxation time, taking into account all binaries.

Figure 5: Initial values of total mass (x-axis) and period (y-axis) for all primordial binaries in one of the models with \(R_{\rm HM}=1.75\) pc, \(N=120\)k, and \(f_{b}=0.2\). The colour map marks the mean value of the mass ratio inside each pixel.
The evolution of the number of binaries with a NS (dNS) shows two clear peaks at \(20\) and \(\sim 100\) Myr. These peaks correspond to the formation of NSs from stars in the high-end (the first) and low-end (the second) of the NS progenitor mass range. The drop after each of the peaks is due to NS natal kicks, which cause the ejection of a substantial fraction of NSs from the parent cluster. The width of the peaks is related to the time needed for NSs to leave the cluster, i.e. when their distance from the cluster centre exceeds twice the tidal radius. After the second peak, the number of binaries with a NS decreases in all simulations, regardless of the initial conditions. We find that the largest value of \(N_{\rm dNS}\) is reached in the case of \(R_{\rm HM}=1.75\) pc, \(f_{b}=0.2\), and \(N=600\)k. At fixed value of \(R_{\rm HM}\) and \(N\) we find that a larger initial binary fraction leads to a more numerous population of binaries with a NS, around \(50\%\) more for models with \(f_{b}=0.2\). At fixed value of \(N\) and \(f_{b}\) the number of binaries with a NS increases at increasing values of \(R_{\rm HM}\) because in denser clusters it is more likely that massive stellar binaries either are ejected or merge before stellar evolution becomes dominant.
The population of binaries with a BH (dBH), similarly to those with a NS, is characterised by two peaks of formation, one at around 10 Myr, driven by stellar evolution, and another at later times driven by dynamics. The number of binaries with a BH, \(N_{\rm dBH}\), in the primary peak depends on the initial number of stars (the larger \(N_{0}\), the larger \(N_{\rm dBH}\)), whilst the number in the secondary peak depends on both the half-mass radius and binary fraction, although it is hard to discern the effects of different initial conditions in this case.

Figure 6: Upper panels: Distribution of the semi-major axis of ejected binaries normalised to the hard-binary separation measured at the moment of ejection for binaries with a mass \(m_{\rm bin}<2\) M\({}_{\odot}\) (blue steps) or heavier (orange steps). Lower panels: ejection velocities for the same classes of binaries. The plots refer to two simulations with \((R_{\rm HM},~{}N)=(1.75~{\rm pc},~120\)k) (left-hand panels) and \((0.8~{\rm pc},~300\)k) (right-hand panels).
### Ejection of single and double compact objects
Over the simulated time, all clusters lose around 20-70 single BHs, depending on the cluster initial conditions, and 10-70 binaries containing either one or two compact objects. Figure 9 shows the mass distribution of ejected single BHs, which is characterised by two peaks, one at \(m_{\rm BH}\sim 3\) M\({}_{\odot}\) and another at \(m_{\rm BH}\sim 25\) M\({}_{\odot}\), and a tail that extends up to \(m_{\rm BH}\sim 10^{2}\) M\({}_{\odot}\). The first peak is due to the natal kick of NSs and low-mass BHs, with masses in the range \(m_{\rm BH}=2.5-6\) M\({}_{\odot}\), and develops in the first 10-50 Myr, whilst the secondary peak is due to dynamical interactions2.
Footnote 2: In our simulations the minimum mass allowed for BHs is \(m_{\rm BH,min}=2.5\) M\({}_{\odot}\)
The population of ejected binaries hardly depends on the cluster initial conditions. Therefore, for the sake of simplicity, we gather the ejected binaries from all simulations to have a statistically larger sample. In the following, we distinguish between binaries containing two compact objects, labelled as DCOB, and those containing one compact object and a star, labelled as SCOB. Figure 10 shows the component mass, semi-major axis, and eccentricity distribution of the ejected binaries in all the Dragon-II clusters.
Around 94% of the ejected binaries are primordial.
A clear difference between double and single compact object binaries arises from this figure. In total, we find 229 ejected DCOBs of both dynamical (144) and primordial (85) origin. The DCOBs exhibit a similar mass distribution for the primary and the companion, characterised by a plateau in the \(m_{1,2}=2-20\) M\({}_{\odot}\) range and a clear peak at \(m_{1}\sim 45\) M\({}_{\odot}\) for the primary and \(m_{2}\sim 27\) M\({}_{\odot}\) for the companion. The resulting mass ratio distribution is quite peculiar, with a clear dominance of DCOB with a mass ratio \(q>0.6\), owing to the tendency of dynamical interactions to pair objects of comparable mass. The eccentricity distribution is dominated by a peak around 0, caused by a sub-population of primordial binaries that underwent the common envelope phase (64.7%), and a nearly flat distribution in the range \(e=0.5-1\).
Additionally, we find 375 ejected SCOBs, the vast majority of which come from primordial binaries (353), with a small contribution from dynamically assembled systems (22). The mass distribution of the compact objects in SCOBs peaks at a value, \(m_{\rm CO}\sim 2-4\) M\({}_{\odot}\), in the range of NSs and small BHs, definitely smaller compared to the mass distribution of the stellar companion, which peaks at 10 M\({}_{\odot}\), but with a secondary peak at \(\sim 0.3-0.5\) M\({}_{\odot}\). The binary mass-ratio distribution of SCOBs clearly differs from DCOBs, showing a peak at \(q\sim 0.2\) and a decrease toward larger values. The compact object in the SCOBs is mostly a low-mass BH (200), typically with a mass \(m_{\rm BH}<10\) M\({}_{\odot}\) (173), or a NS (173), and in only two cases an ONeWD (2). The stellar companion is a main-sequence star in the vast majority of the cases (353), followed by core He burning stars (20) (all with a primary weighing \(<5\) M\({}_{\odot}\)), and 2 naked He main-sequence (MS) stars. Stellar companions in the MS phase are relatively massive: 18 of them have a mass \(m_{\rm MS}<1\) M\({}_{\odot}\), 245 have a mass in the range \(1<m_{\rm MS}/\) M\({}_{\odot}<10\), 74 in the range \(10<m_{\rm MS}/\) M\({}_{\odot}<20\), and just one has a mass \(m_{\rm MS}=29\) M\({}_{\odot}\). All stars in the CHeB phase have a mass in the \(m_{\rm CHeB}=5-16\) M\({}_{\odot}\) range and are paired with an object lighter than \(m_{\rm CO}<5\) M\({}_{\odot}\); all of them come from primordial binaries.
Focusing on DCOBs, we find a few peculiar and interesting systems. Among all ejected BBHs only 5 merge within a Hubble time, because most BBHs were ejected when the density and velocity dispersion of the cluster had already dropped due to its expansion and mass loss.
In two cases, the ejected BBH contains an IMBH with mass either \(M_{\rm IMBH}=120\) M\({}_{\odot}\) or 350 M\({}_{\odot}\). In five cases, instead, we find an ejected BBH with a merging time smaller than a Hubble time. Table 3 summarises the number of ejected single and binary BHs, and of BBHs and BH-IMBH binaries that merge within a Hubble time.

Figure 7: Top: fraction of objects in binaries for different mass bins and different times. Bottom: number of binaries with mass in a given bin at different times. We show a simulation with \(f_{b}=0.2\), \(N=120\)k stars, and \(R_{\rm HM}=1.75\) pc.
### Black hole - main sequence star binaries
The sample of known BH-MS star systems has significantly grown over the last few years (e.g. Giesers et al., 2018, 2019; Saracino et al., 2022; El-Badry et al., 2022; Shenar et al., 2022; Mahy et al., 2022). Some of the BHs observed in a BH-MS binary appear to reside in star clusters both in the Milky Way (Giesers et al., 2018, 2019) and the Large Magellanic Cloud (Saracino et al., 2022; Shenar et al., 2022), whilst others appear to be in the Galactic disc (El-Badry et al., 2022). It is an open question whether these BH-MS systems come from primordial or dynamically assembled binaries. In the case of a dynamical origin it is also unknown whether the stellar companion captured the BH or its progenitor.
In this regard, the Dragon-II models offer us a novel way to look for BH-MS binaries in simulated clusters and identify possible properties of BH-MS binaries formed through different channels. Since the Dragon-II cluster database is relatively small and limited to a single metallicity, we cannot perform a comprehensive comparison between observed and simulated BH-MS binaries. Nonetheless, it is illustrative to qualitatively compare the properties of BH-MS binaries formed in Dragon-II models with the observed ones.

Figure 8: Number of binaries with at least one WD (upper panel), a NS (central panel), or a BH (lower panel) as a function of time for all Dragon-II clusters. Different colours correspond to different values of the initial half-mass radius. There are two lines for each colour, corresponding to two realisations of the same cluster.
For example, Dragon-II models permit us to dissect the population of BH-MS binaries into those forming inside the cluster, some of which have a lifetime much shorter than the cluster life and are disrupted via interactions with other cluster members, and those that have been ejected from the cluster. Figure 11 shows the component masses, period, and eccentricity of _in-cluster_ and _ejected_ BH-MS binaries. We assume that in-cluster binaries are those forming at any time over the simulated time, therefore the same binary or one or both components can appear multiple times in the plot. We see that in-cluster binaries are markedly different from ejected binaries. The latter can be divided into two sub-classes. The first sub-class exhibits a short period (\(P<0.1\) day) and an almost null eccentricity, \(e\sim 0\). Binaries in this sub-class are characterised by a BH with mass \(m_{\rm BH}<10\) M\({}_{\odot}\) and a MS star with a mass in the \(2-10\) M\({}_{\odot}\) range. They originate from binary evolution, and, in particular, underwent a common envelope phase that shrank the semi-major axis and damped the eccentricity of the binary. The ejection engine of these binaries is a SN explosion. The second sub-class, instead, comprises heavier BHs (\(m_{\rm BH}=10-100\) M\({}_{\odot}\)) and lighter MS stars (\(m_{\rm MS}<1\) M\({}_{\odot}\)), and is characterised by eccentricities in the range \(e=0.2-1\), indicating that these binaries come from dynamical interactions sufficiently strong to eject the binary from the cluster.
In-cluster BH-MS binaries can contain BHs and MS stars significantly heavier than the ejected binaries and are characterised by longer periods (\(P>10\) d) compared to ejected binaries. Most in-cluster binaries with a period \(P\lesssim 10^{3}\) d have zero eccentricity, whilst practically all those with a longer period have eccentricity \(>0.1\) and up to extreme values.
From Figure 11, it is evident that in-cluster binaries exhibit a peculiar distribution in the \(m_{\rm BH}-m_{\rm MS}\) plane, which suggests the existence of two sub-classes. We find that the first class is characterised by a mass ratio \(m_{\rm MS}/m_{\rm BH}=k\,(m_{\rm BH}/1\) M\({}_{\odot})^{-1/2}\), with \(k=2-10\). Most binaries falling in this class have a period shorter than 100 d, whilst the second class involves binaries with \(m_{\rm BH}>10\) M\({}_{\odot}\) and \(m_{\rm MS}<5\) M\({}_{\odot}\).
An even more clear distinction is shown in Figure 12, where the MS-to-BH mass ratio is shown against the orbital period and eccentricity. This plot highlights four interesting peculiarities of in-cluster BH-MS binaries:
* the vast majority of binaries with \(e<0.1\) are primordial. Most of them are characterised by \(m_{\rm MS}/m_{\rm BH}>0.3\), heavy MS stars \(m_{\rm MS}>1\) M\({}_{\odot}\), and periods below \(P<100\) d;
* primordial binaries with \(e>0.1\) have larger periods (\(P=10^{2}-10^{6}\) d), and similar mass ratio and MS mass as circular primordial binaries;
* the vast majority of dynamically formed binaries have \(e>0.1\) and periods in the range (\(P=10^{2}-10^{9}\) d). They are generally characterised by a mass ratio \(m_{\rm MS}/m_{\rm BH}<0.3\), MS stars with a mass \(m_{\rm MS}<10\) M\({}_{\odot}\) and a BH with mass \(m_{\rm BH}=(10-100)\) M\({}_{\odot}\);
* only a handful of dynamically formed binaries have \(e<0.1\), and are all characterised by a period \(P=1-10\) d.
As shown in Figure 12, we find that the longer the orbital period, the larger the binary eccentricity, and almost all binaries with eccentricity \(e>0.1\) have a period \(P>100\) d, with a handful of exceptions. Most binaries with a period \(P<100\) d, instead, are primordial and involve a MS star heavier than \(m_{\rm MS}>1\) M\({}_{\odot}\).
The difference between primordial and dynamical BH-MS binaries is further highlighted in Figure 13, which shows the component masses of these two classes of binaries. From the plot, it is apparent that dynamically assembled binaries dominate the region of the plane with \(m_{\rm BH}>10\) M\({}_{\odot}\) and \(m_{\rm MS}<10\) M\({}_{\odot}\).
\begin{table}
\begin{tabular}{c c c c c|c c c} \hline \hline \(N\) & \(R_{\rm HM}\) & \(f_{b}\) & ID sim & \multicolumn{4}{c}{\(N_{\rm ejec}\)} \\ \(10^{3}\) & pc & & & ejected & BBH & GW & IMBH \\ \hline
[MISSING_PAGE_POST]
\hline \end{tabular}
\end{table}
Table 3: Col. 1-3: initial number of stars, half-mass radius, and binary fraction. Col. 4: simulation ID. Col. 5: number of ejected BHs, ejected BBHs, ejected BBHs that merge within 14 Gyr, and ejected BBHs that merger within 14 Gyr and involve one IMBH.
Figure 9: Mass distribution of ejected BHs in all Dragon-II simulations.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline name & \(\mathbf{m}_{\rm BH}\) & \(\mathbf{m}_{\rm MS}\) & \(\mathbf{P}\) & \(\mathbf{e}\) & **loc** & **SYSTEM** & **Z** & **ref** \\ & M\({}_{\odot}\) & M\({}_{\odot}\) & days & & & & & \\ \hline
**BH1** & \(9.62\pm 0.18\) & \(0.93\pm 0.05\) & \(185.59\pm 0.05\) & \(0.451\pm 0.005\) & MW & disc-field & solar & El-Badry et al. (2022) \\
**VFTS 243** & \(10.1\pm 2\) & \(25.0\pm 2.3\) & \(10.4031\pm 0.0004\) & \(0.017\pm 0.012\) & LMC & OC & sub-solar & Shenar et al. (2022) \\
**HD130298** & \(7.7\pm 1.5\) & \(24.2\pm 3.8\) & \(14.62959\pm 0.000854\) & \(0.457\pm 0.007\) & MW & runaway & solar & Mahy et al. (2022) \\
**ACS21859\({}^{*}\)** & \(7.68\pm 0.50\) & \(0.61\pm 0.05\) & \(2422\pm 0.0001\) & \(0.07\pm 0.04\) & NGC3201 & GC & sub-solar & Giesers et al. (2019) \\
**ACS21859\({}^{*}\)** & \(4.53\pm 0.21\) & \(0.81\pm 0.05\) & \(167.01\pm 0.09\) & \(0.61\pm 0.02\) & NGC3201 & GC & sub-solar & Giesers et al. (2019) \\
**ACS21859\({}^{*}\)** & \(4.40\pm 2.82\) & \(0.64\pm 0.05\) & \(764\pm 11\) & \(0.28\pm 0.16\) & NGC3201 & GC & sub-solar & Giesers et al. (2019) \\ \hline \end{tabular}
\end{table}
Table 4: Orbital properties of the observed BH–MS binaries. For the sources labelled with an \(*\) only lower limits to the BH mass are available.
Figure 10: Upper panels: mass (left) and mass ratio (right) distribution of ejected binaries containing one or two compact objects, i.e. BHs, NSs, or WDs. The mass distribution refers to the primary (black straight steps) and companion (black dashed steps) in the case of double compact object binaries, and to the compact object (red straight steps) and the star (red dashed steps) in the case of binaries with one compact object. Lower panels: semi-major axis (left) and eccentricity (right) distribution of ejected binaries containing two (black straight steps) or one (red dashed steps) compact objects.
The observed BH-MS binaries have orbital properties quite different from our ejected binaries, especially if we consider the observed period and eccentricity. However, only the quiescent BH candidates in NGC3201 are still associated with a star cluster, whilst the origin of the other binaries is unknown. Two of the six observed binaries (Shenar et al., 2022; Mahy et al., 2022) have component masses compatible with our primordial binaries, one of them (El-Badry et al., 2022) falls in a range where only dynamically assembled binaries are present, and the three sources observed in the Galactic globular cluster NGC3201 have component masses compatible with both in-cluster and ejected binaries.

Figure 11: Masses, eccentricity, and period of in-cluster (grey dots) and ejected (blue diamond) BH–MS binaries. The red stars represent observed binaries taken from El-Badry et al. (2022), Shenar et al. (2022) and Mahy et al. (2022). The light grey stars represent binaries observed in the NGC3201 globular cluster (Giesers et al. 2019).

Figure 12: Orbital period as a function of the ratio between the MS and BH masses for all BH–MS binaries formed inside the clusters. The colour coding marks the binary eccentricity.

Figure 13: Masses of the MS star and BH in primordial (grey squares) and dynamical (blue triangles) BH–MS binaries. The red and green stars represent observed binaries as in Figure 11. The plot refers to both in-cluster and ejected binaries.
In our models, the vast majority of ejected binaries have a primordial origin and their small period (\(P<0.01\) d) owes to mass transfer episodes. The few ejected binaries formed dynamically are characterised by a period \(P<1\) d, still much shorter than observed values. Wider, and more numerous, ejected binaries could form in substantially looser or lighter star clusters. On the one hand, decreasing the cluster mass or density would enlarge the hard-binary separation and possibly increase the semi-major axis of ejected binaries (Morscher et al., 2015). On the other hand, a smaller cluster mass would correspond to a lower escape velocity and thus it is more likely for binaries to escape the parent cluster.
In principle, MS-MS binaries ejected in the earliest phase of the cluster life could further contribute to the population of BH-MS binaries, but these binaries are removed from our simulations before they can further evolve. Nonetheless, we find that only two ejected MS-MS binaries have at least one component with mass above the threshold for BH formation, i.e. \(\sim 18\) M\({}_{\odot}\), thus ensuring that ejected MS-MS binaries do not contribute to the population of ejected BH-MS binaries.
Among all observed data, the binaries observed in NGC3201 are probably the ones more suited for a comparison with our models, given the metallicity and mass of NGC3201.
From the central and bottom panel of Figure 11, it is apparent that our in-cluster binaries have periods, eccentricities, and BH masses compatible with those observed in NGC3201. The fact that our models do not match well the companion mass may be due to NGC3201's age. In fact, this cluster is relatively old (\(\sim 11.5\pm 0.4\) Gyr VandenBerg et al., 2013), thus its population of binaries has likely been heavily processed over time, and most of its stellar population with super-solar mass already left the MS. Figure 13 favours this interpretation. Note that both the mass of BHs and MS stars in dynamically formed BH-MS binaries tend to be smaller compared to primordial binaries. As the BH-burning process proceeds, the average BH mass will keep decreasing, while stellar evolution processes will deplete the high-end tail of the MS mass distribution, possibly favouring the formation of BH-MS binaries in the region populated by NGC3201 sources.
### Black hole subsystem
In all Dragon-II clusters, the segregation time is generally shorter than the stellar evolution timescale of massive stars, therefore massive stars sink to the cluster centre before evolving to BHs. This implies a possible enhancement of the probability for stellar mergers and star-BH collisions.
Given the short segregation times, BHs dominate the dynamics in the cluster core already after a time \(t=20-40\) Myr, making up \(50-80\%\) of the mass in the cluster core and around \(10\%\) of the mass within the half-mass radius, as shown in Figure 14. Given the amount of mass in BHs enclosed within the core radius, this length scale can be regarded as the BH sub-system scale radius (see e.g. Arca Sedda et al., 2018).
A similar trend of the BH mass fraction inside \(R_{\rm HM}\) has been found also in recent simulations performed with the Monte Carlo code MOCCA (Giersz et al., 2019; Wang et al., 2022) and the \(N\)-body code PeTar(Wang et al., 2022), which both exploit similar stellar evolution recipes.
Both the primordial binary evolution and the onset of three-body and multiple gravitational scattering favour the formation of binaries containing at least one BH. Figure 15 shows the _BH formation efficiency_, defined as the ratio between the number of BHs inside the cluster core radius and the initial cluster mass, i.e. \(\epsilon_{\rm BH,BBH}=N_{\rm BH,BBH}(<R_{c})/M_{\rm cl,0}\). We find that, regardless of the initial cluster mass, half-mass radius, or binary fraction, all models are characterised by \(\epsilon_{\rm BH}\simeq(0.8-2)\times 10^{-3}\) M\({}_{\odot}^{-1}\) for single BHs and \(\epsilon_{\rm BBH}\simeq(0.8-2)\times 10^{-4}\) M\({}_{\odot}^{-1}\) for binary BHs. As shown in the right panel of Figure 15, the BH formation efficiency slightly increases with the simulation time, although it is unclear whether this quantity saturates already at \(t_{\rm sim}/T_{\rm rlx}\gtrsim 10\). Note that our definition of \(\epsilon_{\rm BBH}\) implies that a cluster with initial mass \(7\times 10^{4}(6\times 10^{5})\) M\({}_{\odot}\) contains around 7(60) BHs in a binary system after 10 relaxation times.
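A trivial helper implementing this definition (the function name is ours):

```python
def bh_formation_efficiency(n_bh_in_core, m_cluster_initial_msun):
    """BH (or BBH) formation efficiency: number of BHs within the core radius
    per unit initial cluster mass, in Msun^-1."""
    return n_bh_in_core / m_cluster_initial_msun

# Example: ~60 binary BHs in the core of a 6e5 Msun cluster give eps_BBH ~ 1e-4 Msun^-1.
print(bh_formation_efficiency(60, 6.0e5))
```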
It might seem trivial that \(\epsilon\) is independent of the cluster initial conditions, as it suggests that it is just a consequence of the adopted mass function. However, the BH-burning mechanism (e.g. Banerjee et al., 2010; Downing et al., 2011, 2010; Breen and Heggie, 2013; Arca Sedda et al., 2018; Kremer et al., 2020), by which the most massive BHs pair in binaries that first eject the lighter BHs from the cluster and then get themselves ejected via super-elastic binary-single and binary-binary scatterings, could significantly affect the population of BHs. This does not seem the case in the Dragon-II models. The small spread observed in the BH binary formation efficiency is related to the initial cluster half-mass radius and binary fraction, whilst the weak increase of \(\epsilon_{\rm BBH}\) over time is the result of dynamically formed binaries.
Figure 14: Fraction of mass in BHs within the cluster half-mass (straight lines) and core radius (dashed lines). Different colours correspond to different simulations.

Figures 16-18 show the cluster and BH subsystem density profiles at different times for three cluster models with \(N=(0.3-1)\times 10^{6}\) and \(R_{\rm HM}=0.47-1.75\) pc. The central density of BH subsystems attains values around \(\rho_{\rm BHS}\simeq(10^{4}-10^{5})\) \({\rm M}_{\odot}\) pc\({}^{-3}\), i.e. values 10-100 times larger than the density of stars, whilst their scale radius is roughly \(R_{\rm BHS}\simeq(0.5-1)\) pc in all models, corresponding to the radius at which the density contributions from the BHs and stars equal each other.
Looking at the different panels it is possible to identify the signatures of the whole BH burning process as described in Breen and Heggie (2013). Firstly, BHs start forming and interacting, driving the formation of the BH subsystem and its subsequent expansion over a timescale \(t\sim T_{\rm rlx}\). Secondly, dynamical BH interactions cause the steepening of the BH density and the contraction of its structure, driven by BH ejections over a time \(1<t/T_{\rm rlx}<5\). Thirdly, the BH subsystem rebounds and expands again, reaching a seemingly stable structure, at least within the simulated time.
Figure 19 shows the BH mass distribution at different times for a model with \(N=1.2\times 10^{5}\) stars, \(R_{\rm HM}=1.75\) pc, and \(f_{b}=0.2\). This plot shows all BHs inside the cluster at a given time, regardless whether they are components of a binary system or single BHs. For the sake of comparison, we added in the plots the overall BH mass distribution inferred by the LVC (The LIGO Scientific Collaboration et al., 2021). The plot highlights an initial phase in which the first BHs start to form, some of them falling in the upper-mass gap, but as the evolution proceeds new, lighter, BHs form while the most massive BHs are ejected via binary-single and binary-binary scatterings, as expected in the BH-burning scenario.
Interestingly, our simulations suggest that the evolution of the cluster can naturally lead to the peak around 10 \({\rm M}_{\odot}\) inferred from GW detections, mostly owing to stellar dynamics that crucially sculpts the BH population. Nonetheless, any comparison between our data, which show all BHs in the cluster, and LVC observations, which are representative of BH mergers, must be taken with a grain of salt. There are other potential explanations for the 10 \({\rm M}_{\odot}\) peak, like isolated binary stellar evolution (e.g. van Son et al., 2022), the impact of primordial binary evolution in star clusters (e.g. Arca Sedda et al., 2021), or metal-rich star clusters (e.g. Arca Sedda et al., 2020; Rastello et al., 2021; Chattopadhyay et al., 2022). Hopefully, the new data acquired during the forthcoming fourth LVC observing run could help pin down the impact of different processes on the BH mass distribution.
We find that almost all BHs heavier than \(30\) M\({}_{\odot}\) are ejected from the simulated clusters that reach more than \(\sim 15\) relaxation times.
To further highlight the BH burning process, we reconstruct the time evolution of the average BH mass, \(\langle m_{\rm BH}\rangle\), for all BHs enclosed within the half-mass radius. As shown in Figure 20, \(\langle m_{\rm BH}\rangle\) follows the same trend regardless of the cluster initial conditions, namely: i) the most massive BHs form first and the average mass settles close to the peak allowed by the adopted stellar evolution model (\(35-40\) \({\rm M}_{\odot}\)); ii) more numerous, lighter BHs start to form, causing a rapid decrease of the average mass down to \(15-20\) \({\rm M}_{\odot}\); iii) dynamical processes kick in and trigger BH ejection, leading to a secular decrease of the BH average mass down to \(\sim 8-10\) \({\rm M}_{\odot}\) (see also Giersz et al., 2019).
Figure 15: Left-hand panel: number of BHs (straight lines) and BBHs (dashed lines) inside the core radius normalised to the initial cluster mass (BH formation efficiency) as a function of time normalised to the initial cluster relaxation time. The large jumps follow the jumps observed in the core calculation. The colour-map identifies the cluster mass. Right-hand panel: BH formation efficiency \(\epsilon\) for single (points) and binary BHs (diamonds) calculated at the end of each simulation as a function of the ratio between the simulated time and the cluster relaxation time.

The similar \(\langle m_{\rm BH}\rangle\) time evolution observed in different models supports the idea that the BH burning process is substantially due to dynamics. This is further highlighted in Figure 21, which shows the BH average mass as a function of the time normalised to the cluster relaxation time. We find that at a time \(t>T_{\rm rlx}\) the average BH mass is well described by a simple relation:
\[\langle m_{\rm BH}(t)\rangle\simeq m_{\rm BH,rlx}-4\,{\rm Log}(t/T_{\rm rlx}), \tag{8}\]
where \(m_{\rm BH,\rm rlx}=17.4\pm 0.1\) M\({}_{\odot}\).
Although our models are not meant to be representative of any observed cluster, and although there are certainly many pathways leading to the same final cluster evolutionary stage, our results suggest that old Galactic globular clusters and massive clusters in the Small Magellanic Cloud could be harbouring a population of relatively light BHs (see Figure 2). This would explain why BHs observed in binary systems are generally characterised by masses \(m_{\rm BH}<20\) M\({}_{\odot}\), lighter than the typical value inferred for the population of merging BHs, i.e. \(m_{\rm BH,GW}\simeq 30\) M\({}_{\odot}\).
### Using scaling relations as input for semi-analytic codes
It is well known that \(N\)-body simulations of star clusters require generous computational resources to enable an exploration of the phase space and to reach an appreciably long simulated time. The Dragon-II simulations make no exception, as they required in total approximately 2.2 million core hours. To overcome this problem, in the last few years many works have proposed semi-analytic tools specifically devoted to studying the evolution of compact objects, and especially BH binary mergers (e.g. Arca Sedda et al., 2020; Mapelli et al., 2021; Fragione and Kocsis, 2018; Antonini and Gieles, 2020; Antonini et al., 2019; Arca Sedda et al., 2021; Kritos et al., 2022; Mapelli et al., 2022).
Figure 16: Density profile of the cluster (red straight line), stars (red dashed line), and BHs (black straight line) at different times for models with \(N=10^{6}\) and \(R_{\rm HM}=1.75\) pc.

Figure 17: Same as in Figure 16, but for one simulation with \(N=3\times 10^{5}\) and \(R_{\rm HM}=1.75\) pc.

Figure 18: Same as in Figure 16, but for one simulation with \(N=3\times 10^{5}\) and \(R_{\rm HM}=0.47\) pc.

One ingredient missing in some of these fast and accurate codes is a treatment of the co-evolution of the star cluster and the BH population, which may significantly affect the formation of merging compact objects (see e.g. Antonini & Gieles, 2020; Arca Sedda et al., 2021).
The Dragon-II models could provide important fitting formulas to implement the evolution of under-filling cluster models in such semi-analytic tools.
The overall evolution of Dragon-II star clusters can be described by simple expressions (Equations 6 and 7). If the cluster initial mass and half-mass radius are known, the aforementioned relations enable an accurate description of its evolution, at least in the case of under-filling star cluster models.
Moreover, our models also offer insights on the internal evolution of the cluster, providing, for example, details about the mass distribution of ejected single and double compact objects, and the properties of the central black hole subsystem. These ingredients can be easily implemented in semi-analytic tools to obtain a fast and accurate description of compact object dynamics in clusters too massive to be followed with direct \(N\)-body simulations.
A simple implementation of the cluster evolution has been already developed by Arca Sedda et al. (2021) in their B-POP code, showing that the inclusion of cluster mass loss and expansion causes a critical decrease of the probability of high-generation mergers in dense and massive star clusters (see also Antonini & Gieles, 2020).
## 4 Summary and conclusions
In this work, we have presented the first results from the Dragon-II star cluster simulations: a suite of 19 direct \(N\)-body simulations, performed with the Nbody6++GPU code, modelling the evolution of star clusters with up to 1 million stars and up to 33% of stars initially in a binary, over a timescale of \(\sim 0.5-2\) Gyr. These simulations contain up-to-date stellar evolution models, and for the first time a series of recipes to treat relativistic binaries in terms of merger remnant mass, spin, and post-merger recoil. Our models represent clusters initially under-filling their Roche lobe, and therefore their evolution can be considered quasi-isolated. The Dragon-II models considerably expand the portion of parameter space covered with full \(N\)-body simulations, opening the possibility to compare with large-\(N\) Monte Carlo models. Clearly, there is a vast number of parameters whose impact on the simulation results remains unclear. For example, adopting a sufficiently large value of the metallicity would imply the impossibility of forming IMBHs from stellar collapse. However, we expect that our main conclusions about the properties of the BH population should not be severely affected by cluster metallicity, as they appear to be driven mostly by dynamics.

Figure 19: Mass distribution of BHs in a cluster model with \(N=120\)k stars, half-mass radius \(R_{\rm HM}=1.75\) pc, and initial binary fraction \(f_{b}=0.2\). The black straight line and shaded area correspond to the mass distribution of primary BHs in merging binaries inferred from observed BBH mergers during the first, second, and third observing runs of the LIGO-Virgo-KAGRA collaboration (The LIGO Scientific Collaboration et al., 2021).
We find that the amount of primordial binaries seems to have little effect on the overall evolution of the cluster and the evolution of the BH population; however, the adopted initial orbital properties could become important when comparing our data with observations, as in the case of BH-MS binaries. For example, a different assumption on the initial mass-ratio distribution could lead to primordial binaries with final BH-MS component masses more similar to the observed ones. However, discrepancies among observations and models could arise from a combination of different assumptions, making it hard to pinpoint the main source of uncertainty.
Finally, our simulations model initially underfilling clusters, meaning that the impact of the Galactic field is almost negligible compared to the clusters' internal dynamics. This choice enabled us to have a clean view of the impact of stellar interactions on the evolution of the whole cluster and its BH population, and incidentally led to star cluster models that resemble observed clusters in terms of mass and radius. Future simulations adopting filling or overfilling clusters may help to understand whether the evolution of BH subsystems is intrinsically linked to the overall evolution of the host cluster, for example in terms of mass loss and expansion.
Figure 20: Average BH mass for BHs within the cluster half-mass radius. Note that the spikes can be due to heavy BHs orbiting on the edge of the half-mass radius, or on eccentric trajectories that occasionally enter the half-mass radius.
The main outcomes of the Dragon-II models can be summarised as follows.
* mass-loss and expansion of Dragon-II clusters are mostly determined by internal dynamics and can be described by simple analytical expressions, with parameters that weakly depend on the initial conditions. The binary fraction varies mildly over the simulated time, within \(10-15\%\) of its initial value. Nonetheless, stellar evolution and dynamics cause a progressive drop in the fraction of stars in binary systems for primary masses \(m_{1}>2\) M\({}_{\odot}\) [Figures 2-7];
* over a Gyr timescale, Dragon-II clusters contain around 200-700 binaries with at least one WD, whilst the number of binaries with a NS or a BH generally remains below 1-10 and 5-40, respectively. In general, binaries with at least one compact object are more numerous in clusters with a larger initial binary fraction, suggesting that most of these binaries have a primordial origin. Moreover, the denser the cluster, the smaller the number of binaries, owing to energetic dynamical interactions that disrupt binaries more efficiently [Figure 8];
* ejected binaries with one (SCOB) or two (DCOB) compact objects have different properties. DCOBs exhibit masses following a nearly flat distribution around \(2-20\) M\({}_{\odot}\) and a peak at \(m_{\rm BH}=45\) M\({}_{\odot}\), a peculiar mass-ratio distribution that peaks around \(q\gtrsim 0.6\), and a flat eccentricity distribution in the range \(e=0.5-1\). SCOBs, most of which formed from primordial binaries, typically involve low-mass BHs (\(m_{\rm BH}=3-10\) M\({}_{\odot}\)) and fairly massive MS stars (\(m_{\rm ST}=1-10\) M\({}_{\odot}\)) [Figure 10];
* we find a substantial population of BH-MS binaries in Dragon-II models. Most BH-MS binaries forming inside the cluster have typical BH masses \(m_{\rm BH}>10\) M\({}_{\odot}\), a companion star with mass \(m_{\rm MS}=0.7-100\) M\({}_{\odot}\), orbital periods \(>10\) days, and span the entire eccentricity range. Ejected BH-MS binaries, instead, feature significantly smaller BH masses \(m_{\rm BH}<10\) M\({}_{\odot}\), shorter periods (\(<10\) days), and are mostly contributed by primordial binaries. We find that the properties of the modelled binaries are compatible with some features of observed BH-MS binaries, especially those observed in the globular cluster NGC3201 [Figures 11-13];
* dynamics in the BH subsystem critically affects the BH mass spectrum, owing to the BH-burning process. The peak of the mass distribution generally shifts from initial values \(m_{\rm BH,pk}=25\) M\({}_{\odot}\) down to \(m_{\rm BH,pk}=5-15\) M\({}_{\odot}\), and the average mass steadily decreases after one relaxation time, following an identical evolution regardless of cluster properties [Figures 19-21].
Our simulations suggest that dynamically old star clusters harbour in their centre a population of BHs whose number scales linearly with the cluster bound mass. The older the cluster, the smaller the peak of the BH mass spectrum and the average BH mass.
## Acknowledgements
The authors thank the referee for their insightful feedback, which helped us improve our analysis. The authors warmly thank Agostino Leveque for their help and assistance in using their implementation of the McLuster code, and Vincenzo Ripepi for useful discussions and comments. This work benefited from the support of the Volkswagen Foundation Trilateral Partnership through project No. 97778 "Dynamical Mechanisms of Accretion in Galactic Nuclei" and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 138713538 - SFB 881 "The Milky Way System" (in particular subproject A08), and by the COST Action CA16104 "GWverse".
The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. for funding this project by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS Booster at Julich Supercomputing Centre (JSC).
MAS acknowledges funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 101025436 (project GRACE-BH, PI: Manuel Arca Sedda).
AWHK is a fellow of the International Max Planck Research School for Astronomy and Cosmic Physics at the University of Heidelberg (IMPRS-HD).
The work of PB was supported by the Volkswagen Foundation under the special stipend No. 9B870. PB acknowledges the support within the grant No. AP14869395 of the Science
Figure 21: Average BH mass inside the half-mass radius as a function of the time normalised to the cluster initial relaxation time.
Committee of the Ministry of Science and Higher Education of Kazakhstan ("Triume model of Galactic center dynamical evolution on cosmological time scale"). The work of PB was supported under the special program of the NRF of Ukraine Leading and Young Scientists Research Support - "Astrophysical Relativistic Galactic Objects (ARGO): life cycle of active nucleus", No. 2020.02/0346.
RS thanks the Max Planck Institute for Astrophysics (Thorsten Naab) for hospitality during many visits.
MG was partially supported by the Polish National Science Center (NCN) through the grant UMO-2021/41/B/ST9/01191.
GI, MM, and SR acknowledge financial support from the European Research Council for the ERC Consolidator grant DEMOBLACK, under contract no. 770017.
## Data Availability
The data from the runs of these simulations and their initial models will be made available upon reasonable request by the corresponding author. The Nbody6++GPU code is publicly available3. The McLuster version used in this work will soon be available. A similar version is described in Leveque et al. (2022b).
Footnote 3: [https://github.com/nbody6PPgpu/Nbody6PPGPU-beijing](https://github.com/nbody6PPgpu/Nbody6PPGPU-beijing)
|
2303.07623 | Uncertainty-weighted Multi-tasking for $T_{1ρ}$ and T$_2$ Mapping in
the Liver with Self-supervised Learning | Multi-parametric mapping of MRI relaxations in liver has the potential of
revealing pathological information of the liver. A self-supervised learning
based multi-parametric mapping method is proposed to map $T_{1\rho}$ and T$_2$
simultaneously, by utilising the relaxation constraint in the learning process.
Data noise of different mapping tasks is utilised to make the model
uncertainty-aware, which adaptively weights different mapping tasks during
learning. The method was examined on a dataset of 51 patients with
non-alcoholic fatty liver disease. Results showed that the proposed method can
produce comparable parametric maps to the traditional multi-contrast pixel-wise
fitting method, with a reduced number of images and less computation time. The
uncertainty weighting also improves the model performance. It has the potential
of accelerating MRI quantitative imaging. | Chaoxing Huang, Yurui Qian, Jian Hou, Baiyan Jiang, Queenie Chan, Vincent WS Wong, Winnie CW Chu, Weitian Chen | 2023-03-14T04:24:37Z | http://arxiv.org/abs/2303.07623v1 | Uncertainty-weighted Multi-tasking for \(T_{1\rho}\) and \(T_{2}\) Mapping in the Liver with Self-supervised Learning
###### Abstract
Multi-parametric mapping of MRI relaxations in liver has the potential of revealing pathological information of the liver. A self-supervised learning based multi-parametric mapping method is proposed to map \(T_{1\rho}\) and \(T_{2}\) simultaneously, by utilising the relaxation constraint in the learning process. Data noise of different mapping tasks is utilised to make the model uncertainty-aware, which adaptively weights different mapping tasks during learning. The method was examined on a dataset of 51 patients with non-alcoholic fatty liver disease. Results showed that the proposed method can produce comparable parametric maps to the traditional multi-contrast pixel-wise fitting method, with a reduced number of images and less computation time. The uncertainty weighting also improves the model performance. It has the potential of accelerating MRI quantitative imaging.
_Clinical relevance--_ This study establishes a potential way for accelerating multi-parametric mapping in quantitative magnetic resonance imaging and facilitating its clinical applications.
## I Introduction
\(T_{1\rho}\) and \(T_{2}\) are two important biomarkers in quantitative MRI (qMRI) for liver pathological studies[1, 2]. In multi-tasking (multi-parametric mapping) scenarios, the simultaneous mapping of \(T_{1\rho}\) and \(T_{2}\) acquires multiple \(T_{1\rho}\)-weighted images and \(T_{2}\)-weighted images within a single breath-hold and the parametric maps are fitted separately[3, 4, 5]. It is desirable to quantify different parametric maps at the same time with a reduced number of MR contrasts since it can reduce the scan time and potentially improve the quantification accuracy.
Deep learning has been used as an advanced mapping technique in quantitative MRI to map a reduced number of contrast images or undersampled k-space data to the parametric maps[6, 7]. While most of the previous works focus on single parametric mapping from only one kind of MR contrast, learning-based multi-parametric mapping has gained interest recently. Qiu et al.[8] proposed a fully supervised deep learning framework to infer \(T_{1}\) and \(T_{2}\) maps of the brain simultaneously from \(T_{1}\) and \(T_{2}\) contrasts. Similarly, Saez et al.[9] trained a network from synthetic data in a supervised way to map the \(T_{1}\) and \(T_{2}\) parametric maps of the brain. Li et al.[10] used a supervised learning method with a relaxation constraint to map \(T_{1\rho}\) and \(T_{2}\) of the knee from a reduced number of undersampled contrasts. All these methods are supervised, which means they rely on high-quality labelled data. Previous work on learning-based liver parametric mapping shows that supervised learning does not provide satisfactory results as the labels outside the liver parenchyma are noisy[11]. It is not uncommon for the scan protocol to sacrifice the data quality outside the parenchyma to ensure a reliable relaxation quantification in the liver. On the other hand, those learning-based multiparametric mapping methods simply treat each mapping task equally while ignoring how different mapping tasks contribute to the whole learning process of the model in different ways. This could be problematic as treating different tasks equally in multi-task learning can sometimes degrade the performance of a single task compared to its single-task learning counterpart[12]. The intensity scales of different MR contrasts and the data noise of different mapping tasks may vary, making the difficulty of learning each parametric mapping different. This could create a bias between mapping tasks during learning. It is also pointed out by Wang et al.[13] that taking the noise and uncertainty of the MRI data into consideration is an open challenge in AI applications in multi-parametric MRI. How to better integrate the learning of different mapping tasks in qMRI multi-tasking by utilising the data noise remains to be explored.
To tackle the aforementioned problems, we propose a self-supervised multi-parametric mapping method from a reduced number of MR contrasts, which alleviates the need for ground-truth data during training. We also leverage the concept of uncertainty loss weighting from multi-task learning in our learning algorithm, which utilises the data noise to exploit suitable contributions of the different mapping tasks during learning.
## II Materials and Method
### _Data Acquisition and Dataset_
Our in vivo studies were conducted with the approval of the institute. The scans were conducted on a 3.0 T MRI scanner (Philips Achieva, Philips Healthcare, Best, Netherlands). The RF transmitter was a body coil and a 32-channel cardiac coil was the receiver. Our pulse sequence can acquire \(T_{1\rho}\) and \(T_{2}\) weighted images for \(T_{1\rho}\) and \(T_{2}\) mapping within a single breath-hold[3]. A pencil-beam volume shimming box was placed on the right lobe of the liver to reduce the \(B_{0}\) field inhomogeneity. The \(B_{1}\) field inhomogeneity was reduced using dual transmit and vendor-provided RF shimming. \(T_{1\rho}\)-weighted images were acquired at times of spin-lock (TSL) of 0, 10, 30, and 50 ms; \(T_{2}\)-weighted images were
acquired at \(T_{2}\) preparation times (TP) of 0, 20, 40, and 60 ms. The \(T_{2}\) preparation time used for \(T_{2}\) fitting was corrected by subtracting the total refocusing time and was 0, 18.2, 34.6, and 51.0 ms, respectively. The \(T_{1\rho}\)-weighted image acquired with TSL = 0 and the \(T_{2}\)-weighted image acquired with TP = 0 shared the same image (referred to as the "shared image" in the following context)[14]. The protocol acquired three slices of data from each subject. The scan time to collect data for each slice was around 16 s. The detailed imaging parameter configuration is given in Table I.
The retrospective data of 51 patients with non-alcoholic fatty liver disease was used as the dataset. We followed a three-fold cross validation scheme, with the data of 17 patients in each fold.
### _Method_
We first model the learning-based multi-parametric mapping from a probabilistic perspective and derive the loss function in a supervised way. Then we adapt it to the self-supervised form.
#### Iii-B1 Multi-parametric mapping likelihood
Let us denote the output of the multi-parametric mapping neural network as \(\mathbf{f}^{\mathbf{W}}(\mathbf{x})\), with weights \(\mathbf{W}\) on input MR contrasts \(\mathbf{x}\). The multi-parametric mapping likelihood can then be defined as \(p(\mathbf{T}_{1\rho},\mathbf{T}_{2}|\mathbf{f}^{\mathbf{W}}(\mathbf{x}))\), where \(\mathbf{T}_{1\rho}\) and \(\mathbf{T}_{2}\) are the ground-truth parametric maps.
We further factorise our output following [15], and have the likelihood of the multi-parametric mapping in the following form:
\[p(\mathbf{T}_{1\rho},\mathbf{T}_{2}|\mathbf{f}^{\mathbf{W}}(\mathbf{x}))=p( \mathbf{T}_{1\rho}|\mathbf{f}^{\mathbf{W}}(\mathbf{x}))p(\mathbf{T}_{2}| \mathbf{f}^{\mathbf{W}}(\mathbf{x})) \tag{1}\]
We assume the distribution of each factorized likelihood as a Laplacian distribution, and minimise the following negative log likelihood of the multi-parametric mapping:
\[-\log p(\mathbf{T}_{1\rho},\mathbf{T}_{2}|\mathbf{f}^{\mathbf{W}}( \mathbf{x})) \propto\frac{|\mathbf{T}_{1\rho}-\mathbf{f}^{\mathbf{W}}(\mathbf{ x})|}{\sigma_{1}}+\frac{|\mathbf{T}_{2}-\mathbf{f}^{\mathbf{W}}(\mathbf{x})|}{ \sigma_{2}}\] \[+\log(2\sigma_{1})+\log(2\sigma_{2}) \tag{2}\]
where \(\sigma_{1}\) and \(\sigma_{2}\) are the scale parameters of the different parametric maps, respectively. The scale parameter is equivalent to the standard deviation of a Gaussian distribution. Eq. (2) is our initially derived objective function to be minimised. Note that the uncertainty terms (scale parameters) are learnable terms, which enable an adaptive uncertainty-weighted loss during training. More specifically, this utilises the data noise to automatically tune the contribution of different mapping tasks in the learning process. If the data noise of one of the relaxation mappings is large, the L1 norm in the numerator will be large as the model has more difficulty in learning a good mapping. Consequently, the uncertainty in the denominator becomes larger to suppress the loss value, which guides the model to put less importance on the noisy data during learning. This adaptive weighting provides more flexibility than manual hard weighting in integrating the information of different measurements during training.
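To make the adaptive weighting concrete, the following is a minimal PyTorch sketch (not the authors' released code) of an uncertainty-weighted combination of two L1 terms in the spirit of Eq. (2); the module name, the log-parametrisation of the scale parameters, and the reduction by averaging are illustrative assumptions.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedL1(nn.Module):
    """Combine two L1 error terms with learnable scale parameters (cf. Eq. 2)."""

    def __init__(self):
        super().__init__()
        # Log-scales initialised to 0 so that each sigma starts at 1.
        self.log_sigma1 = nn.Parameter(torch.zeros(1))
        self.log_sigma2 = nn.Parameter(torch.zeros(1))

    def forward(self, err_t1rho, err_t2):
        # err_t1rho, err_t2: per-pixel residuals of the two mapping tasks.
        s1, s2 = self.log_sigma1.exp(), self.log_sigma2.exp()
        loss1 = err_t1rho.abs().mean() / s1 + torch.log(2.0 * s1)
        loss2 = err_t2.abs().mean() / s2 + torch.log(2.0 * s2)
        return loss1 + loss2
```

When the residuals of one task are persistently large, its learned scale grows and down-weights that task, which is the behaviour described above.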
The data uncertainty can be further divided into two categories: the heteroscedastic uncertainty (HETEU) and the homoscedastic uncertainty (HOMOU)[15, 16]. The former is dependent on a specific input, and it is usually modelled as an additional output tensor with the same dimension as the output variable in deep learning. The latter is independent of a specific input, while it is task-dependent as it captures the general data uncertainty of the training data of a certain mapping task. It is modelled as a learnable constant during training. We study both cases in this work.
#### Iii-B2 Leveraging self-supervised learning
We first briefly introduce the relaxation constraints in the mono-exponential decay model of \(T_{1\rho}\) and \(T_{2}\) imaging:
\[\mathbf{I}(TSL_{i})=\mathbf{I}(TSL_{j})\exp(\frac{TSL_{j}-TSL_{i}}{\mathbf{T} _{1\rho}}) \tag{3}\]
\[\mathbf{M}(TP_{m})=\mathbf{M}(TP_{n})\exp(\frac{TP_{n}-TP_{m}}{\mathbf{T}_{2 }}) \tag{4}\]
where \(\mathbf{I}\) and \(\mathbf{M}\) stand for the \(T_{1\rho}\)-weighted image and the \(T_{2}\)-weighted image respectively, and \(i,j,m,n\) are the indices of different dynamic scans of the same slice.
Since ground-truth maps are not available in the self-supervised learning setting, we replace the L1 norm in the numerator of the originally derived loss function with a signal reconstruction term that complies with the above relaxation constraints, and the objective function can further be written as:
\[L =\frac{|\mathbf{I}(TSL_{i})-\mathbf{I}(TSL_{j})\exp(\frac{TSL_{j} -TSL_{i}}{\hat{\mathbf{T}}_{1\rho}})|}{\sigma_{1}}\] \[+\frac{|\mathbf{M}(TP_{m})-\mathbf{M}(TP_{n})\exp(\frac{TP_{n} -TP_{m}}{\hat{\mathbf{T}}_{2}})|}{\sigma_{2}}+\log(2\sigma_{1})\] \[+\log(2\sigma_{2}) \tag{5}\]
where \(\widehat{\mathbf{T}_{1\rho}}\) and \(\widehat{\mathbf{T}_{2}}\) are the predicted parametric maps from the neural network. In practice, all possible pairs of constraints (\((i,j)\) or \((m,n)\)) are constructed and back-propagated to update the network parameters during learning.
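As an illustration of how the relaxation constraints in Eqs. (3)-(5) can be turned into a training signal without ground-truth maps, here is a hedged PyTorch sketch; the function name, the tensor shapes, and the epsilon guard are assumptions rather than the actual implementation.

```python
import itertools
import torch

def relaxation_recon_loss(pred_t1rho, pred_t2, t1rho_imgs, tsl, t2_imgs, tp):
    """Self-supervised reconstruction terms built from Eqs. (3)-(5).

    pred_t1rho, pred_t2 : predicted maps, shape (B, 1, H, W)
    t1rho_imgs, t2_imgs : lists of weighted images, each (B, 1, H, W)
    tsl, tp             : spin-lock / corrected T2-preparation times in ms
    """
    eps = 1e-6
    loss_t1rho = 0.0
    for i, j in itertools.permutations(range(len(tsl)), 2):
        recon = t1rho_imgs[j] * torch.exp((tsl[j] - tsl[i]) / (pred_t1rho + eps))
        loss_t1rho = loss_t1rho + (t1rho_imgs[i] - recon).abs().mean()
    loss_t2 = 0.0
    for m, n in itertools.permutations(range(len(tp)), 2):
        recon = t2_imgs[n] * torch.exp((tp[n] - tp[m]) / (pred_t2 + eps))
        loss_t2 = loss_t2 + (t2_imgs[m] - recon).abs().mean()
    # These two terms are then combined with the learnable uncertainty weights.
    return loss_t1rho, loss_t2
```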
#### Iii-B3 Network setting
We adopt a similar U-Net architecture for parametric mapping as in [11], in which the output layer has two channels, one for the \(T_{1\rho}\) map and one for the \(T_{2}\) map. The input is a three-channel tensor consisting of the shared image, a \(T_{1\rho}\) contrast and a \(T_{2}\) contrast. For the case of estimating HETEU, an additional decoder branch was added to output the uncertainty. The additional decoder branch has the same architecture as the decoder branch for parametric mapping. The illustration is shown in Fig. 1.
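The two-branch design can be summarised by the schematic PyTorch sketch below; the layer widths and the single-convolution encoder/decoder are placeholders standing in for the U-Net blocks of [11], not the actual architecture.

```python
import torch
import torch.nn as nn

class MappingNet(nn.Module):
    """Schematic stand-in for the mapping network of Fig. 1.

    Input : 3 channels (shared image, one T1rho contrast, one T2 contrast).
    Output: 2 channels (T1rho map, T2 map); with HETEU an extra decoder
            predicts a per-pixel uncertainty for each task.
    """

    def __init__(self, heteroscedastic=False):
        super().__init__()
        self.heteroscedastic = heteroscedastic
        self.encoder = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.map_decoder = nn.Conv2d(32, 2, 3, padding=1)      # T1rho, T2 maps
        if heteroscedastic:
            self.unc_decoder = nn.Conv2d(32, 2, 3, padding=1)  # per-pixel uncertainty

    def forward(self, x):
        feat = self.encoder(x)
        maps = self.map_decoder(feat)
        if self.heteroscedastic:
            return maps, self.unc_decoder(feat)
        return maps
```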
## III Experiments and Results
### _Evaluation metric_
We evaluate the performance in the ROI as in previous works [10, 11], by computing the pixel-wise mean absolute error between the inference maps and the reference maps in the ROI. We refer to it as the ROI Mean Absolute Error (RMAE). The ROI is manually drawn on the right lobe of the liver to cover the parenchyma as much as possible while avoiding large vessels and bile ducts. The drawing was conducted before any fitting to ensure fairness of the evaluation. We used parametric maps fitted from four \(T_{1\rho}\) contrasts and four \(T_{2}\) contrasts using the non-linear least squares fitting method as the reference maps. Note that the area outside the parenchyma is not taken into account as its relaxation values from the reference maps are not reliable due to the application of the localized shimming on the right lobe of the liver.
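A possible implementation of the RMAE is sketched below; it assumes the ROI is available as a binary mask array, which is an assumption about the data format rather than a detail stated in the text.

```python
import numpy as np

def rmae(pred_map, ref_map, roi_mask):
    """ROI Mean Absolute Error: mean |pred - ref| over the ROI pixels only."""
    roi = roi_mask.astype(bool)
    return float(np.mean(np.abs(pred_map[roi] - ref_map[roi])))
```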
### _Implementation details_
The experiments were carried out using Python 3.7 and the PyTorch 1.10 framework [17], with one Nvidia GTX 1080ti GPU and 40 E5-2630 CPUs. All images were resized to 256 x 256, and data augmentation was applied with random slight rotation and translation. During training and testing, we constructed three combinations of input with images from different TSL or TP. The combinations were as follows:
\([I(TSL=0),I(TSL=10ms),M(TP=18.20ms)]\)
\([I(TSL=0),I(TSL=30ms),M(TP=34.60ms)]\)
\([I(TSL=0),I(TSL=50ms),M(TP=51.00ms)]\)
The batch size was 4 and the learning rate was 5e-4. ADAM [18] was used as the optimizer with a weight decay of 1e-4. The two learnable constants in HOMOU were initialized to 1. Each fold of training took around 8 hours for 300 epochs, and early stopping was applied.
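For completeness, the stated hyper-parameters could be wired up roughly as follows, reusing the modules sketched above (a sketch only; the data loading, 300-epoch loop, and early-stopping logic are omitted).

```python
import torch

model = MappingNet(heteroscedastic=False)   # see the earlier architecture sketch
criterion = UncertaintyWeightedL1()          # learnable HOMOU scale constants

# ADAM over the network weights and the two learnable uncertainty constants.
optimizer = torch.optim.Adam(
    list(model.parameters()) + list(criterion.parameters()),
    lr=5e-4,
    weight_decay=1e-4,
)
# Batch size 4; train up to 300 epochs per fold with early stopping.
```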
### _Comparison study_
We compare our proposed method with the following models:
#### Iii-C1 Two-point
The logarithm of the quotient between the shared image and the corresponding \(T_{1\rho}\) weighted image or the \(T_{2}\) weighted image is taken to get the parametric maps in a closed form.
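The closed-form estimate can be written out explicitly; the following numpy sketch assumes a mono-exponential decay \(I(t)=I(0)\exp(-t/T)\), consistent with Eqs. (3) and (4), and adds a small epsilon purely to avoid division by zero.

```python
import numpy as np

def two_point_map(shared_img, weighted_img, delta_t, eps=1e-6):
    """Closed-form relaxation map from two images (mono-exponential decay).

    shared_img   : image acquired at TSL = 0 (or TP = 0)
    weighted_img : image acquired at spin-lock / preparation time delta_t (ms)
    Returns the T1rho (or T2) map in ms.
    """
    ratio = np.log((shared_img + eps) / (weighted_img + eps))
    ratio = np.where(np.abs(ratio) < eps, eps, ratio)  # guard against division by zero
    return delta_t / ratio
```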
#### Iii-C2 Single task with single modality (STSM)
Two separate self-supervised networks, one mapping \(T_{1\rho}\) and one mapping \(T_{2}\), were trained. Each mapping task follows the "Baseline" method in [11]. The input consists of the shared image and the corresponding contrast (shared image \(+T_{1\rho}\) contrast for \(T_{1\rho}\) mapping or shared image \(+T_{2}\) contrast for \(T_{2}\) mapping). The loss function is the L1 norm for signal reconstruction in self-supervised learning based on the constraint shown in Eq. (3) or Eq. (4).
#### Iii-C3 Single Task (ST)
This is similar to STSM, except for the input. The input is the same three-channel tensor as in our proposed method.
#### Iii-C4 Supervised Learning (SL)
The multi-parametric mapping network is trained in a supervised way similar to previous works[8, 9]. The input of the network is the same as that in our proposed method, and the ground truths for supervision were the reference maps fitted from four images. The loss function is the sum of the L1 norms of both mapping tasks.
#### Iii-C5 Baseline
The network is trained in a self-supervised way without those uncertainty terms in the loss.
The results are shown in Table II. It can be seen that the self-supervised baseline model outperforms the traditional two-point fitting, the supervised learning model and the self-supervised single-task models. By adding HETEU or HOMOU during training, the performance of the model can be further improved to the level of around 3.30 ms. The performances of HETEU and HOMOU are close to each other, and details will be provided in the discussion section.
Fig. 2 shows examples of the fitted \(T_{1\rho}\) maps and \(T_{2}\) maps. As is shown, the Two-Point method produced very noisy results, and the SL method is poor at revealing the anatomical information due to an oversmoothing effect. This is in line with the results for supervised single parametric mapping reported in [11]. The maps produced by our proposed method demonstrate a generally good agreement with the reference maps in the right lobe of the liver parenchyma in the ROI.
### _Effectiveness of adaptive weighting_
Two experiments were conducted in this section. For the first experiment, we compared the results of our proposed
Fig. 1: The illustration of the network settings. HOMOU stands for training with homoscedastic uncertainty and HETEU stands for training with Heteroscedastic uncertainty.
method with baseline models trained with different manually tuned weights between the \(T_{1\rho}\) contrast reconstruction term and the \(T_{2}\) contrast reconstruction term. The loss function takes the form \(\lambda_{a}L_{a}+\lambda_{b}L_{b}\), where \(L_{a}\) and \(L_{b}\) stand for the \(T_{1\rho}\) contrast and \(T_{2}\) contrast reconstruction loss terms respectively, and \(\lambda_{a}+\lambda_{b}=1\). For the second experiment, we investigate whether the adaptive weighting can handle situations with scaling variation and data corruption. Specifically, we multiply the signal scale of the \(T_{2}\) contrast in the reconstruction term by a very large factor or apply random motion (rotation and translation) to the \(T_{2}\) contrast in the loss function.
Fig. 3 shows the results of the first experiment. It can be seen that the performance of the self-supervised network is sensitive to manual weighting, and a mapping task performance bias can be seen in some of the weighting scenarios. On the other hand, the uncertainty-based adaptive weighting gives an overall improved performance, and the performance bias is not obvious.
In Table III, the results show that applying signal scale imbalance and motion corruption to the \(T_{2}\) contrast in the loss function gives inferior \(T_{1\rho}\) results for the Baseline model, compared with the baseline \(T_{1\rho}\) results in Table II. By applying the uncertainty-weighted loss, the performance degradation can be alleviated. Note the performance of \(T_{2}\) is not reported in the motion scenario as the \(T_{2}\) data were corrupted. It is also noticeable that the HOMOU method produces better results than the HETEU method; a detailed discussion is provided below.
Table IV illustrates that the proposed method achieved a shorter computation time than the traditional pixel-wise fitting method. The former can simultaneously produce two parametric maps in one forward pass, while the latter can only produce one parametric map at a time.
## IV Discussion
The results demonstrate that our proposed learning-based method can produce multi-parametric mapping results comparable to the standard multi-image pixel-wise fitting method, using fewer images and less computation time. From a practical point of view, this could potentially improve the efficiency of large-scale qMRI studies, as the acquisition time and the post-processing time are both reduced.
Our studies also demonstrated the benefits of the uncertainty-based adaptive weighting. It improves the model performance by utilising the data noise of the different tasks in the multi-parametric mapping to automatically exploit a better contribution mechanism from the different mapping tasks. This is beneficial as it saves the time of manual weight tuning. Future multi-sequence or multi-site studies may also see benefits, as the data noise variation problem can be more significant in those scenarios.
It is also noticeable that the performances of the HETEU and HOMOU methods are similar, and HOMOU achieves a slightly better performance in the case of scaling variation and motion corruption. The HETEU model captures the pixel-wise weighting while the HOMOU model captures the general weighting of the two tasks globally. The former may include redundant spatial information in learning the adaptive weighting, and the weighting strategy may overfit the data. The latter learns the task-specific weighting, and it may reflect a general contribution mechanism of the different parametric mapping tasks.
In this work, we factorise the likelihood of the multi-parametric mapping under the assumption that the distribution of \(T_{1\rho}\) mapping and the distribution of \(T_{2}\) mapping are independent of each other. While it is common to apply the task-independence assumption in multi-task learning, we believe it is worth exploring the correlation of \(T_{1\rho}\) and \(T_{2}\) mapping in future learning-based multi-parametric mapping research. Recent work on hepatic iron in the liver has demonstrated their correlation bio-physically[19]. Future research will focus on applying multi-variate multi-task learning[20] to learn the correlation between different mapping tasks.
## V Conclusions
Our proposed uncertainty-weighted learning-based multi-parametric mapping method is able to simultaneously map \(T_{1\rho}\) and \(T_{2}\) in the liver from a reduced number of contrasts. The uncertainty-weighted learning improves the performance of the mapping model by utilising the data noise of the different mapping tasks. Future work on learning the correlation between different mapping tasks in multi-parametric mapping is required.
## Acknowledgment
This study was supported by a grant from the Research Grants Council of the Hong Kong SAR (Project GRF 14201721), a grant from the Innovation and Technology Commission of the Hong Kong SAR (Project No.MRP/046/20x).
|
2310.08428 | $SG$-classes, Singular Symplectic Geometry and Order-preserving
Isomorphisms | The geometric theory of pseudo-differential and Fourier Integral Operators
relies on the symplectic structure of cotangent bundles. If one is to study
calculi with some specific feature adapted to a geometric situation, the
corresponding notion of cotangent bundle needs to be adapted as well and leads
to spaces with a singular symplectic structure. Analysing these singularities
is a necessary step in order to construct the calculus itself.
In this article we provide some new insights into the symplectic structures
arising from asymptotically Euclidean manifolds. In particular, we study the
action of the Poisson bracket on $SG$-pseudo-differential operators and define
a new class of singular symplectomorphisms, taking into account the geometric
picture. We then consider this notion in the context of the characterisation of
order-preserving isomorphisms of the $SG$-algebra, and show that these are in
fact given by conjugation with a Fourier Integral Operator of $SG$-type. | Alessandro Pietro Contini | 2023-10-12T15:52:44Z | http://arxiv.org/abs/2310.08428v1 | # \(SG\)-classes, singular symplectic geometry and order-preserving isomorphisms
###### Abstract
The geometric theory of pseudo-differential and Fourier Integral Operators relies on the symplectic structure of cotangent bundles. If one is to study calculi with some specific feature adapted to a geometric situation, the corresponding notion of cotangent bundle needs to be adapted as well and leads to spaces with a singular symplectic structure. Analysing these singularities is a necessary step in order to construct the calculus itself.
In this thesis we provide some new insights into the symplectic structures arising from asymptotically Euclidean manifolds. In particular, we study the action of the Poisson bracket on \(SG\)-pseudo-differential operators and define a new class of singular symplectomorphisms, taking into account the geometric picture. We then consider this notion in the context of the characterisation of order-preserving isomorphisms of the \(SG\)-algebra, and show that these are in fact given by conjugation with a Fourier Integral Operator of \(SG\)-type.
###### Contents
* 1 Introduction
* 2 The \(SG\)-calculus
* 2.1 \(SG\)-symbols and operators
* 2.2 \(SG\)-symbols and the symplectic structure
* 2.3 \(SG\)-Fourier Integral Operators
* 3 Scattering geometry
* 3.1 Manifolds with corners and scattering geometry
* 3.2 Symplectic and contact properties of the scattering bundle
* 4 Order-preserving isomorphisms
* 4.1 Preliminary definitions and auxiliary results
* 4.2 The case of the formal symbol algebra
* 4.3 Lifting the characterisation to _LG_
## 1. Introduction
This thesis is concerned with aspects of the global calculus of \(SG\)-pseudo-differential operators, the corresponding classes of Fourier Integral Operators, and their relation as algebras and modules.
Pseudo-differential operators (\(\Psi\)DOs in what follows) are one of the most important tools for the study of (elliptic) partial differential equations (PDEs) and have proven to be objects of interest for a number of different areas of modern mathematics. The basic idea is to construct a large class of operators where differential operators admit inverses, at least in an approximate sense, and with good formal properties which allow one to more or less freely take compositions, adjoints and so on, while at the same time being able to control the errors. This is achieved by a generalisation and formalisation of the techniques of asymptotic analysis, whose origin dates back at least to the 19th century, with the pioneering works of Laplace, Stokes and Kelvin on the method of stationary phase. In the 20th century, the study of singular integral operators, initiated by Hilbert and brought to completion by Miklin [16], Calderon [17],[18], was paired with the language of distributions of Schwartz fame and with many ideas from the world of quantum mechanics. This led Kohn and Nirenberg [19] and Hormander [20] to develop a general calculus of \(\Psi\)DOs and study elliptic PDEs of a very general type, obtaining existence and uniqueness results for a swath of then-unsolved problems. In particular, on a compact manifold one can take advantage of the compactness of the Sobolev embeddings to prove regularity results for the solutions. Furthermore, thanks to the properties of the calculus, the parametrix construction of Hadamard, originally invented for differential equations, extends to pseudo-differential operators and shows that elliptic \(\Psi\)DOs on compact manifolds are Fredholm thanks to the fact that the "residual" operator of the construction is compact. Far-reaching subsequent generalisations led to a global theory of elliptic boundary-value problems on compact manifolds, including the global definition of the principal symbol of an operator as a function on the cotangent bundle, and finally to the celebrated index theorem of Atiyah and Singer [14],[15]. This highlighted the incredible amount of topological and geometrical information these operators carried and became (in many senses still is to this day) one of the main motives of research in geometric and global analysis. Shortly thereafter, the study of limit and boundary-value problems for pseudo-differential equations led Louis Boutet de Monvel [1] to construct a calculus of manifolds with boundary and to a topological index formula1.
Footnote 1: See also [12] for an analytical counterpart and the book [21] for an overarching discussion.
Tailored to the study of elliptic equations, the theory of \(\Psi\)DO required considerable effort to be adapted to other classes of PDEs. In relation to the blooming index theory, the study of the heat equation associated with a second order elliptic \(\Psi\)DO produced many insights into the analytical nature of the Atiyah-Singer formula and led to the local index theorem of Atiyah [1]. In the following years, a full-fledged theory of Dirac operators on spin manifolds, shedding light on their importance to mathematics and physics alike, was investigated and is to this day a very active area of research (we refer here to the books of Berline, Getzler and Vergne [1] and Gilkey [15] for a deep and interesting discussion).
On the other hand, even for simple hyperbolic equations it was clear that \(\Psi\)DOs could not provide a satisfactory answer on their own and that a more general theory had to be developed. Building on ideas from geometrical optics and earlier
work of Lax and Maslov, Hormander [11] developed the calculus of Fourier Integral Operators (FIOs) and applied it2, together with Duistermaat, to the study of hyperbolic systems [12]. The theory of FIOs proved to be, in the following years, a fundamental tool to approach a large number of yet to be tackled problems, including but not limited to existence and uniqueness for hyperbolic equations, and invigorated the calculus of \(\Psi\)DOs by providing new methods to study elliptic equations. This is rendered possible by the celebrated theorem of Egorov [1], stating that conjugating a \(\Psi\)DO with principal symbol \(p\) with an invertible FIO produces again a \(\Psi\)DO with a principal symbol given by pull-back of \(p\) along an underlying canonical transformation of the cotangent bundle. Thanks to this fact, the theory of \(\Psi\)DOs and FIOs can be seen, in the context of the Heisenberg picture of quantum mechanics, as a quantisation scheme where observables are mapped to self-adjoint \(\Psi\)DOs and the evolution operator of the system, classically a canonical transformation, acts as an FIO on the space of observables. At the same time, this idea of "quantised canonical transformation" expressed by FIOs can be further characterised: by a theorem of Duistermaat and Singer [13], the only order-preserving isomorphisms (OPIs) of the algebra of (integer order, classical, properly supported) \(\Psi\)DOs are exactly given by conjugation with an invertible FIO. This reflects the classical property that if a diffeomorphism transforms Hamilton equations in Hamilton equations (namely, preserves the canonical 1-form on the phase bundle), then it is a canonical transformation.3
Footnote 2: The name is historically controversial. While the theory of Hormander is without a doubt more general, a great part of the key ideas can be found in [14], which is a late translation from Russian of a 1965 opus of the same author. At the same time, it seems hard to criticise Dieudonne [15, 2] when he appends them the name "operateurs de Lax-Maslov", glossing "appeles malencontreusement aussi <<operateurs integraux de Fourier>>, ce qui est d'autant plus ridicule que la transformation de Fourier n'y joue aucun role". While we acknowledge all these contributions and recognise the elements of truth, we stick here to the name of Fourier Integral Operators out of mere laziness.
Footnote 3: We remark that here and later we use the terms _quantisation_ or _quantisation scheme_ without properly defining what we mean by it. In whole honesty, this correspondence does not give a full quantisation of a classical system according to the Dirac axioms, since we are not really addressing questions such as the classical limit, for example. In this sense the classical theory of FIOs is scale-invariant, since we could in principal work at the level of co-sphere bundles and contact forms, while a “true” quantisation should allow one to look at large scale approximation and makes more sense in the context of semi-classical analysis. Nevertheless the similarities are enough to justify our abuse.
The question that at this point one might ask is: what can we say about calculi adapted to non-compact4 manifolds? While the construction of the calculi with the same formal properties as above does not break down in this setting, one is confronted with the annoying fact that the residual operators of the parametrix construction, even though regularising (namely they smooth out all singularities of distribution on arbitrarily large compact sets), are not compact. This, together with the fact that the Sobolev embeddings are not compact, constitutes a fundamental obstacle to the process of constructing solutions with a certain regularity. The problem lies in the calculus itself: the standard class of \(\Psi\)DOs is only well-suited
to control asymptotic behaviour in the "cotangent direction", namely if \((x,\xi)\) are coordinates on \(\mathbb{R}^{2n}\), we define the class \(S^{m}(\mathbb{R}^{2n})\) by imposing bounds for \(x\) varying in a compact set \(K\) and \(\left|\xi\right|\to\infty\). In particular, the behaviour of symbols in the \(x\) variable is hardly restricted and it suffices that they are smooth. But then we cannot hope to get from this class any kind of reasonably sufficient information as \(\left|x\right|\to\infty\).
In order to obviate to these issues, global calculi were introduced. The main feature5 of a global calculus (and main difference in comparison to ordinary \(\Psi\)DOs) on \(\mathbb{R}^{n}\) is that we posit a bound on the symbols involving the spatial directions \(x\), too. The two main (and more successful) examples in this setting are known as \(\Gamma\)-classes and \(SG\)-classes. Introduced by Shubin, the \(\Gamma\)-classes are also known as _completely isotropic_ symbols and contain those smooth functions \(a(z)\) on \(\mathbb{R}^{2n}\) such that \(\left|\partial^{\alpha}a(z)\right|\lesssim\left\langle z\right\rangle^{m- \left|\alpha\right|}\) for a fixed \(m\in\mathbb{R}\) known as the order of \(a\). The residual operators in this calculus are exactly integral operators with kernel in \(\mathcal{S}(\mathbb{R}^{2n})\), which are known to be compact on \(L^{2}(\mathbb{R}^{n})\). They have so far found wide ranging application to a number of different problems in index theory ([10, 11]), quantisation ([12]), PDEs and spectral theory ([13]). Helffer [15] has in addition introduced global FIOs modelled on the \(\Gamma\)-classes (at the level of the phase and the amplitude), and studied their spectral properties. However, to this day it seems that the classes haven't been defined on non-compact manifolds more general than \(\mathbb{R}^{n}\). Furthermore, Helffer does not study the associated class of symplectomorphisms on \(\mathbb{R}^{2n}\) that putatively should be quantised by his class of FIOs, but derives the properties of the calculus from purely analytical facts.
Footnote 5: Notice also the earlier attempt of Grushin [11], where only uniform bounds on \(x\) are required.
The other main approach (at least for our concerns) is that of \(SG\)-classes6. Originally introduced by Parenti [16] to study PDEs on unbounded domains, the theory benefited from the contributions of many authors and in particular of Schrohe [12, 13]. Their usefulness was soon recognised by Cordes, who significantly enlarged the original calculus and presented a very general theory of \(\Psi\)DOs in [11]. The core of the matter is as follows: instead of looking at completely isotropic symbols, introduce a new filtration to obtain a class \(SG^{m_{1},m_{2}}\), where \(a\) is a symbol of bi-order \((m_{1},m_{2})\) if \(a\) is bounded by (a positive constant times) \(\left\langle x\right\rangle^{m_{1}}\left\langle\xi\right\rangle^{m_{2}}\) and each \(x\)-derivative, respectively \(\xi\)-derivative, improves the bound by \(1\) in \(x\), respectively \(\xi\). Then, the intersection of all these classes clearly consists of Schwartz functions and the residual operators are again exactly the integral operators with kernel in \(\mathcal{S}(\mathbb{R}^{2n})\), but the more flexible structure of the bounds accounts for a more general class of operators to be studied (albeit of course with the extra complexity that has been introduced). In [12], the calculus has been generalised to a large class of non-compact manifolds, so-called \(SG\)-manifolds, although one might argue that the class might even be too large for some purposes (more on this later). The extra flexibility of \(SG\)-classes plays here a crucial role: apart from a technical condition on the charts (in practice, always satisfied), all it suffices to ask is that the changes of coordinates on the manifold have components which are in \(SG^{1,0}\). This is of course to be expected if one wants to define \(SG\)-classes in an invariant way on a manifold, since in order to preserve the filtrations we have to require as a minimum that the transformed base, respectively cotangent,
variables be of order \((1,0)\), respectively \((0,1)\). Therefore, the restrictions are in truth quite lax. \(SG\)-\(\Psi\)DOs have been applied to a variety of problems, including but not limited to spectral asymptotics on asymptotically Euclidean manifolds (cfr. [13, 21]), and mathematical physics (see for example [1]). FIO calculi modelled on \(SG\)-classes have been introduced first by Coriasco [14] and later enriched by Andrews [1]. Although they have been described only on \(\mathbb{R}^{n}\) as globally defined operators, explicit and implicit hints to possible geometric generalisations were present in both pieces. However, to this day no such theory has been fully understood and in particular the study of canonical transformations associated with the existing classes hasn't been carried out.
In the context of analysis on non-compact spaces, another point of view has been introduced and studied by many authors falling, to various degrees, under the umbrella of the so-called "Melrose school". In this picture, one limits the study to classes of manifolds having a somewhat "regular" structure at infinity, namely one assumes that, outside of a compact centre, the non-compact manifold \(X\) admits a Riemannian metric with a specific asymptotic behaviour as "\(|x|\to\infty\)" (cf. [10]). Then, one can construct (explicitly!) a compactified manifold with boundary \(\overline{X}\) whose interior is diffeomorphic to the original manifold \(X\), and obtain a metric on \(\overline{X}\), smooth in the interior and with a prescribed singularity at the boundary. The main example is that of manifolds with _ends_. Topologically these are just given by a compact manifold \(X_{0}\) with boundary \(\partial X_{0}=B_{1}\cup\dots\cup B_{d}\), where each \(B_{j}\) is a closed codimension 1 submanifold, and cylinders \(C_{1},\dots,C_{d}\), \(C_{j}=\mathbb{R}^{+}\times B_{j}\), each glued to the corresponding connected component of \(\partial X_{0}\). These builds up the "ends" or "exits" of \(X\). This setting, while included in \(SG\)-manifolds, allows for a more refined analysis of the metric structure on each end. For example, we might consider a metric which is asymptotically cylindrical, namely that on the end takes the form
\[g=\mathrm{d}t^{2}+h\]
for \(t\) the coordinate on \(\mathbb{R}^{+}\) and \(h\) a metric on \(B_{j}\). Introducing the change of coordinates \(x=e^{-t}\) maps the infinite cylinder to a finite one, and the above tensor is transformed to
\[\overline{g}=\frac{\mathrm{d}x^{2}}{x^{2}}+\overline{h},\]
where \(\overline{h}\) is simply the metric \(h\) in the changed coordinates. Having so "rescaled" the metric structure, it becomes evident that we can consider our original manifold as the interior of a manifold with boundary by attaching a _closed_ cylinder to the boundary component \(B_{j}\), provided that at the same time we keep in mind that we have performed a change of coordinates. The so-called \(b\)-geometry, namely the generalisation of this example to manifolds with corners, is built upon considering the properties of the Lie algebra of vector fields on \(\overline{X}\) which are tangent to the boundary. Starting from this Lie algebra one constructs a calculus of differential and pseudo-differential operators, the so-called \(b\)-calculus7. On the one hand, this is an
extremely powerful tool and idea, on the other there are however a number of non-negligible technical difficulties. In particular, elliptic operators in the classical sense are not Fredholm and a sort of "non-commutative boundary symbol" (called _indicial operator_) needs to be taken into account. Both the power and the difficulties of the \(b\)-calculus are beautifully expounded in [11], together with thorough discussion of the related aspects of index theory (specifically, the Atiyah-Patodi-Singer index theorem) on manifolds with boundary.
With a similar approach one can work in the setting of asymptotically Euclidean manifolds by attaching cones to each \(B_{j}\) instead of cylinders. While of course this produces the same underlying topological space as before, we are here imposing that the metric is conic "at \(\infty\)". After changing coordinates as before and compactifying, we are then working with a metric which near the boundary is of the form
\[\overline{g}=\frac{\mathrm{d}x^{2}}{x^{4}}+\frac{\overline{h}}{x^{2}}.\]
While this looks more singular at first glance (indeed recall that \(x=0\) at the boundary), it turns out the associated Lie algebra of vector fields (the so-called _scattering_ vector fields) is much easier to study since it is actually commutative "at the boundary". Correspondingly, the differential and pseudo-differential operators possess a second _commutative_ symbol \(\sigma_{N}\), defined at the boundary and related to the asymptotic behaviour as \(|x|\to\infty\) of the full symbol in the \(SG\)-calculus. In fact, _classical_ \(SG\)-operators and _classical_ \(sc\)-pseudo-differential operators8 are two sides of the same coin: already in the '90s it was well known that the symbol classes for the two calculi are isomorphic9. The choice of which approach to exploit over the other is mainly just a matter of personal preference but they have advantages and disadvantages, in that the first is more explicitly computable whereas the latter is more manifestly global in nature, being defined directly on a class of manifolds. Part of the goal of this thesis is to explore this correspondence further, especially in its relation with the symplectic geometry of the cotangent bundle.
Footnote 8: Here \(sc\) stands of course for “scattering”, another instance of the above naming convention.
Footnote 9: See for example [10], Section 8.2.2 or the more recent [18].
Whereas \(\Psi\)DOs are a very well-explored topic in both these examples (and many others), the state-of-the-art of FIOs in singular situations, including their ellipticity properties, index formulae and the study of their geometrical theory has lagged behind. In fact, attempts here are somewhat scarce. For the case of the \(b\)-calculus, the early paper of Melrose [11] already contains a study of the class of Lagrangian distributions of interest, based on the geometrical properties that one should expect from a Lagrangian relation on a manifold with boundary. However, a study of the respective ellipticity, Fredholmness and index properties has not been carried out (as far as the author knows, the only Atiyah-Singer-type formula for the index of an FIO has been derived in the setting of closed manifolds by Epstein and Melrose [11] and Leichnam, Nest and Tsygan [12]). To this end, one would need to analyse closely the behaviour of the FIO at the boundary and construct a "full calculus" for these objects10. More in the spirit of Boutet de Monvel, a calculus of FIOs on manifolds with boundary has been constructed by Battisti, Coriasco and Schrohe [1]. There, the geometric and analytical conditions at the boundary were studied in great detail and the authors were able to prove the
Fredholm property for the elliptic elements in their calculus, thereby setting up the frame for an index problem in the spirit of Weinstein [20]. In particular they showed that the notion of boundary canonical transformation of Section III in [17] produces appropriate operators in this calculus. However, the peculiarities of the Boutet de Monvel calculus complicate the analytical picture and an index formula was not established.
A third point of view is that of associating \(\Psi\)DO and FIO calculi with a _groupoid_. The philosophy behind this is akin to the singular analytical approach: one considers operators with a specific degeneracy/peculiarity, tries to encode their properties in a geometric object (a groupoid in this case, in contrast to the metric on the compactified space in the Melrose approach), and takes advantage of an overarching calculus structure defined in general on/for the geometric object. This has been successfully brought to completion in a multitude of contributions. For \(\Psi\)DOs, Nistor, Weinstein and Xu [21] and Monthubert [14] introduced the first calculi11, while FIOs have required considerable more effort and only appeared in such a general setting very recently in [10]. Despite covering a lot of previously examined settings, it appears that an analysis of the conditions under which the calculus of FIOs contains Fredholm operators on an appropriately defined scale of Sobolev spaces is yet to be examined. Indeed, a direct specialisation of the techniques in the above papers only recovers the "small calculus", namely12 constructs operator classes adapted to the geometric situation but without regard for the Fredholmness "at the boundary". Since these aspects are paramount to us, we shall not touch on this subject any further.
Footnote 11: Also notice the recent approaches for nilpotent Lie groups and filtered manifolds of van Erp and Yuncken [23] and Ewert [11].
Footnote 12: This is lingo for a family of operators in which a parametrix construction for elliptic operators makes sense, but does not necessarily produce a compact remainder on Sobolev spaces.
The author's interest in the global calculi stems from an idea of Schrohe that a result like Theorem 1 in [13], namely the characterisation of order-preserving isomorphisms of \(\Psi\)DOs, might hold true for the classes \(LG^{m_{1},m_{2}}\) of \(SG\)-\(\Psi\)DOs which are classical and of order \((m_{1},m_{2})\in\mathbb{Z}\times\mathbb{Z}\). This is the main problem we set out to tackle in the thesis. While this looks like a fairly reasonable expectation (indeed, for example, the class \(LG^{0,m_{2}}\) is a subclass of \(\Psi^{m_{2}}(\mathbb{R}^{n})\) and the properly supported property, required for composition in the usual calculus, is substituted in the \(SG\) picture by the estimates as \(|x|\to\infty\)), it quickly turned out that a proof along the lines of the original paper and completely in terms of the "local picture" of the \(SG\)-calculus was cumbersome to say the least. On the other hand, the scattering approach, while being conceptually advantageous, is less explicit and requires to pick specific local coordinates for computations. Together with the fact that the existing parametrization results for Lagrangian submanifolds in the \(SG\) setting already employed a "mixed" approach, we resolved to try and take as much advantage as possible of this double point of view.
We describe briefly the organization of the manuscript. Section 2 contains the basics of the (classical) \(SG\)-calculus on \(\mathbb{R}^{n}\). We follow the exposition in [1] rather closely, especially in regard to classicality, however we prefer amplitudes over double symbols when it comes to composition. Most proofs are here omitted for the sake of brevity, and can be found in the cited literature. We proceed to analyse the relation of the symplectic structure with \(SG\)-symbols, in particular delineating
the action of the Poisson brackets on principal symbols. We give an overview of the class of \(SG\)-FIOs of type \(\mathcal{Q}\) introduced by Andrews [And], which generalises the operators of Coriasco [Cor99] and appears naturally at the end of Chapter 3. Most of the material in this chapter is taken almost directly from the cited sources. Notable exceptions are Section 1.2, containing the analysis of the relation between the Poisson bracket and the principal symbol maps, and the \(SG\)-Egorov Theorem at the end of Section 1.3, which slightly generalises Proposition 14 in [Cor99].
Section 3 is an introduction to the geometric structure underlying the scattering calculus of Melrose, as presented in [Mel95] and [Mel94]. We start with an overview of manifolds with corners and the corresponding spaces of distributions and vector fields. We proceed with a discussion of the scattering cotangent bundle and the symbol spaces, together with the associated operator classes and the symbol maps. We specialise thereafter to the example of the radial compactification of \(\mathbb{R}^{n}\), on which the equivalence between the classical \(SG\)- and \(sc\)-calculi is mostly evident, and which will be our main focus in Chapter 4. Here again we refer the reader to the cited literature for the majority of well-known proofs. Novel work starts to appear here: We introduce a definition of "scattering canonical transformation" (SCT), analyse its geometric properties, and show that, locally in a suitable sense, its graph admits a parametrisation via an \(SG\)-phase function, parallel to previous work on \(sc\)-Lagrangian distributions.
Section 4 contains the main results we obtained. We employ the machinery exposed in the previous Chapters, together with the ideas of Mathai and Melrose [MM17], to give a proof of the \(SG\)-analogue of Lemma 2 in [DS76]. In particular, we prove that the notion of scattering canonical transformation introduced in Chapter 3 appears naturally. The approximation scheme of the original paper is then adapted to show that the OPI is ascertained at the level of the formal symbol algebra by an elliptic \(SG\)-FIO of type \(\mathcal{Q}\), associated with the scattering canonical transformation above. We exploit Lemma 3 of [DS76] to find an Eidelheit-type isomorphism in our setting and compare it to the \(SG\)-FIO appearing at the formal level. We prove that this composition is given by an \(SG\)-\(\Psi\)DO and show that its mapping properties determine it to be the identity up to an operator with kernel in the Schwartz class. This allows us to conclude that the Eidelheit isomorphism is itself, up to a smoothing operator, an operator of type \(\mathcal{Q}\), thereby bringing our task to a close.
The author would like to thank Sandro Coriasco and Philipp Schmitt for many interesting discussions and comments that have led to a better understanding and exposition, and Elmar Schrohe, under whose supervision the thesis was completed.
## 2. The \(SG\)-calculus
We present here a collection of concepts and facts concerning the \(SG\)-calculus, beginning with a discussion of symbol spaces. A thorough analysis of classical symbols is included, before moving to the associated operators and the relation between these classes.
### \(SG\)-symbols and operators
For later reference, we start by defining Hormander classes.
**Definition 2.1**.: The class of _Hörmander symbols_ of order \(m\in\mathbb{R}\) is the set \(S^{m}(\mathbb{R}^{n}\times\mathbb{R}^{N})\) containing all functions \(p\in\mathcal{C}^{\infty}(\mathbb{R}^{n}\times\mathbb{R}^{N})\) such that for each pair of multi-indices \(\alpha\in\mathbb{N}^{n},\beta\in\mathbb{N}^{N}\) and each compact \(K\subset\mathbb{R}^{n}\) one has
\[\left|\partial_{x}^{\alpha}\partial_{\xi}^{\beta}p(x,\xi)\right|\lesssim_{\alpha,\beta,K}\left\langle\xi\right\rangle^{m-|\beta|},\quad x\in K,\ \xi\in\mathbb{R}^{N}. \tag{2.1}\]
A symbol \(p\in S^{m}(\mathbb{R}^{n}\times\mathbb{R}^{N})\) with \(m\in\mathbb{Z}\) is said to be _classical_ if for all \(j\geq 0\) there exists a smooth function \(p_{m-j}(x,\xi)\), \(\xi\)-homogeneous of degree \(m-j\) outside of a compact neighbourhood of \(0\in\mathbb{R}^{N}\), such that for all \(M\in\mathbb{N}\) we have the asymptotic expansion
\[p(x,\xi)-\sum_{j=0}^{M}p_{m-j}(x,\xi)\in S^{m-M-1}(\mathbb{R}^{n}\times\mathbb{ R}^{N}). \tag{2.2}\]
In case the original symbol does not depend on \(x\), we write \(S^{m}(\mathbb{R}^{N})\) and speak about _global classical symbols_.
**Definition 2.2**.: The class \(\mathit{SG}^{m}(\mathbb{R}^{n}\times\mathbb{R}^{N})\) of _SG-symbols_ of order \(m=(m_{e},m_{\psi})\in\mathbb{R}^{2}\) is the set of all \(\mathcal{C}^{\infty}\) functions \(a\colon\mathbb{R}^{n}\times\mathbb{R}^{N}\to\mathbb{C}\) such that for all \(\alpha\in\mathbb{N}^{n},\beta\in\mathbb{N}^{N}\) there exists \(c=c(\alpha,\beta)>0\) with
\[\left|\partial_{x}^{\alpha}\partial_{\xi}^{\beta}a(x,\xi)\right|\leq c\left\langle x\right\rangle^{m_{e}-|\alpha|}\left\langle\xi\right\rangle^{m_{\psi}-|\beta|},\quad x\in\mathbb{R}^{n},\xi\in\mathbb{R}^{N}. \tag{2.3}\]
These are all Fréchet spaces with respect to the semi-norms \(\|\cdot\|_{(\alpha,\beta)}\) given by the best possible \(c(\alpha,\beta)\) in (2.3). We often write \(\mathit{SG}^{m}=\mathit{SG}^{m}(\mathbb{R}^{n}\times\mathbb{R}^{n})\) since we will work mainly on \(\mathbb{R}^{n}\times\mathbb{R}^{n}\cong T^{*}\mathbb{R}^{n}\). We call \(m_{e}\) the _exit order_ and \(m_{\psi}\) the _pseudo-differential order_, see Remark 2.10.
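For orientation, a standard example (a routine check of ours, not drawn from the cited sources): for integers \(m_{e},m_{\psi}\geq 0\), any polynomial
\[a(x,\xi)=\sum_{|\alpha|\leq m_{\psi},\,|\beta|\leq m_{e}}c_{\alpha\beta}\,x^{\beta}\xi^{\alpha}\]
belongs to \(\mathit{SG}^{m_{e},m_{\psi}}(\mathbb{R}^{n}\times\mathbb{R}^{N})\), since each derivative \(\partial_{x}^{\gamma}\partial_{\xi}^{\delta}\) lowers the corresponding polynomial degree by \(|\gamma|\), respectively \(|\delta|\). In the same way one verifies that \(\left\langle x\right\rangle\in\mathit{SG}^{1,0}\) and \(\left\langle\xi\right\rangle\in\mathit{SG}^{0,1}\).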
We collect basic properties of these classes in Lemma 2.3.
**Lemma 2.3**.: The following holds true.
1. There is a double filtration on the union \(\mathit{SG}(\mathbb{R}^{n}\times\mathbb{R}^{N})\) of the classes \(\mathit{SG}^{m}\), that is, if \(p=(p_{e},p_{\psi})\leq m=(m_{e},m_{\psi})\), then \(\mathit{SG}^{p}(\mathbb{R}^{n}\times\mathbb{R}^{N})\subset\mathit{SG}^{m}( \mathbb{R}^{n}\times\mathbb{R}^{N})\).
2. The projective limits \(\mathit{SG}^{m_{e},-\infty}\) and \(\mathit{SG}^{-\infty,m_{\psi}}\) are isomorphic to \(\mathcal{S}(\mathbb{R}^{N},S^{m_{e}}(\mathbb{R}^{n}))\) and \(\mathcal{S}(\mathbb{R}^{n},S^{m_{\psi}}(\mathbb{R}^{N}))\), respectively, while the (double) projective limit \(\mathit{SG}^{-\infty\mathbb{1}}(\mathbb{R}^{n}\times\mathbb{R}^{N})\) is a Fréchet space equalling the class of Schwartz functions \(\mathcal{S}(\mathbb{R}^{n}\times\mathbb{R}^{N})\) (we call elements of these projective limits \(\psi\)-_smoothing_, _e-smoothing_ and _smoothing_, respectively).
3. Pointwise multiplication on \(\mathcal{C}^{\infty}(\mathbb{R}^{n}\times\mathbb{R}^{N})\) restricts to \(\mathit{SG}(\mathbb{R}^{n}\times\mathbb{R}^{N})\) to make it (together with addition) into a commutative bi-filtered algebra.
4. For \(m\in\mathbb{R}^{2}\) the functions \(\lambda^{m}(x,\xi)=\left\langle x\right\rangle^{m_{e}}\left\langle\xi\right\rangle^{m_{\psi}}\in\mathit{SG}^{m}(\mathbb{R}^{n}\times\mathbb{R}^{N})\) are nowhere zero. Multiplication by \(\lambda^{m}(x,\xi)\) induces isomorphisms of Fréchet spaces \(\mathit{SG}^{p}(\mathbb{R}^{n}\times\mathbb{R}^{N})\to\mathit{SG}^{m+p}(\mathbb{R}^{n}\times\mathbb{R}^{N})\) for all \(p\in\mathbb{R}^{2}\).
The symbols \(\lambda^{m}\) will be used to give a characterization of the following scale of Sobolev spaces adapted to the \(\mathit{SG}\)-calculus.
**Definition 2.4**.: For \(m=(m_{e},m_{\psi})\in\mathbb{R}^{2}\) we define the \(L^{2}\)-based _SG-Sobolev spaces_ as
\[\mathit{HG}_{m}\equiv\left\langle x\right\rangle^{-m_{e}}H_{m_{\psi}}(\mathbb{ R}^{n}). \tag{2.4}\]
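Unwinding the definition (a standard description, included here for convenience, with \(H_{s}(\mathbb{R}^{n})=\left\langle D\right\rangle^{-s}L^{2}(\mathbb{R}^{n})\) the usual Sobolev spaces): \(u\in\mathit{HG}_{m}\) if and only if \(\left\langle x\right\rangle^{m_{e}}u\in H_{m_{\psi}}(\mathbb{R}^{n})\), and in particular
\[\mathit{HG}_{(0,s)}=H_{s}(\mathbb{R}^{n}),\qquad\mathit{HG}_{(t,0)}=\left\langle x\right\rangle^{-t}L^{2}(\mathbb{R}^{n}),\qquad\bigcap_{m\in\mathbb{R}^{2}}\mathit{HG}_{m}=\mathcal{S}(\mathbb{R}^{n}),\qquad\bigcup_{m\in\mathbb{R}^{2}}\mathit{HG}_{m}=\mathcal{S}^{\prime}(\mathbb{R}^{n}).\]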
Much like for Hörmander classes, a notion of asymptotic expansion is defined and the principle of asymptotic completeness holds true. The existence of the second filtration implies that we can define multiple notions of asymptotic sums, so we summarise them in the following theorem.
**Theorem 2.5**.: The following holds true.
1. Let \(a_{j}(x,\xi)\in\mathit{SG}^{m^{(j)}}(\mathbb{R}^{n}\times\mathbb{R}^{N})\) be a sequence of functions with \(m^{(j)}=(m_{e}^{(j)},m_{\psi}^{(j)})\to-\infty\mathbb{1}\) as \(j\to\infty\). There exists \(a\in\mathit{SG}^{m}(\mathbb{R}^{n}\times\mathbb{R}^{N}),m=(\max m_{e}^{(j)},\max m_{\psi}^{(j)})\), such that, given any \(c\in\mathbb{R}\), we can find \(K=K(c)\in\mathbb{N}\) with (2.5) \[a(x,\xi)-\sum_{j=0}^{K}a_{j}(x,\xi)\in\mathit{SG}^{m-c\mathbb{1}}(\mathbb{R}^{n}\times\mathbb{R}^{N}),\] and \(a\) is furthermore unique mod \(\mathcal{S}(\mathbb{R}^{n}\times\mathbb{R}^{N})\);
2. Let \(a_{j}(x,\xi)\in\mathit{SG}^{m^{(j)}_{e},m_{\psi}}(\mathbb{R}^{n}\times\mathbb{R}^{N})\) be a sequence of functions with \(m_{e}^{(j)}\to-\infty\) as \(j\to\infty\). There exists \(a\in\mathit{SG}^{m}(\mathbb{R}^{n}\times\mathbb{R}^{N}),m=(\max m_{e}^{(j)},m_{\psi})\), such that, given any \(c\in\mathbb{R}\), we can find \(K=K(c)\in\mathbb{N}\) with (2.6) \[a(x,\xi)-\sum_{j=0}^{K}a_{j}(x,\xi)\in\mathit{SG}^{m-(c,0)}(\mathbb{R}^{n}\times\mathbb{R}^{N}),\] and \(a\) is furthermore unique mod \(\mathit{SG}^{-\infty,m_{\psi}}(\mathbb{R}^{n}\times\mathbb{R}^{N})\);
3. Let \(a_{j}(x,\xi)\in\mathit{SG}^{m_{e},m_{\psi}^{(j)}}(\mathbb{R}^{n}\times\mathbb{R}^{N})\) be a sequence of functions with \(m_{\psi}^{(j)}\to-\infty\) as \(j\to\infty\). There exists \(a\in\mathit{SG}^{m}(\mathbb{R}^{n}\times\mathbb{R}^{N}),m=(m_{e},\max m_{\psi}^{(j)})\), such that, given any \(c\in\mathbb{R}\), we can find \(K=K(c)\in\mathbb{N}\) with (2.7) \[a(x,\xi)-\sum_{j=0}^{K}a_{j}(x,\xi)\in\mathit{SG}^{m-(0,c)}(\mathbb{R}^{n}\times\mathbb{R}^{N}),\] and \(a\) is furthermore unique mod \(\mathit{SG}^{m_{e},-\infty}(\mathbb{R}^{n}\times\mathbb{R}^{N})\).
In all of the above cases we write \(a\sim\sum a_{j}\) to indicate that \(a\) is the asymptotic sum of the sequence \(a_{j}\). The scale which we refer to will be in general clear from the context.
Recall that, in the classical theory of \(\Psi\)DOs, homogeneous functions can be turned into symbols with the help of an excision function, namely if \(b\in\mathcal{C}^{\infty}(\mathbb{R}^{n}\times\mathbb{R}^{N}_{0})\) is homogeneous of degree \(k\) in \(\xi\)\({}^{13}\) and \(\chi(\xi)=0\) near \(0\) and \(\chi(\xi)=1\) for large \(|\xi|\), then \(a(x,\xi)=\chi(\xi)b(x,\xi)\in S^{k}(\mathbb{R}^{n}\times\mathbb{R}^{N})\) and \(a(x,\xi)-b(x,\xi)=(1-\chi(\xi))b(x,\xi)\) is compactly supported in \(\xi\). Similarly, asymptotic sums can be made convergent with the help of such a \(\chi\) and a sequence \(\mathbb{R}^{+}\ni c_{j}\to\infty\) sufficiently fast as \(j\to\infty\), by setting
Footnote 13: Namely, \(b(x,\mu\xi)=\mu^{k}b(x,\xi)\) for all \(x\in\mathbb{R}^{n}\), \(\xi\in\mathbb{R}^{N}\setminus\{0\}\), \(\mu>0\).
\[a(x,\xi)\equiv\sum_{j\geq 0}\chi\left(\frac{\xi}{c_{j}}\right)a_{j}(x,\xi). \tag{2.8}\]
The same process works for \(\mathit{SG}\)-classes with respect to both sets of variables separately, so that each of the asymptotic sums in Theorem 2.5 can be made convergent up to some smoothing term.
Our main object of interest is the subclass of _classical_ (also known as polyhomogeneous) symbols. Since the situation is slightly more complex than in the case of the Hörmander classes, we exercise some extra care here in order to define the notion. In particular, the following relaxed notions of homogeneity are required.
**Definition 2.6**.: For \(\bullet\in\{e,\psi\}\), define the classes of _partially \(m_{\bullet}\)-homogeneous functions_, \(m_{\bullet}\in\mathbb{R}\), by
\[\mathcal{H}_{\psi}^{(m_{\psi})} =\left\{a(x,\xi)\in\mathcal{C}^{\infty}(\mathbb{R}^{n}\times \mathbb{R}^{N}_{0})\text{ s.t. }\forall\lambda>0,x\in\mathbb{R}^{n},\xi\in\mathbb{R}^{N}_{0}\;a(x, \lambda\xi)=\lambda^{m_{\psi}}a(x,\xi)\right\}\] \[\mathcal{H}_{e}^{(m_{e})} =\left\{a(x,\xi)\in\mathcal{C}^{\infty}(\mathbb{R}^{n}_{0}\times \mathbb{R}^{N})\text{ s.t. }\forall\lambda>0,x\in\mathbb{R}^{n}_{0},\xi\in\mathbb{R}^{N}\;a( \lambda x,\xi)=\lambda^{m_{e}}a(x,\xi)\right\}. \tag{2.9}\]
Also define the class of _bi-homogeneous functions_, letting for each \(m=(m_{e},m_{\psi})\in\mathbb{R}^{2}\)
\[\mathcal{H}^{(m)} =\left\{a(x,\xi)\in\mathcal{C}^{\infty}(\mathbb{R}^{n}_{0}\times \mathbb{R}^{N}_{0})\text{ s.t. }\right.\] \[\left.\forall\lambda,\mu>0,(x,\xi)\in\mathbb{R}^{n}_{0}\times \mathbb{R}^{N}_{0}\;a(\lambda x,\mu\xi)=\lambda^{m_{e}}\mu^{m_{\psi}}a(x,\xi) \right\}. \tag{2.10}\]
The conditions defining these classes can be relaxed to define _eventually homogeneous_ functions, that is, homogeneous outside \(\mathbb{B}_{c}(0)\) for some \(c>0\). Namely
\[\mathcal{H}_{\psi}^{[m_{\psi}]} =\left\{a(x,\xi)\in\mathcal{C}^{\infty}(\mathbb{R}^{n+N})\text{ s.t. }\forall\mu\geq 1,x\in\mathbb{R}^{n},|\xi|>c,\;a(x,\mu\xi)=\mu^{m_{\psi}}a(x, \xi)\right\}\] \[\mathcal{H}_{e}^{[m_{e}]} =\left\{a(x,\xi)\in\mathcal{C}^{\infty}(\mathbb{R}^{n+N})\text{ s.t. }\forall\lambda\geq 1,|x|>c,\xi\in\mathbb{R}^{N},\;a(\lambda x,\xi)= \lambda^{m_{e}}a(x,\xi)\right\}\] \[\mathcal{H}^{[m_{e},m_{\psi}]} =\left\{a(x,\xi)\in\mathcal{C}^{\infty}(\mathbb{R}^{n+N})\text{ s.t. }\forall\lambda,\mu\geq 1,|x|\,,|\xi|>c,\;a(\lambda x,\mu\xi)=\lambda^{m_{e}} \mu^{m_{\psi}}a(x,\xi)\right\}. \tag{2.11}\]
We have then _homogeneous symbols_:
\[\begin{split} SG^{m_{e},[m_{\psi}]}&=\mathcal{H}_{ \psi}^{[m_{e}]}\cap SG^{m_{e},m_{\psi}}\\ SG^{[m_{e}],m_{\psi}}&=\mathcal{H}_{e}^{[m_{e}]} \cap SG^{m_{e},m_{\psi}}\\ SG^{[m_{e}],[m_{\psi}]}&=\mathcal{H}^{[m_{e}],[m_{ \psi}]}\cap SG^{m_{e},m_{\psi}}\end{split} \tag{2.12}\]
**Definition 2.7**.: The spaces of \(\xi\)_-classically homogeneous SG-symbols_ and \(\xi\)_-classical SG-symbols_ are:
\[SG_{cl(\psi)}^{[m_{e}],m_{\psi}} =\left\{a\in SG^{[m_{e}],m_{\psi}}\text{ s.t. }\exists a_{k}\in SG^{[m_{e}],[m_{\psi}-k]}\forall N\;a-\sum_{k=0}^{N}a_{k} \in SG^{m_{e},m_{\psi}-N-1}\right\},\] \[SG_{cl(\psi)}^{m_{e},m_{\psi}} =\left\{a\in SG^{m_{e},m_{\psi}}\text{ s.t. }\exists a_{k}\in SG^{m_{e},[m_{\psi}-k]} \forall N\;a-\sum_{k=0}^{N}a_{k}\in SG^{m_{e},m_{\psi}-N-1}\right\}. \tag{2.13}\]
Similarly we have \(x\)-classically homogeneous and \(x\)-classical _SG_-symbols:
\[SG_{cl(e)}^{m_{e},[m_{\psi}]} =\left\{a\in SG^{m_{e},[m_{\psi}]}\text{ s.t. }\exists a_{k}\in SG^{[m_{e}-k],[m_{\psi}]} \forall N\;a-\sum_{k=0}^{N}a_{k}\in SG^{m_{e}-N-1,m_{\psi}}\right\},\] \[SG_{cl(e)}^{m_{e},m_{\psi}} =\left\{a\in SG^{m_{e},m_{\psi}}\text{ s.t. }\exists a_{k}\in SG^{[m_{e}-k],m_{\psi}} \forall N\;a-\sum_{k=0}^{N}a_{k}\in SG^{m_{e}-N-1,m_{\psi}}\right\}. \tag{2.14}\]
**Definition 2.8**.: The space of _classical SG-symbols_ or _classical symbols with exit condition_ is the set \(SG_{cl}^{m}\) consisting of those symbols \(a\in SG^{m}\) satisfying:
1. \(\forall k\in\mathbb{N}\;\exists a_{k}^{\psi}\in SG_{cl(e)}^{m_{e},[m_{\psi}-k]} \text{ s.t. }\forall N\) (2.15) \[a(x,\xi)-\sum_{k=0}^{N}a_{k}^{\psi}(x,\xi)\in SG_{cl(e)}^{m_{e},m_{\psi}-N-1};\]
2. \(\forall j\in\mathbb{N}\)__\(\exists a_{j}^{e}\in SG_{cl(\psi)}^{[m_{e}-j],m_{\psi}}\)__s.t.__\(\forall N\)__ (2.16) \[a(x,\xi)-\sum_{j=0}^{N}a_{j}^{e}(x,\xi)\in SG_{cl(\psi)}^{m_{e}-N-1,m_{\psi}}.\]
We will, from now on, only deal with classical \(SG\)-symbols (and later operators). Therefore, the subscripts \(cl,cl(e),cl(\psi)\) will be omitted and existence of asymptotic expansions tacitly assumed throughout.
**Remark 2.9**.: We remark that an alternative definition for classical \(SG\)-symbols has been given in [10]. Therein, the structure of symbol classes with values in a Fréchet space is explored and, in particular, it is proven that \(S_{cl}^{m}(\mathbb{R}^{n};S_{cl}^{l}(\mathbb{R}^{n}))\cong S_{cl}^{m}\hat{\otimes}_{\pi}S_{cl}^{l}\cong SG_{cl}^{m,l}.\) Here \(S_{cl}^{m}\) denotes the space of _global classical symbols in one variable of order \(m\)_, namely the space of those smooth functions \(a(\xi)\) on \(\mathbb{R}^{n}\) which satisfy symbol estimates of order \(m\) and admit an asymptotic expansion in homogeneous functions \(a_{k}(\xi)\) of degree \(m-k\). This would justify the terminology "product-type symbols" for the \(SG\)-classes. However, we refrain from its use since it might be easily confused with other, similar classes (e.g. the bi-singular operators defined by Rodino [11]). We remark that, with this definition, it is also directly possible to define classical operators of complex order \((s_{e},s_{\psi})\) by saying that they are exactly those operators with symbol in \(S^{s_{e}}(\mathbb{R}^{n})\hat{\otimes}_{\pi}S^{s_{\psi}}(\mathbb{R}^{n}).\)
**Remark 2.10**.: We will often use the terms "exit \(\Box\)" and "pseudo-differential \(\Box\)" when speaking about properties of an object \(\Box\) associated with the \(e\)-asymptotic expansion and the \(\psi\)-asymptotic expansion, respectively. For example, we will speak in a short while of the "exit symbol of order \(m_{e}-k\)" and the "pseudo-differential symbol of order \(m_{\psi}-j\)" for the maps \(\sigma_{e}^{m_{e}-k}\) and \(\sigma_{\psi}^{m_{\psi}-j}\), respectively. Also, for brevity's sake and convenience of notation, we often shorten \(\sigma_{\bullet}^{m_{\bullet}-l}(a)\) to \(a^{\bullet}_{m_{\bullet}-l}\) for \(\bullet\in\{e,\psi,\psi e\}\) and, in particular, \(a_{\bullet}\equiv\sigma_{\bullet}^{m_{\bullet}}(a)\). We call the maps \(\sigma_{\bullet}^{m_{\bullet}}\) the \(\bullet\)-_principal symbol maps_. We will see in the next chapter how helpful this lingo is in identifying properties of functions defined on different boundary hyper-surfaces of the scattering cotangent bundle.
**Remark 2.11**.: We will, in general, use round brackets to denote homogeneity in the respective part of the domain (functions will be defined outside of the corresponding "zero section"), while square brackets indicate eventual homogeneity as in (2.11). It is then clear that, being interested only in the asymptotic behaviour of the symbols, we can pass from round to square brackets and vice-versa by multiplying with an excision function and adapting therefore the notion of "convergence" of asymptotic sums as in (2.8). For a symbol \(a\) in \(SG^{[m_{e}],m_{\psi}}\), respectively \(SG^{m_{e},[m_{\psi}]}\), the asymptotic expansion (2.16), respectively (2.15), is then trivial, in the sense that it consists of the function itself.
**Remark 2.12**.: The terms in the asymptotic sums of Definition 2.8 are uniquely determined modulo elements in \(SG^{-\infty,m_{\psi}}\) and \(SG^{m_{e},-\infty}\), respectively. That is, the _eventual_ behaviour (outside a compact neighbourhood of \(0\)) is well-defined.
The conditions in Definition 2.8 allow us to canonically identify maps
\[\begin{split}\sigma_{e}^{m_{e}-k}\colon SG^{m}\to\mathcal{H}_{e }^{(m_{e}-k)},\quad\sigma_{e}^{m_{e}-k}(a)(x,\xi)=a_{m_{e}-k}^{e}(x,\xi),\\ \sigma_{\psi}^{m_{\psi}-j}\colon SG^{m}\to\mathcal{H}_{\psi}^{(m_{ \psi}-j)},\quad\sigma_{\psi}^{m_{\psi}-j}(a)(x,\xi)=a_{m_{\psi}-j}^{\psi}(x, \xi),\end{split} \tag{2.17}\]
taking values, respectively, in the classes \(SG^{(m_{e}-k),m_{\psi}}\) and \(SG^{m_{e},(m_{\psi}-j)}\). In particular, \(\sigma_{e}^{m_{e}-k}(a)\) admits an asymptotic expansion in the classes \(SG^{(m_{e}-k),m_{\psi}-j},j\geq 0\), so that we can canonically identify bi-homogeneous elements \(\sigma_{\psi}^{m_{\psi}-j}\sigma_{e}^{m_{e}-k}(a)\in\mathcal{H}^{(m_{e}-k,m_{ \psi}-j)}\). The same process, applied to \(\sigma_{\psi}^{m_{\psi}-j}(a)\) in the classes \(SG^{m_{e}-k,(m_{\psi}-j)}\), \(k\geq 0\), produces bi-homogeneous functions \(\sigma_{e}^{m_{e}-k}\sigma_{\psi}^{m_{\psi}-j}(a)\in\mathcal{H}^{(m_{e}-k,m_{ \psi}-j)}\), so that we naturally are interested in the relation between the two. The following Lemma (Exercise 3, Section 8.2 in [1]) tells us that it is actually (and luckily) quite simple.
**Lemma 2.13**.: For any \(m\in\mathbb{R}^{2},k,j\in\mathbb{N}\) the maps \(\sigma_{\psi}^{m_{\psi}-k}\) and \(\sigma_{e}^{m_{e}-j}\) commute and define functions \(a_{jk}^{\psi e}\equiv\sigma_{\psi e}^{m_{e}-k,m_{\psi}-j}(a)\equiv\sigma_{e}^ {m_{e}-k}\sigma_{\psi}^{m_{\psi}-j}(a)\). In particular, with any classical \(SG\)-symbol of order \(m\in\mathbb{R}^{2}\), there is canonically associated an "infinite-dimensional matrix" (we sometimes call this an _asymptotic matrix_) of bi-homogeneous functions \(\{a_{jk}^{\psi e}\}_{j,k\geq 0}\) with \(a_{jk}^{\psi e}\in\mathcal{H}^{(m_{e}-k,m_{\psi}-j)}\), such that each "row \(j\)" or "column \(k\)" can be asymptotically summed to give \(\sigma_{\psi}^{m_{\psi}-k}(a)\) or \(\sigma_{e}^{m_{e}-j}(a)\), respectively.
**Remark 2.14**.: In order to lighten the notation, we often omit the superscript \(\psi e\) when dealing with asymptotic matrices. Notice, in addition, that an asymptotic matrix can always be considered as a single asymptotic expansion. It suffices to consider the triangular enumeration of \(\mathbb{N}^{2}\) and sum first each diagonal (a finite sum), so that we are left with a usual asymptotic expansion and we can determine its sum modulo \(\mathcal{S}\). Therefore, with the help of an excision function, we can always sum an asymptotic matrix.
Similarly to the standard class \(\Psi(\mathbb{R}^{n})\), we define a notion of principal symbol.
**Definition 2.15**.: For \(a\in SG^{m}\), the _principal symbol_ of \(a\) is the triple of functions \(\sigma_{pr}^{m}(a)\equiv(\sigma_{e}^{m_{e}}(a),\sigma_{\psi}^{m_{\psi}}(a), \sigma_{\psi e}^{m}(a))\equiv(a_{e},a_{\psi},a_{\psi e})\) canonically associated with \(a\) as in Lemma 2.13.
**Proposition 2.16** (Properties of the principal symbol).: The following holds true:
1. For \(m\in\mathbb{R}^{2}\) the quotient \(\Sigma G^{m}\equiv SG^{m}\diagup SG^{m-1}\) contains the principal symbols \((a_{e},a_{\psi},a_{\psi e})\) of the \(a\in SG^{m}\). Equivalently, \(\Sigma G^{m}\) contains the pairs \((a_{e},a_{\psi})\) with \(a_{e}\in SG^{(m_{e}),m_{\psi}}\) and \(a_{\psi}\in SG^{m_{e},(m_{\psi})}\) such that \(\sigma_{e}^{m_{e}}(a_{\psi})=\sigma_{\psi}^{m_{\psi}}(a_{e})\).
2. The direct sum of the principal symbol spaces \(\Sigma G=\bigoplus_{m\in\mathbb{Z}^{2}}\Sigma G^{m}\) has the structure of a commutative graded module over \(\Sigma G^{0}\).
3. We can compute the exit and pseudo-differential symbols as (2.18) \[\begin{split}\sigma_{e}^{m_{e}}(a)(x,\xi)&=\lim_{ \mu\to\infty}\mu^{-m_{e}}a(\mu x,\xi),\\ \sigma_{\psi}^{m_{\psi}}(a)(x,\xi)&=\lim_{\mu\to \infty}\mu^{-m_{\psi}}a(x,\mu\xi).\end{split}\]
4. The \(\bullet\)-principal symbol maps are multiplicative on the respective components, i.e. \(\sigma_{\bullet}^{m_{\bullet}+l_{\bullet}}(ab)=\sigma_{\bullet}^{m_{\bullet}} (a)\sigma_{\bullet}^{l_{\bullet}}(b)\) if \(a\in SG^{m},b\in SG^{l}\).
5. If \(a\in SG^{m}\) and \(\sigma_{e}^{m_{e}}(a)=0=\sigma_{\psi}^{m_{\psi}}(a)\), then \(a\in SG^{m-1}\).
6. \(\sigma_{pr}\equiv\bigoplus_{m\in\mathbb{Z}^{2}}\sigma_{pr}^{m}\) defines a surjective homomorphism of \(SG\) onto \(\Sigma G\).
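The limits in 3. make the principal symbol easy to compute in concrete cases. As a quick illustration (a verification of ours, not taken from the cited literature), for \(a(x,\xi)=\left\langle x\right\rangle\left\langle\xi\right\rangle\in\mathit{SG}^{1,1}\) one finds, for \(x\neq 0\), respectively \(\xi\neq 0\),
\[a_{e}(x,\xi)=\lim_{\mu\to\infty}\mu^{-1}\left\langle\mu x\right\rangle\left\langle\xi\right\rangle=|x|\left\langle\xi\right\rangle,\qquad a_{\psi}(x,\xi)=\left\langle x\right\rangle|\xi|,\qquad a_{\psi e}(x,\xi)=|x|\,|\xi|,\]
and the compatibility condition in 1., namely \(\sigma_{e}^{1}(a_{\psi})=\sigma_{\psi}^{1}(a_{e})=a_{\psi e}\), is verified directly.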
**Remark 2.17**.: Importantly, in Proposition 2.16 we are _not_ identifying functions with their asymptotic expansions in \(\sigma_{\psi}SG^{m}\) and \(\sigma_{e}SG^{m}\).
**Definition 2.22**.: A function \(a\in\mathcal{C}^{\infty}(\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{k})\) is called an _amplitude of \(SG\)-type_, or just \(SG\)-amplitude, of order \((m_{1},m_{2},m_{3})\), if for all \(\alpha,\beta\in\mathbb{N}^{n},\gamma\in\mathbb{N}^{k}\) it satisfies the global estimate on \(\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{k}\)
\[\left|\partial_{x}^{\alpha}\partial_{y}^{\beta}\partial_{\xi}^{\gamma}a\right| \lesssim\left\langle x\right\rangle^{m_{1}-|\alpha|}\left\langle y\right\rangle ^{m_{2}-|\beta|}\left\langle\xi\right\rangle^{m_{3}-|\gamma|}. \tag{2.20}\]
Notice that an \(SG\)-symbol of order \((m_{e},m_{\psi})\) is just an \(SG\)-amplitude of order \((m_{e},0,m_{\psi})\) which is independent of \(y\). The discussion on classicality can be generalised directly to the case of an amplitude by asking that it admits asymptotic expansions separately in all three sets of variables. We come now to \(SG\)-pseudo-differential operators. We remark that most definitions and results below make sense or could be phrased for non-classical operators as well. However, in view of our future needs, we limit ourselves to classical objects and, as before, drop the subscripts "cl" from our notation.
**Definition 2.23**.: For \(a\in\mathit{SG}^{m_{1},m_{2},m_{3}}\) and \(u\in\mathcal{S}\) let (in the sense of oscillatory integrals)
\[\mathrm{Op}(a)u(x)\equiv(2\pi)^{-n}\iint e^{\mathrm{i}(x-y)\xi}a(x,y,\xi)u(y)\,\mathrm{d}y\,\mathrm{d}\xi \tag{2.21}\]
and call it the _operator_ defined by the amplitude \(a\).
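To fix ideas, here are some elementary instances of the quantisation (simple checks under the Fourier conventions implicit in (2.21), not statements quoted from the sources):
\[\operatorname{Op}(\xi_{j})=D_{x^{j}}=-\,\mathrm{i}\,\partial_{x^{j}},\qquad\operatorname{Op}(x^{\beta}\xi^{\alpha})=x^{\beta}D_{x}^{\alpha},\qquad\operatorname{Op}(\left\langle x\right\rangle^{2}\left\langle\xi\right\rangle^{2})=\left\langle x\right\rangle^{2}(1-\Delta),\]
so that differential operators with polynomially bounded coefficients are the most basic examples of operators in \(\mathit{LG}\).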
**Lemma 2.24**.: Let \(A=\mathrm{Op}(a)\) be an \(SG\)-pseudo-differential operator defined by an \(SG\)-amplitude of order \((m_{1},m_{2},m_{3})\). There exists an \(SG\)-symbol \(b\in\mathit{SG}^{m_{1}+m_{2},m_{3}}\) such that \(\mathrm{Op}(a)=\mathrm{Op}(b)\). Vice-versa, for every \(SG\)-symbol \(b\in\mathit{SG}^{m_{e},m_{\psi}}\) and any \(t\in\mathbb{R}\) we can find an \(SG\)-amplitude \(a_{t}\) of order \((t,m_{e}-t,m_{\psi})\) with \(\mathrm{Op}(a_{t})=\mathrm{Op}(b)\).
**Definition 2.25**.: We let \(\mathit{LG}^{m}\) denote the class of pseudo-differential operators defined by either symbols or amplitudes of \(SG\)-type and symbol order \(m\) and employ the notation \(\mathcal{R}G\equiv\mathit{LG}^{-\infty,-\infty}\). \(\mathcal{R}G\) is a two-sided ideal in each \(\mathit{LG}^{m}\) and we let \(\mathcal{B}G\) denote the quotient algebra \(\mathit{LG}\diagup\mathcal{R}G\).
**Lemma 2.26**.: For any \(m\in\mathbb{R}^{2}\) the operators in \(\mathit{LG}^{m}(\mathbb{R}^{n})\) act continuously between \(\mathcal{C}_{c}^{\infty}(\mathbb{R}^{n})\) and \(\mathcal{C}^{\infty}(\mathbb{R}^{n})\), and also on \(\mathcal{S}(\mathbb{R}^{n})\). Moreover operators \(A\in\mathit{LG}^{0,m_{\psi}}\) act continuously on \(\mathcal{C}_{b}^{\infty}(\mathbb{R}^{n})\) and we can compute the symbol via
\[\sigma(A)(x,\xi)=e^{-\,\mathrm{i}\,x\xi}\left(Ae^{\,\mathrm{i}\,(\cdot)\xi}\right)(x). \tag{2.22}\]
**Proposition 2.27**.: The following holds true:
1. Each \(A\in\mathcal{R}G\) is of the form \(\mathrm{Op}(a)\) with \(a\in\mathcal{S}(\mathbb{R}^{2n})\);
2. The map \(\mathrm{Op}\colon\mathit{SG}^{m}\to\mathit{LG}^{m}\) is bijective.
**Theorem 2.28** (Theorem 7, Section 8.2 in [1]).: Let \(A=\mathrm{Op}(a)\in\mathit{LG}^{m},B=\mathrm{Op}(b)\in\mathit{LG}^{l}\). Then \(AB=\mathrm{Op}(c)\in\mathit{LG}^{m+l}\) and we have the asymptotic expansion (the so-called _Leibniz product_ of \(a\) and \(b\))
\[c(x,\xi)\sim\sum_{\alpha\geq 0}\frac{(-\,\mathrm{i})^{|\alpha|}}{\alpha!}\frac{ \partial^{\alpha}a}{\partial\xi^{\alpha}}(x,\xi)\frac{\partial^{\alpha}b}{ \partial x^{\alpha}}(x,\xi). \tag{2.23}\]
Moreover, the principal symbol maps are multiplicative, i.e.
\[\sigma_{e}^{m_{e}+l_{e}}(AB) =\sigma_{e}^{m_{e}}(A)\sigma_{e}^{l_{e}}(B),\] \[\sigma_{\psi}^{m_{\psi}+l_{\psi}}(AB) =\sigma_{\psi}^{m_{\psi}}(A)\sigma_{\psi}^{l_{\psi}}(B),\] \[\sigma_{\psi e}^{m+l}(AB) =\sigma_{\psi e}^{m}(A)\sigma_{\psi e}^{l}(B). \tag{2.24}\]
In particular, there is an algebra isomorphism \(\mathcal{B}G\cong SG\diagup SG^{-\infty\mathbb{1}}\), where on the left we have composition and on the right the Leibniz product.
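A minimal worked instance of (2.23) (our own illustration): for \(A=\operatorname{Op}(\left\langle x\right\rangle)\) and \(B=\operatorname{Op}(\left\langle\xi\right\rangle)\) the expansion terminates, since \(\partial_{\xi}^{\alpha}\left\langle x\right\rangle=0\) for \(|\alpha|\geq 1\), and in fact \(AB=\operatorname{Op}(\left\langle x\right\rangle\left\langle\xi\right\rangle)\) exactly. In the reverse order,
\[\sigma(BA)(x,\xi)\sim\left\langle x\right\rangle\left\langle\xi\right\rangle-\,\mathrm{i}\,\frac{x\cdot\xi}{\left\langle x\right\rangle\left\langle\xi\right\rangle}+\ldots,\]
with all correction terms in \(\mathit{SG}^{0,0}\) and below, so that \([A,B]\in\mathit{LG}^{0,0}\), one order lower in each filtration, consistently with the multiplicativity (2.24) of the principal symbol maps.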
**Theorem 2.29**.: Let \(A\in LG^{m}\) and \(A^{\dagger}\) be the formal \(L^{2}\)-adjoint, defined by \(\left(Au,v\right)=\left(u,A^{\dagger}v\right)\) for all \(u,v\in\mathcal{S}\). Then \(A^{\dagger}\in LG^{m}\) and, if \(A=\operatorname{Op}(a)\), we have \(A^{\dagger}=\operatorname{Op}(a^{\dagger})\) for \(a^{\dagger}\) admitting the asymptotic expansion
\[a^{\dagger}(x,\xi)\sim\sum_{\alpha\geq 0}\frac{(-\operatorname{i})^{|\alpha|}}{ \alpha!}\partial_{x}^{\alpha}\partial_{\xi}^{\alpha}\overline{a(x,\xi)}. \tag{2.25}\]
**Lemma 2.30**.: There is a short exact sequence for any \(m\in\mathbb{Z}^{2}\)
\[0\to LG^{m-1}\to LG^{m}\xrightarrow{\sigma_{pr}}\Sigma G^{m}\to 0. \tag{2.26}\]
We point out that for operators in \(LG\) we have a characterization of the Fredholm property in terms of the ellipticity. This was historically one of the reasons for the introduction of global calculi.
**Theorem 2.31**.: The following holds true:
1. The space \(LG^{-\infty\mathbb{1}}(\mathbb{R}^{n})\) consists of compact operators on \(L^{2}(\mathbb{R}^{n})\);
2. An operator \(P\in LG^{m}\) is elliptic if and only if it admits a parametrix \(Q\in LG^{-m}\), i.e. an operator \(Q\) such that \(PQ-I,QP-I\in LG^{-\infty\mathbb{1}}\);
3. The following are equivalent: 1. \(P\in LG^{m}\) is elliptic. 2. \(P\) extends to a Fredholm operator \(P\colon HG_{l}\to HG_{l-m}\) for some \(l\in\mathbb{Z}^{2}\). 3. \(P\) extends to a Fredholm operator \(P\colon HG_{l}\to HG_{l-m}\) for all \(l\in\mathbb{Z}^{2}\).
We end this section by recalling the existence of so-called _order reducing operators_.
**Lemma 2.32**.: There exist classical, elliptic, invertible operators \(P\in LG^{\mathbb{1}_{e}},Q\in LG^{\mathbb{1}_{\psi}}\) (here \(\mathbb{1}_{e}=(1,0)\) and \(\mathbb{1}_{\psi}=(0,1)\)) giving isomorphisms \(LG^{m}\to LG^{m+\mathbb{1}_{\bullet}}\) by composition. In particular we can take \(P=\operatorname{Op}(\left\langle x\right\rangle),Q=\operatorname{Op}(\left\langle\xi\right\rangle)\).
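For instance (a routine observation of ours, with ellipticity of a symbol \(a\in\mathit{SG}^{m}\) understood in the usual \(SG\)-sense, \(|a(x,\xi)|\gtrsim\left\langle x\right\rangle^{m_{e}}\left\langle\xi\right\rangle^{m_{\psi}}\) outside a compact set), the composition
\[\operatorname{Op}(\left\langle x\right\rangle^{m_{e}})\operatorname{Op}(\left\langle\xi\right\rangle^{m_{\psi}})=\left\langle x\right\rangle^{m_{e}}\left\langle D\right\rangle^{m_{\psi}}\in LG^{m}\]
is elliptic and invertible, with inverse \(\left\langle D\right\rangle^{-m_{\psi}}\left\langle x\right\rangle^{-m_{e}}\), and one checks that it restricts to an isomorphism \(\mathit{HG}_{l}\to\mathit{HG}_{l-m}\) for every \(l\); in this way the whole Sobolev scale of Definition 2.4 is obtained from \(L^{2}(\mathbb{R}^{n})\) by applying order reductions.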
Taking advantage of Lemma 2.24 we introduce a notion of ellipticity for amplitudes.
**Definition 2.33**.: We say that an amplitude \(a\in SG^{m_{1},m_{2},m_{3}}\) is _elliptic_ if it can be quantised to an elliptic symbol. Namely, \(a\) is elliptic if and only if \(a\) defines an operator \(A\in LG^{m_{1}+m_{2},m_{3}}\) whose symbol is elliptic.
### \(SG\)-symbols and the symplectic structure
We equip \(\mathbb{R}^{2n}\cong T^{*}\mathbb{R}^{n}\) with the standard symplectic structure \(\omega=\mathrm{d}\xi_{i}\wedge\mathrm{d}x^{i}\), where \(\xi_{i}\) is the canonically dual coordinate to \(x^{i}\). Recall that this induces a Poisson bracket on smooth functions by
\[\{f,g\}=\frac{\partial f}{\partial\xi_{i}}\frac{\partial g}{\partial x^{i}}- \frac{\partial f}{\partial x^{i}}\frac{\partial g}{\partial\xi_{i}}. \tag{2.27}\]
The interplay between this operation and the \(SG\)-calculus will help us clarify the situation for the study of singular symplectomorphisms in Chapter 3.
**Proposition 2.34**.: The following holds true:
1. \(SG\subset\mathcal{C}^{\infty}(\mathbb{R}^{n}\times\mathbb{R}^{n})\) is a commutative algebra with respect to the pointwise product, which is, in addition, bi-filtered.
2. The Poisson bracket gives the structure of a Lie algebra to \(\mathit{SG}\) and in particular is a bi-filtered bi-derivation of \(\mathit{SG}\). That is, \(\{a,b\}\in\mathit{SG}^{m+k-1}\) if \(a\in\mathit{SG}^{m},b\in\mathit{SG}^{k},m,k\in\mathbb{R}^{2}\).
3. \(\mathit{SG}^{0}\) is a sub-algebra and Lie sub-algebra of \(\mathit{SG}\), and \(\Sigma G^{0}\) inherits the structure of a commutative Lie algebra (namely, the Poisson bracket is trivial in the quotient).
4. The Poisson bracket induces Lie algebra structures on \(\mathit{SG}^{1}\) and \(\Sigma G^{1}\), and the principal symbol map is then a homomorphism of Lie algebras. More specifically, the \(\bullet\)-principal symbol of \(\{a,b\}\) only depends on the \(\bullet\)-principal symbols of \(a\) and \(b\) and can be computed explicitly as (2.28) \[\begin{split}\sigma^{1}_{\psi}(\{a,b\})&=\{\sigma_{\psi}(a),\sigma_{\psi}(b)\},\\ \sigma^{1}_{e}(\{a,b\})&=\{\sigma_{e}(a),\sigma_{e}(b)\},\\ \sigma^{1}_{\psi e}(\{a,b\})&=\{\sigma_{\psi e}(a),\sigma_{\psi e}(b)\}.\end{split}\]
Proof.: Write \(\partial^{r}\equiv\partial_{\xi_{r}}\) and \(\partial_{s}\equiv\partial_{x^{s}}\), and consider first the product \(\partial^{r}a\,\partial_{s}b\) for classical symbols \(a\in\mathit{SG}^{m}\), \(b\in\mathit{SG}^{l}\). We group the terms according to their homogeneity to obtain
\[\begin{split}\partial^{r}a\partial_{s}b&\sim\partial^{r }a_{00}\partial_{s}b_{00}\\ &\quad+\partial^{r}a_{10}\partial_{s}b_{00}+\partial^{r}a_{01} \partial_{s}b_{00}+\partial^{r}a_{00}\partial_{s}b_{10}+\partial^{r}a_{00} \partial_{s}b_{01}\\ &\quad+\partial^{r}a_{20}\partial_{s}b_{00}+\partial^{r}a_{02} \partial_{s}b_{00}+\partial^{r}a_{00}\partial_{s}b_{20}+\partial^{r}a_{00} \partial_{s}b_{02}\\ &\quad+\partial^{r}a_{11}\partial_{s}b_{00}+\partial^{r}a_{10} \partial_{s}b_{10}+\partial^{r}a_{10}\partial_{s}b_{01}\\ &\quad+\partial^{r}a_{01}\partial_{s}b_{10}+\partial^{r}a_{01} \partial_{s}b_{01}+\partial^{r}a_{00}\partial_{s}b_{11}\\ &\quad+\ldots\\ &=\sum_{m\geq 0}\sum_{\begin{subarray}{c}k,j\geq 0\\ k+j=m\end{subarray}}c^{r}_{s}(k,j),\end{split} \tag{2.30}\]
where
\[c^{r}_{s}(k,j)=\sum_{\begin{subarray}{c}l_{1}+l_{3}=k\\ l_{1},l_{3}\geq 0\end{subarray}}\sum_{\begin{subarray}{c}l_{2}+l_{4}=j\\ l_{2},l_{4}\geq 0\end{subarray}}\partial^{r}a_{l_{1}l_{2}}\partial_{s}b_{l_{3}l_{4 }}\in\mathcal{H}^{(m_{e}+l_{e}-1-k,m_{\psi}+l_{\psi}-1-j)}\]
are the components of the asymptotic matrix of \(\partial^{r}a\partial_{s}b\). We can of course obtain a similar expression for \(\partial_{s}a\partial^{r}b\). Taking the trace \(r=s\) and subtracting the two expressions gives then the asymptotic matrix of the Poisson bracket \(\{a,b\}\) for symbols of general orders \(m,l\in\mathbb{R}^{2}\).
Since taking traces and differences in \(\mathit{SG}^{m}\) cannot increase the order of the symbols in any fashion, the class of \(\{a,b\}\) in the quotient spaces \(\Sigma G\) can be computed from the classes of \(a\) and \(b\), namely, from their principal symbols. Indeed we see directly from (2.30) that the outer row and column of the asymptotic matrix of \(\partial^{r}a\partial_{s}b\) correspond to taking \(k=0\) or \(j=0\) in \(c^{r}_{s}(k,j)\), and thus only depend on the outer row and column of \(a\) and \(b\). Furthermore it is clear that the Poisson bracket commutes with our asymptotic expansions: indeed, the bracket of classical symbols is again a classical symbol.
We are mainly interested in the algebraic structure of the spaces \(\Sigma G^{0}\) and \(\Sigma G^{1}\). For the former, notice that \(a,b\in\mathit{SG}^{0}\) implies that \(\partial^{r}a\partial_{s}b\in\mathit{SG}^{-1}\) and the same must be true for \(\{a,b\}\). Hence, the Poisson bracket vanishes on \(\Sigma G^{0}\). For the latter, we start from (2.30) to compute
\[\{a,b\}\sim\{a_{00},b_{00}\}+\sum_{k\geq 1}\sum_{l_{1}+l_{2}=k}\{a_{l_{1}0},b_ {l_{2}0}\}+\sum_{j\geq 1}\sum_{l_{3}+l_{4}=j}\{a_{0l_{3}},b_{0l_{4}}\}+\sum_{j,k \geq 1}(c^{r}_{r}(k,j)-\tilde{c}^{r}_{r}(k,j)), \tag{2.31}\]
where \(\tilde{c}^{r}_{s}\) are the components of the asymptotic matrix of \(\partial_{s}a\partial^{r}b\). Now, all the terms in the last sum are at most in \(\mathit{SG}^{0}\), while the others are just, respectively, the symbol \(\sigma_{\psi e}(\{a,b\})\), the \(e\)-asymptotic expansion of \(\sigma_{\psi}(\{a,b\}-\{a_{00},b_{00}\})\) and the \(\psi\)-asymptotic expansion of \(\sigma_{e}(\{a,b\}-\{a_{00},b_{00}\})\).
In fact, more is true: the \(\bullet\)-principal symbol of \(\{a,b\}\) only depends on the \(\bullet\)-principal symbols of \(a\) and \(b\) and can be computed explicitly (notice that the previous computation only determines the classes of \(\sigma_{\bullet}(\{a,b\})\) modulo terms of lower order in the complementary filtration). To show this, consider for example symbols \(a,b\in\mathit{SG}^{1,k}\) and \(r\in\mathit{SG}^{0,l}\), and look at
\[\{a+r,b\}=\{a,b\}+\{r,b\}. \tag{2.32}\]
It is clear that \(\{a,b\}\in\mathit{SG}^{1,k+l-1}\) and \(\{r,b\}\in\mathit{SG}^{0,k+l-1}\), in view of the properties of the calculus. Then, the class of \(\{a+r,b\}\) in \(\Sigma G^{1,\bullet}\) does not depend on \(r\), the limits
\(\lim_{\lambda\to\infty}\lambda^{-1}\partial^{j}(a+r)(\lambda x,\xi)\partial_{j}b( \lambda x,\xi)\) and \(\lim_{\lambda\to\infty}\lambda^{-1}\partial_{j}(a+r)(\lambda x,\xi)\partial^{j}b (\lambda x,\xi)\) exist, and do not depend on \(r\) either. We can then directly compute, for \(x\neq 0\), that
\[\begin{split}\sigma^{1}_{e}(\{a,b\})&=\lim_{\lambda \to\infty}\lambda^{-1}(\partial^{j}a(\lambda x,\xi)\partial_{j}b(\lambda x,\xi )-\partial_{j}a(\lambda x,\xi)\partial^{j}b(\lambda x,\xi))\\ &=\partial^{j}a_{e}(x,\xi)\partial_{j}b_{e}(x,\xi)-\partial_{j}a _{e}(x,\xi)\partial^{j}b_{e}(x,\xi)\\ &=\{a_{e},b_{e}\}.\end{split} \tag{2.33}\]
Since this can be done in the same way for the other components of \(\sigma_{pr}(\{a,b\})\), we conclude that \(\Sigma G^{1}\) is a Lie algebra with respect to the Poisson bracket acting component-wise.
We now turn to examine the commutator of two \(SG\)-pseudo-differential operators. For \(A,B\in{\it LG}^{1}\) with symbols \(a,b\) the asymptotic expansions for the products \(AB,BA\) can be written as
\[\begin{split}\sigma(AB)&\sim ab+\partial_{\xi_{j}}a (x,\xi)D_{x^{j}}b(x,\xi)+\sum_{|\alpha|\geq 2}\frac{1}{\alpha!}\partial^{ \alpha}_{\xi}a(x,\xi)D^{\alpha}_{x}b(x,\xi),\\ \sigma(BA)&\sim ab+\partial_{\xi_{j}}b(x,\xi)D_{x^{j} }a(x,\xi)+\sum_{|\alpha|\geq 2}\frac{1}{\alpha!}\partial^{\alpha}_{\xi}b(x,\xi)D^{ \alpha}_{x}a(x,\xi),\end{split} \tag{2.34}\]
so that, taking the difference, we obtain
\[\begin{split}\sigma([A,B])&\sim-\,\mathrm{i}\left(\partial_{\xi_{j}}a(x,\xi)\partial_{x^{j}}b(x,\xi)-\partial_{\xi_{j}}b(x,\xi)\partial_{x^{j}}a(x,\xi)\right)\\ &+\sum_{|\alpha|\geq 2}\frac{(-\,\mathrm{i})^{|\alpha|}}{\alpha!}\left(\partial^{\alpha}_{\xi}a\partial^{\alpha}_{x}b-\partial^{\alpha}_{x}a\partial^{\alpha}_{\xi}b\right).\end{split} \tag{2.35}\]
In view of the properties of \(SG\)-symbols we have \(\partial^{\alpha}_{\xi}a,\partial^{\alpha}_{\xi}b\in{\it SG}^{1,-1},\partial^{\alpha}_{x}a,\partial^{\alpha}_{x}b\in{\it SG}^{-1,1}\) whenever \(|\alpha|\geq 2\), so that every time we take a product as in the second term in (2.35) we obtain at most a symbol in \({\it SG}^{0}\). Hence, in the quotient it holds true that
\[\sigma([A,B])\sim-\,\mathrm{i}\,\{a,b\}, \tag{2.36}\]
and we can take advantage of (2.33) and its \(\psi-\) and \(\psi e-\)counterparts to get the required formulas at the level of principal symbols. QED
**Remark 2.35**.: As one can directly deduce from the proof of Proposition 2.34, it holds true that \(\sigma_{pr}(\{a-\ddot{a},b\})=0\) for any \(a,b\in{\it SG}^{1}\). Accordingly, we can also compute the Poisson bracket from the associated symbols as
\[\begin{split}\sigma^{1}_{\psi}(\{\ddot{p},\ddot{q}\})& =\{p_{\psi},q_{\psi}\},\\ \sigma^{1}_{e}(\{\ddot{p},\ddot{q}\})&=\{p_{e},q_{e }\},\\ \sigma^{1}_{\psi e}(\{\ddot{p},\ddot{q}\})&=\{p_{\psi e },q_{\psi e}\}.\end{split} \tag{2.37}\]
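To see (2.28) at work on a concrete pair of symbols (a verification of ours, not taken from the cited sources), let \(a(x,\xi)=\left\langle x\right\rangle\left\langle\xi\right\rangle\) and \(b(x,\xi)=x\cdot\xi\), both classical elements of \(\mathit{SG}^{1}\). A direct computation with (2.27) gives
\[\{a,b\}=\left\langle x\right\rangle\frac{|\xi|^{2}}{\left\langle\xi\right\rangle}-\left\langle\xi\right\rangle\frac{|x|^{2}}{\left\langle x\right\rangle}\in\mathit{SG}^{1},\]
and, using the limit formulas of Proposition 2.16,
\[\sigma_{e}^{1}(\{a,b\})=|x|\frac{|\xi|^{2}}{\left\langle\xi\right\rangle}-\left\langle\xi\right\rangle|x|=\{|x|\left\langle\xi\right\rangle,x\cdot\xi\}=\{\sigma_{e}(a),\sigma_{e}(b)\},\]
as predicted.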
### \(SG\)-Fourier Integral Operators
In this Section we briefly describe the calculus of \(\mathcal{Q}\)-operators of Andrews [And]. This is the class of FIOs \(A\colon\mathcal{S}\to\mathcal{S}\) which we will need in our discussion in Chapter 4. Most of the material hereafter is taken directly from [And], with some notable exceptions. We begin with the standard class \(\mathcal{Q}\), before introducing a slightly modified (compared with the original source) _generalised type \(\mathcal{Q}\) class_, in that we localise and assume classicality throughout. Subsequently, we give a small, albeit important to our purposes, generalisation of Coriasco's Egorov-type Theorem (Proposition 14 in [Cor99]) for operators in the
class \(\mathcal{Q}\). The functions \(f,g\) appearing in the next definition, and also later in the definition of the generalised class, will be referred to as _phase components_.
**Definition 2.36**.: We say that a real-valued function \(\varphi(x,y,\xi)=f(x,\xi)+g(y,\xi)\) is a _type \(\mathcal{Q}\) phase function_, and write \(\varphi=f+g\in\mathcal{Q}\), if the following assumptions are satisfied:
1. \(f,g\in\mathit{SG}^{1}(\mathbb{R}^{n}\times\mathbb{R}^{n})\);
2. \(\left\langle\nabla_{x}f(x,\xi)\right\rangle,\left\langle\nabla_{y}g(y,\xi) \right\rangle\sim\left\langle\xi\right\rangle\)
3. \(\left\langle\nabla_{\xi}f(x,\xi)\right\rangle\sim\left\langle x\right\rangle;\)
4. \(\left\langle\nabla_{\xi}g(y,\xi)\right\rangle\sim\left\langle y\right\rangle;\)
5. \(\left|\det(\partial_{x^{i}}\partial_{\xi_{j}}f(x,\xi))\right|,\left|\det(\partial_{y^{i}}\partial_{\xi_{j}}g(y,\xi))\right|\gtrsim 1\).
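The basic example to keep in mind (a direct check of the conditions above, included for orientation): the phase \(\varphi(x,y,\xi)=x\xi-y\xi\), i.e. \(f(x,\xi)=x\xi\) and \(g(y,\xi)=-y\xi\), is of type \(\mathcal{Q}\). Indeed \(f,g\in\mathit{SG}^{1}\) and
\[\left\langle\nabla_{x}f\right\rangle=\left\langle\nabla_{y}g\right\rangle=\left\langle\xi\right\rangle,\qquad\left\langle\nabla_{\xi}f\right\rangle=\left\langle x\right\rangle,\qquad\left\langle\nabla_{\xi}g\right\rangle=\left\langle y\right\rangle,\qquad\partial_{x^{i}}\partial_{\xi_{j}}f=-\partial_{y^{i}}\partial_{\xi_{j}}g=\delta_{ij},\]
so that all five conditions hold. The corresponding operators \(FIO(\varphi,a)\) are exactly the \(SG\)-pseudo-differential operators of Definition 2.23, so that \(\mathit{LG}\) sits inside the class of \(\mathcal{Q}\)-FIOs.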
**Definition 2.37**.: A _type \(\mathcal{Q}\) Fourier Integral Operator_ (\(\mathcal{Q}\)-FIO) is an operator \(A\colon\mathcal{S}(\mathbb{R}^{n})\to\mathcal{S}(\mathbb{R}^{n})\), defined by an oscillatory integral
\[FIO(\varphi,a)u(x)=(2\pi)^{-n}\iint e^{\mathrm{i}(f(x,\xi)+g(y,\xi))}a(x,y,\xi)u(y)\,\mathrm{d}y\,\mathrm{d}\xi, \tag{2.38}\]
with phase \(\varphi=f+g\in\mathcal{Q}\) and amplitude \(a(x,y,\xi)\in\mathit{SG}^{m_{1},m_{2},m_{3}}(\mathbb{R}^{3n})\).
**Proposition 2.38** (Properties of \(\mathcal{Q}\)-FIOs).: Let \(A\) be a \(\mathcal{Q}\)-FIO with symbol \(a\) and phase \(\varphi=f+g\). Then
1. \(A\colon\mathcal{S}\to\mathcal{S}\) is well-defined and continuous;
2. With respect to the inner product \((u,v)=\int u(x)\overline{v(x)}\,\mathrm{d}x\) we have that the formal adjoint \(A^{\dagger}\) is given by the \(\mathcal{Q}\)-FIO with phase \(\varphi^{\dagger}(x,y,\xi)=-g(x,\xi)-f(y,\xi)\) and symbol \(a^{\dagger}(x,y,\xi)=\overline{a(y,x,\xi)}\);
3. \(A\) extends to \(A\colon\mathcal{S}^{\prime}\to\mathcal{S}^{\prime}\) continuously.
We now introduce a generalised class of phases. We remark first that our upcoming definition differs slightly from the one in the original work of Andrews, in that his FIOs have phases satisfying asymmetric assumptions in \(x\) and \(y\). Indeed, the conditions below were required to hold true globally in \(y\) and only locally in \(x\), in order to retain the global non-degeneracy of the second phase component. However, for our purposes, we only need the assumption of non-degeneracy to hold true on the supports of locally chosen amplitudes on a certain (singular) Legendrian submanifold. In other words, our discussion will always be localised and we have accordingly decided to ask that the same conditions hold true only locally in both the \(x\) and \(y\) variables.
We consider functions depending on \(x,y\in\mathbb{R}^{n}\) and \(\theta\in\mathbb{R}^{n+d}\) for some \(n>0,d\geq 0\) (if \(d=0\) then we set \(\mathcal{Q}(a)=\mathcal{Q}\) in what follows). Also we relax some of the conditions imposed on \(\mathcal{Q}\) to hold true only on the support of a given amplitude.
**Definition 2.39**.: Let \(a\in\mathit{SG}^{m_{1},m_{2},m_{3}}(\mathbb{R}^{n}\times\mathbb{R}^{n}\times \mathbb{R}^{n+d})\) be an \(SG\)-amplitude and \(\varphi(x,y,\theta)=f(x,\theta)+g(y,\theta)\) for some smooth \(f,g\) and \((x,y,\theta)\in\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{n+d}\). We write \(\varphi\in\mathcal{Q}(a)\) if, on \(\mathrm{supp}(a)\), the following conditions hold true: we have \(f,g\in\mathit{SG}^{1}\) with \(\left\langle\nabla_{\!\!x}\!f(x,\theta)\right\rangle,\left\langle\nabla_{\! \!y}g(y,\theta)\right\rangle\sim\left\langle\theta\right\rangle\), and we can find (possibly after rearranging) a splitting \(\theta=(\xi,\eta)\in\mathbb{R}^{n}\times\mathbb{R}^{d}\) and an open set \(V_{\varphi}\subset\mathbb{R}^{d}\) with \(\mathrm{supp}(a)\subset\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{n} \times V_{\varphi}\) such that:
1. \(\left\langle\nabla_{\!\!\xi}g(y,\theta)\right\rangle\sim\left\langle y\right\rangle\) on \(\mathbb{R}^{n}\times\mathbb{R}^{n}\times V_{\varphi}\);
2. \(\left(\partial_{y^{i}}\partial_{\xi_{j}}g(y,\theta)\right)\) has maximal rank on \(\mathbb{R}^{n}\times\mathbb{R}^{n}\times V_{\varphi}\) and the absolute value of its determinant is uniformly bounded away from \(0\);
3. \(\partial_{y^{i}}\partial_{\xi_{j}}g(y,\theta)\lesssim 1\) on \(\mathbb{R}^{n}\times\mathbb{R}^{n}\times V_{\varphi}\);
4. For every fixed \(y\in\mathbb{R}^{n},\eta\in V_{\varphi}\) we have \(|\mathrm{d}_{y}g(y,\theta)|\to\infty\) as \(|\xi|\to\infty\);
5. The same assumptions 1.-4. hold true also for \(f(x,\theta)\) with \(y^{i}\) replaced by \(x^{i}\).
Given an amplitude \(a\in SG^{m_{1},m_{2},m_{3}}\) and a phase \(\varphi\in\mathcal{Q}(a)\), the Fourier Integral Operator associated with \(a\) and \(\varphi\) is defined by (2.38), replacing the variables \(\xi\) with \(\theta\). We will use the notation \(\mathcal{Q}_{gen}\) to speak about operators in the classes \(\mathcal{Q}(a)\) for an arbitrary \(a\in SG^{m_{1},m_{2},m_{3}}\), and we will refer to the variables \(\xi\) in a splitting as above as the _regular variables_ of the phase.
The next result is (a specialisation to our setting of) Theorem 8.5.1 of [And].
**Theorem 2.40** (Composition of \(\mathcal{Q}_{gen}\)-operators).: Given \(a\in SG^{m_{1},m_{2},m_{3}},b\in SG^{l_{1},l_{2},l_{3}}\), phases \(\varphi=f(x,\theta)+g(y,\theta)\in\mathcal{Q}(a),\mathfrak{p}=u(y,\kappa)+v(z,\kappa)\in\mathcal{Q}(b)\) and corresponding operators \(A=FIO(\varphi,a),B=FIO(\mathfrak{p},b)\), the composition \(A\circ B\) is, modulo a compact operator on \(\mathbb{R}^{n}\), a Fourier Integral Operator of type \(\mathcal{Q}_{gen}\). In particular, for each \(p,q\in\mathbb{R}\) such that \(p+q=m_{1}+m_{2}+l_{1}+l_{2}\) we can find an amplitude \(c\in SG^{p,q,m_{3}+l_{3}}\) and a phase \(\Phi\in\mathcal{Q}(c)\) such that \(A\circ B=FIO(\Phi,c)\). If \(\theta=(\xi,\eta)\in\mathbb{R}^{n+d_{1}}\) and \(\kappa=(\mu,\nu)\in\mathbb{R}^{n+d_{2}}\) for \(\xi,\mu\in\mathbb{R}^{n}\), the phase \(\Phi\) has the form \(\Phi(x,z,\gamma)=f(x,\theta)+h(x,\theta,\kappa)+v(z,\kappa)\) where \(\gamma=(\mu,\tilde{y},\theta,\nu)\) are \(3n+d_{1}+d_{2}\) frequency variables with \(\mu\) the regular ones in a splitting as in Definition 2.39. Moreover \(f+h\) and \(v\) are phase components and \(h\) satisfies:
\[d_{1}=d_{2},\;g(y,\xi)=-u(y,\xi)\implies h(x,\theta,\kappa)=0. \tag{2.39}\]
**Corollary 2.41**.: The class \(\mathcal{Q}_{gen}\) satisfies:
1. \(LG\circ\mathcal{Q}_{gen}\circ LG\subset\mathcal{Q}_{gen}\).
2. If \(A\in\mathcal{Q}_{gen}\) with phase \(\varphi=f+g\) and amplitude \(a(x,y,\theta)\), then \(A^{\dagger}\in\mathcal{Q}_{gen}\) with phase \(\varphi^{\dagger}=(-g)+(-f)\) and amplitude \(a^{\dagger}(x,y,\theta)=\overline{a(y,x,\theta)}\). In particular if \(a(x,y,\theta)=a(y,x,\theta)\) then \(A\in\mathcal{Q}(a)\) if and only if \(A^{\dagger}\in\mathcal{Q}(a)\).
3. For each \(a\in SG^{m_{1},m_{2},m_{3}}\) we have \(\mathcal{Q}(a)\circ\mathcal{Q}(a)^{\dagger}\subset LG,\mathcal{Q}(a)^{\dagger }\circ\mathcal{Q}(a)\subset LG\).
4. \(A\in\mathcal{Q}_{gen}\) extends to a continuous operator \(A\): \(\mathcal{S}^{\prime}\to\mathcal{S}^{\prime}\).
Notice that, in view of the last theorem and in particular of the properties of the function \(h\), the statements in the corollary are a generalisation of the corresponding assertions in Proposition 2.38. The following specialised composition results for the class \(\mathcal{Q}\), Theorem 7.2.1 in [And], also follow at once from the composition theorem for \(\mathcal{Q}_{gen}\).
**Theorem 2.42**.: Let \(A,B\) be \(\mathcal{Q}\)-FIOs with amplitudes \(a,b\) (of arbitrary orders \(m,l\in\mathbb{R}^{3}\)) and phases \(\varphi=f+g,\mathfrak{p}=r+s\).
1. If \(g(y,\xi)=-r(y,\xi)\) then \(AB\in\mathcal{Q}\) with phase \(f+s\) and symbol \(c\in SG^{p,q,m_{3}+l_{3}}\), where we can choose \(p,q\) so that \(p+q=m_{1}+m_{2}+l_{1}+l_{2}\).
2. If in addition \(f(x,\xi)=-s(x,\xi)\), then \(AB\) is an \(SG\)-\(\Psi\)DO and we can choose again \(p,q\) with \(p+q=m_{1}+m_{2}+l_{1}+l_{2}\) so that we have an amplitude \(\tilde{c}\in SG^{p,q,m_{3}+l_{3}}\).
Moreover, the amplitudes of \(AB\) in both cases admit an asymptotic expansion (cf. [And], Proposition 4.0.4).
**Remark 2.43**.: We have to remark that Andrews does not assume classicality for any of the operator classes he introduces. However, looking at the asymptotic expansions he obtains, it becomes clear that the assumptions of classicality (for
both phases and amplitudes) and integer order for all operators are preserved by his composition formulae and therefore we obtain without fuss a sub-calculus modelled after \(LG\).
**Remark 2.44**.: The FIOs of type \(\mathcal{Q}\) are a direct generalisation of the operator calculus introduced by Coriasco [Cor99]. This calculus corresponds to the subclass where we always take \(g(y,\xi)=-y\xi\) (type I operators) or \(f(x,\xi)=x\xi\) (type II operators). It is an important feature for us that the Egorov theorem for type I and II operators (namely, Proposition 14 in [Cor99]) extends to \(\mathcal{Q}\). It suffices for this to look at the first term in the asymptotic expansion of Proposition 4.0.4 of [And] and use the same formal proof as given by Coriasco. At the same time, we remark that \(\mathcal{Q}\)-FIOs are given by arbitrary compositions of type I and type II operators. Indeed according to 1. in Theorem 2.42, when we compose a type I operator with phase \(\varphi=f(x,\xi)-y\xi\) and a type II operator with phase \(\mathfrak{p}=x\xi-g(y,\xi)\), we obtain the \(\mathcal{Q}\)-operator with phase \(\mathfrak{q}=f+g\). Furthermore, applying this line of thought in reverse, we see that any \(\mathcal{Q}\)-operator with phase \(\mathfrak{q}=f+g\) can be written as a composition of a type I operator with phase \(\varphi=f(x,\xi)-y\xi\) and a type II operator with phase \(\mathfrak{p}=x\xi-g(y,\xi)\). This justifies the following definition as a direct generalisation of Definition 9 in [Cor99].
**Definition 2.45**.: An FIO of type \(\mathcal{Q}\) is called _elliptic_ if its amplitude is elliptic in the sense of Definition 2.33.
**Proposition 2.46**.: Let \(A=FIO(\varphi,a)\) be an elliptic FIO of type \(\mathcal{Q}\). Then \(A\) admits a parametrix \(A^{\#}\in\mathcal{Q}\), namely an operator such that \(AA^{\#}-I,A^{\#}A-I\in\mathcal{R}G\).
**Theorem 2.47** (Egorov's Theorem for \(\mathcal{Q}\)).: Let \(A\) be an elliptic global \(\mathcal{Q}\)-FIO with phase \(f(x,\theta)+g(y,\theta)\) and amplitude \(a\in SG^{0}\), and let \(P\in LG^{m}\). Then \(A^{\#}PA\in LG^{m}\) and \(\sigma_{pr}(A^{\#}PA)=C^{*}\sigma_{pr}(P)\) where \(C\) is the triple of homogeneous symplectic maps defined by the principal symbols of the phase function \(f+g\). Namely \(C\) is given as a triple \((C_{e},C_{\psi},C_{\psi e})\) with each map acting by pull-back on the respective component of the principal symbol and such that \(\varphi_{\bullet}\) is a phase function parametrising the graph of \(C_{\bullet}\) as
\[\operatorname{graph}C_{\bullet}=\left\{(x,\nabla_{x}\varphi_{\bullet},y,\nabla_{y}\varphi_{\bullet})\ \text{ s.t. }\ \nabla_{\theta}\varphi_{\bullet}=0\right\}.\]
Proof.: Write \(A=BC\) for \(B\) a type I operator with phase \(f(x,\theta)+y\theta\) and amplitude \(\sqrt{a}\) and \(C\) a type II operator having phase \(x\theta+g(y,\theta)\) and amplitude \(\sqrt{a}\), in the notation of Remark 2.44. Since \(A\) is elliptic, both \(B\) and \(C\) are elliptic and admit parametrices \(B^{\#}\) and \(C^{\#}\), respectively of type II with phase \(-x\theta-f(y,\theta)\) and of type I with phase \(-g(x,\theta)-y\theta\). Moreover \(A^{\#}PA=C^{\#}B^{\#}PBC\), so that applying Proposition 14 in [Cor99] to \(B^{\#}PB=Q\) gives that this composition is an \(SG\)-\(\Psi\)DO of order \(m\). A second application of the same result to \(C^{\#}QC\) gives the final claim. QED
## 3. Scattering geometry
### Manifolds with corners and scattering geometry
We give hereafter a short account of basic definitions of manifolds with corners, smooth structures with corners and so on, adopting in essence the same conventions as in [11]. A more detailed exposition can be found, for example, in [12], while a comparison of
the different existing notions can be found in [10]. Notice that, for the sake of simplicity and clarity, we prefer here an extrinsic approach.
**Definition 3.1**.: A _parametrised patch of dimension \(d\), with corners of codimension \(k\)_, \(0\leq k\leq d\), on a para-compact Hausdorff topological space \(Z\), is a pair \((U,\varphi)\) where \(U\subset[0,\infty)^{k}\times\mathbb{R}^{d-k}\) is open and \(\varphi\colon U\to\varphi(U)\subset Z\) is a homeomorphism. If we can choose \(k=0\) then \((U,\varphi)\) is just a parametrisation of an interior patch on \(Z\) (namely, \(\varphi(U)\cap\partial Z=\emptyset\)), while if we can have \(k=1\) we say that \((U,\varphi)\) is a parametrisation of a boundary patch. We say that a pair \((V,\psi)\) is a _chart of dimension \(d\), with corners of codimension \(k\)_, if \((U,\varphi)\equiv(\psi(V),\psi^{-1})\) is a parametrised patch of dimension \(d\), with corners of codimension \(k\). We adopt the same terminology with respect to interior and boundary charts.
**Definition 3.2**.: A _\(\mathcal{C}^{\infty}\)-manifold of dimension \(d\), with corners of (maximal) codimension \(k\)_ is a para-compact, second countable, Hausdorff topological space \(Z\), together with a collection \(\{(U_{i},\varphi_{i})\}\) of parametrised patches of dimension \(d\) and corners of codimension \(k\), such that \(\{\varphi(U_{i})\}\) covers \(Z\) and, whenever \(\varphi_{j}(U_{j})\cap\varphi_{i}(U_{i})\neq\emptyset\), the changes of coordinates \(\varphi_{ij}=\varphi_{i}^{-1}|_{\varphi_{j}(U_{j})\cap\varphi_{i}(U_{i})} \circ\varphi_{j}\colon U_{j}\to U_{i}\) are smooth maps, in the sense that there exists a smooth map \(\tilde{\varphi}_{ij}\colon\tilde{U}_{j}\to\tilde{U}_{i}\), with open sets \(\tilde{U}_{i},\tilde{U}_{j}\subset\mathbb{R}^{k}\times\mathbb{R}^{d-k}\) containing \(U_{i},U_{j}\), respectively, satisfying \(\tilde{\varphi}_{ij}|_{U_{j}}=\varphi_{ij}\). If, at every point \(p\in Z\), we can find parametrised patches with \(k=0\), then \(Z\) is a _smooth manifold_. Similarly, if all the patches can be picked with \(k=1\), then \(Z\) is a _smooth manifold with boundary_.
**Lemma 3.3**.: Let \(Z\) be a \(\mathcal{C}^{\infty}\)-manifold of dimension \(d>0\), with corners of codimension \(k\). There exists a smooth manifold \(\tilde{Z}\) of dimension \(d\), without boundary, such that \(Z\subset\tilde{Z}\) and the interior of \(Z\) is open and non-empty in \(\tilde{Z}\).
**Definition 3.4**.: The space of _\(\mathcal{C}^{\infty}\)-functions_ on \(Z\) is the set \(\mathcal{C}^{\infty}(Z)\) consisting of all restrictions of smooth functions from \(\tilde{Z}\) to \(Z\). If we do not specify further, every geometric object (for example, vector bundles, differential of a smooth map, and so on) defined on \(Z\) is obtained as the restriction of the corresponding concept from \(\tilde{Z}\).
**Convention 1**.: In what follows, we _always_ assume that \(Z\) is compact and that there exist a _finite_ collection of smooth functions \(\rho_{i},i\in I\), on \(\tilde{Z}\) such that \(Z=\{p\in\tilde{Z}\text{ s.t. }\forall i\in I\ \rho_{i}(p)\geq 0\}\) and such that, whenever for a sub-collection \(J\subset I\) it holds true \(\rho_{j}(p)=0\) for all \(j\in J\), then the differentials \(\mathrm{d}\rho_{j}\) are linearly independent. Near points \(p\), for which each patch containing \(p\) has codimension \(k>0\) corners, we _always_ use coordinates in the form \((\rho_{i_{1}},\ldots,\rho_{i_{k}},z_{j_{1}},\ldots,z_{j_{d-k}})\) for \(z\) coordinates on the codimension \(k\) corner in the patch \(U\).
**Remark 3.5**.: The structure of the spaces defined by our axioms corresponds to the notion of _manifold with embedded corners_ of Joyce [10]. Accordingly, we can always pick a local boundary-defining function, and since corners are embedded sub-manifolds we can always find a global one associated with any boundary hypersurface.
**Lemma 3.6**.: Any \(\mathcal{C}^{\infty}\)-manifold \(Z\) of dimension \(d\), with corners of codimension \(k\), admits a stratification \(\cup_{i=0}^{k}Z_{i}\), where \(Z_{i}\) is a \(\mathcal{C}^{\infty}\)-manifold of dimension \(d-i\), with corners of codimension \(k-i\). We call the union of the strata \(Z_{i}\) for \(i\geq 1\) the _boundary_ of the manifold with corners \(Z\).
**Definition 3.7**.: The _depth_ of a point \(p\in Z\), \(\operatorname{depth}(p)\), is the number of independent boundary-defining functions vanishing at \(p\). Equivalently, it is the codimension of the boundary stratum \(Z_{i}\) to which \(p\) belongs.
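A concrete picture to keep in mind (our own standard example): the square \(Z=[0,1]^{2}\) is a manifold with corners of codimension \(2\), with boundary-defining functions \(\rho_{1}=x\), \(\rho_{2}=1-x\), \(\rho_{3}=y\), \(\rho_{4}=1-y\). The stratification of Lemma 3.6 reads
\[Z_{0}=(0,1)^{2},\qquad Z_{1}=\text{the four open edges},\qquad Z_{2}=\text{the four vertices},\]
so that \(\operatorname{depth}(p)=0,1,2\) according to whether \(p\) lies in the open square, on an open edge, or at a vertex.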
**Remark 3.8**.: Joyce makes a distinction between the boundary of \(Z\), interpreted as a manifold with corners in its own right and admitting the stratification of Lemma 3.6, and the embedded boundary of \(Z\), which is in general only a topological manifold. In our definition we insist that the corners are embedded, and we must therefore adopt the second point of view. This forces us to define "smooth" functions on \(\partial Z\) as the restriction of a smooth function on \(Z\) to \(\partial Z\), which does not agree in general with the concept of smooth function on the boundary interpreted according to Joyce's point of view. This is however only a minor inconvenience for our future purposes and we shall stick with this definition, more widespread in the context of singular analysis.
**Remark 3.9**.: Notice that our definition of manifolds with corners _does not_ include many other singular situations that have been considered in the literature. Indeed, on manifolds with corners, the function depth has the following property: if \(\operatorname{depth}(p)=k\) and \(0\leq s<k\), then any neighbourhood of \(p\) (in the topology of \(Z\)) contains a point \(q\neq p\) with \(\operatorname{depth}q=s\). This is obviously untrue in other settings. For example, any closed cone (more generally, any manifold with a conical singularity) clearly doesn't have this property.
**Definition 3.10**.: A relatively open set \(U\subset Z\) is said to be _interior_ if \(\overline{U}\cap\partial Z=\emptyset\). In case \(\overline{U}\cap\partial Z\neq\emptyset\), we always assume that \(\overline{U}\cap\partial Z\subset U\), and we call \(U\) either a _boundary neighbourhood_ or a _corner neighbourhood_, depending on whether \(U\) intersects only the stratum of codimension \(1\) or not. The set \(\mathcal{C}^{\infty}(U)\) of smooth functions on \(U\) consists of all the restrictions of smooth functions on \(Z\) to \(U\). For \(U\) a boundary neighbourhood, intersecting a corner of codimension \(s\), the set \(\rho_{i_{1}}^{m_{1}}\dots\rho_{i_{s}}^{m_{s}}\mathcal{C}^{\infty}(U)\) consists of those functions \(h\in\mathcal{C}^{\infty}(U\setminus\partial Z)\) such that \(\rho_{i_{1}}^{-m_{1}}\dots\rho_{i_{s}}^{-m_{s}}h\) extends to a smooth function \(\tilde{h}\in\mathcal{C}^{\infty}(U)\). We have then a natural notion of smooth functions on a boundary hyper-surface, namely, the restriction of a function on a boundary neighbourhood \(U\) to \(U\cap\partial Z\).
**Remark 3.11**.: Notice that, in our setup, the boundary \(\partial Z\) is not itself a manifold with corners and does not carry a natural smooth structure. We circumvent this problem by choosing the following notion of smoothness. For a relatively open \(V\subset\partial Z\), intersecting the codimension \(2\) stratum \(Z_{2}\) and no higher-codimensional stratum, the smooth functions \(h\in\mathcal{C}^{\infty}(V)\) are given by a pair of smooth functions \(h=(f,g)\) on the two boundary hyper-surfaces such that \(f|_{V\cap Z_{2}}=g|_{V\cap Z_{2}}\). The notion for higher-codimensional strata is defined accordingly. This notion of smoothness _across the corner_, while in a certain sense arbitrary, fulfils the natural requirement that the function \(f|_{Z_{2}}\) identifies with a smooth function _on the corner_. We will see later that, for the scattering calculus of pseudo-differential operators, the principal symbols are identified with continuous functions on the boundary of a certain manifold with corners, smooth across the corner according to this definition. Notice how this contrasts with Joyce's convention, which implies that the boundary of \(Z\) carries a natural smooth structure given by considering it as a manifold with corners in its own right.
**Definition 3.12**.: The space \(\dot{\mathcal{C}}^{\infty}(Z)\) consists of those functions \(f\colon Z\to\mathbb{C}\) such that \(f\) and all its derivatives vanish at the boundary. The space of _extendible distributions_
on \(Z\), \(\mathcal{E}^{\prime}(Z)\), is the (topological) dual space of \(\dot{\mathcal{C}}^{\infty}(Z,\Omega(Z))\), the sections of the density bundle having coefficients in \(\dot{\mathcal{C}}^{\infty}(Z)\).
On any manifold with corners there is a natural Lie sub-algebra \(\mathfrak{X}_{b}(Z)\) of \(\mathfrak{X}(Z)\), consisting of vector fields which are tangent to all boundary hyper-surfaces. Namely, on an interior neighbourhood \(U\) we have \(\mathfrak{X}_{b}(U)\cong\mathfrak{X}(U)\), while if \(U\) is a boundary neighbourhood with corners of codimension \(k\) then \(\mathfrak{X}_{b}(U)\) is the Lie algebra generated, over \(\mathcal{C}^{\infty}(Z)\) and in standard local coordinates for a patch with codimension \(k\) corners, by
\[\rho_{1}\partial_{\rho_{1}},\dots,\rho_{k}\partial_{\rho_{k}},\partial_{x_{1} },\dots,\partial_{x_{d-k}}. \tag{3.1}\]
Equivalently, \(V\) is a \(b\)-vector field if \(V\rho_{i}=\alpha_{i}\rho_{i}\) for any boundary-defining function on \(Z\), where \(\alpha_{i}\) are smooth functions. These have become known as _b-vector fields_ (\(b\) for "boundary"). The dual \(\mathcal{C}^{\infty}(Z)\)-module is the module of \(b\)-differential \(1\)-forms \({}^{b}\Lambda^{1}(Z)\). Namely, it is the module generated, locally near the boundary, by
\[\frac{\mathrm{d}\rho_{1}}{\rho_{1}},\dots,\frac{\mathrm{d}\rho_{k}}{\rho_{k}},\mathrm{d}x^{1},\dots,\mathrm{d}x^{d-k}. \tag{3.2}\]
There is an obvious perfect duality \({}^{b}\Lambda^{1}(Z)\times\mathfrak{X}_{b}(Z)\to\mathbb{C}\). Namely, this pairing identifies \({}^{b}\Lambda^{1}(Z)\) with the dual of \(\mathfrak{X}_{b}(Z)\) and vice-versa.
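As a quick coordinate check of this duality, pairing the local generators (3.2) against those of (3.1) gives
\[\left\langle\frac{\mathrm{d}\rho_{i}}{\rho_{i}},\rho_{j}\partial_{\rho_{j}}\right\rangle=\frac{\rho_{j}}{\rho_{i}}\,\delta_{ij}=\delta_{ij},\qquad\left\langle\mathrm{d}x^{a},\partial_{x_{b}}\right\rangle=\delta^{a}_{b},\qquad\left\langle\frac{\mathrm{d}\rho_{i}}{\rho_{i}},\partial_{x_{b}}\right\rangle=\left\langle\mathrm{d}x^{a},\rho_{j}\partial_{\rho_{j}}\right\rangle=0,\]
so that, at each point, the generators (3.2) form exactly the basis dual to the generators (3.1).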
**Lemma 3.13**.:
1. There exist vector bundles \({}^{b}TZ\to Z\) and \({}^{b}T^{*}Z\) such that \(\mathfrak{X}_{b}(Z)\) and \({}^{b}\Lambda^{1}(Z)\) are respectively the spaces of smooth sections of \({}^{b}TZ\) and \({}^{b}T^{*}Z\) over \(Z\).
2. If \(U\subset Z\) is interior, then \(\mathfrak{X}_{b}(U)\cong\mathfrak{X}(U)\) and \({}^{b}\Lambda^{1}(U)\cong\Lambda^{1}(U)\).
3. There are natural vector bundle maps \({}^{b}TZ\to TZ\) and \(T^{*}Z\to{}^{b}T^{*}Z\), dual to each other, which are isomorphisms over any interior neighbourhood.
4. The \(b\)-vector fields on a manifold with boundary \((Z,\rho)\) are identified, in a collared neighbourhood of \(\partial Z\), with the sections of \(TZ\) having _bounded length_ with respect to the _exact \(b\)-metric_ (3.3) \[g=\frac{\mathrm{d}\rho^{2}}{\rho^{2}}+h,\] that is, \(g(V,V)<\infty\) for any \(b\)-vector field \(V\). Here \(h\) is the pull-back of a metric \(h_{\partial Z}\) on the embedded boundary \(\partial Z\) to the collared neighbourhood \(\partial Z\times[0,1)\).
**Remark 3.14**.: Lemma 3.13 could be reformulated in the language of Lie algebroids. In particular, \(\mathfrak{X}_{b}(Z)\) is a Lie algebroid with anchor map given by the natural bundle map of item 3.
Having concluded our brief recap on manifolds with corners, we recall the notion of _scattering structure_ of Melrose, which is known to yield a pseudo-differential calculus equivalent on \(\mathbb{R}^{d}\) to the classical \(SG\)-calculus. This will be stated precisely later on.
**Definition 3.15**.: A topological space \(X\) is called a _scattering manifold_ if \(X\) is a compact manifold with boundary, with boundary defining function \(\rho\), equipped with a Riemannian metric \(g\) which in a collared neighbourhood of the boundary takes the form
\[g=\frac{\mathrm{d}\rho^{2}}{\rho^{4}}+\frac{h}{\rho^{2}}. \tag{3.4}\]
In (3.4), \(h\) is a symmetric, \(2\)-covariant tensor field containing no \(\mathrm{d}\rho\) factors. Namely, it is the pull-back of a metric on \(\partial X\) to the collared neighbourhood \(\partial X\times[0,\varepsilon)\).
On a scattering manifold \(X\), we have obviously a notion of \(b\)-vector fields given by the geometric structure. However, the scattering metric on \(X\) is quite different from a \(b\)-metric. The structure of the manifolds near the boundary ("at infinity") can be identified either with a cone (scattering) or with a cylinder (\(b\)-metric). Correspondingly, there is another Lie algebra of vector fields which describes the scattering structure. These so-called scattering vector fields are the elements of
\[\mathfrak{X}_{sc}(X)\equiv\rho\mathfrak{X}_{b}(X), \tag{3.5}\]
that is, they are sections of \(TX\) which are tangent to the boundary and have _bounded length_ w.r.t. \(g\), i.e. \(g(V,V)<+\infty\) for \(V\in\mathfrak{X}_{sc}(X)\). They are generated (near the boundary \(\rho=0\), parametrized by coordinates \(y\)) by
\[\rho^{2}\partial_{\rho},\quad\rho\partial_{y}. \tag{3.6}\]
Furthermore, they are the sections of a vector bundle over \(X\) called the _scattering tangent bundle_, \({}^{sc}TX\).
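Indeed, a direct computation with the metric (3.4) confirms the bounded-length characterisation of the generators (3.6):
\[g(\rho^{2}\partial_{\rho},\rho^{2}\partial_{\rho})=\frac{(\rho^{2})^{2}}{\rho^{4}}=1,\qquad g(\rho\,\partial_{y^{j}},\rho\,\partial_{y^{k}})=\frac{\rho^{2}}{\rho^{2}}\,h(\partial_{y^{j}},\partial_{y^{k}})=h_{jk},\]
both bounded as \(\rho\to 0\), whereas for instance \(g(\partial_{\rho},\partial_{\rho})=\rho^{-4}\) blows up at the boundary, so \(\partial_{\rho}\) is not a scattering vector field.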
**Remark 3.16**.: The process of constructing these Lie sub-algebras of \(\mathfrak{X}(X)\), adapted to the geometric situation of interest, can be described in a much more general framework by the process of _rescaling_ of Lie algebroids. Building on ideas of Melrose (who first dealt with the rescaling of Lie sub-algebras of \(\mathfrak{X}(X)\)) and Scott (see [11]), Lanius [16] described the process for a general Lie algebroid and initiated the study of scattering-symplectic manifolds, at the same time exploring the Poisson-geometric side of the matter. In this picture, the scattering algebroid \({}^{sc}TX\) is exactly the rescaling of the \(b\)-algebroid \({}^{b}TX\) along the algebroid of so-called \(0\)-vector fields of Mazzeo and Melrose, namely those vector fields which vanish at the boundary. However, at the boundary the \(b\)- and \(0\)-calculus are highly non-trivial, in the sense that there is a non-commutative algebra of "indicial operators" which need to be inverted when considering ellipticity. We will see that the situation for the scattering structure is much nicer.
**Example 3.17**.: We can turn \(\mathbb{R}^{n}\) into a scattering manifold by considering the _radial compactification_. It is obtained from the stereographic projection as follows. Consider \(\mathbb{S}^{n}_{+}\), the upper closed half-sphere of radius \(1\) in \(\mathbb{R}^{n+1}\) with coordinates \((x^{1},\ldots,x^{n+1})\), and identify \(\mathbb{R}^{n}\) with the hyperplane \(x^{n+1}=1\) in \(\mathbb{R}^{n+1}\). A point \(p\in\mathbb{R}^{n}\) is mapped bijectively to \(q\in\mathring{\mathbb{S}}^{n}_{+}\) by taking the line \(l_{p}\) joining \(p\) to the origin and setting \(q\) to be the intersection of \(l_{p}\) with \(\mathbb{S}^{n}_{+}\). Let us denote by \(R\) this embedding. Then we "add the points at \(\infty\)" to \(\mathbb{R}^{n}\) by embedding \(\mathbb{S}^{n-1}\) as the boundary of \(\mathbb{S}^{n}_{+}=\{(x^{1},\ldots,x^{n+1})\in\mathbb{S}^{n}\text{ s.t. }x^{n+1}\geq 0\}\) in the radially compactified picture. The terminology is justified by the fact that, approaching \(\mathbb{S}^{n-1}\) along a (half of a) maximal circle of \(\mathbb{S}^{n}\), we are in fact going to infinity along the corresponding ray in \(\mathbb{R}^{n}\). We introduce coordinates near the boundary of \(\mathbb{S}^{n}_{+}\) as follows. Describe \(\mathbb{R}^{n}\), at least outside a compact neighbourhood of \(0\), using polar coordinates \((r,y)\) with \(y\) angular coordinates (that is, coordinates on \(\mathbb{S}^{n-1}\subset\mathbb{R}^{n}\)). Using \(R\) we map this description
to coordinates on the open half-sphere \(\mathring{\mathbb{S}}_{+}^{n}\), and take \(\rho\equiv 1/r\). One shows easily that \(\rho\) is a boundary-defining function and that the pull-back via \(R\) of the Euclidean metric to \(\mathbb{S}_{+}^{n}\) produces a metric of the form (3.4), with \(h\) the standard metric on \(\mathbb{S}^{n-1}\) embedded in \(\mathbb{R}^{n}\). We also remark that the only (eventually) homogeneous functions on \(\mathbb{R}^{n}\) which extend to a smooth function on \(\mathbb{S}_{+}^{n}\) are those of non-positive order. Specifically, those of order \(0\) extend to the boundary with their radial limit while those of negative order take value \(0\) at the boundary.
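The claim about the pull-back of the Euclidean metric can be verified directly: writing the Euclidean metric in polar coordinates and substituting \(r=1/\rho\), so that \(\mathrm{d}r=-\,\mathrm{d}\rho/\rho^{2}\), one finds
\[\mathrm{d}x^{2}=\mathrm{d}r^{2}+r^{2}g_{\mathbb{S}^{n-1}}=\frac{\mathrm{d}\rho^{2}}{\rho^{4}}+\frac{g_{\mathbb{S}^{n-1}}}{\rho^{2}},\]
which is exactly of the form (3.4) with \(h\) the standard metric on \(\mathbb{S}^{n-1}\).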
**Remark 3.18**.: In the previous example, one might wonder why we don't consider, as a compactified space, the projection of \(\mathbb{S}_{+}^{n}\) onto the plane \(x^{n+1}=1\), namely a closed ball \(\mathbb{B}^{n}\) of radius \(1\) in \(\mathbb{R}^{n}\). Notice that, with such a choice, the projection map, certainly bijective and smooth \(\mathbb{S}_{+}^{n}\to\mathbb{B}^{n}\), does not have a smooth inverse, since this has a square-root singularity. On the other hand, stereographic projection from \((0,\dots,0,-1)\) onto \(x^{n+1}=1\), restricted to the upper closed half-sphere, gives a diffeomorphism from \(\mathbb{S}_{+}^{n}\) to \(\mathbb{B}^{n}\), so we can understand this from both points of view, if only with the need to make the correct identifications. Notice, in addition, that, in the literature with an \(SG\) approach, one often uses a different boundary defining function, namely one takes a diffeomorphism \(Q\) of \(\mathbb{R}^{n}\) onto the open ball \(B_{1}(0)\), given for \(|x|>3\) by \(Q(x)=\frac{x}{|x|}\left(1-\frac{1}{|x|}\right)\). For \([x]\) any smooth function such that \([x]=|x|\) for \(|x|>3\), we obtain that \((Q^{-1})^{*}[x]\) is a boundary defining function. It can be checked directly that it is equivalent to \((R^{-1})^{*}\left\langle x\right\rangle\). Namely that, in sufficiently small neighbourhoods of the boundary, they are just a multiple of each other by a positive smooth function. It follows that the two approaches are really equivalent. A third approach, yet again equivalent, would be to map a point \(x\) to \(\frac{x}{\left\langle x\right\rangle}\in\mathbb{B}_{1}(0)\) and applying the same process as before. This would result in another choice of "standard" boundary-defining function. We will mainly stick to \(\mathbb{S}_{+}^{n}\) and \((R^{-1})^{*}\left\langle x\right\rangle\) for conceptual purposes. However we will at times switch to a different picture for convenience of notation.
With any scattering manifold, as we have seen, is associated a rescaling of the tangent bundle. The dual construction applied to \(T^{*}X\) yields the _scattering cotangent bundle_\({}^{sc}T^{*}X\). Namely, it is the bundle whose sections are the rescaled \(1\)-forms \({}^{sc}\Lambda^{1}(X)\), generated, as a \(\mathcal{C}^{\infty}(X)\)-module near the boundary, by
\[\frac{\mathrm{d}\rho}{\rho^{2}},\quad\frac{\mathrm{d}y}{\rho}. \tag{3.7}\]
In the scattering approach, it turns out that it's quite convenient to consider a compactified version of this space. Namely, given \(X\) a scattering manifold and \({}^{sc}T^{*}X\) its scattering cotangent bundle, we compactify each fibre from \(\mathbb{R}^{n}\) to \(\mathbb{S}_{+}^{n}\) with the map \(R\) and consider the total space so obtained, which we denote by \({}^{sc}\overline{T}^{*}X\). This is now a manifold with corners. Indeed, we have two boundary defining functions \(\rho_{e}\) and \(\rho_{\psi}\), respectively, for the boundary \(\partial X\) and the boundary of the half-spheres in the compactification of the fibres. The common zero locus of these functions, i.e. the space \(\rho_{e}=\rho_{\psi}=0\), is a codimension \(2\) corner.
**Example 3.19**.: In the example of \(X=\mathbb{S}_{+}^{n}\), i.e. of the compactification of \(\mathbb{R}^{n}\), we have that \({}^{sc}T^{*}X\) is trivial, so the compactification process in the fibres yields the manifold \(\mathbb{S}_{+}^{n}\times\mathbb{S}_{+}^{n}\). The boundary of this manifold is traditionally called the _\(SG\)-wave-front space_ (cfr. [11] and [12]) and can be decomposed as
\[\partial(\mathbb{S}_{+}^{n}\times\mathbb{S}_{+}^{n})=(\mathbb{S}^{n-1}\times \mathbb{R}^{n})\stackrel{{\cdot}}{{\cup}}(\mathbb{R}^{n}\times \mathbb{S}^{n-1})\stackrel{{\cdot}}{{\cup}}(\mathbb{S}^{n-1} \times\mathbb{S}^{n-1}). \tag{3.8}\]
For later reference we denote the three pieces, respectively, by \(\widetilde{\mathcal{W}}_{e},\widetilde{\mathcal{W}}_{\psi},\widetilde{\mathcal{W} }_{\psi e}\).
The _scattering differential operators_ on \(X\) are the elements of \(\operatorname{Diff}_{sc}(X)\), the \(\mathcal{C}^{\infty}(X)\)-enveloping algebra of \(\mathfrak{X}_{sc}(X)\). That is to say, \(\operatorname{Diff}_{sc}(X)\) is the filtered algebra generated, on a boundary neighbourhood \(U\), by \(\rho^{2}\partial_{\rho},\rho\partial_{y},1\) over \(\mathcal{C}^{\infty}(U)\), and isomorphic to \(\operatorname{Diff}(U)\), the usual differential operators, if \(U\) is an interior neighbourhood. There is a well-defined (principal) symbol map \(\sigma_{sc}\) on scattering operators defined as follows. For a scattering vector field \(V\), consider it as a section of \({}^{sc}TX\). At each point \(p\), we can identify \(V(p)\) with a linear map on the fibres of the dual bundle (since a finite-dimensional vector space is canonically isomorphic with its bi-dual), so \(V(p)\colon{}^{sc}T^{*}_{p}X\to\mathbb{C}\), and obtain a smooth function on \({}^{sc}T^{*}X\). Set then \(\sigma_{sc,1}(V)\equiv\operatorname{i}V\) and extend it multiplicatively to the whole \(\operatorname{Diff}^{m}_{sc}(X)\) to a map \(\sigma_{sc,m}\), taking values in \(\operatorname{Pol}^{(m)}({}^{sc}T^{*}X)\) (as before, round brackets mean homogeneity). This gives the usual short exact sequence
\[0\to\operatorname{Diff}^{m-1}_{sc}(X)\to\operatorname{Diff}^{m}_{sc}(X) \xrightarrow{\sigma_{sc,m}}\operatorname{Pol}^{(m)}({}^{sc}T^{*}X)\to 0. \tag{3.9}\]
Moreover, in view of the homogeneity of the polynomials in (3.9), we can identify the principal symbol\(\sigma_{sc,m}(P)\) with a smooth function on \({}^{sc}\mathbb{S}^{*}X\), the _scattering co-sphere bundle_ of \(X\). This is just the sub-bundle of \({}^{sc}T^{*}X\) with fibre the sphere of radius \(1\) with respect to the inverse of the metric (3.4).
The main difference with the usual differential operators is the fact that invertibility of \(\sigma_{sc,m}(P)\) does not guarantee the existence of a "good" parametrix (one with compact remainder). This is due to the fact that the coefficients of an "elliptic" operator might not have good growth/decay properties as \(|x|\to\infty\). There is, on the other hand, a way to take this behaviour into account, which we describe hereafter. For scattering vector fields, the Lie bracket satisfies
\[[\mathfrak{X}_{sc}(X),\mathfrak{X}_{sc}(X)]\subset\rho\mathfrak{X}_{sc}(X), \tag{3.10}\]
so that for each point \(p\in\partial X\) the evaluation map defines a Lie algebra homomorphism into a trivial (namely, commutative) Lie algebra
\[N_{sc,p}\colon\mathfrak{X}_{sc}(X)\to{}^{sc}T_{p}X. \tag{3.11}\]
Functions can be evaluated at a point, too, and the two evaluations are compatible (that is, \((fV)(p)=f(p)V(p)\)), so we have a unique multiplicative extension to \(\operatorname{Diff}_{sc}(X)\) with values in translation-invariant (namely, constant coefficients) differential operators on \({}^{sc}T_{p}X\). On a vector space, the Fourier transform identifies these with (non-homogeneous) polynomial functions, so that, at each point \(p\in\partial X\), we obtain a map
\[\widehat{N}_{sc,p}\colon\operatorname{Diff}^{m}_{sc}(X)\to\operatorname{Pol}^{ m}({}^{sc}T^{*}_{p}X). \tag{3.12}\]
This is known as the _normal symbol_ or _normal operator_ and gives another short exact sequence,
\[0\to\rho\operatorname{Diff}^{m}_{sc}\to\operatorname{Diff}^{m}_{sc} \xrightarrow{\widehat{N}_{sc,p}}\operatorname{Pol}^{m}({}^{sc}T^{*}_{p}X)\to 0. \tag{3.13}\]
Notice that the only relation between the symbol and the normal operator is that evaluation of the symbol at a boundary point should equal the leading term of the normal operator at that point (compare with the \(SG\)-principal symbol). Namely, at \(p\in\partial X\) it holds true
\[\sigma_{sc,m}(P)|_{p}-\widehat{N}_{sc,p}(P)\in\operatorname{Pol}^{m-1}({}^{sc }T^{*}_{p}X). \tag{3.14}\]
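As a simple check of (3.14) in the model case \(X=\mathbb{S}^{n}_{+}\), consider \(P=1-\Delta\), with \(\Delta=\sum_{j}\partial_{x_{j}}^{2}\) the Euclidean Laplacian, an element of \(\operatorname{Diff}^{2}_{sc}(\mathbb{S}^{n}_{+})\). Writing \(\xi\) for the linear coordinates on the fibres of \({}^{sc}T^{*}X\) dual to \(\partial_{x_{1}},\ldots,\partial_{x_{n}}\), and with the usual convention for the Fourier transform, one finds
\[\sigma_{sc,2}(P)=-\sum_{j}(\mathrm{i}\,\xi_{j})^{2}=\left|\xi\right|^{2},\qquad\widehat{N}_{sc,p}(P)=1+\left|\xi\right|^{2}\quad\text{for every }p\in\partial X,\]
so that at any boundary point the two differ by a constant, which lies in \(\operatorname{Pol}^{1}({}^{sc}T^{*}_{p}X)\), in agreement with (3.14).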
The two symbol maps are combined in the so-called _joint symbol_ map
\[j_{sc,m}(P)\equiv(\sigma_{sc,m}(P),\widehat{N}_{sc}(P))\in{}^{sc}\widetilde{ \operatorname{Pol}}^{m,0}(X), \tag{3.15}\]
where \({}^{sc}\widetilde{\operatorname{Pol}}^{m,0}(X)\) is the space of all pairs of functions \((q,\widehat{N})\) with \(q\in\operatorname{Pol}^{(m)}({}^{sc}T^{*}X)\), \(\widehat{N}\in\operatorname{Pol}^{m}({}^{sc}T^{*}_{\partial X}X)\) and such that \(\widehat{N}-q|_{\partial X}\in\operatorname{Pol}^{m-1}({}^{sc}T^{*}_{\partial X}X)\). There is then a combined short exact sequence:
\[0\to\rho\operatorname{Diff}^{m-1}_{sc}(X)\to\operatorname{Diff}^{m}_{sc}(X) \xrightarrow{j_{sc,m}}{}^{sc}\widetilde{\operatorname{Pol}}^{m,0}(X)\to 0. \tag{3.16}\]
**Lemma 3.20**.: The space \({}^{sc}\widetilde{\operatorname{Pol}}^{m,0}(X)\) can be canonically identified with a subalgebra of \(\rho^{-m}_{\sigma}\mathcal{C}^{\infty}(\partial({}^{sc}\overline{T}^{*}X))\).
Proof.: A function \(f\) is an element of \(\mathcal{C}^{\infty}(\partial({}^{sc}\overline{T}^{*}X))\) if it is given as \(f=(f_{N},f_{\sigma})\) for two smooth functions \(f_{N}\in\mathcal{C}^{\infty}({}^{sc}T^{*}_{\partial X}X)\), \(f_{\sigma}\in\mathcal{C}^{\infty}({}^{sc}\mathbb{S}^{*}X)\) satisfying (3.14). Clearly the normal symbol \(\widehat{N}_{sc}\) is such an \(f_{N}\). On the other hand, identifying the boundary of the fibre-wise compactification with the co-sphere bundle \({}^{sc}\mathbb{S}^{*}X\), the function \(\sigma_{sc,m}\) is determined by homogeneity by an element \(f_{\sigma}\in\mathcal{C}^{\infty}({}^{sc}\mathbb{S}^{*}X)\). For \(P\in\operatorname{Diff}^{m}_{sc}(X)\) we thus obtain a pair of functions as above. The proof is complete. QED
To define pseudo-differential operators on a scattering manifold, we start with the model case of \(\mathbb{S}^{n}_{+}\). The Weyl calculus of Hörmander with respect to the temperate metric \(g=\left\langle x\right\rangle^{-2}\mathrm{d}x^{2}+\left\langle\xi\right\rangle^{-2}\mathrm{d}\xi^{2}\) gives a class of operators on \(\mathcal{S}(\mathbb{R}^{n})\) having distributional kernels given by
\[K(x,y)=(2\pi)^{-n}\int e^{\mathrm{i}(x-y)\xi}p_{L}(x,\xi)\,\mathrm{d}\xi, \tag{3.17}\]
where the function \(p_{L}(x,\xi)\) is the _left-symbol_ of the operator \(P\). Then, the function \(p_{L}\) satisfies the estimates (2.3). Namely, \(p_{L}\) is a symbol with respect to the above metric, the standard symplectic form and the order/weight function \(\left\langle x\right\rangle^{l}\left\langle\xi\right\rangle^{m}\). Using the stereographic projection \(R\) (recall the definition in Example 3.17) we can transfer these to \(\mathbb{S}^{n}_{+}\). Set \(\dot{\mathcal{C}}^{\infty}(\mathbb{S}^{n}_{+})\) to be the space of all smooth functions on \(\mathbb{S}^{n}_{+}\) which vanish at the boundary together with all their derivatives.
**Definition 3.21**.: The space of _scattering-conormal pseudo-differential operators_ on \(\mathbb{S}^{n}_{+}\), \(\Psi^{l,m}_{scc}(\mathbb{S}^{n}_{+})\), is the set of all the linear operators \(A\colon\dot{\mathcal{C}}^{\infty}(\mathbb{S}^{n}_{+})\to\dot{\mathcal{C}}^{\infty}(\mathbb{S}^{n}_{+})\) such that, if \(P\) is defined by \(R^{*}(Au)=P(R^{*}u)\) for all \(u\in\dot{\mathcal{C}}^{\infty}(\mathbb{S}^{n}_{+})\), then \(P\) is given as an operator with Schwartz kernel as in (3.17), with a left symbol \(p_{L}\) of order \((l,m)\).
We let \(R_{2}\equiv R\times R\colon\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{S}^{n}_{+}\times\mathbb{S}^{n}_{+}\) be separate radial compactification in each factor and choose boundary defining functions \(\rho_{\sigma}\colon\mathbb{S}^{n}_{+}\times\mathbb{S}^{n-1}\times[0,1)\to\mathbb{R}\), \(\rho_{N}\colon\mathbb{S}^{n-1}\times[0,1)\times\mathbb{S}^{n}_{+}\to\mathbb{R}\) for the two boundary hyper-surfaces (for example, \(R^{*}\rho_{\sigma}=\left\langle\xi\right\rangle^{-1},R^{*}\rho_{N}=\left\langle x\right\rangle^{-1}\)). Let \(\operatorname{Diff}_{b}(\mathbb{S}^{n}_{+}\times\mathbb{S}^{n}_{+})\) be the enveloping algebra (over \(\mathcal{C}^{\infty}(\mathbb{S}^{n}_{+}\times\mathbb{S}^{n}_{+})\)) of \(\mathfrak{X}_{b}(\mathbb{S}^{n}_{+}\times\mathbb{S}^{n}_{+})\), the so-called _totally characteristic_ or _b-differential operators_. We define a space of distributions, conormal to the boundary in the sense of Hörmander, of order \((l,m)\) as
\[\begin{split} I^{l,m}(\mathbb{S}^{n}_{+}\times\mathbb{S}^{n}_{+}) \equiv\{& u\in\rho_{N}^{-l}\rho_{\sigma}^{-m}L^{\infty}(\mathbb{S}^{n}_{+} \times\mathbb{S}^{n}_{+})\\ &\text{s.t. }\operatorname{Diff}_{b}(\mathbb{S}^{n}_{+}\times \mathbb{S}^{n}_{+})u\subset\rho_{N}^{-l}\rho_{\sigma}^{-m}L^{\infty}(\mathbb{S }^{n}_{+}\times\mathbb{S}^{n}_{+})\}.\end{split} \tag{3.18}\]
This defines a global space of kernels whose microlocal representation is given by oscillatory integrals as in (3.17). Indeed, it is easily seen that \(p_{L}\) satisfies (2.3) if and only if \(p_{L}\in R_{2}^{*}I^{l,m}(\mathbb{S}_{+}^{n}\times\mathbb{S}_{+}^{n})\) with \(m_{e}=l,m_{\psi}=m\) (confer [15], Section 4).
**Remark 3.22**.: For \(b\)- and \(0\)-differential operators, one has a well-defined normal operator \(N_{p}\) at the boundary. However, the condition (3.10) fails. Indeed, no extra vanishing factors appear when commuting elements of the forms \(\rho\partial_{\rho},\rho\partial_{y},\partial_{y}\), so the normal homomorphism takes values in a non-commutative algebra. This is the reason why those structures are much more complicated from an analytical perspective.
To obtain classical operators, we refine Definition 3.21 by asking that the left-symbol is actually a (weighted) smooth function on \(\mathbb{S}_{+}^{n}\times\mathbb{S}_{+}^{n}\).
**Definition 3.23**.: The space of _classical scattering pseudo-differential operators_\(\Psi_{sc}^{l,m}(\mathbb{S}_{+}^{n})\) is the subspace of \(\Psi_{scc}^{l,m}(\mathbb{S}_{+}^{n})\) consisting of those operators with
\[p_{L}\in R_{2}^{*}(\rho_{N}^{-l}\rho_{\sigma}^{-m}\mathcal{C}^{\infty}( \mathbb{S}_{+}^{n}\times\mathbb{S}_{+}^{n})). \tag{3.19}\]
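A minimal example, with the choices of boundary-defining functions made above (\(R^{*}\rho_{N}=\left\langle x\right\rangle^{-1}\), \(R^{*}\rho_{\sigma}=\left\langle\xi\right\rangle^{-1}\)): the left symbol
\[p_{L}(x,\xi)=\left\langle x\right\rangle^{l}\left\langle\xi\right\rangle^{m}=R_{2}^{*}\big(\rho_{N}^{-l}\rho_{\sigma}^{-m}\cdot 1\big)\]
satisfies (3.19) with the constant function \(1\in\mathcal{C}^{\infty}(\mathbb{S}_{+}^{n}\times\mathbb{S}_{+}^{n})\), so the corresponding operator, namely \(\left\langle x\right\rangle^{l}\left\langle D\right\rangle^{m}\), belongs to \(\Psi_{sc}^{l,m}(\mathbb{S}_{+}^{n})\); in particular \(\left\langle x\right\rangle^{l}\left\langle\xi\right\rangle^{m}\) satisfies the estimates (2.3) with \(m_{e}=l\), \(m_{\psi}=m\).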
Although much of what follows remains true, _mutatis mutandis_, for the larger class \(\Psi_{scc}\), in the sequel we will only consider classical operators, without repeating this explicitly.
For the sake of completeness, we include a definition of scattering \(\Psi\)DOs on a general scattering manifold. Although this will not be needed in the sequel (we will only concern ourselves with classical scattering operators on \(\mathbb{S}_{+}^{n}\times\mathbb{S}_{+}^{n}\)), it reveals that the more "global" nature of the scattering calculus transfers more easily to general settings in comparison with the \(SG\)-calculus. On the other hand, however, we recall that the class of \(SG\)-manifolds as defined by Schrohe [10] (or even just the class of \(\mathcal{S}\)-manifolds in the sense of Cordes [11]) is significantly larger. The following lemma expresses coordinate invariance and is a direct consequence of the calculus of [15].
**Lemma 3.24**.: Let \(F\colon\mathbb{S}_{+}^{n}\to\mathbb{S}_{+}^{n}\) be a diffeomorphism. Then for any \(P\in\Psi_{sc}^{l,m}(\mathbb{S}_{+}^{n})\) we have \(F_{*}PF^{*}\in\Psi_{sc}^{l,m}(\mathbb{S}_{+}^{n})\), namely, conjugation with a diffeomorphism defines an order-preserving automorphism of \(\Psi_{sc}^{l,m}(\mathbb{S}_{+}^{n})\).
For a scattering manifold \(X\), an operator \(P\colon\dot{\mathcal{C}}^{\infty}(X)\to\dot{\mathcal{C}}^{\infty}(X)\) has a kernel which in general is an extendible distribution \(K_{P}\in\mathcal{E}^{\prime}(X^{2},\pi_{R}^{*}\Omega)\), where \(\pi_{R}\) is the projection onto the second factor and \(\Omega\) is the density bundle on \(X\). We define regularising \(\Psi\)DOs as exactly those integral operators with kernel in \(\dot{\mathcal{C}}^{\infty}(X^{2},\pi_{R}^{*}\Omega)\). We notice that \(R^{*}\dot{\mathcal{C}}^{\infty}(\mathbb{S}_{+}^{n})=\mathcal{S}(\mathbb{R}^{n})\). We can then introduce the calculus on \(X\) along the lines of the usual definition by localisation for manifolds without boundary, simply replacing any instance of 'manifold' with 'manifold with corners', 'open set in \(\mathbb{R}^{n}\)' with 'open set in \([0,\infty)^{k}\times\mathbb{R}^{n-k}\)', and so on (effectively, we are giving the same definition as Definition 18.1.20 in [15] in the new category of 'manifolds with corners', modelling our operators on \(\Psi_{scc}^{l,m}(\mathbb{S}_{+}^{n})\)).
We collect some of the properties of this algebra before turning our attention to the symbol calculus for scattering operators.
**Proposition 3.25**.: The following holds true.
1. The spaces \(\Psi^{l,m}_{sc}(X)\) sit in a partial order where \(\Psi^{l,m}_{sc}(X)\subset\Psi^{l^{\prime},m^{\prime}}_{sc}(X)\) if and only if \(l\leq l^{\prime}\) and \(m\leq m^{\prime}\); they form a bi-filtered algebra \(\Psi_{sc}(X)\) under composition.
2. \(\operatorname{Diff}^{m}_{sc}(X)\subset\Psi^{0,m}_{sc}(X)\).
3. Multiplication with a boundary-defining function defines an order reduction for the first filtration, namely \(\Lambda_{N}=M_{\rho}\) is a classical, invertible, scattering operator of order \(\mathbb{1}_{e}\).
4. The operator \(\Lambda_{\sigma}=\sqrt{1-\Delta_{sc}}\), for \(\Delta_{sc}\) the Laplace operator associated with the metric (3.4), is an order reduction for the second filtration, namely \(\Lambda_{\sigma}\) is a classical, invertible, scattering operator of order \(\mathbb{1}_{\psi}\).
Recall that for scattering differential operators the principal symbol is a continuous function on the boundary of \({}^{sc}\overline{T}^{*}X\), smooth across the corner in the sense of Remark 3.11. We let \(B_{sc}X\equiv\partial({}^{sc}\overline{T}^{*}X)={}^{sc}\overline{T}^{*}_{\partial X}X\cup_{{}^{sc}\mathbb{S}^{*}_{\partial X}X}{}^{sc}\mathbb{S}^{*}X\), the union of the two boundary hyper-surfaces glued along the corner, and denote by \(\mathcal{C}^{\infty}(B_{sc}X)\) the set of smooth functions according to this definition. If we are given vector bundles \(E_{N},E_{\sigma}\) over \({}^{sc}\overline{T}^{*}_{\partial X}X\), \({}^{sc}\mathbb{S}^{*}X\) respectively, with a specified identification of their restrictions to \({}^{sc}\mathbb{S}^{*}_{\partial X}X\), then we can also consider
\[\mathcal{C}^{\infty}(B_{sc}X;(E_{N},E_{\sigma}))=\{(u_{N},u_{\sigma})\in\mathcal{C}^{\infty}({}^{sc}\overline{T}^{*}_{\partial X}X;E_{N})\times\mathcal{C}^{\infty}({}^{sc}\mathbb{S}^{*}X;E_{\sigma})\\ \text{s.t. }u_{N}|_{{}^{sc}\mathbb{S}^{*}_{\partial X}X}=u_{\sigma}|_{{}^{sc}\mathbb{S}^{*}_{\partial X}X}\}. \tag{3.20}\]
Notice in particular that this is the case if we are given a vector bundle over the whole of \({}^{sc}\overline{T}^{*}X\).
**Lemma 3.26**.: The elements of \(\rho^{-l}_{N}\rho^{-m}_{\sigma}\mathcal{C}^{\infty}({}^{sc}\overline{T}^{*}X)\) are the sections of a trivial bundle \(S^{l,m}\) over \({}^{sc}\overline{T}^{*}X\), equipped with a \(b\)-connection. Namely, for every \(V\in\mathfrak{X}_{b}(X)\) and every section \(a\) of \(S^{l,m}\), it holds true \(Va\in S^{l,m}\).
With this notation set, we can finally express the principal symbol sequence.
**Proposition 3.27**.: The maps \(\widehat{N}_{sc}\) and \(\sigma_{sc}\) extend from \(\operatorname{Diff}_{sc}(X)\) to \(\Psi_{sc}(X)\) to give the _scattering joint symbol_\(j_{sc,l,m}\colon\Psi^{l,m}_{sc}(X)\to\mathcal{C}^{\infty}(B_{sc}X;S^{l,m})\), and we have the exact sequence
\[0\to\Psi^{l-1,m-1}_{sc}(X)\to\Psi^{l,m}_{sc}(X)\xrightarrow{j_{sc,l,m}} \mathcal{C}^{\infty}(B_{sc}X;S^{l,m})\to 0. \tag{3.21}\]
Furthermore, the joint symbol is multiplicative. Namely, for any \(A\in\Psi^{l_{1},m_{1}}_{sc}(X),B\in\Psi^{l_{2},m_{2}}_{sc}(X)\), it holds true
\[j_{sc,l_{1}+l_{2},m_{1}+m_{2}}(AB)=j_{sc,l_{1},m_{1}}(A)j_{sc,l_{2},m_{2}}(B), \tag{3.22}\]
with the product given component-wise.
Before moving towards the discussion of the symplectic structure on \(B_{sc}X\), we notice some special properties of the model case \(X=\mathbb{S}^{n}_{+}\). The first is that conjugation with the Fourier transform gives an automorphism of pseudo-differential operators, a peculiar feature of this setting. We report a proof of this fact, since it is instructive about the nice properties of \(SG\) and \(sc\)-calculi.
**Proposition 3.28**.: Let \(\mathcal{F}\colon\mathcal{S}(\mathbb{R}^{n})\to\mathcal{S}(\mathbb{R}^{n})\) be the Fourier transformation and consider the map \(\overline{\mathcal{F}}\colon\dot{\mathcal{C}}^{\infty}(\mathbb{S}^{n}_{+}) \to\dot{\mathcal{C}}^{\infty}(\mathbb{S}^{n}_{+})\) given by \(\overline{\mathcal{F}}\equiv(R^{*})^{-1}\circ\mathcal{F}\circ R^{*}\). Then
\[\overline{\mathcal{F}}\circ\Psi^{l,m}_{sc}(\mathbb{S}^{n}_{+})\circ\overline{ \mathcal{F}}^{-1}=\Psi^{m,l}_{sc}(\mathbb{S}^{n}_{+}). \tag{3.23}\]
Proof.: The action of \(P\in\Psi_{sc}^{l,m}\) is expressed "locally" as \(\mathfrak{P}(v)=R^{*}(Pu)\) for each \(u\in\dot{\mathcal{C}}^{\infty}(\mathbb{S}_{+}^{n})\) and \(v=R^{*}u\in\mathcal{S}(\mathbb{R}^{n})\) (recall that \(R^{*}\dot{\mathcal{C}}^{\infty}(\mathbb{S}_{+}^{n})=\mathcal{S}(\mathbb{R}^{n})\)), in particular
\[\mathfrak{P}v =\int e^{\mathrm{i}\,x\xi}p_{L}(x,\xi)\widehat{v}(\xi)\,\mathrm{ d}\xi,\] \[\mathfrak{P}\widehat{v}(\xi) =\int e^{\mathrm{i}(\xi-\eta)z}p_{L}(\xi,z)\widehat{v}(\eta)\, \mathrm{d}z\,\mathrm{d}\eta. \tag{3.24}\]
Consider \(\widetilde{\mathfrak{P}}=\mathcal{F}^{-1}\circ\mathfrak{P}\circ\mathcal{F}\). This is the "local representation" of \(\overline{\mathcal{F}}^{-1}\circ P\circ\overline{\mathcal{F}}\), since clearly
\[\widetilde{\mathfrak{P}}v =(\mathcal{F}^{-1}\circ\mathfrak{P}\circ\mathcal{F})(R^{*}u)= \mathcal{F}^{-1}\circ\mathfrak{P}\circ R^{*}(\overline{\mathcal{F}}u)\] \[=R^{*}(\overline{\mathcal{F}}^{-1}\circ(R^{*})^{-1}\circ \mathfrak{P}\circ R^{*}\circ\overline{\mathcal{F}}u)\] \[=R^{*}(\overline{\mathcal{F}}^{-1}\circ P\circ\overline{ \mathcal{F}}u). \tag{3.25}\]
Computing the action of \(\widetilde{\mathfrak{P}}\), we observe
\[\widetilde{\mathfrak{P}}v(x) =\mathcal{F}^{-1}(\mathfrak{P}\widehat{v})(x)=\int e^{\mathrm{i }\,x\xi}e^{\mathrm{i}(\xi-\eta)z}p_{L}(\xi,z)\widehat{v}(\eta)\,\mathrm{d}z\, \mathrm{d}\eta\,\mathrm{d}\xi\] \[=\int e^{\mathrm{i}(x+z)\xi}p_{L}(\xi,z)\left(\int e^{-\, \mathrm{i}\,\eta z}\widehat{v}(\eta)\,\mathrm{d}\eta\right)\,\mathrm{d}z\, \mathrm{d}\xi\] \[=(2\pi)^{n}\int e^{\mathrm{i}(x+z)\xi}p_{L}(\xi,z)v(-z)\,\mathrm{ d}z\,\mathrm{d}\xi\] \[=\int e^{\mathrm{i}(x-y)\xi}p_{L}(\xi,-y)v(y)\,\mathrm{d}y\, \mathrm{d}\xi. \tag{3.26}\]
Hence, the claim is in fact just the equivalence of the classes of left- and right-quantised \(SG\)-operators, namely Lemma 2.24. QED
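For instance, in the local picture on \(\mathbb{R}^{n}\) (writing \(M_{\left\langle\cdot\right\rangle^{l}}\) for the operator of multiplication by \(\left\langle\cdot\right\rangle^{l}\), a notation introduced only for this example), conjugation by the Fourier transform exchanges multiplication operators and Fourier multipliers,
\[\mathcal{F}^{-1}\circ M_{\left\langle\cdot\right\rangle^{l}}\circ\mathcal{F}=\left\langle D\right\rangle^{l},\]
turning an operator of order \((l,0)\) into one of order \((0,l)\), in agreement with the exchange of the two filtrations in (3.23).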
The second aspect relates to the equivalence of the classical \(sc\)- and \(SG\)-calculi, which is especially manifest in the model case, as the next theorem shows (cf. [10], Section 8.2.2, for a proof).
**Theorem 3.29**.: The following properties hold true.
1. For any \(a\in SG^{m},m=(m_{e},m_{\psi})\in\mathbb{R}^{2}\), the function \(\left\langle x\right\rangle^{-m_{e}}\left\langle\xi\right\rangle^{-m_{\psi}}a (x,\xi)\) extends smoothly to \(\mathbb{S}_{+}^{n}\times\mathbb{S}_{+}^{n}\).
2. For any \(m\in\mathbb{R}^{2}\) there exists an isomorphism (3.27) \[SG^{m}(\mathbb{R}^{n}\times\mathbb{R}^{n})\;\cong\;R_{2}^{*}\big(\rho_{N}^{-m_{e}}\rho_{\sigma}^{-m_{\psi}}\,\mathcal{C}^{\infty}(\mathbb{S}^{n}_{+}\times\mathbb{S}^{n}_{+})\big),\] induced by the extension of item 1; compare (3.19).
**Remark 3.30**.: In view of Remark 2.9, 1. and 2. in Theorem 3.29 are consequences of the fact that \(\mathcal{C}^{\infty}(\mathbb{S}_{+}^{n}\times\mathbb{S}_{+}^{n})\cong\mathcal{C}^{\infty}(\mathbb{S}_{+}^{n})\hat{\otimes}_{\pi}\mathcal{C}^{\infty}(\mathbb{S}_{+}^{n})\).
We recall that the compatibility conditions for principal \(SG\)-symbols are actually sufficient for the existence of a global symbol. In the scattering picture, this corresponds to principal symbols being smooth functions on \(B_{sc}X\) according to Remark 3.11. By definition of \(\mathcal{C}^{\infty}({}^{sc}\overline{T}^{\ast}X)\) and \(\mathcal{C}^{\infty}(B_{sc}X)\), we have therefore the extension result of Proposition 3.32 below. First however, let us recall and introduce some notation we will use throughout the rest of our treatment (cf. Example 3.19).
**Definition 3.31**.: The space \(B_{sc}X\) is given by the disjoint union \(B_{sc}X=\widetilde{\mathcal{W}}_{\psi}\cup\widetilde{\mathcal{W}}_{e}\cup \widetilde{\mathcal{W}}_{\psi e}\) where the manifolds \(\widetilde{\mathcal{W}}_{\bullet}\) are (the second equality is what happens in the model case \(X=\mathbb{S}_{+}^{n}\))
\[\widetilde{\mathcal{W}}_{\psi} ={}^{sc}\mathbb{S}^{\ast}\dot{X}\quad(=\mathbb{S}_{+}^{n}\times \mathbb{S}^{n-1}),\] \[\widetilde{\mathcal{W}}_{e} ={}^{sc}T_{\partial X}^{\ast}X\quad(=\mathbb{S}^{n-1}\times \mathbb{S}_{+}^{n}),\] \[\widetilde{\mathcal{W}}_{\psi e} ={}^{sc}\mathbb{S}_{\partial X}^{\ast}X\quad(=\mathbb{S}^{n-1} \times\mathbb{S}^{n-1}).\]
We set \(\overline{\mathcal{W}}_{\psi}=\widetilde{\mathcal{W}}_{\psi}\cup\widetilde{ \mathcal{W}}_{\psi e}\), respectively \(\overline{\mathcal{W}}_{e}=\widetilde{\mathcal{W}}_{e}\cup\widetilde{ \mathcal{W}}_{\psi e}\). In particular, \(\overline{\mathcal{W}}_{e}\) and \(\overline{\mathcal{W}}_{\psi}\) are the boundary hyper-surfaces of \({}^{sc}\overline{T}^{\ast}X\), both manifolds with boundary, \(\widetilde{\mathcal{W}}_{e}\) and \(\widetilde{\mathcal{W}}_{\psi}\) are the respective interiors, and \(\widetilde{\mathcal{W}}_{\psi e}\) is the corner.
Finally, we will also find use for the following spaces:
\[\mathcal{W}_{\psi} =\widetilde{\mathcal{W}}_{\psi}\times\mathbb{R}^{+}\cong T^{\ast} X\setminus\{0\}\quad(=\mathbb{S}_{+}^{n}\times\mathbb{R}_{0}^{n}),\] \[\mathcal{W}_{e} =\mathbb{R}^{+}\times\widetilde{\mathcal{W}}_{e}\cong T^{\ast}( \mathbb{R}^{+}\times\partial X)\quad(=\mathbb{R}_{0}^{n}\times\mathbb{S}_{+}^{ n}),\] \[\mathcal{W}_{\psi e} =\mathbb{R}^{+}\times\widetilde{\mathcal{W}}_{\psi e}\times \mathbb{R}^{+}\cong T^{\ast}(\mathbb{R}^{+}\times\partial X)\setminus\{0\} \quad(=\mathbb{R}_{0}^{n}\times\mathbb{R}_{0}^{n}).\]
In the above formulae, \(\{0\}\) denotes the zero section of the involved cotangent bundles. Also notice that \(\mathbb{R}^{+}\times\partial X\) is, topologically, (the interior of) a collared neighbourhood of \(\partial X\), seen however with the metric structure of a cone over \(\partial X\).
**Proposition 3.32**.: Let \(a_{\bullet}\in\mathcal{C}^{\infty}(\overline{\mathcal{W}}_{\bullet}),\bullet\in\{e,\psi\}\) and \(a_{\psi e}\in\mathcal{C}^{\infty}(\widetilde{\mathcal{W}}_{\psi e})\) be smooth functions satisfying \(a_{e}|_{\widetilde{\mathcal{W}}_{\psi e}}=a_{\psi}|_{\widetilde{\mathcal{W}}_{\psi e}}=a_{\psi e}\). There exists then a function \(a\in\mathcal{C}^{\infty}(\mathbb{B}^{n}\times\mathbb{B}^{n})\) such that \(a|_{\widetilde{\mathcal{W}}_{\bullet}}=a_{\bullet}\).
Under pull-back with the radial compactification map, the associated symbol \(\breve{p}\) of (2.19) is of course nothing else than a particular choice of such an extension to the interior. There is a similar extension result for maps between the boundary faces, namely Theorem 3.34 below (as formulated in [10], Proposition 1.30). To state it we need to discuss scattering maps between scattering manifolds.
**Definition 3.33**.: Given two scattering manifolds \(X,Y\) and a smooth map \(C\colon X\to Y\), we say that \(C\) is a _scattering map_ if, for any given \(m\in\mathbb{R}\) and any \(a\in\rho_{Y}^{-m}\mathcal{C}^{\infty}(Y)\) it holds true that
1. \(C^{\ast}a\in\rho_{X}^{-m}\mathcal{C}^{\infty}(X)\);
2. if \(q=C(p)\) for \(p\in X\) and \(\rho_{Y}^{-m}a(q)>0\), then \(\rho_{X}^{-m}C^{\ast}a(p)>0\).
Locally, this corresponds to the fact that scattering maps (\(sc\)-maps for short) are exactly those maps which pull back \(\rho_{Y}\) to \(\rho_{X}h\) for some positive function \(h\in\mathcal{C}^{\infty}(X)\). We call maps satisfying this condition _local sc-maps_ and extend their
definition to manifolds with corners as follows. Given _complete sets_ of boundary-defining functions \((\rho_{i})\) for \(X\) and \((r_{i})\) for \(Y\) (that is, such that the whole boundary of the manifold can be identified with \(\prod\rho_{i}=0\) or \(\prod r_{i}=0\) respectively), \(C\) is a local \(sc\)-map if there exist positive functions \(h_{i}\in\mathcal{C}^{\infty}(X)\) for which it holds true \(C^{*}r_{i}=h_{i}\rho_{i}\).
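For instance, in local coordinates \((\rho_{1},\rho_{2},z)\) near a codimension-\(2\) corner, any map of the form
\[(\rho_{1},\rho_{2},z)\longmapsto\big(e^{g_{1}(z)}\rho_{1},\,e^{g_{2}(z)}\rho_{2},\,\phi(z)\big),\]
with \(g_{1},g_{2}\) smooth real-valued functions and \(\phi\) a diffeomorphism in the \(z\)-variables, is a local \(sc\)-map, since it pulls back the boundary-defining functions of the target to \(e^{g_{1}}\rho_{1}\) and \(e^{g_{2}}\rho_{2}\), with the positive smooth factors \(h_{i}=e^{g_{i}}\).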
In [10] it is proven that \(sc\)-maps are really morphisms in the category of scattering manifolds. We will not need many facts from the theory explored there, so we only state what we shall need in the following. In particular, the next theorem is the aforementioned extension result near the corner of a product \(X\times Y\) of manifolds with boundary.
**Theorem 3.34**.: Consider manifolds with boundary \(X_{i},Y_{i},i=1,2\), with boundary defining functions \(\rho_{X_{i}},\rho_{Y_{i}}\), and the products \(B_{i}=X_{i}\times Y_{i}\). Consider, for \(\bullet\in\{e,\psi\}\), local \(sc\)-maps \(C_{\bullet}\colon\overline{\mathcal{W}}_{\bullet}^{1}\to\overline{\mathcal{W}}_{\bullet}^{2}\) defined near a point \(p\in\partial X_{1}\times\partial Y_{1}\), such that \(C_{e}|_{\partial X_{1}\times\partial Y_{1}}=C_{\psi}|_{\partial X_{1}\times\partial Y_{1}}\). There exists then a local \(sc\)-map \(C\) on a neighbourhood \(U\subset B_{1}\) of \(p\) such that \(C_{e}=C|_{\partial X_{1}\times Y_{1}}\), \(C_{\psi}=C|_{X_{1}\times\partial Y_{1}}\) and
\[\frac{\partial C^{*}\rho_{Y_{2}}}{\partial\rho_{X_{1}}}=\frac{\partial C^{*}\rho_{X_{2}}}{\partial\rho_{Y_{1}}}=0. \tag{3.29}\]
Moreover, provided that both \(C_{\bullet}\) are local diffeomorphisms near \(p\), then \(C\) is also a local diffeomorphism near \(p\). If both \(C_{\bullet}\) are diffeomorphisms defined in a neighbourhood of the whole corner in the respective boundary hyper-surface, then we can pick \(C\) to be a diffeomorphism of a neighbourhood of the corner in \(B_{1}\) onto a neighbourhood of the corner in \(B_{2}\).
### Symplectic and contact properties of the scattering bundle
Recall that on every cotangent bundle \(T^{*}X\) a canonical \(1\)-form, known as the Liouville form \(\lambda\), is defined. If \(x\) are local coordinates on \(X\) and \((x,\xi)\) are the induced canonical coordinates on the cotangent bundle, \(\lambda\) takes the form \(\xi\,\mathrm{d}x\). The differential \(\omega=\mathrm{d}\lambda=\mathrm{d}\xi_{i}\wedge\mathrm{d}x^{i}\) is a symplectic form on \(T^{*}X\), which is therefore an _exact symplectic manifold_. In the classical theory of FIOs, the \(1\)-form \(\lambda\) plays a crucial role, in that it determines the conic Lagrangian submanifolds and, therefore, the microlocal form of the operators. In the global scattering calculus, it turns out that \(\lambda\) does not suffice to describe the peculiar features that appear at "spatial infinity" \(\partial X\). In the existing literature, the problem has been circumvented by introducing a similar structure on a collared neighbourhood of \(\partial X\). Since this is paramount for our future discussion, we recall hereafter, following [10] and [11], the important concepts, all the while introducing the notation we shall refer to.
First, we note that the Poisson bracket \(\{\cdot,\cdot\}\) associated with the symplectic form \(\omega\) extends to \(B_{sc}X\). If we keep in mind Theorem 3.29, we understand that, in the model case \(X=\mathbb{S}_{+}^{n}\), we can reformulate Proposition 2.34 in the language of scattering geometry (cf. [12], Proposition 4).
**Proposition 3.35**.: The Poisson structure on \(T^{*}\dot{X}\), induced by the canonical symplectic form \(\mathrm{d}\lambda\), extends to \(B_{sc}X=\partial(^{sc}\overline{T}^{*}X)\) as a filtered bracket between (weighted) smooth functions on \(B_{sc}X\). More precisely, the Poisson bracket extends to a map
\[\{\cdot,\cdot\}\colon\mathcal{C}^{\infty}(B_{sc}X;S^{l_{1},m_{1}})\times \mathcal{C}^{\infty}(B_{sc}X;S^{l_{2},m_{2}})\to\mathcal{C}^{\infty}(B_{sc}X;S ^{l_{1}+l_{2}-1,m_{1}+m_{2}-1}),\]
where we understand that we obtain such a map on each boundary hyper-surface and that \(\{\cdot,\cdot\}\) preserves the compatibility condition in the corner. Namely, if we have tuples \((a_{\bullet}),(b_{\bullet})\), extended to functions \(a,b\in\mathcal{C}^{\infty}({}^{sc}\overline{T}^{*}X)\) according to Proposition 3.32, then the Poisson brackets of the tuples \((\{a_{\bullet},b_{\bullet}\})\) satisfy the same conditions and admit therefore an extension to a smooth function \(c\in\mathcal{C}^{\infty}({}^{sc}\overline{T}^{*}X)\). In particular, the extensions \(a,b\) and \(c\) can be chosen so that \(c=\{a,b\}\).
Second, denote the Liouville 1-form by \(\lambda_{\psi}\) and recall that \(\lambda_{\psi}\), being homogeneous of degree 1 in the fibres of \(T^{*}X\), induces a contact structure on \(\mathbb{S}^{*}X\). In particular, \(\lambda_{\psi}\) restricts to a contact form there, so it can be rescaled by a conformal factor without changing the actual structure. Of course, since \(\widetilde{\mathcal{W}}_{\psi}\) is diffeomorphic to \(\mathbb{S}^{*}\hat{X}\), it also is a contact manifold and we give it a specific contact structure as follows. For \(\rho_{\sigma}=R_{*}\left\langle\xi\right\rangle^{-1}\), our standard boundary-defining function, consider the inward-pointing radial vector field \(\rho_{\sigma}\partial_{\rho_{\sigma}}\). This is just the Euler vector field \(\xi_{j}\partial_{\xi_{j}}\) expressed in adapted coordinates at the boundary. Then, since \(\lambda_{\psi}=\rho_{\sigma}\partial_{\rho_{\sigma}}\lrcorner\omega\), and \(\rho_{\sigma}\) is positive in the interior, we can consider the _rescaled_ radial vector \(\rho_{\sigma}^{2}\partial_{\rho_{\sigma}}\), which, inserted into \(\omega\), gives the 1-form \(\alpha_{\psi}\equiv\rho_{\sigma}^{2}\partial_{\rho_{\sigma}}\lrcorner\omega\). This form is conformally equivalent to \(\lambda_{\psi}\)_in the interior_, and we consider it as the "standard" contact form on \(\widetilde{\mathcal{W}}_{\psi}\), the co-sphere bundle at infinity.
A similar process produces our "standard" contact form on \(\widetilde{\mathcal{W}}_{e}\). Choosing a collared neighbourhood and canonical coordinates \((\rho_{N},z,\tau,\mu)\) near \(\partial^{sc}T^{*}X\), with \(\rho_{N}\) a boundary-defining function, \(\omega\) is the differential of a 1-form \(\lambda_{e}\) given by
\[\lambda_{e}=\frac{1}{\rho_{N}}(\mathrm{d}\tau+\mu_{k}\,\mathrm{d}z^{k}). \tag{3.30}\]
Notice that \(\lambda_{e}=\rho_{N}\partial_{\rho_{N}}\lrcorner\omega\) is _not_ a smooth 1-form on \(T^{*}X\), since it blows up at \(\partial X\). It is however smooth as a scattering 1-form on \({}^{sc}T^{*}X\) and can be rescaled to give a smooth 1-form \(\alpha_{e}=\rho_{N}^{2}\partial_{\rho_{N}}\lrcorner\omega\) near the interior of the boundary hypersurface. Since the collared neighbourhood gives a conic structure near \(\widetilde{\mathcal{W}}_{e}\), we obtain a contact structure on the boundary hyper-surface as above. Moreover, the contact distribution over \(\widetilde{\mathcal{W}}_{e}\) is unambiguously determined by the restriction of \(\alpha_{e}\) to \(\widetilde{\mathcal{W}}_{e}\), and \(\alpha_{e}\) is a contact form there. Notice again that this is conformally equivalent to \(\lambda_{e}\)_in the interior_ but not at "spatial infinity" \(\partial X\).
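With the identifications just described, these rescaled forms take a concrete shape in the canonical coordinates used above:
\[\alpha_{e}=\rho_{N}^{2}\partial_{\rho_{N}}\lrcorner\omega=\rho_{N}\lambda_{e}=\mathrm{d}\tau+\mu_{k}\,\mathrm{d}z^{k},\qquad\alpha_{\psi}=\rho_{\sigma}\lambda_{\psi}=\frac{\xi_{j}\,\mathrm{d}x^{j}}{\left|\xi\right|}\quad\text{(taking }\rho_{\sigma}=\left|\xi\right|^{-1}\text{ near fibre infinity)},\]
so that \(\alpha_{\psi}\) restricts on the unit co-sphere bundle to the usual contact form \(\xi\,\mathrm{d}x\), while the expression for \(\alpha_{e}\) is the one that will be used in the proof of Lemma 3.38 below.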
We have then a pair of 1-forms \(\alpha_{\psi},\alpha_{e}\) on \(T^{*}X\) which determine the symplectic structure at either spatial infinity or fibre infinity. In particular, since they induce contact distributions of \(\widetilde{\mathcal{W}}_{\psi}\) and \(\widetilde{\mathcal{W}}_{e}\), we can speak of Legendrian submanifolds at infinity. Recall that, classically, FIOs are operators whose singularities are contained in conic Lagrangian submanifolds of \(T^{*}X\setminus\{0\}\), which in turn can be identified with Legendrian submanifolds of the co-sphere bundle by restriction. Parallel to this situation, Melrose and Zworski [10] introduced a class of Legendrian distributions in \(\widetilde{\mathcal{W}}_{e}\), which has been subsequently generalised by Coriasco, Doll and Schulz in [11] to include singularities in the whole \(B_{sc}X\). As a last step, before introducing our notion of singular symplectomorphism, we recall the notion of the so-called \(SG/sc\)-Lagrangians/Legendrians, that have been studied by Coriasco and collaborators, and refer to the cited literature for more details.
**Definition 3.36** (\(SG\)-Legendrian).: Let \(\Lambda=\overline{\Lambda}_{e}\cup\overline{\Lambda}_{\psi}\subset B_{sc}X\) be a closed submanifold, where \(\Lambda_{e}\subset\overline{\mathcal{W}}_{e}\) and \(\Lambda_{\psi}\subset\overline{\mathcal{W}}_{\psi}\) and \(\overline{\Lambda}_{\bullet}\) denotes the closure of \(\Lambda_{\bullet}\). \(\Lambda\) is called an \(SG\)-Legendrian submanifold if it satisfies the following conditions:
1. \(\Lambda_{\psi}\) is Legendrian in \(\widetilde{\mathcal{W}}_{\psi}\);
2. \(\Lambda_{e}\) is Legendrian in \(\widetilde{\mathcal{W}}_{e}\);
3. \(\overline{\Lambda}_{e}\) has boundary if and only if \(\overline{\Lambda}_{\psi}\) has boundary, in which case \[\Lambda_{\psi e}=\partial\overline{\Lambda}_{e}=\partial\overline{\Lambda}_{ \psi}=\overline{\Lambda}_{e}\cap\overline{\Lambda}_{\psi}\] with clean intersection.
Having finished our recap, we build upon this structure to introduce our notion of symplectomorphism. Here and later, \(X,Y\) are scattering manifolds and we use the notation of Definition 3.31 to refer to subsets of \(B_{sc}X\) and \(B_{sc}Y\) indifferently (no confusion should arise, in particular, in view of the fact that all statements to come can be checked in local coordinates).
**Definition 3.37**.: Let \(U\), respectively \(V\), be open in \(B_{sc}X\), respectively \(B_{sc}Y\). Assume \(\chi\colon U\to V\) is a diffeomorphism (in particular, a \(sc\)-map), given as a pair of maps \(\chi=(\chi_{e},\chi_{\psi})\) defined on \(U\cap\overline{\mathcal{W}}_{e}\) and \(U\cap\overline{\mathcal{W}}_{\psi}\), respectively (if \(U\cap\overline{\mathcal{W}}_{\bullet}=\emptyset,\bullet\in\{e,\psi\}\), we understand that \(\chi_{\bullet}\) is not present). We define the notion of _scattering symplectomorphism_ or _scattering-canonical transformation_ (SCT) depending on whether \(U\) and (consequently) \(V\) intersect the corner: If \(U\cap\widetilde{\mathcal{W}}_{\psi e}=\emptyset\), we say that \(\chi\) is an SCT if \(\chi_{\bullet}\) is a contact diffeomorphism with respect to \(\alpha_{\bullet}\); else, \(\chi\) is an SCT if both \(\chi_{e}\) and \(\chi_{\psi}\) are contact diffeomorphisms in the interior of the respective boundary face and \(\chi\) preserves the Poisson bracket on \(\mathcal{C}^{\infty}(U;S^{l,m})\) across the corner.
We have the following easy-to-prove properties of a scattering-symplectomorphism, which reflect classical behaviour of regular symplectomorphisms.
**Lemma 3.38**.: Let \(\chi\colon U\to V\) be an SCT. Then \(\chi\) maps \(SG\)-Legendrian submanifolds in \(U\) to \(SG\)-Legendrian submanifolds in \(V\). Moreover, if \(U\cap\widetilde{\mathcal{W}}_{\psi e}=\emptyset\), then \(\chi_{e}\), respectively \(\chi_{\psi}\), extends to a homogeneous symplectomorphism on a collared neighbourhood of \(\partial T^{*}X\), respectively on \(T^{*}_{0}X\), which admits a local parametrisation via a \(e\)-homogeneous, respectively \(\psi\)-homogeneous, phase function. Finally \(\chi\) induces Poisson maps on the boundaries and corners (i.e. it preserves the Poisson structure in Proposition 3.35).
Proof.: First, recall that a diffeomorphism between contact manifolds \(X\) and \(Y\) is contact if and only if it preserves each Legendrian submanifold (cf. [10]). Then, since \(\chi\) is a diffeomorphism by definition, we obtain at once that \(SG\)-Legendrians are preserved if \(U\cap\widetilde{\mathcal{W}}_{\psi e}=\emptyset\). On the other hand \(\chi\) preserves clean intersection so the preservation of \(SG\)-Legendrians for \(U\cap\widetilde{\mathcal{W}}_{\psi e}\neq\emptyset\) also follows.
The second statement is obtained by applying the classical procedure of symplectisation to the contact manifolds \(\widetilde{\mathcal{W}}_{\psi}\) and \(\widetilde{\mathcal{W}}_{e}\) separately. Indeed notice that, if \(U\) does not intersect the corner, then \(U\) can be "conified" to a subset of \(T^{*}X\). Near the boundary we just have to pick a collared neighbourhood \(\partial X\times[0,1)\) and pull-back \((U\cap\widetilde{\mathcal{W}}_{e})\times[0,1)\) with \(R\times\mathrm{id}\), while on the co-sphere at \(\infty\) we just identify \(U\cap\mathcal{W}_{\psi}\) with a subset of \(\mathbb{S}^{*}X\) and consider, as in the classical theory, the associated conic neighbourhood. Let us spend a few extra words to describe, for the \(e\)-component, how one obtains a symplectic form and can extend \(\chi_{e}\) to a homogeneous symplectomorphism. More details can be found in [11], Section <<Symplectization of contact manifolds>>, or in [10], Appendix 4. We are given
the contact form \(\alpha_{e}=\mathrm{d}\tau+\mu_{k}\,\mathrm{d}z^{k}\) on \(\widetilde{\mathcal{W}}_{e}\). In the conified neighbourhood \(\mathbb{R}^{+}\times U_{e}\), we are introducing the new coordinate \(\rho_{N}\), effectively identifying \(U_{e}\) with \(\{1\}\times U_{e}\), and can consider
\[\omega_{e}=-\,\mathrm{d}(\rho_{N}\alpha_{e})=-\rho_{N}\left(\frac{\mathrm{d} \rho_{N}}{\rho_{N}}\wedge\alpha_{e}+\mathrm{d}\alpha_{e}\right).\]
It is readily checked that this is now a symplectic form on the conified neighbourhood. By definition of contact transformation, \(\chi^{*}\alpha_{e}=g\alpha_{e}\) for a positive smooth function \(g\). Then, the extension \(C_{e}(\rho_{N},z,\tau,\mu)\equiv(\rho_{N}/g(z,\tau,\mu),\chi(z,\tau,\mu))\) is homogeneous symplectic. Indeed,
\[C_{e}^{*}\omega_{e}=-\,\mathrm{d}C_{e}^{*}(\rho_{N}\alpha_{e})=-\,\mathrm{d} \!\left(\frac{\rho_{N}}{g}g\alpha_{e}\right)=\omega_{e},\]
proving that \(C_{e}\) is symplectic. The homogeneity is, on the other hand, manifest.
We have now the homogeneous symplectic extensions \(C_{e}\) and \(C_{\psi}\). The local parametrisation of \(C_{\psi}\) is then the classical result of Hörmander, Proposition 25.3.3 in [10]. On the other hand, for \(C_{e}\) it suffices to exchange the roles of variables and covariables (also cf. [11, Section 6]).
Concerning the last statement, observe that, at the corner, the Poisson structure is preserved by definition. On the other hand, the homogeneous symplectic extensions just constructed guarantee that \(\{\cdot,\cdot\}\) is preserved away from the corner. QED
In the next Theorem 3.39 we present a more thorough analysis of the structure of a \(sc\)-symplectomorphism defined near the corner. To avoid overburdening the notation, let us first clarify that the local expressions given below hold true in coordinates \((x,\xi)\), obtained as the pull-back of standard systems of coordinates near the boundary faces (or possibly the corner). In particular, the \(\alpha\)'s are angular coordinates on \(\mathbb{S}^{n-1}\) and the boundary-defining function is the inverse of the radial coordinate in polar coordinates. That is, \(x^{i}=\left|x\right|X^{i}(\alpha)\) for smooth functions \(X^{i}\) such that \((X^{1})^{2}+\dots+(X^{n})^{2}=1\) and \(\rho_{\partial}=1/\left|x\right|\). We employ the same convention for \(\xi\)'s and \(\beta\)'s. Also, in this notation we will consider homogeneous extensions of functions on \(\mathbb{S}^{n-1}\times\mathbb{R}^{n}\) to \(\mathbb{R}^{n}_{0}\times\mathbb{R}^{n}\). To be precise, we will look for \(\mathbb{R}^{+}\)-equivariant maps \(C_{\bullet}\colon\mathcal{W}_{\bullet}\to\mathcal{W}_{\bullet}\) agreeing with \(\chi_{\bullet}\) on \(\widetilde{\mathcal{W}}_{\bullet}\). Any such map is of the form (for example \(\bullet=e\), w.l.o.g.)
\[C_{e}(r,\alpha,\xi)=(f_{e}(\alpha,\xi)r,\chi_{e}(\alpha,\xi)) \tag{3.31}\]
for some smooth \(f_{e}\in\mathcal{C}^{\infty}(\widetilde{\mathcal{W}}_{e})\), where \((r=\left|x\right|,\alpha)\) are the above polar coordinates on \(\mathbb{R}^{n}_{0}\) and \(\xi\) are coordinates on \(\mathbb{R}^{n}\). The inverse of such a map, again taking polar coordinates on the first factor and global coordinates on the second, is given by
\[C_{e}^{-1}(s,\alpha,\eta)=\left(\frac{s}{f_{e}(\chi_{e}^{-1}(\alpha,\eta))}, \chi_{e}^{-1}(\alpha,\eta)\right). \tag{3.32}\]
Recalling again Section <<Symplectization of contact manifolds>> in [16], it must be possible to choose \(f_{e}\) appropriately to ensure that \(C_{e}\) so extended is symplectic. We will find the explicit form of the section \(f_{e}\) (and \(f_{\psi}\) too, of course) in the next chapter, in the course of the proof of Lemma 4.5. For the moment, we content ourselves with saying that such a choice is possible.
**Theorem 3.39**.: Let \(\chi\) be a \(sc\)-canonical transformation, between open sets \(U,V\) as above, with \(U\cap\widetilde{\mathcal{W}}_{\psi e}\neq\emptyset\). Then \(\chi\) is given as the datum of a triple of diffeomorphisms \((\chi_{e},\chi_{\psi},\chi_{\psi e})\), for \(\chi_{\bullet}\colon\widetilde{\mathcal{W}}_{\bullet}\to\widetilde{\mathcal{W }}_{\bullet}\), such that
1. If \(\chi_{e}(\alpha,\xi)=(T(\alpha,\xi),H(\alpha,\xi))\) for \(T\colon\mathbb{S}^{n-1}\times\mathbb{R}^{n}\to\mathbb{S}^{n-1},H\colon\mathbb{S} ^{n-1}\times\mathbb{R}^{n}\to\mathbb{R}^{n}\), then the components of \(H\) are elements of \(\mathcal{C}^{\infty}(\mathbb{S}^{n-1};S^{1}(\mathbb{R}^{n}))\);
2. If \(\chi_{\psi}(x,\beta)=(Y(x,\beta),G(x,\beta))\) for \(G\colon\mathbb{R}^{n}\times\mathbb{S}^{n-1}\to\mathbb{S}^{n-1},Y\colon\mathbb{ R}^{n}\times\mathbb{S}^{n-1}\to\mathbb{R}^{n}\) then the components of \(Y\) are elements of \(\mathcal{C}^{\infty}(\mathbb{S}^{n-1};S^{1}(\mathbb{R}^{n}))\);
3. If \(\chi_{\psi e}(\alpha,\beta)=(A(\alpha,\beta),B(\alpha,\beta))\) and we write \(\chi_{e},\chi_{\psi}\) as above, the principal symbol of \(Y\), respectively \(H\), restricted to \(\mathbb{S}^{n-1}\times\mathbb{S}^{n-1}\) coincides with \(T\), respectively \(G\). More generally, it holds true that (3.33) \[A(\alpha,\beta)=\lim_{\lambda\to+\infty}T(\alpha,\lambda\xi)=\lim_{\lambda\to+\infty}\frac{1}{\lambda}Y(\lambda x,\beta),\] (3.34) \[B(\alpha,\beta)=\lim_{\lambda\to+\infty}\frac{1}{\lambda}H(\alpha,\lambda\xi)=\lim_{\lambda\to+\infty}G(\lambda x,\beta);\]
4. We can pick homogeneous extensions \(C_{e}\) of \(\chi_{e}\) in \(\alpha\), \(C_{\psi}\) of \(\chi_{\psi}\) in \(\beta\) and \(C_{\psi e}\) of \(\chi_{\psi e}\) in \(\alpha\) and \(\beta\) separately, so that \(C_{\bullet}\) is a symplectomorphism, homogeneous in the respective variables;
5. Writing these extensions as \(C_{\bullet}(x,\xi)=(Y_{\bullet}(x,\xi),H^{\bullet}(x,\xi))\) for \(Y_{\bullet}=(Y_{\bullet}^{1},\ldots,Y_{\bullet}^{n})\), \(H^{\bullet}=(H_{1}^{\bullet},\ldots,H_{n}^{\bullet})\), we have that each triple \((Y_{\bullet}^{j})\), \((H_{k}^{\bullet})\) can be continued to a classical \(SG\)-symbol near "infinity". In particular there is a diffeomorphism \(C(x,\xi)=(Y^{j}(x,\xi),H_{k}(x,\xi))\) near "infinity" having \((C_{e},C_{\psi},C_{\psi e})\) as "principal symbol".
Proof.: For 1., notice that \(\chi_{e}\) is given as a diffeomorphism of \(\mathbb{S}^{n-1}\times\mathbb{B}^{n}\). We pull it back to a diffeomorphism of \(\mathbb{S}^{n-1}\times\mathbb{R}^{n}\) using \(\operatorname{id}\times R\). But then the \(\mathbb{R}^{n}\)-components of \(\chi_{e}\) must be classical symbols of order 1 in \(\xi\), depending in a smooth way on \(\alpha\in\mathbb{S}^{n-1}\). This is exactly the claim.
For 2., we argue exactly as in 1., exchanging the roles of the variables.
To prove 3. notice that the expressions involving \(H\) and \(Y\) are just the standard formulae to compute the principal symbol for the classes \(S^{1}(\mathbb{R}^{n})\), depending on a parameter on \(\mathbb{S}^{n-1}\). Recalling Theorem 3.29 we see immediately that we can compute it also by restriction to \(\mathbb{S}^{n-1}\times\mathbb{S}^{n-1}\). Now, \(\chi_{\bullet}\) is obtained as a diffeomorphism of \(\partial(\mathbb{S}^{n}_{+}\times\mathbb{S}^{n}_{+})\), so in the corner \(\chi_{e}=\chi_{\psi}\). Pulling this back with \(\operatorname{id}\times R\) and \(R\times\operatorname{id}\) and comparing the respective components gives the claimed formulas.
4. is clear if one exploits the close relation between canonical transformations and contact diffeomorphisms. However, a more explicit construction will be given in the proof of Lemma 4.5, where we will see that the choice of order reductions uniquely determines the homogeneous extensions to be symplectic.
5. is now a consequence of the above facts. Indeed, the components of the homogeneous extensions \(C_{\bullet}\) satisfy symbol estimates in the non-homogeneous variables. In particular, each pair of components \((Y_{e}^{j},Y_{\psi}^{j})\), respectively \((H_{k}^{e},H_{k}^{\psi})\), can be continued to a symbol \(Y^{j}\in\mathit{SG}^{1_{e}}(\mathbb{R}^{n}\times\mathbb{R}^{n})\), respectively \(H_{k}\in\mathit{SG}^{1_{\psi}}(\mathbb{R}^{n}\times\mathbb{R}^{n})\). We can choose them so that the resulting map \(C(x,\xi)=(Y(x,\xi),H(x,\xi))\) is a diffeomorphism, in view of Theorem 3.34. Indeed, our maps are all globally defined on the boundary hyper-surfaces and the corner, so they can be patched together correctly. QED
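To make the last continuation step concrete, we sketch one standard (and certainly not unique) choice, under the compatibility of the principal parts established above. Fix an excision function \(\chi\in\mathcal{C}^{\infty}(\mathbb{R}^{n})\), vanishing near the origin and equal to \(1\) outside a compact set. Then, for instance,

\[Y^{j}(x,\xi)=\chi(x)\,Y^{j}_{e}(x,\xi)+\chi(\xi)\,Y^{j}_{\psi}(x,\xi)-\chi(x)\chi(\xi)\,Y^{j}_{\psi e}(x,\xi)\]

defines an element of \(\mathit{SG}^{1_{e}}(\mathbb{R}^{n}\times\mathbb{R}^{n})\) with the prescribed principal parts, and the analogous formula applies to the \(H_{k}\). Of course, this only illustrates the continuation of the single components; that the resulting map can be arranged to be a diffeomorphism is the content of the appeal to Theorem 3.34 above.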
**Remark 3.40**.: One would certainly hope that the extension in 5. of the above theorem could be chosen to be symplectic. However, despite our best efforts, we could not deduce this desirable fact from the properties of \(C_{e},C_{\psi}\) and \(C_{\psi e}\).
**Remark 3.41**.: Notice that, for a scattering map on a manifold with corners, one preassigns an ordering on the set of the boundary-defining functions, so that at a corner we are specifying which boundary hyper-surface is mapped to which. For example on \(\mathbb{S}_{+}^{n}\times\mathbb{S}_{+}^{n}\), seen as \({}^{sc}\overline{T^{*}\mathbb{S}_{+}^{n}}\), we use the ordered set of boundary-defining functions \((\rho_{N},\rho_{\sigma})\). Then, it is easily seen that the symplectic rotation \(F\colon(x,\xi)\to(\xi,-x)\), extended to the compactification as in Theorem 3.29, is _not_ a scattering map in this sense, since \(F^{*}\rho_{N}=\rho_{\sigma}\) and vice-versa. This reflects the fact that, on a manifold with boundary \(X\), the two components of the joint scattering symbol live as smooth functions on two in principle different compact manifolds, namely, the scattering co-sphere bundle and the boundary of \(X\) (pulled back to the compactified scattering cotangent bundle). Of course, nothing in principle prevents us from considering \(F\) as some sort of "generalised scattering map" on the model case \(\mathbb{S}_{+}^{n}\). However we notice that pull-back along \(F\)_does not_ preserve \(SG\)-classes, since it exchanges the two filtrations as in Proposition 3.28. We will therefore assume that SCTs cannot exhibit this kind of behaviour, although we will comment again on this point at the very end of Chapter 4.
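To make the exchange of filtrations explicit, recall (for illustration only, and with the convention that the first order tracks the exit behaviour in \(x\)) the defining estimates \(|\partial^{\alpha}_{x}\partial^{\beta}_{\xi}a(x,\xi)|\lesssim\langle x\rangle^{m_{e}-|\alpha|}\langle\xi\rangle^{m_{\psi}-|\beta|}\) for \(a\in SG^{m_{e},m_{\psi}}\). For \(F(x,\xi)=(\xi,-x)\) we then have

\[\partial^{\alpha}_{x}\partial^{\beta}_{\xi}\,(F^{*}a)(x,\xi)=(-1)^{|\alpha|}\,\big{(}\partial^{\beta}_{x}\partial^{\alpha}_{\xi}a\big{)}(\xi,-x),\qquad\text{so that}\qquad|\partial^{\alpha}_{x}\partial^{\beta}_{\xi}(F^{*}a)(x,\xi)|\lesssim\langle x\rangle^{m_{\psi}-|\alpha|}\langle\xi\rangle^{m_{e}-|\beta|},\]

that is, \(F^{*}a\in SG^{m_{\psi},m_{e}}\): pull-back along \(F\) swaps the two orders, as claimed.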
We now come to the core of this section: the relation between scattering-symplectic maps and the classical \(SG\)-phase functions.
**Theorem 3.42** (Parametrising \(sc\)-symplectomorphisms).: Let \(\chi\colon\partial(\mathbb{S}_{+}^{n}\times\mathbb{S}_{+}^{n})\to\partial( \mathbb{S}_{+}^{n}\times\mathbb{S}_{+}^{n})\) be a (possibly only locally defined) scattering canonical transformation. Then, at each point \((p,q)\) on the graph of \(\chi\), we can find a neighbourhood \(\tilde{U}\) of \(p\), a neighbourhood \(\tilde{V}\) of \(q\) and an \(SG\)-phase function \(\varphi(x,y,\xi)\in SG^{1}_{(x,y),\xi}\), parametrising a neighbourhood of \((p,q)\). More explicitly, if \(p\) does not lie on the corner, then we can parametrise the homogeneous symplectic extension \(C\) of \(\chi\) near \(p\) via a homogeneous phase function in the classical sense. On the other hand, if \(p\) is in the corner then there is a conic neighbourhood \(U_{e}\), respectively \(U_{\psi}\), associated with a neighbourhood \(\tilde{U}_{e}\), respectively \(\tilde{U}_{\psi}\), of \(p\) in \(\mathbb{S}^{n-1}\times\mathbb{S}_{+}^{n}\), respectively \(\mathbb{S}_{+}^{n}\times\mathbb{S}^{n-1}\), and we find a phase \(\varphi\) as above such that \(\varphi_{e}=\sigma_{e}(\varphi)\), respectively \(\varphi_{\psi}=\sigma_{\psi}(\varphi)\), parametrises the graph of \(C_{e}\), respectively \(C_{\psi}\), in the usual sense for conic Lagrangians, and \(\varphi_{\psi e}\) parametrises the bi-homogeneous extension \(C_{\psi e}\) of \(\chi_{\psi e}\).
Proof.: The case \((p,q)\in\widetilde{\mathcal{W}}_{\psi}\times\widetilde{\mathcal{W}}_{\psi}\) is just an instance of the classical parametrization result for homogeneous symplectomorphisms of Hörmander, namely Proposition 25.3.6 in [10]. To see this, consider neighbourhoods \(U\) of \(p\) and \(V\) of \(q\) which are away from the corner \(\widetilde{\mathcal{W}}_{\psi e}\). Then, we can exploit the triviality of the bundle \(\pi\colon U\times\mathbb{R}_{0}^{n}\to U\times\mathbb{S}^{n-1}\) to define a homogeneous extension \(C\) of \(\chi\). By picking the correct section of \(\pi\), we can ensure that \(C\) is actually a homogeneous canonical transformation (namely it preserves the canonical 1-form \(\lambda_{\psi}\)) and apply Hörmander's result. Similarly, if \((p,q)\in\widetilde{\mathcal{W}}_{e}\times\widetilde{\mathcal{W}}_{e}\), we can consider the trivial bundle \(\mathbb{R}_{0}^{n}\times\mathbb{R}^{n}\to\mathbb{S}^{n-1}\times\mathbb{R}^{n}\). Here we can pick homogeneous extensions in \(x\) and reproduce the proof of Hörmander (notice also [11], Section 6) exchanging the roles of \(x\) and \(\xi\) (in particular, using the fact that \(C^{*}\lambda_{e}=\lambda_{e}\) for \(\lambda_{e}=x^{j}\operatorname{d}\!\xi_{j}\)). The case \((p,q)\in\widetilde{\mathcal{W}}_{\psi e}\times\widetilde{\mathcal{W}}_{\psi e}\) is a bit more involved and we adapt the careful analysis of [11].
Mimicking the construction there, we work in a chart \(\widetilde{U}\subset\widetilde{\mathcal{W}}\) around \(p\) where on \(\widetilde{U}\cap\widetilde{\mathcal{W}}_{\psi e}\) we have coordinates in the form
\[(\alpha^{1},\dots,\alpha^{n-1},\sqrt{1-(\alpha^{1})^{2}-\dots-(\alpha^{n-1})^{2 }},\sqrt{1-(\beta_{2})^{2}-\dots-(\beta_{n})^{2}},\beta_{2},\dots,\beta_{n}). \tag{3.35}\]
On \(\widetilde{\mathcal{W}}_{e}\) and \(\widetilde{\mathcal{W}}_{\psi}\) we use adapted coordinates
\[(\alpha^{1},\dots,\alpha^{n-1},\sqrt{1-(\alpha^{1})^{2}-\dots-( \alpha^{n-1})^{2}},\rho_{2},\beta_{2},\dots,\beta_{n})\in\widetilde{\mathcal{ W}}_{\psi},\] \[(\alpha^{1},\dots,\alpha^{n-1},\rho_{1},\sqrt{1-(\beta_{2})^{2}- \dots-(\beta_{n})^{2}},\beta_{2},\dots,\beta_{n})\in\widetilde{\mathcal{W}}_{e}, \tag{3.36}\]
where \(\rho_{1}=\sqrt{1-(\alpha^{1})^{2}-\dots-(\alpha^{n-1})^{2}}\) and \(\rho_{2}=\sqrt{1-(\beta_{2})^{2}-\dots-(\beta_{n})^{2}}\) are defining equations for the common boundary \(\widetilde{U}\cap\widetilde{\mathcal{W}}_{\psi e}\). We can similarly choose coordinates \((\theta,r_{1},r_{2},\gamma)\) satisfying the same relations in a chart \(\tilde{V}\) around \(q\). In these coordinates the map \(\tilde{C}=(\tilde{C}_{e},\tilde{C}_{\psi})\) can be expressed as
\[\tilde{C}_{\bullet}(\alpha,\rho_{1},\rho_{2},\beta)=(T_{\bullet}(\alpha,\rho _{1},\rho_{2},\beta),r_{1}^{\bullet}(\alpha,\rho_{1},\beta),r_{2}^{\bullet}( \alpha,\rho_{2},\beta),G^{\bullet}(\alpha,\rho_{1},\rho_{2},\beta)), \tag{3.37}\]
namely \(\theta=T_{\bullet}\) and \(\gamma=G^{\bullet}\) are equations defining the graph of \(\tilde{C}_{\bullet}\) in \(\tilde{U}\times\tilde{V}\). Let \(\widetilde{U_{\bullet}}=\widetilde{U}\cap\widetilde{\mathcal{W}_{\bullet}}\) and \(U_{\bullet}\) be the conic set associated with \(\widetilde{U_{\bullet}}\) under inverse radial compactification. Then, on \(U_{\bullet}\) we can introduce "polar coordinates" and extend \(C_{\bullet}\) homogeneously. For example, on \(U_{e}\) we choose a section \(f_{e}(\alpha,\rho_{2},\beta)\colon\mathbb{S}^{n-1}\times\mathbb{R}^{n}\to \mathbb{R}_{0}^{n}\times\mathbb{R}^{n}\), pull back the covariables using \(R\), and set, for \(\mu>0\) and \(\rho_{2}>0\),
\[(x,\xi)\equiv(\mu\alpha^{1},\dots,\mu\alpha^{n-1},\mu\sqrt{1-\left| \alpha\right|^{2}},R^{-1}(\rho_{2}\beta_{1},\dots,\rho_{2}\beta_{n})),\] \[C_{e}(x,\xi)=\left(T_{e}\left(\frac{x}{\mu},\mu,R(\xi)\right),r_{ 1}^{e}\left(\frac{x}{\mu},R(\xi)\right),r_{2}^{e}\left(\frac{x}{\mu},R(\xi) \right),G^{e}\left(\frac{x}{\mu},\mu,R(\xi)\right)\right). \tag{3.38}\]
Again as in [17], the section \(f_{e}\) can be appropriately chosen to ensure that \(C_{e}\) is symplectic and homogeneous in the \(x\) variables. Therefore, it preserves the \(1\)-form \(\lambda_{e}\). Similarly, we have an extension \(C_{\psi}\) which preserves the Liouville \(1\)-form \(\lambda_{\psi}\) (using a section \(f_{\psi}\colon\mathbb{S}^{n-1}\times U_{\psi}\to\mathbb{R}_{0}^{n}\times U_{\psi}\)), and we can also define a map \(C_{\psi e}\) by extending \(\chi_{\psi e}\) using both sections \(f_{e},f_{\psi}\). Then, in Cartesian coordinates on \(\mathbb{R}^{n}\times\mathbb{R}^{n}\), taking into account Theorem 3.29, we then have three symplectomorphisms \(C_{e},C_{\psi},C_{\psi e}\) defined for \(x\neq 0,\xi\neq 0\) and \(x,\xi\neq 0\), respectively. Their components are parts of principal \(SG\)-symbols:
\[C_{e}(x,\xi) =(Y_{e}^{1}(x,\xi),\dots,Y_{e}^{n}(x,\xi),H_{1}^{e}(x,\xi),\dots, H_{n}^{e}(x,\xi)),\] \[C_{\psi}(x,\xi) =(Y_{\psi}^{1}(x,\xi),\dots,Y_{\psi}^{n}(x,\xi),H_{1}^{\psi}(x, \xi),\dots,H_{n}^{\psi}(x,\xi)),\] \[C_{\psi e}(x,\xi) =(Y_{\psi e}^{1}(x,\xi),\dots,Y_{\psi e}^{n}(x,\xi),H_{1}^{\psi e }(x,\xi),\dots,H_{n}^{\psi e}(x,\xi)),\] \[Y_{e}^{j} \in SG^{(1),0},\quad Y_{\psi}^{j}\in SG^{1,(0)},\sigma_{e}(Y_{ \psi}^{j})=\sigma_{\psi}(Y_{e}^{j})=Y_{\psi e}^{j}\] \[H_{k}^{e} \in SG^{(0),1},\quad H_{k}^{\psi}\in SG^{0,(1)},\sigma_{e}(H_{k} ^{\psi})=\sigma_{\psi}(H_{k}^{e})=H_{k}^{\psi e}. \tag{3.39}\]
The \(SG\) estimates for these functions follow directly from the previous considerations, Chapter 6 in [And], and our particular choice of coordinates. Now, the twisted graphs \(\Lambda_{\bullet}=\mathrm{gr}^{\prime}(C_{\bullet})\) of these maps are conic Lagrangians (either in \(x,\xi\) or both). As in the classical theory of canonical graphs, we can find (possibly after rearranging) a partition \(I=(1,\dots,d),J=(d+1,\dots,n)\) so that \((x^{J},\xi_{I},\eta)\) can be taken as coordinates on \(\Lambda_{\bullet}\).
Notice that, in principle, in what follows we should choose different sets of coordinates for each map. However, the boundary hyper-surfaces intersect cleanly and all the changes of coordinates just defined are either diffeomorphisms or homogeneous extensions outside a compact neighbourhood of \(0\), so they preserve this structure. Hence, near the corner we can always take the same partitions \(I,J\).
From here to the end of the proof, we employ a modified Einstein convention, namely: the indices \(i\) belong to \(I\), \(j\) to \(J\), \(k\) to \(\{1,\dots,n\}\), and repeated \(i\) or \(j\) means summing only over \(I\) and \(J\). The other coordinates are defined implicitly on \(\Lambda_{\bullet}\) as
\[x^{i}=X^{i}_{\bullet}(x^{J},\xi_{I},\eta),\quad\xi_{j}=\Xi^{\bullet}_{j}(x^{J},\xi_{I},\eta),\quad y^{k}=Y^{k}_{\bullet}(x^{J},\xi_{I},\eta). \tag{3.40}\]
In view of Chapter 6 of [And], we see that these functions \(X,\Xi,Y\) must satisfy \(SG\)-estimates. In particular, they belong to the following classes (the classicality is implied by the fact that we are pulling-back smooth functions on the compactified space along \(R\)):
\[\begin{split} X^{i}_{e}&\in SG^{(1),0}(\mathbb{R}^{n+d}\times\mathbb{R}^{n-d}),\quad X^{i}_{\psi}\in SG^{1,(0)}(\mathbb{R}^{n+d}\times\mathbb{R}^{n-d}),\\ \Xi^{e}_{j}&\in SG^{(0),1}(\mathbb{R}^{n+d}\times\mathbb{R}^{n-d}),\quad\Xi^{\psi}_{j}\in SG^{0,(1)}(\mathbb{R}^{n+d}\times\mathbb{R}^{n-d}),\\ Y^{k}_{e}&\in SG^{(1),0}(\mathbb{R}^{n+d}\times\mathbb{R}^{n-d}),\quad Y^{k}_{\psi}\in SG^{1,(0)}(\mathbb{R}^{n+d}\times\mathbb{R}^{n-d}).\end{split} \tag{3.41}\]
Still following the ideas of [CS17], we now define directly homogeneous phase functions that parametrise locally the graphs of these diffeomorphisms and show they can be patched together to a single \(SG\)-phase function. To begin with, we look at the condition that \(C_{e}\) be an \(e\)-homogeneous canonical transformation. This amounts to \(C_{e}\) preserving the \(1\)-form \(\lambda_{e}\), namely \(C_{e}^{*}\lambda_{e}=\lambda_{e}\). In the above coordinate patches, this is expressed as
\[\begin{split} 0&=(x\,\mathrm{d}\xi-y\,\mathrm{d}\eta)|_{\Lambda_{e}}\\ &=X^{i}_{e}\,\mathrm{d}\xi_{i}+x^{j}\left(\frac{\partial\Xi^{e}_{j}}{\partial x^{j_{1}}}\,\mathrm{d}x^{j_{1}}+\frac{\partial\Xi^{e}_{j}}{\partial\xi_{i}}\,\mathrm{d}\xi_{i}+\frac{\partial\Xi^{e}_{j}}{\partial\eta_{k}}\,\mathrm{d}\eta_{k}\right)-Y^{k}_{e}\,\mathrm{d}\eta_{k}\\ &=\left(X^{i}_{e}-x^{j}\frac{\partial\Xi^{e}_{j}}{\partial\xi_{i}}\right)\mathrm{d}\xi_{i}+x^{j_{1}}\frac{\partial\Xi^{e}_{j_{1}}}{\partial x^{j}}\,\mathrm{d}x^{j}+\left(x^{j}\frac{\partial\Xi^{e}_{j}}{\partial\eta_{k}}-Y^{k}_{e}\right)\mathrm{d}\eta_{k}.\end{split} \tag{3.42}\]
Therefore, all expressions in parenthesis must vanish on the graph. Very similar relations hold true for \(C_{\psi}\), which we give explicitly hereafter:
\[\begin{split}\Xi^{\psi}_{j}+\xi_{i}\frac{\partial X^{i}_{\psi}} {\partial x^{j}}-\eta_{k}\frac{\partial Y^{k}_{\psi}}{\partial x^{j}}& =0,\\ \xi_{i}\frac{\partial X^{i}_{\psi}}{\partial\xi_{I}}-\eta_{k}\frac {\partial Y^{k}_{\psi}}{\partial\xi_{I}}&=0,\\ \xi_{i}\frac{\partial X^{i}_{\psi}}{\partial\eta_{l}}-\eta_{k} \frac{\partial Y^{k}_{\psi}}{\partial\eta_{l}}&=0.\end{split} \tag{3.43}\]
Let us consider first the functions \(S_{\bullet}\) defined by
\[\begin{split} S_{e}(x^{J},\xi_{I},\eta)&=x^{j}\Xi^{ e}_{j}(x^{J},\xi_{I},\eta),\\ S_{\psi}(x^{J},\xi_{I},\eta)&=-X^{i}_{\psi}(x^{J },\xi_{I},\eta)\xi_{i}+Y^{k}_{\psi}\eta_{k}.\end{split} \tag{3.44}\]
We prove that they are generating functions for the canonical transformations \(C_{\bullet}\). First looking at \(S_{e}\) we have, using (3.42), that
\[\begin{split}\frac{\partial S_{e}}{\partial x^{j}}&=\Xi_{j}^{e}+x^{j_{1}}\,\frac{\partial\Xi_{j_{1}}^{e}}{\partial x^{j}}=\Xi_{j}^{e},\\ \frac{\partial S_{e}}{\partial\xi_{i}}&=x^{j}\frac{\partial\Xi_{j}^{e}}{\partial\xi_{i}}=X_{e}^{i},\\ \frac{\partial S_{e}}{\partial\eta_{k}}&=x^{j}\frac{\partial\Xi_{j}^{e}}{\partial\eta_{k}}=Y_{e}^{k};\end{split} \tag{3.45}\]
hence \(S_{e}\) generates \(\Lambda_{e}\). The computation for \(S_{\psi}\) using (3.43) is very similar and gives
\[\frac{\partial S_{\psi}}{\partial x^{j}}=\Xi_{j}^{\psi},\quad\frac{\partial S _{\psi}}{\partial\xi_{i}}=-X_{\psi}^{i},\quad\frac{\partial S_{\psi}}{ \partial\eta_{k}}=Y_{\psi}^{k}. \tag{3.46}\]
We have then established that \(S_{\bullet}\) is a generating function for \(\Lambda_{\bullet}\), so we now consider the phase functions
\[\begin{split}\varphi_{e}(x^{I},x^{J},y,\xi_{I},\eta)& \equiv x^{i}\xi_{i}+y^{k}\eta_{k}+x^{j}\Xi_{j}^{e}\\ &=x^{i}\xi_{i}+y^{k}\eta_{k}+S_{e}(x^{J},\xi_{I},\eta)\in SG^{(1 ),1}(\mathbb{R}^{2n}\times\mathbb{R}^{n+d}),\\ \varphi_{\psi}(x^{I},x^{J},y,\xi_{I},\eta)&\equiv x ^{i}\xi_{i}-y^{k}\eta_{k}-\xi_{i}X_{\psi}^{i}+\eta_{k}Y_{\psi}^{k}\\ &=x^{i}\xi_{i}-y^{k}\eta_{k}+S_{\psi}(x^{J},\xi_{I},\eta)\in SG^{ 1,(1)}(\mathbb{R}^{2n}\times\mathbb{R}^{n+d}).\end{split} \tag{3.47}\]
Then \(\mathrm{d}_{(\xi_{I},\eta)}\varphi_{\bullet}=0\) if and only if (3.42) and (3.43) hold true; computing the other derivatives then gives the desired parametrisation.
It remains to show that the functions \(\varphi_{\bullet}\) can be realised as the principal symbol of an \(SG\)-function. To this end, the methods of [10] and [10] still prove viable: using (2.18) and keeping in mind that the tuples \((X_{e}^{i},X_{\psi}^{i},X_{\psi e}^{i}),(\Xi_{j}^{e},\Xi_{j}^{\psi},\Xi_{j}^{ \psi e})\) and \((H_{k}^{e},H_{k}^{\psi},H_{k}^{\psi e})\) are principal symbols, we compute \(\sigma_{e}(\varphi_{\psi})-\sigma_{\psi}(\varphi_{e})\) restricted to the graph of \(C_{\psi e}\):
\[\begin{split}\sigma_{e}(\varphi_{\psi})&=\lim_{\lambda\to\infty}\frac{1}{\lambda}\varphi_{\psi}(\lambda x,\lambda y,\xi_{I},\eta)\\ &=\lim_{\lambda\to\infty}\frac{1}{\lambda}(\lambda x^{i}\xi_{i}-\lambda y^{k}\eta_{k}-X_{\psi}^{i}(\lambda x^{J},\xi_{I},\eta)\xi_{i}+Y_{\psi}^{k}(\lambda x^{J},\xi_{I},\eta)\eta_{k})\\ &=x^{i}\xi_{i}-y^{k}\eta_{k}-X_{\psi e}^{i}\xi_{i}+Y_{\psi e}^{k}\eta_{k},\\ \sigma_{\psi}(\varphi_{e})&=\lim_{\lambda\to\infty}\frac{1}{\lambda}\varphi_{e}(x,y,\lambda\xi_{I},\lambda\eta)\\ &=\lim_{\lambda\to\infty}\frac{1}{\lambda}(\lambda x^{i}\xi_{i}+\lambda y^{k}\eta_{k}+x^{j}\Xi_{j}^{e}(x^{J},\lambda\xi_{I},\lambda\eta))\\ &=x^{i}\xi_{i}+y^{k}\eta_{k}+x^{j}\Xi_{j}^{\psi e}\\ \Longrightarrow&(\sigma_{\psi}(\varphi_{e})-\sigma_{e}(\varphi_{\psi}))|_{\Lambda_{\psi e}}=(2y^{k}\eta_{k}+x^{j}\Xi_{j}^{\psi e}+X_{\psi e}^{i}\xi_{i}-Y_{\psi e}^{k}\eta_{k})|_{\Lambda_{\psi e}}\\ &=X_{\psi e}^{i}\xi_{i}+x^{j}\Xi_{j}^{\psi e}+Y_{\psi e}^{k}\eta_{k}=((x,y),(\xi,\eta))\,|_{\Lambda_{\psi e}}\end{split} \tag{3.48}\]
However, recall that \(\Lambda_{\psi e}\) is bi-conic, so the phase \(\varphi_{\psi e}\) parametrising it is bi-homogeneous of degree \(\mathbb{1}\) in \((x,y)\) and \((\xi_{I},\eta)\). Since on the graph we have \((\xi,\eta)=\mathrm{d}_{(x,y)}\,\varphi_{\psi e}(x,y,\xi_{I},\eta)\), we can apply Euler's equation for homogeneous functions twice
to obtain
\[\begin{split}((x,y),(\xi,\eta))\,|_{\Lambda_{\psi e}}&= \big{(}(x,y),\mathrm{d}_{(x,y)}\,\varphi_{\psi e}(x,y,\xi_{I},\eta)\big{)}\\ &=\varphi_{\psi e}(x,y,\xi_{I},\eta)|_{\Lambda_{\psi e}}=\big{(}( \xi_{I},\eta),\mathrm{d}_{(\xi_{I},\eta)}\,\varphi_{\psi e}(x,y,\xi_{I},\eta) \big{)}\\ &=0,\end{split} \tag{3.49}\]
where for the last equality we noticed that \(\mathrm{d}_{(\xi_{I},\eta)}\,\varphi_{\psi e}=0\) is exactly the relation defining the set in \(\mathbb{R}^{3n+d}\) which parametrises the graph of \(C_{\psi e}\). Therefore, on the graph of \(C_{\psi e}\) we have that \(\sigma_{e}(\varphi_{\psi})=\sigma_{\psi}(\varphi_{e})\), which is the compatibility condition for \(SG\)-principal symbols. This proves that \((\varphi_{e},\varphi_{\psi})\) can be realised as the principal symbol of a function \(\varphi\in SG^{1}(\mathbb{R}^{2n}\times\mathbb{R}^{n+d})\). This concludes the proof.
QED
**Remark 3.43**.: Looking at the phase functions (3.47), it is clear that, in bi-conic neighbourhoods of infinity, they actually belong to the class \(\mathcal{Q}\). Namely, they are given as a sum of two terms \(f(x,\theta)+g(y,\theta)\) for \(\theta=(\xi_{I},\eta)\) satisfying appropriate \(SG\) estimates.
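A trivial instance, recorded only to fix ideas (it is not needed in the sequel and makes no reference to the precise definition of the class \(\mathcal{Q}\)): the phase

\[\varphi(x,y,\theta)=\langle x-y,\theta\rangle=\langle x,\theta\rangle+\left(-\langle y,\theta\rangle\right),\qquad\theta\in\mathbb{R}^{n},\]

which parametrises the identity operator, is precisely of this split form, and it clearly satisfies \(SG^{1}\)-estimates in \(((x,y),\theta)\).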
**Remark 3.44**.: We remark that, while our discussion above was limited to the model case \(\mathbb{B}^{n}\times\mathbb{B}^{n}\), our definition of SCTs and the related results all have a local character (in the sense that they can be checked in coordinates near the corners). Therefore, they apply _mutatis mutandis_ to general scattering manifolds, their scattering cotangent bundles, and the fibre-wise compactifications thereof. However, the theory of FIOs on scattering manifolds has not yet reached a completely satisfactory status. In particular, the concept of elliptic FIO in this setting has not yet been defined and analysed to the extent that we need. So, for our purposes in the coming Chapter 4 we will stick to the model case.
## 4. Order-preserving isomorphisms
### Preliminary definitions and auxiliary results
In this chapter, we work exclusively in the model case of \(\mathbb{R}^{n}\) and its compactification \(\mathbb{S}^{n}_{+}\). Although we believe that most of what follows should hold true in general for operators defined on an asymptotically Euclidean manifold \(X\) (or even a scattering manifold), the theory of FIOs in this setting has not been studied in the required depth to allow us to formulate certain results below. On the other hand, the nature of the argument is such that, given the existence of a sufficiently precise calculus structure, the computations need only be performed locally, thus reducing them to the model case. This motivates our choice of fixing \(X=\mathbb{S}^{n}_{+}\) and working with \(SG\)-classes below.
Recall that, until now, we have specialised to the sub-classes of classical symbols, in order to study the analytical and geometrical properties of the principal symbol. Here, we specialise further by assuming that the order of the involved operators is \((m_{e},m_{\psi})\in\mathbb{Z}^{2}\). There will be only a single exception to this rule, which will be mentioned explicitly. Again, we always omit the subscripts \(cl,cl(e),cl(\psi)\) in the corresponding notations of Definition 2.8.
We want to address the question of the order-preserving isomorphisms in the \(SG\)-setting. Our main object of investigation is the following.
**Definition 4.1**.: Consider the algebra \(\mathit{LG}\) and an algebra isomorphism (not necessarily topological nor a *-isomorphism) \(\imath\colon\mathit{LG}\to\mathit{LG}\). We say that \(\imath\) is an _SG-order
preserving isomorphism_ (SGOPI) if for any \(m\in\mathbb{Z}^{2}\) it holds true
\[\imath(LG^{m})\subset LG^{m}, \tag{4.1}\]
that is, \(\imath\) preserves the double filtration on \(LG\).
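A guiding example, stated here only as a sketch and under the additional assumption of genuine invertibility: if \(A\in\mathit{LG}^{s}\) is elliptic and invertible with \(A^{-1}\in\mathit{LG}^{-s}\), then

\[\imath(P)\equiv A^{-1}PA,\qquad\imath(\mathit{LG}^{m})\subset\mathit{LG}^{-s+m+s}=\mathit{LG}^{m},\]

defines an SGOPI, by the composition properties of the calculus. The results below show that, at least at the level of the formal symbol algebra, every SGOPI is essentially of this form, provided one allows \(A\) to be an elliptic \(SG\)-Fourier integral operator (Theorem 4.13).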
The approach for this result is very much in line with the original paper [10], but a number of differences arise, due to the introduction of the second filtration. In particular, we have to work with products of manifolds with boundary, and many of the ideas in [10] and [11], as we developed further in the previous chapters, are useful.
For later reference, we list some elementary properties of SGOPIs in the following lemma. These are direct algebraic consequences of Definition 4.1.
**Lemma 4.2**.: Let \(\imath\) be a SGOPI. Then:
1. \(\imath\) maps ideals to ideals and in particular maximal ideals to maximal ideals;
2. \(\imath(\mathcal{RG})=\mathcal{RG}\).
We will need to employ, in the course of the proof of Theorem 4.19, the principle which has come to be known as _Milnor's exercise_ (namely Problem 1-C in Section 1 of [12]). It is generally presented in the following form:
For a compact smooth manifold \(X\), the maximal ideals in \(\mathcal{C}^{\infty}(X)\) are given by functions vanishing at a point. Namely, \(I\triangleleft\mathcal{C}^{\infty}(X)\) is maximal if and only if \(I=I_{p}\equiv\{f\in\mathcal{C}^{\infty}(X)\;\text{s.t.}\;f(p)=0\}\) for some \(p\in X\).
A direct corollary is that any algebra isomorphism \(F\colon\mathcal{C}^{\infty}(X)\to\mathcal{C}^{\infty}(Y)\) is induced by a diffeomorphism \(C\colon X\to Y\) via pull-back and is, hence, automatically continuous. For our purposes, we need to consider manifolds with corners \(X,Y\) and an algebra isomorphism \(F\colon\mathcal{C}^{\infty}(\partial X)\to\mathcal{C}^{\infty}(\partial Y)\), and ask whether \(F\) is also induced by a diffeomorphism \(C\colon\partial X\to\partial Y\). We state here a slight generalisation of this principle, applicable to certain sub-algebras of continuous functions on a compact topological space \(X\). This version arose in a discussion with Philipp Schmitt, concerning the minimal conditions which such a subalgebra has to satisfy in order for the maximal ideals to be characterised.
**Proposition 4.3** (Milnor's exercise).: Let \(X\) be a compact topological space and \(\mathcal{A}\subset\mathcal{C}(X)\) a sub-algebra having the same unit as \(\mathcal{C}(X)\). Assume the following:
1. \(\mathcal{A}\) is spectrally invariant in \(\mathcal{C}(X)\), namely, \(\mathcal{C}(X)^{-1}\cap\mathcal{A}=\mathcal{A}^{-1}\), where the superscript \(-1\) denotes the group of invertibles;
2. \(\mathcal{A}\) is closed under complex conjugation (or simply \(\mathcal{A}\) consists of real-valued functions).
Then, every maximal ideal in \(\mathcal{A}\) is of the form \(I_{p}\) for some \(p\in X\). In particular it has codimension \(1\).
Proof.: Let \(I\triangleleft\mathcal{A}\) be a maximal ideal. We claim that there exists \(p\in X\) such that \(f(p)=0\) for every \(f\in I\). Arguing by contradiction, assume that for each point \(x\in X\) we can find a function \(f_{x}\in I\) with \(f_{x}(x)\neq 0\). In particular, by continuity of the elements in \(\mathcal{A}\) there exists an open cover \(\{U_{x}\}_{x\in X}\) of \(X\) where \(f_{x}(y)\neq 0\) for all \(y\in U_{x}\). By compactness, we can pass to a finite sub-cover \(\{U_{0},\ldots,U_{n}\}\) associated with the points \(x_{0},\ldots,x_{n}\) and the functions \(f_{0},\ldots,f_{n}\). Then for all \(i\), \(\left|f_{i}\right|^{2}=\overline{f_{i}}f_{i}\) are non-negative elements of \(I\) which only vanish (if anywhere) outside \(U_{i}\). The pointwise sum \(f\equiv\sum_{i=0}^{n}\left|f_{i}\right|^{2}\) is therefore everywhere positive and belongs to \(I\). By spectral invariance, \(f\) is invertible in \(\mathcal{A}\), so \(I=\mathcal{A}\), contradicting our assumption of maximality. Hence there exists \(p\in X\) with \(I\subset I_{p}\); since \(I_{p}\) is a proper ideal and \(I\) is maximal, \(I=I_{p}\), and \(I_{p}\) has codimension \(1\) because \(\mathcal{A}\diagup I_{p}\cong\mathbb{C}\) via evaluation at \(p\). The proof is complete. QED
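Notice that the classical statement recalled above is recovered as a special case: for a compact smooth manifold \(X\), the subalgebra \(\mathcal{A}=\mathcal{C}^{\infty}(X)\subset\mathcal{C}(X)\) has the same unit, is closed under complex conjugation, and is spectrally invariant, since the reciprocal of a nowhere-vanishing smooth function is again smooth; hence every maximal ideal of \(\mathcal{C}^{\infty}(X)\) is of the form \(I_{p}\).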
Applying this to \(\mathcal{C}^{\infty}(B_{sc}X)\) and \(\mathcal{C}^{\infty}(B_{sc}Y)\) (algebras which clearly satisfy the conditions above), for two scattering manifolds, \(X,Y\) gives that any algebraic isomorphism is induced by a diffeomorphism \(B_{sc}X\to B_{sc}Y\). Notice that there is a little extra structure hidden here: smooth functions of \(B_{sc}X\) are actually pairs of smooth functions on manifolds with boundary together with an identification of the boundaries, so this really means that we obtain a triple of compatible diffeomorphisms. In the model case \(\mathbb{B}^{n}\times\mathbb{B}^{n}\), this excludes directly the possibility of the symplectic rotation of Remark 3.41. For the sake of clarity and to push the analogy between scattering and \(SG\) as far as possible, we give in Lemma 4.5 an argument adapted to this situation.
Next, we give a complete proof of (an adaptation of) the spectral argument used in [10] to exclude the possibility of a skew-symplectic diffeomorphism.
**Lemma 4.4**.: Let \(\imath\) be an SGOPI. Then the following holds true:
1. If \(A\in LG^{m}\) is elliptic, \(\imath(A)\) is elliptic as well;
2. Let \(A_{\bullet}\) be a \(\bullet\)-order reduction, that is \(A_{\bullet}\in LG^{1_{\bullet}}\) is elliptic and self-adjoint, and let \(B_{\bullet}=\imath(A_{\bullet})\). If \(a_{\bullet}>0\) and \(\mathcal{I}m\,b_{\bullet}=0\), we also have \(b_{\bullet}>0\).
Proof.:
1. Consider a parametrix \(R\) of \(A\). Then there exist operators \(K_{1},K_{2}\in\mathcal{R}G\) such that \(AR-\delta=K_{1},RA-\delta=K_{2}\). Applying \(\imath\) to these relations gives immediately that \(\imath(R)\) is a parametrix of \(\imath(A)\), therefore \(\imath(A)\) is elliptic.
2. First notice that the assumption of self-adjointness is not really restrictive, since any elliptic operator with positive symbol is equal to a self-adjoint one modulo lower order operators. Therefore, assume \(A_{\bullet}\in LG^{1_{\bullet}}\) has the required properties and let \(B_{\bullet}=\imath(A_{\bullet})\). By the previous point, \(B_{\bullet}\) is elliptic as well, so its symbol is either everywhere positive or everywhere negative, in view of the assumption that \(b_{\bullet}\) is real-valued. Assume, arguing by contradiction, that \(b_{\bullet}<0\). We can then find an operator \(N_{\bullet}\in LG^{0}\) so that \(B_{\bullet}=B_{\bullet}^{W}+N_{\bullet}\), where \(B_{\bullet}^{W}\) is the Weyl operator associated with \(b_{\bullet}\). Notice, in particular, that \(B_{\bullet}\) is a bounded perturbation of its Weyl counterpart. By assumption, \(B_{\bullet}^{W}\) is unbounded self-adjoint and has real spectrum bounded from above. We can also estimate (4.2) \[\operatorname{spec}(B_{\bullet})\subset\{\lambda\in\mathbb{C}\text{ s.t. }\operatorname{dist}(\lambda,\operatorname{spec}(B_{\bullet}^{W}))\leq\|N_{\bullet}\|\}.\] We conclude that there exists a constant \(K_{\bullet}\in\mathbb{R}\) such that \(B_{\bullet}^{W}+N_{\bullet}-t\) is invertible for all \(t\notin(-\infty,K_{\bullet}]\times[-\|N_{\bullet}\|,\|N_{\bullet}\|]\), with inverse being an operator lying in \(LG^{-1_{\bullet}}\). Let now \(M_{\bullet}=\imath^{-1}(N_{\bullet})\) and consider \(A_{\bullet}+M_{\bullet}-t\), which has to be invertible with inverse in \(LG^{-1_{\bullet}}\) for the same \(t\)'s. The spectrum of \(A_{\bullet}\) is real and bounded from below since \(A_{\bullet}\) is positive, moreover \(M_{\bullet}\) is bounded, so that, just like in (4.2), the spectrum of \(A_{\bullet}+M_{\bullet}\) is unbounded but contained in a tube \([D_{\bullet},+\infty)\times[-\|M_{\bullet}\|,\|M_{\bullet}\|]\) for some \(D_{\bullet}\in\mathbb{R}\). Then, \(A_{\bullet}+M_{\bullet}-t\) is invertible for all \(t\) outside this set, but, at the same time, there exists at least one \(\tilde{t}_{\bullet}\in\operatorname{spec}(B_{\bullet}^{W}+N_{\bullet})\) for which \(A_{\bullet}+M_{\bullet}-\tilde{t}_{\bullet}\) cannot be invertible, since the spectra are unbounded. This is a contradiction. We conclude that \(b_{\bullet}\) has to be positive as well, completing the proof.
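For the reader's convenience, we also recall the elementary perturbation argument behind (4.2) (a standard fact, sketched here only for completeness): if \(\operatorname{dist}(\lambda,\operatorname{spec}(B_{\bullet}^{W}))>\|N_{\bullet}\|\), then, since \(B_{\bullet}^{W}\) is self-adjoint and therefore \(\|(B_{\bullet}^{W}-\lambda)^{-1}\|=\operatorname{dist}(\lambda,\operatorname{spec}(B_{\bullet}^{W}))^{-1}\), we can write

\[B_{\bullet}-\lambda=B_{\bullet}^{W}+N_{\bullet}-\lambda=(B_{\bullet}^{W}-\lambda)\big{(}I+(B_{\bullet}^{W}-\lambda)^{-1}N_{\bullet}\big{)},\qquad\|(B_{\bullet}^{W}-\lambda)^{-1}N_{\bullet}\|<1,\]

so that the second factor is invertible by a Neumann series and \(\lambda\notin\operatorname{spec}(B_{\bullet})\).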
### The case of the formal symbol algebra
We begin our investigation with the formal symbol algebra \(\mathcal{B}G={}^{LG}\diagup_{\mathcal{R}G}\cong{}^{SG}\diagup_{SG^{-\infty}}\). At this level, we can exploit the explicit relation between (asymptotic expansions of) symbols and operators.
**Lemma 4.5**.: Given an SGOPI \(\imath\), there exists a scattering canonical transformation \(C\colon\partial(\mathbb{B}^{n}\times\mathbb{B}^{n})\to\partial(\mathbb{B}^{n} \times\mathbb{B}^{n})\) such that for all \((a_{\bullet})\in\Sigma G^{m}\) it holds true \(\imath(a_{\bullet})=a_{\bullet}\circ C_{\bullet}^{-1}\).
Proof.: The algebraic properties of \(\imath\) guarantee that
\[\imath\left(LG^{m_{1},m_{2}}\diagup\mathcal{R}G\right)=LG^{m_{1},m_{2}} \diagup\mathcal{R}G.\]
\(\imath\) also acts on the space of principal symbols. Indeed, it preserves \(SG^{-1_{e}}\), \(SG^{-1_{\psi}}\), \(SG^{-1_{e}}\oplus SG^{-1_{\psi}}\) as ideals in \(SG^{0}\), so it descends to a map \(\imath_{pr}\) on \(\Sigma G^{0}\cong\mathcal{C}^{\infty}(\partial(\mathbb{S}^{n}_{+}\times\mathbb{S}^{n}_{+}))\), whose elements can be identified with pairs of functions \((a_{e},a_{\psi})\) on the respective (open) boundary faces \(\widetilde{\mathcal{W}}_{e}=\mathbb{S}^{n-1}\times\mathbb{R}^{n},\widetilde{\mathcal{W}}_{\psi}=\mathbb{R}^{n}\times\mathbb{S}^{n-1}\), having the same "limit" in the corner \(\widetilde{\mathcal{W}}_{\psi e}=\mathbb{S}^{n-1}\times\mathbb{S}^{n-1}\). That is, they extend smoothly to the whole \(\partial(\mathbb{S}^{n}_{+}\times\mathbb{S}^{n}_{+})\). We have then the commutative diagram
\[\begin{CD}LG^{0}@>{\imath}>{}>LG^{0}\\ @V{\sigma_{pr}}V{}V@V{\sigma_{pr}}V{}V\\ \Sigma G^{0}@>{\imath_{pr}}>{}>\Sigma G^{0},\end{CD} \tag{4.3}\]
meaning that we have maps \(\imath_{\bullet}\) satisfying
\[\sigma_{pr}(\imath(A))=(\imath_{e}a_{e},\imath_{\psi}a_{\psi},\imath_{\psi e }a_{\psi e}). \tag{4.4}\]
In view of the multiplicative properties of \(\sigma_{pr}\) in Proposition 2.16 we see that the maps \(\imath_{\bullet}\) are multiplicative on the respective spaces. We can then apply a Milnor-type argument to obtain bijections of \(\widetilde{\mathcal{W}}_{\bullet}\). Let \(I^{\bullet}\) be a maximal ideal in \(\mathcal{C}^{\infty}(\widetilde{\mathcal{W}}_{\bullet})\). This is given by the set \(I^{\bullet}_{p^{\bullet}}\) of those functions on \(\widetilde{\mathcal{W}}_{\bullet}\) which vanish at \(p^{\bullet}\in\widetilde{\mathcal{W}}_{\bullet}\). Then, \(\imath_{\bullet}\) gives a correspondence \(\chi_{\bullet}\colon\widetilde{\mathcal{W}}_{\bullet}\to\widetilde{\mathcal{W }}_{\bullet}\) defined by \(\imath_{\bullet}(I^{\bullet}_{p^{\bullet}})=I^{\bullet}_{\chi_{\bullet}(p^{ \bullet})}\).
We may repeat the same argument with \(\imath^{-1}\) to obtain another triple of bijections \(\zeta_{\bullet}\colon\widetilde{\mathcal{W}}_{\bullet}\to\widetilde{\mathcal{ W}}_{\bullet}\). By writing \(I^{\bullet}_{p}=\imath_{\bullet}^{-1}\imath_{\bullet}(I^{\bullet}_{p})\) we find then that \(\zeta_{\bullet}=\chi_{\bullet}^{-1}\). Furthermore, we see that it holds true
\[\imath_{\bullet}a_{\bullet}=a_{\bullet}\circ\chi_{\bullet}^{-1}; \tag{4.5}\]
indeed for \(a_{\bullet}\in\mathcal{C}^{\infty}(\widetilde{\mathcal{W}}_{\bullet})\) we have \(a_{\bullet}-a_{\bullet}(p^{\bullet})1\in I^{\bullet}_{p_{\bullet}}\), hence \(\imath_{\bullet}(a_{\bullet})(\chi_{\bullet}(p^{\bullet}))-a_{\bullet}(p^{ \bullet})1=0\), as claimed. We remark here in addition that the identification of principal symbols of order \(0\) with smooth (in the sense of Remark 3.11) functions on \(\partial(\mathbb{B}^{n}\times\mathbb{B}^{n})\) is _canonical_, since it does not depend on the choice of a boundary defining function.
These maps must be smooth. To see this, for example, for \(\bullet=e\), it suffices to choose local coordinates \((\theta^{i},\xi_{j})\) on \(\widetilde{\mathcal{W}}_{e}\) near a point \(p_{e}\). By definition, these coordinates are smooth functions on \(\mathbb{S}^{n-1}\times\mathbb{R}^{n}\), so they can be identified canonically with homogeneous symbols \(\tilde{\theta}^{i},\tilde{\xi}_{j}\) of order \(0\), namely, with elements of \(SG^{(0),1}\). We can then apply \(\imath\) to obtain
\[\begin{split}\theta^{i}\circ\chi_{e}^{-1}&=\imath( \tilde{\theta}_{i})\in SG^{(0),1},\\ \xi_{j}\circ\chi_{e}^{-1}&=\imath(\tilde{\xi}_{j}) \in SG^{(0),1}.\end{split} \tag{4.6}\]
It follows that the components of \(\chi_{e}^{-1}\) are smooth functions by composition, so \(\chi_{e}^{-1}\) is smooth itself. The same argument, using \(\imath_{e}^{-1}\), gives that \(\chi_{e}\) is actually a diffeomorphism.
Having determined the action of \(\imath\) on principal symbols of order \((0,0)\), we use order reductions to extend it to \(\Sigma G^{m}\) for any \(m\in\mathbb{Z}^{2}\). Namely, recalling Lemma 2.32 we write \(A\in LG^{m}\) as
\[A=P^{m_{e}}Q^{m_{\psi}}B \tag{4.7}\]
for \(B\in LG^{0}\) and \(P\), resp. \(Q\), an \(e\)-order reduction, resp. a \(\psi\)-order reduction. Thus the image of \(A\) via \(\imath\) can be computed as
\[\imath(A)=\tilde{P}^{m_{e}}\tilde{Q}^{m_{\psi}}\imath(B), \tag{4.8}\]
where \(\tilde{P}=\imath(P)\in LG^{1_{e}},\tilde{Q}=\imath(Q)\in LG^{1_{\psi}}\) are elliptic and \(\imath(B)\) is an operator of order \((0,0)\). To determine the action of \(\imath\) on principal symbols of any order it suffices then to describe it on the order reductions. Looking at the pairs picture of principal symbols (cf. Proposition 2.16), we see that \(\sigma_{pr}(\imath(B))=(b_{e}\circ\chi_{e}^{-1},b_{\psi}\circ\chi_{\psi}^{-1})\) and
\[\sigma_{pr}(\imath(A))=(\tilde{p}_{e}^{m_{e}}\tilde{q}_{e}^{m_{\psi}}b_{e} \circ\chi_{e}^{-1},\tilde{p}_{\psi}^{m_{e}}\tilde{q}_{\psi}^{m_{\psi}}b_{\psi }\circ\chi_{\psi}^{-1}) \tag{4.9}\]
for \((\tilde{p}_{e},\tilde{p}_{\psi})\) and \((\tilde{q}_{e},\tilde{q}_{\psi})\) the principal symbols of \(\tilde{P}\) and \(\tilde{Q}\), respectively. In particular we analyse closely the case \(m=1\), for which we know that \(\Sigma G^{1}\) is a Lie algebra by Lemma 2.34.
For any \(a,\alpha\in\mathit{SG}^{1}\) consider the relation \(\imath\{\sigma_{pr}(\alpha),\sigma_{pr}(a)\}=\sigma_{pr}(\{\imath(\alpha), \imath(a)\})\) and write \(a\) and \(\alpha\) as in (4.7), namely
\[\begin{split} a&=pqb,\\ \alpha&=pq\beta.\end{split} \tag{4.10}\]
Denoting \(r=pq,\tilde{r}=\imath(r)\), we pass to principal symbols and look at the single components. Let us work out in detail what happens for \(\bullet=e\), since for the case \(\bullet=\psi\) the same proof suffices up to an exchange in the homogeneities, and the exit behaviour is the main novelty here. The above relation for the Poisson brackets reads
\[\begin{split}\{\tilde{r}_{e}b_{e}\circ\chi_{e}^{-1},\tilde{r}_{ e}\beta_{e}\circ\chi_{e}^{-1}\}&=\imath_{e}(\{r_{e}b_{e},r_{e}\beta_{e}\})\\ &=\imath_{e}(r_{e})\imath_{e}(r_{e})^{-1}\imath(\{r_{e}b_{e},r_{ e}\beta_{e}\})\\ &=\tilde{r}_{e}\imath(r_{e}^{-1}\{r_{e}b_{e},r_{e}\beta_{e}\}). \end{split} \tag{4.11}\]
In particular, if we choose \(\beta_{e}=1\), we obtain that, for any \(b_{e}\),
\[\{\tilde{r}_{e},b_{e}\circ\chi_{e}^{-1}\}=\{r_{e},b_{e}\}\circ\chi_{e}^{-1}. \tag{4.12}\]
We claim now that we can extend the map \(\chi_{e}\) homogeneously to a map \(C_{e}\) so that \(\tilde{r}_{e}=r_{e}\circ C_{e}^{-1}.\) This choice will give that \(\{r_{e}\circ C_{e}^{-1},b_{e}\circ C_{e}^{-1}\}=\{r_{e},b_{e}\}\circ C_{e}^{-1}\), so that going back to (4.11) we find that for any two symbols \(a,\alpha\) of order \(1\) it holds true
\[\{\alpha_{e}\circ C_{e}^{-1},a_{e}\circ C_{e}^{-1}\}=\{\alpha_{e},a_{e}\}\circ C _{e}^{-1}. \tag{4.13}\]
This is equivalent to \(C_{e}\) being a canonical transformation.
To see that we can indeed make such a choice, recall that homogeneous maps in \(\mathbb{R}_{0}^{n}\times\mathbb{R}^{n}\) are written as in (3.31) for some \(f_{e}\) real-valued and smooth on \(\mathbb{S}^{n-1}\times\mathbb{R}^{n}\). We now define \(C_{e}\) to be the \(1\)-homogeneous extension of \(\chi_{e}\) given by
\[C_{e}(\rho_{1},\theta,\xi)=\left(\frac{p_{e}(1,\theta,\xi)}{\tilde{p}_{e}(1, \chi_{e}(\theta,\xi))}\rho_{1},\chi_{e}(\theta,\xi)\right). \tag{4.14}\]
Recall that here \(\theta\) are coordinates on \(\mathbb{S}^{n-1}\), \(\rho_{1}\in\mathbb{R}^{+}\) and \(\xi\in\mathbb{R}^{n}\). This choice satisfies \(\tilde{r}_{e}=r_{e}\circ C_{e}^{-1}\). Indeed, notice that \(q_{e}\) and \(\tilde{q}_{e}\) play no role here, since they are \(e\)-homogeneous of degree \(0\). More precisely, \(q_{e}(1,\theta,\xi)=\tilde{q}_{e}(1,\chi_{e}(\theta,\xi))\). Keeping this in mind, and denoting \(\chi_{e}(\theta,\xi)=(\varphi,\eta)\) with \(\sigma_{1}\in\mathbb{R}^{+}\) the newly introduced coordinate in the target space, the check is immediate, using the \(e\)-homogeneity of \(p_{e}\) and \(\tilde{p}_{e}\):
\[\begin{split}\tilde{r}_{e}(\sigma_{1},\varphi,\eta)&=\tilde{p}_{e}(\sigma_{1},\varphi,\eta)\tilde{q}_{e}(1,\varphi,\eta),\\ r_{e}\circ C_{e}^{-1}(\sigma_{1},\varphi,\eta)&=r_{e}\left(\frac{\tilde{p}_{e}(1,\varphi,\eta)}{p_{e}(1,\chi_{e}^{-1}(\varphi,\eta))}\sigma_{1},\chi_{e}^{-1}(\varphi,\eta)\right)\\ &=\frac{\tilde{p}_{e}(1,\varphi,\eta)}{p_{e}(1,\chi_{e}^{-1}(\varphi,\eta))}\sigma_{1}\cdot p_{e}(1,\chi_{e}^{-1}(\varphi,\eta))\,q_{e}(1,\chi_{e}^{-1}(\varphi,\eta))\\ &=\tilde{p}_{e}(\sigma_{1},\varphi,\eta)\,q_{e}(1,\chi_{e}^{-1}(\varphi,\eta))=\tilde{p}_{e}(\sigma_{1},\varphi,\eta)\,\tilde{q}_{e}(1,\varphi,\eta)=\tilde{r}_{e}(\sigma_{1},\varphi,\eta).\end{split} \tag{4.15}\]
With this choice, \(C_{e}\) is then a homogeneous diffeomorphism \(\mathcal{W}_{e}\to\mathcal{W}_{e}\), preserving the Poisson bracket for \(e\)-principal symbols.
Having constructed the required extensions, we conclude that \(\chi\) is a scattering canonical transformation. The proof is complete.
By Theorem 3.42, we can locally parametrise the graph of \(\chi\) via \(SG\)-phase functions of order \(1\) and type \(\mathcal{Q}_{gen}\). Covering \(\operatorname{graph}\chi\) with these coordinate patches and picking local amplitudes, we can construct an elliptic FIO \(F\) of type \(\mathcal{Q}_{gen}\) associated with \(\chi\), in the sense that its principal symbol can be identified with a function on the graph. If we denote by \(F^{\#}\) the parametrix of \(F\), which is again elliptic of type \(\mathcal{Q}_{gen}\) and is associated with the inverse map \(\chi^{-1}\), we have then that \(\jmath(P)\equiv F\imath(P)F^{\#}\) defines an automorphism \(\jmath\) of \({}^{LG}\diagup_{\mathcal{R}G}\) preserving principal symbols, in view of Theorem 2.47. Our next goal is to analyse \(\jmath\) more closely by mirroring the argument of [10] and refining it to \(SG\)-classes. Before we start with that task, we need to prove an auxiliary result, adapted from Theorem 2.2.10 in [1].
**Lemma 4.6**.: Consider \(X=\mathbb{R}_{0}^{n}\times\mathbb{R}^{k}\) and the space \(\mathcal{H}^{m,l}(X)\) of functions of \((x,y)\in X\) being positively homogeneous of degree \(m\) in \(x\) and \(y-\)classical of degree \(l\). \(\mathcal{H}=\bigcup\mathcal{H}^{m,l}\) is a filtered algebra. Consider a derivation \(\theta\colon\mathcal{H}\to\mathcal{H}\). Then there exists \(V\in\mathfrak{X}(X)\) such that \(\theta=\mathcal{L}_{V}\) as a derivation.
Proof.: Let us make a few remarks to begin with. First, given a vector field \(V\in\mathfrak{X}(X)\), the Lie derivative \(\mathcal{L}_{V}\) is well defined for functions in \(\mathcal{H}\subset\mathcal{C}^{\infty}(X)\). In particular, it is a derivation. However, for a general \(V\), there is no guarantee that \(V(\mathcal{H})\subset\mathcal{H}\) since the local expressions of the coefficients of \(V\) need not be \(y-\)classical.
Second, \(\theta\) is a local operator on \(\mathcal{H}\), i.e. if \(a\in\mathcal{H}\), \((x_{0},y_{0})\dot{\in}U\) and \(a|_{U}=0\), then \(\theta(a)(x_{0},y_{0})=0\). To see this, pick a smooth cut-off function \(g\) such that \(g=1\) on some \(V\dot{\ni}(x_{0},y_{0})\) with \(V\subset U\) and \(g=0\) outside \(U\). Since \(a\) vanishes on \(U\), we have \(ga\equiv 0\), hence \(a=(1-g)a\) everywhere,
so by the derivation property and the assumptions we have
\[\begin{split}\theta(a)(x_{0},y_{0})&=\theta(a)(x_{0},y_{ 0})(1-g(x_{0},y_{0}))-\theta(g)(x_{0},y_{0})a(x_{0},y_{0})\\ &=0,\end{split} \tag{4.16}\]
as required.
We can now proceed to the main part of the proof. The locality property implies that we can define restrictions of \(\theta\) to open subsets \((x,y)\dot{\in}V\subset X\) by
\[\theta|_{V}(a)(x,y)\equiv\theta(ga)(x,y), \tag{4.17}\]
where \(g\) is a smooth cut-off such that \(g=0\) outside \(V\) and \(g=1\) on some \((x,y)\dot{\in}U\subset V\). Again by locality, it follows that \(\theta|_{V}\) does not actually depend on the choice of such a cut-off. We keep denoting by \(\theta\) the restrictions to open subsets.
Pick a chart \((U,\rho)\) on \(X\) with coordinates \((x^{i},y^{\alpha})\), let \(p\in U\) and \(a\in\mathcal{H}\). Assume \(\rho(p)=q=(x_{0}^{i},y_{0}^{\alpha})\). Then, we can write, in a sufficiently small \(W\dot{\ni}q\) and denoting \(c_{q}(t)\equiv q+t(x^{i}-x_{0}^{i},y^{\alpha}-y_{0}^{\alpha})\)
\[\begin{split}(\rho_{*}a)(x^{i},y^{\alpha})&=(\rho_ {*}a)(q)+\int_{0}^{1}\frac{\partial}{\partial t}[(\rho_{*}a)(c_{q}(t))]\, \mathrm{d}t\\ &=(\rho_{*}a)(q)+(x^{i}-x_{0}^{i})\int_{0}^{1}\frac{\partial\rho_ {*}a}{\partial x^{i}}(c_{q}(t))\,\mathrm{d}t+(y^{\alpha}-y_{0}^{\alpha})\int_{ 0}^{1}\frac{\partial\rho_{*}a}{\partial y^{\alpha}}(c_{q}(t))\,\mathrm{d}t. \end{split} \tag{4.18}\]
Let \(U^{\prime}=\rho^{-1}(W)\dot{\ni}p\) and \(u\in U^{\prime}\). There exist then functions \(g_{i},g_{\alpha}\in\mathcal{C}^{\infty}(U)\) such that
\[g_{i}(p)=\frac{\partial\rho_{*}a}{\partial x^{i}}(q),\quad g_{\alpha}(p)= \frac{\partial\rho_{*}a}{\partial y^{\alpha}}(q) \tag{4.19}\]
and
\[a(u)=a(p)+(x^{i}-x_{0}^{i})g_{i}(u)+(y^{\alpha}-y_{0}^{\alpha})g_{\alpha}(u). \tag{4.20}\]
We apply \(\theta\) to (4.20) to obtain
\[\theta(a)(u)=\theta(x^{i})(u)g_{i}(u)+(x^{i}-x_{0}^{i})\theta(g_{i})(u)+ \theta(y^{\alpha})(u)g_{\alpha}(u)+(y^{\alpha}-y_{0}^{\alpha})\theta(g_{ \alpha})(u),\]
so that, evaluating this expression at \(p\) and using (4.19), we have
\[\theta(a)(p)=\theta(x^{i})(p)\frac{\partial\rho_{*}a}{\partial x^{i}}(p)+ \theta(y^{\alpha})(p)\frac{\partial\rho_{*}a}{\partial y^{\alpha}}(p). \tag{4.21}\]
It is readily seen that a change of coordinates does not affect this expression. We define then a vector field \(V_{\rho}\) on \(U\) by setting
\[V_{\rho}(x^{i},y^{\alpha})\equiv\big{(}(x^{i},y^{\alpha}),(\theta(x^{i})(u), \theta(y^{\alpha})(u))\big{)} \tag{4.22}\]
for \(\rho(u)=(x^{i},y^{\alpha})\). It follows that \(V_{\rho}|_{U}\) is independent of the chart, so that the collection of these objects defines a vector field \(V\in\mathfrak{X}(X)\). Now, by definition, the Lie derivative with respect to \(V\) in a local chart \((U,\rho)\) of \(a\in\mathcal{H}\) is
\[\begin{split}\mathcal{L}_{V}a|_{U}&=D(a\circ\rho^{-1})(x^{i},y^{\alpha})V_{\rho}(x^{i},y^{\alpha})\\ &=\frac{\partial}{\partial x^{i}}(a\circ\rho^{-1})(x^{i},y^{\alpha})\theta(x^{i})(x^{i},y^{\alpha})+\frac{\partial}{\partial y^{\alpha}}(a\circ\rho^{-1})(x^{i},y^{\alpha})\theta(y^{\alpha})(x^{i},y^{\alpha})\\ &=\theta(a)(u),\end{split} \tag{4.23}\]
and the claim follows. The proof is complete.
**Remark 4.7**.: This lemma shows that it suffices to have a derivation on (certain) subalgebras of \(\mathcal{C}^{\infty}(\mathbb{R}_{0}^{n}\times\mathbb{R}^{n})\) to determine a vector field on \(\mathbb{R}_{0}^{n}\times\mathbb{R}^{n}\). We will find use for this fact in the proof of Lemma 4.10 below. In addition, we will see that the properties of the subalgebra (in this case, homogeneity in \(x\) and classicality of order \(0\) in \(y\)) are reflected in the properties of the coefficients of the obtained vector field.
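As an elementary illustration of both the lemma and this remark (a toy example, not used in the sequel): the map \(\theta(a)\equiv x^{1}\,\partial_{y^{1}}a\) is a derivation of \(\mathcal{H}\), mapping \(\mathcal{H}^{m,l}\) into \(\mathcal{H}^{m+1,l-1}\), and it is the Lie derivative along \(V=x^{1}\,\partial_{y^{1}}\); the coefficient \(x^{1}\) of \(V\) is itself positively homogeneous of degree \(1\) in \(x\) and (trivially) \(y\)-classical of degree \(0\), in accordance with the phenomenon just described.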
**Remark 4.8**.: Notice that Duistermaat and Singer do not need this specialised result. Indeed in their setting they obtain derivations on the whole \(\mathcal{C}^{\infty}(\mathbb{S}^{*}X)\), which are given by vector fields by the standard theory.
We can now begin our analysis of the map \(\jmath\), which will lead to the following first main result.
**Theorem 4.9**.: Assume given an automorphism \(\jmath\colon{}^{LG}\diagup_{\mathcal{R}G}\to{}^{LG}\diagup_{\mathcal{R}G}\) preserving principal symbols, namely \(\jmath(P)-P\in{}^{LG^{m-1}}\diagup_{\mathcal{R}G}\) whenever \(P\in\mathit{LG}^{m}\). Then, \(\jmath\) is given by conjugation with some elliptic \(B\in\mathit{LG}^{s}\) for some \(s\in\mathbb{C}^{2}\).
For the sake of clarity, we split the proof into a series of lemmata.
**Lemma 4.10**.: Assume that, for some \(l\geq 1\), we have \(\jmath(P)-P\in\mathit{LG}^{m-l1}\) for any \(P\in\mathit{LG}^{m}\) with principal symbol \((p_{e},p_{\psi})\). Then \(\sigma_{pr}(\jmath(P)-P)\) only depends on the principal symbol of \(P\) and it is obtained as \((\beta_{e}p_{e},\beta_{\psi}p_{\psi})\) for two vector fields \(\beta_{e},\beta_{\psi}\). Moreover these vector fields are Hamiltonian, that is, there exist functions \(f_{e},f_{\psi}\) such that \(\beta_{\bullet}p_{\bullet}=H_{f_{\bullet}}p_{\bullet}\equiv\{f_{\bullet},p_{ \bullet}\}\).
Proof.: For a fixed \(l\geq 1\) consider, for all \(m\in\mathbb{Z}^{2}\), the map \(Z^{m}\colon\mathit{LG}^{m}\to{}^{LG^{m-l1}}\diagup_{LG^{m-(l+1)1}}\), \(Z^{m}(P)\equiv\jmath(P)-P\mod\mathit{LG}^{m-(l+1)1}\). \(Z^{m}\) only depends on the principal symbol of \(P\). Indeed, if \(Q=P+W\) for some \(W\in\mathit{LG}^{m-1}\), it follows that
\[\begin{split} Z^{m}(Q)&=Z^{m}(P)+Z^{m}(W)\\ &=\jmath(P)-P+\jmath(W)-W\mod\mathit{LG}^{m-(l+1)1}\\ &=Z^{m}(P)\end{split} \tag{4.24}\]
since, by assumption, \(\jmath(W)-W\in\mathit{LG}^{m-(l+1)1}\). Hence, by composition with the principal symbol map, \(Z^{m}\) descends to a map \(\beta^{m}\colon\Sigma G^{m}\to\Sigma G^{m-l1}\). We show that \(\beta^{m}\) (or rather the direct sum \(\beta=\bigoplus_{m\in\mathbb{Z}^{2}}\beta^{m}\)) is a bi-derivation of the bi-algebra \(\Sigma G\). To this end, consider \(Z(PQ)\) for some operators \(P\in\mathit{LG}^{m},Q\in\mathit{LG}^{k}\). Recalling the algebraic properties of \(\jmath\), we have by definition,
\[\begin{split} Z^{m+k}(PQ)&=\jmath(PQ)-PQ\mod \mathit{LG}^{m+k-l1}\\ &=\jmath(P)\jmath(Q)-\jmath(P)Q+\jmath(P)Q-PQ\mod\mathit{LG}^{m+ k-l1}\\ &=\jmath(P)Z^{k}(Q)+Z^{m}(P)Q\mod\mathit{LG}^{m+k-l1}\\ &=Z^{m}(P)Z^{k}(Q)+PZ^{k}(Q)+Z^{m}(P)Q\mod\mathit{LG}^{m+k-l1}\\ &=PZ^{k}(Q)+Z^{m}(P)Q\mod\mathit{LG}^{m+k-l1},\end{split}\]
where we noticed that \(Z^{m}(P)Z^{k}(Q)\in\mathit{LG}^{m-l1}\cdot\mathit{LG}^{k-l1}\subset\mathit{LG} ^{m+k-2l1}\subset\mathit{LG}^{m+k-l1}\). Taking principal symbols gives the Leibniz rule. Similarly, for \(Z[P,Q]\) we obtain
\[Z^{m+k-1}[P,Q]=[Z^{m}(P),Q]+[P,Z^{k}(Q)]\mod\mathit{LG}^{m+k-(l+1)1},\]
so that \(\beta\) is a derivation with respect to the Poisson bracket. Therefore, \(\beta\) acts as a bi-derivation on the space of principal symbols \(\Sigma G\). Keeping in mind the pairs picture of Proposition 2.16, denote the action of \(\beta\) as
\[\beta(p_{\psi},p_{e})=(\beta_{\psi}p_{\psi},\beta_{e}p_{e}). \tag{4.25}\]
Similarly to (4.24), one sees that \(\beta_{\psi}p_{\psi}\), respectively \(\beta_{e}p_{e}\), only depends on the component \(p_{\psi}\), respectively \(p_{e}\), of the principal symbol, and it holds true that \(\sigma_{\psi}^{m_{\psi}-l}(\beta_{e}p_{e})=\sigma_{e}^{m_{e}-l}(\beta_{\psi}p_ {\psi})\). We can write, more explicitly,
\[\begin{split}\beta_{\psi}p_{\psi}&=\sigma_{\psi}^{ m_{\psi}-l}(\beta\tilde{p})\\ \beta_{e}p_{e}&=\sigma_{e}^{m_{e}-l}(\beta\tilde{p} ).\end{split} \tag{4.26}\]
Applying Lemma 4.6, we obtain that both \(\beta_{\psi}\) and \(\beta_{e}\) are given by vector fields on \(\mathbb{R}^{n}\times\mathbb{R}^{n}_{0}\) and \(\mathbb{R}^{n}_{0}\times\mathbb{R}^{n}\), respectively. We have then
\[\begin{split}\beta_{\psi}&=\gamma_{\psi}^{i}\frac{ \partial}{\partial x^{i}}+\rho_{k}^{\psi}\frac{\partial}{\partial\xi_{k}},\\ \beta_{e}&=\gamma_{e}^{i}\frac{\partial}{\partial x ^{i}}+\rho_{k}^{e}\frac{\partial}{\partial\xi_{k}},\end{split} \tag{4.27}\]
where, by definition, \(\gamma_{\bullet}^{i}=\beta_{\bullet}x^{i},\rho_{k}^{\bullet}=\beta_{\bullet} \xi_{k}\). In particular, by definition of \(\beta\), it holds true that
\[\begin{split}\gamma_{\psi}^{i}(x,\xi)&=\beta_{\psi }x^{i}\in SG_{cl(x)}^{(-l),1-l},\\ \rho_{k}^{\psi}(x,\xi)&=\beta_{\psi}\xi_{k}\in SG_{ cl(x)}^{(1-l),-l},\\ \gamma_{e}^{i}(x,\xi)&=\beta_{e}x^{i}\in SG_{cl( \xi)}^{-l,(1-l)},\\ \rho_{k}^{e}(x,\xi)&=\beta_{e}\xi_{k}\in SG_{cl( \xi)}^{1-l,(-l)},\end{split} \tag{4.28}\]
so that the components of the obtained vector fields mirror the extra properties of the algebra of functions from which they are derived. Recall also that \(\{x^{i},x^{j}\},\{x^{i},\xi_{j}\}\) and \(\{\xi_{i},\xi_{j}\}\) are all constant, hence, if we apply \(\beta_{\bullet}\) to them we obtain \(0\). On the other hand, by using the derivation property, we see that it must hold true that
\[\frac{\partial\gamma_{\bullet}^{i}}{\partial\xi_{j}}=\frac{\partial\gamma_{ \bullet}^{j}}{\partial\xi_{i}},\quad\frac{\partial\rho_{i}^{\bullet}}{ \partial x^{j}}=\frac{\partial\rho_{j}^{\bullet}}{\partial x^{i}},\quad\frac{ \partial\gamma_{\bullet}^{i}}{\partial x^{j}}=-\frac{\partial\rho_{j}^{ \bullet}}{\partial\xi^{i}}. \tag{4.29}\]
But this is the same as saying that the (symplectic) dual \(1\)-form to \(\beta_{\bullet}\) is closed, hence locally equal to \(\mathrm{d}f_{\bullet}\) for some smooth \(f_{\bullet}\). Locally, we have then shown \(\beta_{\bullet}=H_{f_{\bullet}}\), the Hamiltonian vector field defined by \(f_{\bullet}\). The proof is complete. QED
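For clarity, we spell out the correspondence used in the last step, in the sign convention \(H_{f}=\partial_{\xi_{k}}f\,\partial_{x^{k}}-\partial_{x^{k}}f\,\partial_{\xi_{k}}\) (any other convention merely changes \(f_{\bullet}\) by a sign). The \(1\)-form dual to \(\beta_{\bullet}\) is

\[\gamma_{\bullet}^{k}\,\mathrm{d}\xi_{k}-\rho_{k}^{\bullet}\,\mathrm{d}x^{k},\]

and a direct computation shows that its exterior derivative vanishes precisely when the three relations (4.29) hold. Writing it locally as \(\mathrm{d}f_{\bullet}\) gives \(\gamma_{\bullet}^{k}=\partial_{\xi_{k}}f_{\bullet}\) and \(\rho_{k}^{\bullet}=-\partial_{x^{k}}f_{\bullet}\), that is, \(\beta_{\bullet}=H_{f_{\bullet}}\).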
The next step consists in establishing under which conditions \(\jmath\) is given by conjugation with an \(SG\Psi\mathrm{DO}\) at the level of principal symbols.
**Lemma 4.11**.: There exists \(B\in LG^{s}\), \(s\in\mathbb{C}^{2}\), such that \(\sigma_{pr}(\jmath(P))=\sigma_{pr}(BPB^{-1})\) if and only if \(\sigma_{pr}(B)=(e^{-\,\mathrm{i}\,f_{e}},e^{-\,\mathrm{i}\,f_{\psi}})\) for functions \(f_{\bullet}\) such that \(\beta_{\bullet}=H_{f_{\bullet}}\).
Proof.: Notice first that \(BPB^{-1}-P=[B,P]B^{-1}\), so that taking principal symbols yields
\[\begin{split}\sigma_{pr}(BPB^{-1}-P)&=\sigma_{pr}([B, P])\sigma_{pr}(B^{-1})\\ &=\left(\frac{1}{\operatorname{i}b_{e}}\{b_{e},p_{e}\},\frac{1}{ \operatorname{i}b_{\psi}}\{b_{\psi},p_{\psi}\}\right)\\ &=\left(H_{\operatorname{i}\log b_{e}}(p_{e}),H_{\operatorname{i} \log b_{\psi}}(p_{\psi})\right).\end{split} \tag{4.30}\]
Remark that, for an invertible \(b\), both logarithms exist. Then applying Lemma 4.10 with \(l=1\) gives that \(\sigma_{pr}(BPB^{-1}-P)=\sigma_{pr}(\jmath(P)-P)=(H_{f_{e}}(p_{e}),H_{f_{\psi}} (p_{\psi}))\) for some smooth \(f_{e},f_{\psi}\) if and only if
\[\begin{cases}b_{\psi}=e^{-\operatorname{i}f_{\psi}}\\ b_{e}=e^{-\operatorname{i}f_{e}}\end{cases} \tag{4.31}\]
are homogeneous in \(\xi\), respectively \(x\), of degree \(s_{\psi}\), respectively \(s_{e}\). Using Euler's equation, we can rewrite this as
\[\begin{cases}\xi_{k}\frac{\partial f_{\psi}}{\partial\xi_{k}}=\operatorname{i }s_{\psi},\\ x^{j}\frac{\partial f_{e}}{\partial x^{j}}=\operatorname{i}s_{e}.\end{cases} \tag{4.32}\]
Recalling that, up to signs, \(\partial_{\xi_{k}}f_{\psi}\) and \(\partial_{x^{j}}f_{e}\) are the components \(\gamma_{\psi}^{k}\) and \(\rho_{j}^{e}\) of the vector fields \(\beta_{\bullet}\), we see that the claim is equivalent to \(\xi_{k}\gamma_{\psi}^{k}\) and \(x^{j}\rho_{j}^{e}\) being constant. We check this directly by computing the derivatives of these expressions w.r.t. \(x^{r}\) and \(\xi_{r}\). Considering, for instance, the vector field \(\beta_{\psi}\), we have \(\partial_{x^{j}}\gamma_{\psi}^{i}=-\partial_{\xi^{i}}\rho_{j}^{\psi}\) and \(\partial_{\xi_{j}}\gamma_{\psi}^{i}=\partial_{\xi_{i}}\gamma_{\psi}^{j}\), in view of the symmetry relations of (4.29). Then, recalling the homogeneities of (4.28) and that \(l=1\), we obtain
\[\begin{split}\frac{\partial(\xi_{k}\gamma_{\psi}^{k})}{\partial x ^{r}}&=\xi_{k}\frac{\partial\gamma_{\psi}^{k}}{\partial x^{r}}=- \xi_{k}\frac{\partial\rho_{r}^{\psi}}{\partial\xi_{k}}=0,\\ \frac{\partial(\xi_{k}\gamma_{\psi}^{k})}{\partial\xi_{r}}& =\gamma_{\psi}^{r}+\xi_{k}\frac{\partial\gamma_{\psi}^{k}}{ \partial\xi_{r}}=\gamma_{\psi}^{r}+\xi_{k}\frac{\partial\gamma_{\psi}^{r}}{ \partial\xi_{k}}=0.\end{split} \tag{4.33}\]
In a completely analogous fashion one proves the corresponding results for \(\beta_{e}\). This concludes the proof. QED
**Lemma 4.12**.: Assume that for some \(l>1\) we have \(\jmath(P)-P\in\mathit{LG}^{m-l1}\) for any \(P\in\mathit{LG}^{m}\). Then there exists \(C\in\mathit{LG}^{(1-l)1}\) such that \((I-C)\circ\jmath(P)\circ(I-C)^{-1}-P\in\mathit{LG}^{m-(l+1)1}\).
Proof.: We apply Lemma 4.10 again by defining directly the Hamiltonian functions of the vector fields. Recall that \(\gamma_{\bullet}^{j}\) and \(\rho_{k}^{\bullet}\) are the components of the vector field \(\beta_{\bullet}\) determining the action of \(\jmath\) on principal symbols. Using these functions, set
\[\begin{cases}c_{\psi}\equiv\frac{1}{1-l}\xi_{j}\gamma_{\psi}^{j}\in\mathit{SG }_{\mathit{cl}(x)}^{(1-l),1-l},\\ c_{e}\equiv\frac{1}{l-1}x^{j}\rho_{j}^{e}\in\mathit{SG}_{\mathit{cl}(\xi)}^{ 1-l,(1-l)}.\end{cases} \tag{4.34}\]
With these definitions, we see that \(H_{c_{\bullet}}=\beta_{\bullet}\). Indeed, for instance
\[\begin{split}\frac{\partial c_{\psi}}{\partial x^{k}}& =\frac{1}{1-l}\xi_{j}\frac{\partial\gamma_{\psi}^{j}}{\partial x^ {k}}=\frac{1}{l-1}\xi_{j}\frac{\partial\rho_{k}^{\psi}}{\partial\xi_{j}}\\ &=\frac{1}{l-1}(1-l)\rho_{k}^{\psi}=-\rho_{k}^{\psi},\\ \frac{\partial c_{\psi}}{\partial\xi_{k}}&=\frac{1}{ 1-l}\left[\gamma_{\psi}^{k}+\xi_{j}\frac{\partial\gamma_{\psi}^{j}}{\partial \xi_{k}}\right]\\ &=\frac{1}{1-l}\left[\gamma_{\psi}^{k}+\xi_{j}\frac{\partial \gamma_{\psi}^{k}}{\partial\xi_{j}}\right]=\frac{1}{1-l}\left[\gamma_{\psi}^{ k}-l\gamma_{\psi}^{k}\right]\\ &=\gamma_{\psi}^{k},\end{split} \tag{4.35}\]
where we have used again (4.28) and (4.29). With the aim of associating an operator with \(c=(c_{e},c_{\psi})\), we verify that, indeed, \(c\in\Sigma G^{(1-l)1}\), namely, that \(\sigma_{e}^{1-l}(c_{\psi})=\sigma_{\psi}^{1-l}(c_{e})\). Computing these symbols, we have to prove that
\[\xi_{j}\gamma_{\psi,1-l}^{j}=-x^{j}\rho_{j}^{e,1-l}, \tag{4.36}\]
where \(\gamma_{\psi,1-l}^{j}\), respectively \(\rho_{j}^{e,1-l}\), is the \((1-l)\)-homogeneous component in the asymptotic expansion of \(\gamma_{\psi}^{j}\), respectively \(\rho_{j}^{e}\). To this end, consider the third relation in (4.29), multiply it by \(\xi_{k}\) and take the trace to obtain
\[\xi_{k}\frac{\partial\gamma_{\psi}^{k}}{\partial x^{j}}=-\xi_{k}\frac{\partial\rho_{j}^{\psi}}{\partial\xi_{k}}=(l-1)\rho_{j}^{\psi}. \tag{4.37}\]
Here we again took advantage of the homogeneity properties in (4.28). The first and last sides of (4.37) are \(x\)-classical symbols, so we can expand them in \(x\)-homogeneous functions. Since asymptotic expansions are uniquely determined, we find that the two must be equal term by term, so the top order relation reads
\[\xi_{k}\frac{\partial\gamma_{\psi,1-l}^{k}}{\partial x^{j}}=(l-1)\rho_{j}^{ \psi,-l}. \tag{4.38}\]
Multiplying by \(x^{j}\) (namely, taking the trace of the matrix \((x^{r}\xi_{k}\partial_{x^{j}}\gamma_{\psi,1-l}^{k})\)), we obtain by homogeneity
\[(l-1)x^{j}\rho_{j}^{\psi,-l}=(1-l)\xi_{j}\gamma_{\psi,1-l}^{j}. \tag{4.39}\]
On the other hand, considering \(p=\xi_{r}\) as a symbol and applying \(\beta\) gives
\[\begin{split}(\beta p)_{\psi}&=\rho_{r}^{\psi}\in \mathit{SG}_{cl(x)}^{(1-l),-l}\\ (\beta p)_{e}&=\rho_{r}^{e}\in\mathit{SG}_{cl(\xi)}^{ 1-l,(-l)}\end{split}\implies\rho_{r}^{\psi,-l}=\rho_{r}^{e,1-l}, \tag{4.40}\]
where we computed the principal symbol of \(\beta p\). Since (4.40) holds true for any \(r\), it follows that we can substitute it in (4.39) to obtain (4.36), as required.
Let then \(C\in\mathit{LG}^{(1-l)1}\) be an operator whose principal symbol is \(\mathrm{i}\,c\) and let \((I-C)^{\#}\) be a parametrix of \((I-C)\), that is
\[(I-C)(I-C)^{\#}=(I-C)^{\#}(I-C)=I+R\]
for some \(R\) smoothing. Recall here that \(l>1\) so \(C\) has negative integer order and thus \(I-C\in\mathit{LG}^{0}\) with \(\sigma_{pr}^{0}(I-C)=1\). Therefore, the parametrix of \(I-C\) exists and has a compact remainder since \(R\) has kernel in the Schwartz class. Moreover,
\((I-C)^{\#}=I-C^{\prime}\) for some \(C^{\prime}\in LG^{-1}\). Let then \(P\in LG^{m}\). By commuting \(P\) and \(I-C\) and noticing that \([I,P]=0\), we obtain
\[\begin{split}(I-C)^{\#}P(I-C)&=(I-C)^{\#}\left((I-C )P+[P,I-C]\right)\\ &=(I-C)^{\#}(I-C)P+(I-C)^{\#}[C,P]\\ &=P+[C,P]\mod LG^{m-(l+1)1}.\end{split} \tag{4.41}\]
Thence, the principal symbol of \((I-C)^{\#}P(I-C)-P\) equals the principal symbol of the commutator, namely, the Poisson bracket of the principal symbols. Specifically, we have
\[\begin{split}\sigma_{pr}\left((I-C)^{\#}P(I-C)-P\right)& =-\operatorname{i}H_{\operatorname{i}c}(p)=H_{c}(p)\\ &=\{c,p\}=\beta(p)\\ &=\sigma_{pr}(j(P)-P).\end{split} \tag{4.42}\]
It follows that \(\jmath(P)-P=(I-C)^{\#}P(I-C)-P\mod LG^{m-(l+1)1}\), so in particular there exists \(Q\in LG^{m-(l+1)1}\) such that \(\jmath(P)=(I-C)^{\#}P(I-C)+Q\). Conjugating with \(I-C\) gives now
\[\begin{split}(I-C)\jmath(P)(I-C)^{\#}&=(I-C)(I-C )^{\#}P(I-C)(I-C)^{\#}+(I-C)Q(I-C)^{\#}\\ &=P+RP+PR+RPR+(I-C)Q(I-C)^{\#}.\end{split} \tag{4.43}\]
Now, on the one hand \(PR\) and \(RP\) are smoothing, since \(R\in\mathcal{R}G\), on the other hand we have, thanks to Theorem 2.47,
\[(I-C)Q(I-C)^{\#}\in LG^{m-(l+1)1}. \tag{4.44}\]
Thence, it holds true that \((I-C)\jmath(P)(I-C)^{\#}-P\in LG^{m-(l+1)1}\). The proof is complete.
Proof of Theorem 4.9.: Exploiting Lemmas 4.11 and 4.12 we can set up an inductive procedure which constructs a sequence of pseudo-differential operators \(B_{0},C_{1},C_{2},\dots\), where:
1. \(B_{0}\) is elliptic of some order \(s\in\mathbb{C}\) and \(C_{j}\in LG^{-j1}\);
2. conjugation with \(B_{l}=(I-C_{l})\dots(I-C_{1})B_{0}\) gives an automorphism of \(\nicefrac{{LG}}{{\mathcal{R}G}}\), approximating \(\jmath\) up to order \(m-(l+2)1\).
A computation of the asymptotic expansion of the symbol of \(B_{l}\) shows that \((I-C_{l+1})B_{l}\) only changes the symbol up to \(s-(l+1)1\). There exists, then, an elliptic operator \(B\in LG^{s}\), such that, for each \(l\), we have \(B-B_{l}\in LG^{s-(l+1)1}\). Thus, the difference \(B\jmath(P)B^{-1}-P\) is smoothing, in view of said asymptotic expansion. This proves the claim.
The results proven thus far enable us to prove the main result of this section, namely the characterisation of the SGOPIs of the formal symbol algebra \(\mathcal{B}G\).
**Theorem 4.13** (OPIs of the formal \(SG\)-algebra).: Let \(\imath\colon\mathcal{B}G\to\mathcal{B}G\) be an \(SG\)OPI on the formal symbol algebra \(\mathcal{B}G\). There exists then an elliptic SGFIO \(A\), of type \(\mathcal{Q}_{gen}\), such that \(\imath(P+\mathcal{R}G)=A^{\#}PA+\mathcal{R}G\) for any \(P\in LG^{m}\).
Proof.: On the one hand, we know that there exists an elliptic SGFIO \(F\) of type \(\mathcal{Q}_{gen}\) such that \(P\mapsto F\imath(P)F^{\#}\) is an automorphism of \(\mathcal{B}G\) preserving principal symbols. On the other hand, Theorem 4.9 guarantees that every such automorphism is given by conjugation with an elliptic \(B\in\mathit{LG}^{s}\) for some \(s\in\mathbb{C}^{2}\). Therefore, we see that, \(\mathrm{mod}\ \mathcal{R}G\), \(F\imath(P)F^{\#}=BPB^{\#}\), so that setting \(A\equiv B^{\#}F\) gives that \(\imath(P)=A^{\#}PA\). This concludes the proof. QED
### Lifting the characterisation to \(\mathit{LG}\)
We now turn to the problem of lifting this characterisation to the whole algebra. We notice first that we are able to take advantage of the Eidelheit Lemma of [10] without any further hassle.
**Lemma 4.14** (Eidelheit-type Lemma).: Given an algebra isomorphism \(\varphi\colon\mathcal{R}G\to\mathcal{R}G\) there exists a topological isomorphism \(V\colon\mathcal{S}\to\mathcal{S}\) such that \(\varphi(P)=VPV^{-1}\) for any \(P\in\mathcal{R}G\).
Proof.: We show that the assumptions of Lemma 3 in [10] hold true for \(\mathcal{E}=\mathcal{\tilde{E}}=\mathcal{S},\mathcal{U}=\mathcal{\tilde{U}}= \mathcal{R}G\). Indeed, \(\mathcal{S}\) is an infinite dimensional Frechet space and \(\mathcal{R}G\) comprises linear bounded operators on \(\mathcal{S}\). Moreover, for \(u,v\in\mathcal{S}\), the rank \(1\) operator \(u\otimes v\) lies in \(\mathcal{R}G\). Then, picking a sequence \(v_{j}\) converging to \(v\) in the weak topology, we see that \(u\otimes v_{j}\) converges to \(u\otimes v\) in the operator topology, so that \(\mathcal{S}^{\prime}=\mathcal{F}=\mathcal{\tilde{F}}\) in the notation of [10]. The claim follows then directly from the quoted result. QED
On the other hand, Lemma 4 of [10] is not as straightforward to generalise to \(SG\)-operators. Indeed, when looking at the proof there, one is confronted with the problem of choosing a function \(u\in\mathcal{S}(\mathbb{R}^{n})\) with the property that \(u(x)\neq u(y)\) whenever \(x\neq y\), and it is not clear whether such a choice is at all possible. We set out then to prove directly that the composition of the Eidelheit isomorphism with the FIO coming from the formal algebra is a multiple of the identity up to some operator with Schwartz kernel. Consider to this end the composition \(E=VA\) of the Eidelheit isomorphism \(V\) with the FIO \(A\) coming from Theorem 4.13.
**Lemma 4.15**.: \(E\colon\mathcal{S}\to\mathcal{S}\) is bounded and extends to a bounded operator \(E\colon\mathit{HG}_{k1}\to L^{2}\) for some \(k\in\mathbb{N}\).
Proof.: \(E\) is clearly bounded as an operator \(\mathcal{S}\to L^{2}\). Then, there is a finite set of semi-norms \(\{p_{0},\dots,p_{n}\}\) on \(\mathcal{S}\) which estimate \(\|Eu\|_{L^{2}}\), namely, \(\|Eu\|_{L^{2}}\leq\max_{i\in\{0,\dots,n\}}p_{i}(u)\). Thus, there is an integer \(k\) so that \(\|Eu\|_{L^{2}}\lesssim\|u\|_{\mathit{HG}_{k1}}\). This shows that \(E\) extends as a bounded operator \(\mathit{HG}_{k1}\to L^{2}\). The proof is complete. QED
We choose now order reductions \(P,Q\), as in the proof of Lemma 4.5 and consider \(K\equiv ER^{-k}\) as an operator \(L^{2}\to L^{2}\), where we denote \(R=PQ\). Our goal is to prove that \(K\) is an \(SG\)-pseudo-differential operator of order \((0,0)\). For this, we look at commutators and use the characterisation of Schrohe [11] of \(SG\)-pseudo-differential operators on the weighted Sobolev spaces \(\mathit{HG}_{(l,k)}\). Here and later we write \(\mathrm{ad}\,K\) for the operator on \(\mathit{LG}\) acting by commutation with \(K\), namely \((\mathrm{ad}\,K)(P)=[K,P]\). We owe the idea of the following strategy to Ryszard Nest, whom we thank for the helpful suggestion. We start with the following easy lemma.
**Lemma 4.16**.: \(\mathrm{ad}\,K\) preserves the double filtration and \(K\) extends to a bounded operator \(K\colon\mathit{HG}_{r1}\to\mathit{HG}_{r1}\) for every \(r\in\mathbb{R}^{2}\).
Proof.: Fix \(r\in\mathbb{R}^{2}\) and consider \(v\in\mathit{HG}_{r1}\). Setting \(v_{0}\equiv\Lambda^{r1}v\) for \(\Lambda\) an elliptic \(SG\Psi\mathsf{DO}\) of order \(\mathbb{1}\), we have that \(v_{0}\in L^{2}\) and
\[Kv=K\Lambda^{-r1}v_{0}=K\Lambda^{-r1}K^{-1}Kv_{0}.\]
Remarking that \(\operatorname{ad}E\) preserves the double filtration of \(\mathit{LG}\), we see that
\[KPK^{-1}=\Lambda^{-l1}E\Lambda^{-k1}P\Lambda^{k1}E^{-1}\Lambda^{l1},\]
so that also \(\operatorname{ad}K\) preserves the double filtration. It follows directly that \((\operatorname{ad}K)(\Lambda^{-r1})\) has order \(-r\mathbb{1}\), and since \(Kv_{0}\in L^{2}\) by assumption, we have \(Kv\in\mathit{HG}_{r1}\).
**Proposition 4.17**.: \(K\) is a (not necessarily classical) \(SG\)-pseudo-differential operator of order \((0,0)\).
Proof.: We prove that for every \(\alpha,\beta\in\mathbb{N}^{n}\) there exists an operator \(R^{\alpha}_{\beta}\in\mathit{LG}^{-|\alpha|,-|\beta|}\) such that, continuously,
\[(\operatorname{ad}M_{x})^{\alpha}(\operatorname{ad}\partial)^{\beta}K=R^{ \alpha}_{\beta}K\colon\mathit{HG}_{r}\to\mathit{HG}_{r+(|\alpha|,|\beta|)}, \tag{4.45}\]
where \(M_{x^{j}}\) is the multiplication operator by \(x^{j}\). This is known by [10] to be equivalent to \(K\in\mathit{LG}^{0}\). We show first that, for every \(\beta\in\mathbb{N}^{n}\), there exists \(Q_{\beta}\in\mathit{LG}^{0,-|\beta|}\) such that
\[(\operatorname{ad}\partial)^{\beta}K=Q_{\beta}K. \tag{4.46}\]
We argue by induction on \(|\beta|\).
For \(|\beta|=1\) we have
\[(\operatorname{ad}\partial_{x^{j}})K =[\partial_{x^{j}},K]=(\partial_{x^{j}}-K\partial_{x^{j}}K^{-1})K\] \[=Q_{j}K.\]
Here, \(Q_{j}\in\mathit{LG}^{0,-1}\), since \(\operatorname{ad}K\) is an automorphism preserving the principal symbol. So the base step holds true.
Assume the claim holds true for every \(|\beta|\leq r\) and consider then \(\gamma\in\mathbb{N}^{n}\) with \(|\gamma|=r+1\). Then \(\gamma=\beta+\mathbb{1}_{j}\) for some \(j\in\{1,\ldots,n\}\) and some \(\beta\) with \(|\beta|=r\). We write
\[(\operatorname{ad}\partial)^{\gamma}K =(\operatorname{ad}\partial_{x^{j}})\left[(\operatorname{ad} \partial)^{\beta}K\right]\] \[=(\operatorname{ad}\partial_{x^{j}})(Q_{\beta}K)=(\operatorname{ ad}\partial_{x^{j}})(Q_{\beta})K+Q_{\beta}(\operatorname{ad}\partial_{x^{j}})K\] \[=[\partial_{x^{j}},Q_{\beta}]K+Q_{\beta}Q_{j}K,\]
having used the inductive hypothesis twice and the properties of \(\operatorname{ad}\). Now, \(Q_{\beta}Q_{j}=\widetilde{Q}_{\beta j}\in\mathit{LG}^{0,-|\beta|-1}\) by composition, on the other hand \(\partial_{x^{j}}Q_{\beta},Q_{\beta}\partial_{x^{j}}\in\mathit{LG}^{1,-|\beta|}\). However, they have the same principal symbol, so that in view of 5. in Proposition 2.16 we have \([\partial_{x^{j}},Q_{\beta}]=\tilde{Q}_{\beta j}\in\mathit{LG}^{0,-|\beta|-1}\). It follows now that
\[(\operatorname{ad}\partial)^{\gamma}K=(\tilde{Q}_{\beta j}+\widetilde{Q}_{ \beta j})K\equiv Q_{\gamma}K, \tag{4.47}\]
with \(Q_{\gamma}\in\mathit{LG}^{0,-|\beta|-1}=\mathit{LG}^{0,-|\gamma|}\). By induction, then, (4.46) holds true for any \(\beta\in\mathbb{N}^{n}\), as claimed.
We now prove (4.45) for any \(\alpha,\beta\in\mathbb{N}^{n}\). For \(|\alpha|=0\) there is nothing to prove. For clarity's sake, we spell out the case \(\alpha=\mathbb{1}_{j}\). Then,
\[\operatorname{ad}x^{j}(\operatorname{ad}\partial)^{\beta}K=\operatorname{ad} x^{j}(Q_{\beta}K)=\left[x^{j},Q_{\beta}\right]K+Q_{\beta}\left[x^{j},K\right].\]
Similarly as above \([x^{j},K]=(x^{j}-Kx^{j}K^{-1})K=\widetilde{P}_{j}K\) with \(\widetilde{P}_{j}\in\mathit{LG}^{-1,0}\), so that \(Q_{\beta}\widetilde{P}_{j}=\widetilde{R}_{\beta}^{j}\in\mathit{LG}^{-1,-|\beta|}\) by composition. On the other hand, \([x^{j},Q_{\beta}]=\overline{R}_{\beta}^{j}\in\mathit{LG}^{-1,-|\beta|}\) in view of the observation above about the order of commutators in the \(SG\)-calculus.
Assume now, inductively, that \((\operatorname{ad}x)^{\alpha}(\operatorname{ad}\partial)^{\beta}(K)=P_{\beta}^{\alpha}K\) with \(P_{\beta}^{\alpha}\in\mathit{LG}^{-|\alpha|,-|\beta|}\) holds true for any \(\alpha\in\mathbb{N}^{n}\) such that \(|\alpha|\leq m\), and let \(\gamma=\alpha+\mathbb{1}_{j}\) for some \(j\). Then, using the properties of \(\operatorname{ad}\) as a derivation,
\[\begin{split}(\operatorname{ad}x)^{\gamma}(\operatorname{ad} \partial)^{\beta}(K)&=\operatorname{ad}x^{j}(\operatorname{ad}x )^{\alpha}(\operatorname{ad}\partial)^{\beta}(K)\\ &=\operatorname{ad}x^{j}(P_{\beta}^{\alpha}K)=\left[x^{j},P_{ \beta}^{\alpha}\right]K+P_{\beta}^{\alpha}\left[x^{j},K\right]\\ &=\left(\left[x^{j},P_{\beta}^{\alpha}\right]+P_{\beta}^{\alpha}P ^{j}\right)K,\end{split} \tag{4.48}\]
with \(P_{\beta}^{\alpha}\) given by inductive hypothesis and \(P^{j}\) given by the previous step with \(|\beta|=0\). Now, \(\widetilde{P}_{\beta}^{\alpha j}\equiv\left[x^{j},P_{\beta}^{\alpha}\right] \in LG^{-|\alpha|-1,-|\beta|}\) by assumption and the properties of \([\,,]\). On the other hand, by composition, \(\overline{P}_{\beta}^{\alpha j}\equiv P_{\beta}^{\alpha}P^{j}\in LG^{-|\alpha |-1,-|\beta|}\) as well. Thus, setting
\[P_{\beta}^{\alpha+1_{j}}\equiv\widetilde{P}_{\beta}^{\alpha j}+\overline{P}_{ \beta}^{\alpha j} \tag{4.49}\]
it follows \(P_{\beta}^{\gamma}\in LG^{-|\gamma|,-|\beta|}\). The induction is complete.
Armed with this relation, it is now easy to show that the required mapping properties hold true. Indeed, \(K\) maps \(HG_{r}\to HG_{r}\) continuously for each \(r\in\mathbb{R}^{2}\) by Lemma 4.16. On the other hand, \(P_{\beta}^{\alpha}\in LG^{-|\alpha|,-|\beta|}\) gives exactly that \(P_{\beta}^{\alpha}\colon HG_{r}\to HG_{r+(|\alpha|,|\beta|)}\) continuously for each \(r\in\mathbb{R}^{2}\). The composition \(P_{\beta}^{\alpha}K\) satisfies then the same properties. Thus, we have proven the characterization (4.45) and \(K\) is a pseudo-differential operator of \(SG\)-type of order \(0,0\). QED
Notice, in addition, that, while Lemmas 4.15 and 4.16, together with Proposition 4.17, imply that \(E\) is a pseudo-differential operator as well, its order is not necessarily integral. Therefore, \(\operatorname{ad}E\) is not an inner automorphism, cf. also [10]. This is the reason why, in general, we cannot expect to obtain an FIO of integer order.
If we start with the inverse of the Eidelheit isomorphism, \(V^{-1}\), we obtain, by the same argument, another pseudo-differential operator, \(\tilde{E}\). They satisfy \(EP-PE\in\mathcal{R}G,\tilde{E}P-P\tilde{E}\in\mathcal{R}G\), for any \(P\in LG^{m}\). We notice that this means, in particular, that \(E\) almost commutes with Shubin operators since \(\Gamma^{m}(\mathbb{R}^{n})\subset SG^{m,m}\). Notice that here we are disregarding classicality, a notion which has different meanings for \(\Gamma\) and \(SG\). However, Lemma 4.18 below, suggested in a private communication by Elmar Schrohe, is a statement about smooth, bounded functions on \(\mathbb{R}^{2n}\) which does not need classicality in any sense. Therefore, it can be proven almost exactly as in the Master thesis of Robert Hesse [11]. We reproduce here the proof since the aforementioned work may not be readily available.
**Lemma 4.18**.: Let \(E,\tilde{E}\colon\mathcal{S}\to\mathcal{S}\) be \(\Psi\)DOs of \(SG\)-type, parametrices of each other, such that \([E,P],[\tilde{E},P]\in\mathcal{R}G\) for each \(P\in LG\). Then, \(E=cI+R\) for some \(c\in\mathbb{C},R\in\mathcal{R}G\).
Proof.: First, notice that the conditions on \(E\) and \(\tilde{E}\) imply that their symbols \(e,\tilde{e}\) are of order at most \(0\) and their derivatives are rapidly decreasing. Indeed, \(\{e,p\}\in\mathcal{S}\,\forall p\in SG\iff\partial_{x}e,\partial_{\xi}e\in \mathcal{S}\), and it follows that \(e\) is bounded. Moreover, \(\nabla e\) is a conservative vector field with potential \(e\). Namely, for each path \(\gamma\colon[a,b]\to\mathbb{R}^{2n}\)
we have
\[e(\gamma(b))-e(\gamma(a))=\int_{a}^{b}\nabla(e)(\gamma(s))\cdot\dot{\gamma}(s)\,\mathrm{d}s. \tag{4.50}\]
Fix now some point \(z\) in \(\mathbb{S}^{2n-1}\) (an oriented direction in \(\mathbb{R}^{2n}\)) so that, for \(1<t_{1}<t_{2}\), it holds true
\[e(t_{2}z)-e(t_{1}z)=\int_{t_{1}}^{t_{2}}\nabla(e)(sz)\cdot z\,\mathrm{d}s. \tag{4.51}\]
By assumption, \(\nabla e\) has rapidly decaying components. It follows that, for any \(v\in\mathbb{R}^{2n}\), we have \(\left|\nabla e(v)\right|\lesssim_{N}\left\langle v\right\rangle^{-N}\) and, for each \(M\geq 2\), we can estimate the integral as
\[\left|\int_{t_{1}}^{t_{2}}\nabla e(sz)\cdot z\,\mathrm{d}s\right|\lesssim_{M} \left\langle s\right\rangle^{2-2M}|_{t_{1}}^{t_{2}}. \tag{4.52}\]
So, the integral converges to \(0\) uniformly in \(t_{2}\) as we take the limit \(t_{1}\to\infty\). We can, on the other hand, also estimate \(|e(t_{2}z)-e(t_{1}z)|\lesssim_{k}\left\langle t_{1}\right\rangle^{-k}\). In particular then, we can pass to the limit \(t_{2}\to\infty\) in this expression to obtain, for the radial limit \(l(z)\equiv\lim_{t\to\infty}e(tz)\), the bound
\[\left|l(z)-e(tz)\right|\lesssim_{k}\left\langle t\right\rangle^{-k}. \tag{4.53}\]
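For the reader's convenience, we unpack the elementary estimate behind (4.52) and (4.53); this is merely a rephrasing of the argument above, using only the rapid decay of \(\nabla e\), the fact that \(|z|=1\) and \(\langle sz\rangle=\langle s\rangle\) for \(s>0\):

\[\left|\int_{t_{1}}^{t_{2}}\nabla e(sz)\cdot z\,\mathrm{d}s\right|\leq\int_{t_{1}}^{t_{2}}\left|\nabla e(sz)\right|\mathrm{d}s\lesssim_{N}\int_{t_{1}}^{\infty}\langle s\rangle^{-N}\,\mathrm{d}s\lesssim_{N}\langle t_{1}\rangle^{1-N},\]

so that, given \(k\), choosing \(N=k+1\) and letting \(t_{2}\to\infty\) yields precisely the bound \(|l(z)-e(t_{1}z)|\lesssim_{k}\langle t_{1}\rangle^{-k}\) stated in (4.53).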
We claim now that \(l(z)\) does not depend on \(z\), that is, the radial limit is constant on the sphere \(\mathbb{S}^{2n-1}\). To see this, choose another \(w\neq z\) on \(\mathbb{S}^{2n-1}\) and assume, possibly after having applied an orthogonal transformation, that \(z=(1,0,\dots,0)\) and \(w=(\cos\alpha,\sin\alpha,0,\dots,0)\). We consider a family of paths \(\gamma_{t}\colon[0,\alpha]\to\mathbb{R}^{2n}\) given by \(\gamma_{t}(\theta)=(t\cos\theta,t\sin\theta,0,\dots,0)\). Pick an \(\varepsilon>0\). Using (4.53) for both directions \(z,w\) we know that there exists a \(T>1\) such that for all \(t\geq T\) we have \(\left|l(z)-e(tz)\right|<\varepsilon\) and \(\left|l(w)-e(tw)\right|<\varepsilon\). On the other hand, for each fixed \(t\geq T\), we have
\[e(tw)-e(tz)=\int_{\gamma_{t}}\nabla e\cdot\mathrm{d}s=\int_{0}^{\alpha} \nabla e(\gamma_{t}(\theta))\cdot\dot{\gamma}_{t}(\theta)\,\mathrm{d}\theta, \tag{4.54}\]
so that taking absolute values and noticing that \(\left|\gamma_{t}\right|=\left|\dot{\gamma}_{t}\right|=t\), we estimate, for each \(M\geq 2\),
\[\left|e(tz)-e(tw)\right|\leq\int_{0}^{\alpha}\left|\nabla e(\gamma_{t}(\theta ))\cdot\dot{\gamma}_{t}(\theta)\right|\mathrm{d}\theta\lesssim_{M}\frac{ \alpha t}{\left\langle t\right\rangle^{M}}, \tag{4.55}\]
where we again used the fact that \(\nabla e\) has rapidly decaying components. Clearly, for each \(M\), the right-hand side decays to zero. We collect then
\[\left|l(w)-l(z)\right|\leq\left|l(w)-e(tw)\right|+\left|e(tz)-l(z)\right|+ \left|e(tw)-e(tz)\right|\leq 3\varepsilon, \tag{4.56}\]
which shows that \(l(z)=l(w)\) as claimed. We let \(c\) be the constant value of \(l\) on \(\mathbb{S}^{2n-1}\) and look at the function \(f=e-c\). Clearly, \(f\) has rapidly decreasing derivatives. On the other hand, we can compute, for \(z\in\mathbb{S}^{2n-1}\), that
\[f(tz)=e(tz)-l(z)=-\int_{t}^{+\infty}\nabla e(sz)\cdot z\,\mathrm{d}s \tag{4.57}\]
and conclude, as before, that \(f(tz)\) is rapidly decreasing as a function of \(t\). As above, the convergence to zero is also uniform in \(\mathbb{S}^{2n-1}\) and we discover that \(f\) is rapidly decreasing itself. Summing up, we have proven that \(f\in\mathcal{S}(\mathbb{R}^{2n})\).
Now, set \(R=\operatorname{Op}(f)\in\mathcal{R}G\). Then, \(e=c+f\) implies \(E=cI+R\), and, since \(\tilde{E}\) is a parametrix for \(E\), we have, for some \(R^{\prime}\in\mathcal{R}G\),
\[(cI+R)\tilde{E}=I+R^{\prime},\implies c\tilde{E}=I\mod\mathcal{R}G. \tag{4.58}\]
Therefore, \(c\) must be different from \(0\), and we deduce that also \(\tilde{E}=\frac{1}{c}I+S\) for some \(S\in\mathcal{R}G\). This concludes the proof. QED
With all the above pieces in place, we can now state and prove our third main result.
**Theorem 4.19** (Characterisation of \(SG\)-order-preserving isomorphisms).: Let \(\imath\colon\mathit{LG}\to\mathit{LG}\) be an SGOPI. Then, there exists an invertible, classical SGFIO \(A\) of type \(\mathcal{Q}_{gen}\) such that, for all \(P\in\mathit{LG}\), we have
\[\imath(P)=A^{-1}PA. \tag{4.59}\]
Proof.: Consider \(E=FV^{-1}\) where \(V\) is the Eidelheit isomorphism of Lemma 4.14 and \(F\) the SGFIO obtained from Theorem 4.13 by considering the induced isomorphism on the formal symbol algebra \(\mathcal{B}G\). Then, by the above discussion, we have that \(E\colon\mathcal{S}(\mathbb{R}^{n})\to\mathcal{S}(\mathbb{R}^{n})\) is continuous. Moreover, it is an elliptic \(SG\Psi\mathrm{DO}\) of order \((0,0)\), with parametrix \(\tilde{E}=VF^{\#}\). Furthermore, both \(E\) and \(\tilde{E}\) commute \(\mod\mathcal{R}G\) with every \(P\in\mathit{LG}\). By Lemma 4.18, it follows that \(E=cI+R\) with \(R\in\mathcal{R}G\), so that \(F=cV+RV\). But then \(V=c^{-1}(F-RV)\) and \(V^{-1}=c(F-RV)^{\#}+S\) with some \(S\in\mathcal{R}G\), so that \(V\) is an invertible SGFIO, as claimed. QED
The following corollary is now completely straightforward.
**Corollary 4.20**.: Let \(\imath\colon\mathit{LG}\to\mathit{LG}\) be an algebra isomorphism satisfying the condition
\[\imath(\mathit{LG}^{m_{1},m_{2}})\subset\mathit{LG}^{m_{2},m_{1}}\quad \forall(m_{1},m_{2})\in\mathbb{Z}^{2}. \tag{4.60}\]
Then, \(\imath(P)=(\mathcal{F}\,A)^{-1}P\,\mathcal{F}\,A\), where \(A\) is an invertible \(\mathcal{Q}\)-FIO and \(\mathcal{F}\) is the Fourier transform.
Proof.: This is obtained by combining Theorem 4.19 with Proposition 3.28. Namely, consider the isomorphism \(\jmath(P)=\mathcal{F}\,\imath(P)\,\mathcal{F}^{-1}\), which, by assumption, is now an SGOPI. It follows that \(\jmath(P)=A^{-1}PA\) for some invertible \(\mathcal{Q}\)-FIO \(A\). Then \(\imath(P)=\mathcal{F}^{-1}A^{-1}PA\,\mathcal{F}\), which is of the claimed form. QED
| 2303.17430 | Products of conjugacy classes in simple algebraic groups in terms of diagrams | For a simple algebraic group $G$ over an algebraically closed field, we study products of normal subsets. For this we mark the nodes of the Dynkin diagram of $G$. We use two types of labels, a binary marking and a labeling with non-negative integers. The first is used to recognize large conjugacy classes which appear in a product of two conjugacy classes while the second is used to keep track of multiplicities of regular diagrams. In particular, we formulate sufficient conditions in terms of marked diagrams, for a product of normal subsets in $G$ to contain regular semisimple elements. | Iulian Ion Simion | 2023-03-30T14:55:17Z | http://arxiv.org/abs/2303.17430v1 | # Products of conjugacy classes in simple algebraic groups in terms of diagrams
###### Abstract.
For a simple algebraic group \(G\) over an algebraically closed field we study products of normal subsets. For this we mark the nodes of the Dynkin diagram of \(G\). We use two types of labels, a binary marking and a labeling with non-negative integers. The first is used to recognize large conjugacy classes which appear in a product of two conjugacy classes while the second is used to keep track of multiplicities of regular diagrams. In particular, we formulate sufficient conditions in terms of marked diagrams, for a product of normal subsets in \(G\) to contain regular semisimple elements.
Key words and phrases: conjugacy class, simple algebraic group, Dynkin diagram.

2020 Mathematics Subject Classification: Primary 20G99; Secondary 05E16.

I am grateful to Prof. Attila Maroti for many discussions on this topic. This work was supported by a grant of the Ministry of Research, Innovation and Digitalization, CNCS/CCCDI-UEFISCDI, project number PN-III-P1-1.1-TE-2019-0136, within PNCDI III.
their dimension and in terms of their (co)ranks in [25] for \(G\) a simple algebraic group over an algebraically closed field \(k\) of good characteristic. Recall that the characteristic \(p\) of \(k\) is good for \(G\) if \(p\neq 2\) when \(G\) is not of type \(A\), \(p\neq 3\) if \(G\) is an exceptional group and \(p\neq 5\) if \(G\) is of type \(E_{8}\). Theorem C in [25] is extended and improved by [18, Theorem 1] where it is shown that there exists an absolute constant \(c\leq 120\) such that whenever \(C\) is a non-central conjugacy class of a simple algebraic group \(G\) then \(C^{k}=G\) for any integer \(k\) at least \(c\cdot(\dim(G)/\dim(C))\).
The methods used in the context of Chevalley groups give explicit constants for the upper bounds on the (extended) covering numbers which are missing in [17, Theorem 1.1] and [21, Theorem 1.2]. On the other hand, the results in [17, 21] take into account the size of the normal subsets and should have analogous statements for Chevalley groups. Bridging the two types of results should entail an analysis of simple algebraic groups over algebraically closed fields which are easier to deal with than finite simple groups of Lie type or Chevalley groups over arbitrary fields, yet resemble these types of groups closely through the BN-pair structure. Central in this context is to measure the size of the normal subset \(C_{1}C_{2}\) for two conjugacy classes \(C_{1}\) and \(C_{2}\) of \(G\).
The product of small conjugacy classes in simple groups grows rapidly in the following sense. By a result of Liebeck, Schul and Shalev [15, Theorem 1.3], given any \(\epsilon>0\), there exists \(\delta>0\) such that if \(N_{1}\), \(N_{2}\) are normal subsets of a non-abelian finite simple group \(G\) satisfying \(\left|N_{i}\right|\leq\left|G\right|^{\delta}\) for \(i=1\), \(2\), then \(\left|N_{1}N_{2}\right|\geq(\left|N_{1}\right|\left|N_{2}\right|)^{1-\epsilon}\). An analogue of this statement for algebraic groups is [15, Theorem 1.5]. Given any \(\epsilon>0\), there exists \(\delta>0\) such that if \(C_{1}\) and \(C_{2}\) are conjugacy classes in a simple algebraic group \(G\) defined over an algebraically closed field and satisfying \(\dim(C_{i})\leq\delta\dim(G)\) for \(i=1\), \(2\), then the product \(C_{1}C_{2}\) contains a conjugacy class of dimension at least \((1-\epsilon)(\dim(C_{1})+\dim(C_{2}))\).
Products of large conjugacy classes cover \(G\) rapidly in the following sense. By a result of Gow [9, Theorem 2], if \(G\) is a finite simple group of Lie type, then for any two regular semisimple conjugacy classes \(C_{1}\) and \(C_{2}\) the product \(C_{1}C_{2}\) contains any non-identity semisimple element of \(G\). Hence, the product of four such classes equals \(G\). Many of the results on large classes are motivated by Thompson's conjecture. The analogue for a simple algebraic group \(G\) states that there exists a conjugacy class \(C\) such that \(C^{2}=G\). We refer to the survey in [19] for more background on this. For a simple algebraic group \(G\) over an algebraically closed field the product of \(4\) regular conjugacy classes equals \(G\) (see for example [18, Lemma 2.1]). When considering a product \(N_{1}\cdots N_{k}\) of several normal subsets which equals \(G\) one would like to understand which normal subsets can make up such a product. By the above, if \(k=4\) then regular classes are possible. As \(k\) increases, one would like to understand which smaller classes can be used in such a product.
From a different perspective, the Arad-Herzog conjecture states that the product of two non-trivial conjugacy classes in a non-abelian finite simple group \(G\) is never a conjugacy class in \(G\). Guralnick, Malle and Tiep [11] prove a strong version of the Arad-Herzog conjecture for simple algebraic groups and in particular show that almost always the product of two conjugacy classes in a simple algebraic group consists of infinitely many conjugacy classes. Guralnick and Malle [10] classify pairs of conjugacy classes in almost simple algebraic groups whose product consists of finitely many classes.
In this paper, \(G\) denotes a simple algebraic group defined over an algebraically closed field of characteristic \(p>0\). To a normal subset \(N\subseteq G\) we attach a set of marked diagrams by means of the representatives in a Borel subgroup \(B\). More precisely, for \(g\in N\cap B\), the marked diagram of \(g\) is the Dynkin diagram of \(G\) in which we mark the node corresponding to a simple root \(\alpha\) if the projection of \(g\) on the root group \(U_{\alpha}\) is not \(1\) (see SS3). The set of all these diagrams - obtained from elements in \(N\cap B\) - is denoted by \(\mathcal{D}(N)\). If \(p\) is a good prime for \(G\), i.e. \(p\neq 2\) if \(G\) is not of type \(A\), \(p\neq 3\) if \(G\) is an exceptional group and \(p\neq 5\) if \(G\) is of type \(E_{8}\), then marked diagrams extend the notion of distinguished diagrams used in the Bala-Carter classification of unipotent conjugacy classes [4, SS5.11].
Marked diagrams offer a way of measuring the 'size of a conjugacy class' not only for algebraic groups over algebraically closed fields but for simple groups of Lie type and Chevalley groups over arbitrary fields as well: for a small conjugacy class \(C\) the set \(\mathcal{D}(C)\) contains diagrams with few marked nodes while for a large class \(C\) there are diagrams with many marked nodes in \(\mathcal{D}(C)\). If \(\mathcal{D}^{\circ}\) denotes the diagram with all nodes marked then \(\mathcal{D}^{\circ}\in\mathcal{D}(N)\) whenever \(N\) contains a regular conjugacy class (see Propositions 14 and 15). Moreover, for two classes \(C_{1}\) and \(C_{2}\) of \(G\) if \(D_{1}\in\mathcal{D}(C_{1})\) and \(D_{2}\in\mathcal{D}(C_{2})\) then \(D_{1}\boxplus D_{2}\in\mathcal{D}(C_{1}C_{2})\) where \(D_{1}\boxplus D_{2}\) is the marked diagram obtained by marking exactly those nodes which are marked in \(D_{1}\) or in \(D_{2}\) (see Proposition 19).
Marked diagrams can also be viewed as elements \(\sum_{\alpha\in\Delta}n_{\alpha}\alpha\) of the monoid \(\mathbb{N}\Delta=\mathbb{N}^{|\Delta|}\) where \(\Delta\) is a set of simple roots and \(n_{\alpha}\in\mathbb{N}\) (see SS3.1). In this notation \(\mathcal{D}^{\circ}\) is the marked diagram \(\sum_{\alpha\in\Delta}\alpha\). Let \(D=\sum_{\alpha\in\Delta}n_{\alpha}\alpha\) and \(D^{\prime}=\sum_{\alpha\in\Delta}m_{\alpha}\alpha\) be two diagrams with \(n_{\alpha},m_{\alpha}\in\mathbb{N}\). Their sum is \(D+D^{\prime}=\sum_{\alpha\in\Delta}(n_{\alpha}+m_{\alpha})\alpha\). There is a partial order \(\geq\) on \(\mathbb{N}\Delta\) defined by \(D\geq D^{\prime}\) if and only if \(n_{\alpha}-m_{\alpha}\geq 0\) for all \(\alpha\in\Delta\). This gives a way of addressing questions on products of classes in terms of calculations in the monoid \(\mathbb{N}\Delta\). A first statement in this direction is the following.
**Proposition A**.: _Let \(G\) be a simple algebraic group, defined over an algebraically closed field. Let \(N_{1},\ldots,N_{k}\) be normal subsets of \(G\) and let \(D_{i}\in\mathcal{D}(N_{i})\) for all \(1\leq i\leq k\). If \(\sum_{i=1}^{k}D_{i}\geq 12\cdot\operatorname{rk}(G)\cdot\mathcal{D}^{\circ}\), then \(\prod_{i=1}^{k}N_{i}=G\)._
Comparing this result to [8], where \(\operatorname{ecn}(G)\) is shown to be less than \(4\cdot\operatorname{rk}(G)\), we don't obtain anything new in terms of the extended covering number of \(G\). The condition on the diagrams \(D_{i}\) implies that there are at least \(12\cdot\operatorname{rk}(G)\) conjugacy classes in the product, which by [8] has to equal \(G\).
Since the product of two open subsets of \(G\) equals \(G\), it is natural to ask when a product \(N_{1}\cdots N_{k}\) of normal subsets contains an open subset of \(G\). For this, notice that any set \(A\) of diagrams is partially ordered. For a diagram \(D\in A\), let \(A(\geq D):=\{E\in A:E\geq D\}\). We say that \(A\) is of type \((r,s)\) with respect to the diagrams \(D_{1},\ldots,D_{r}\in A\) if there is a partition of \(A\) into \(r\) subsets \(A_{1},\ldots,A_{r}\) such that \(|A_{i}\cap A(\geq D_{i})|\geq s\) for all \(1\leq i\leq r\).
**Proposition B**.: _Let \(G\) be a simple algebraic group, defined over an algebraically closed field. Let \(N_{1},\ldots,N_{k}\) be normal subsets of \(G\) and let \(D_{i}\in\mathcal{D}(N_{i})\) for all \(1\leq i\leq k\). If the set \(\{D_{i}:1\leq i\leq k\}\) is of type \((r,6)\) with respect to \(D_{i_{1}},\ldots,D_{i_{r}}\) and \(\sum_{j=1}^{r}D_{i_{j}}\geq\mathcal{D}^{\circ}\) then \(\dim\prod_{i=1}^{k}N_{i}=\dim G\)._
Since the product of four subsets of \(G\), each of which contains a regular conjugacy class, equals \(G\), it is natural to ask when a product of normal subsets \(N_{1}\cdots N_{k}\) contains a regular element.
**Theorem C**.: _Let \(G\) be a simple algebraic group, defined over an algebraically closed field. Let \(N_{1},\ldots,N_{k}\) be normal subsets of \(G\) and let \(D_{i}\in\mathcal{D}(N_{i})\) for all \(1\leq i\leq k\). If \(\sum_{i=1}^{k}D_{i}\geq 16\mathcal{D}^{\circ}\), then \(\prod_{i=1}^{k}N_{i}\) contains regular semisimple elements._
Proposition B shows in particular that if \(\sum_{i=1}^{k}D_{i}\geq 4\mathcal{D}^{\circ}\) and each diagram \(D_{i}\) appears at least \(6\) times then the product of the corresponding normal subsets contains an open subset of \(G\). It follows that if we replace \(6\) by \(12\) the product of the corresponding subsets is \(G\). Theorem C shows that if the diagrams can be partitioned into \(4\) subsets, each of which sum up to a regular diagram (a diagram which is greater than or equal to \(\mathcal{D}^{\circ}\)) then the product of the corresponding subsets is \(G\). This suggests that it should be possible to remove \(\operatorname{rk}(G)\) in Proposition A.
**Question D**.: _Let \(G\) be a simple algebraic group, defined over an algebraically closed field. Let \(N_{1},\ldots,N_{k}\) be normal subsets of \(G\) and let \(D_{i}\in\mathcal{D}(N_{i})\) for all \(1\leq i\leq k\). If \(\sum_{i=1}^{k}D_{i}\geq c\cdot\mathcal{D}^{\circ}\) for some constant \(c\) which does not depend on \(G\), does it follow that \(\prod_{i=1}^{k}N_{i}=G\)?_
An affirmative answer to this question would give in particular a means of recognizing which conjugacy classes can appear in a product \(N_{1}\cdots N_{k}=G\) for a fixed \(k\). By Proposition 10, conjugacy classes in the same Jordan class have the same marked diagrams and Proposition 13 exhibits conjecturally maximal marked diagrams of a conjugacy class. Notice that it suffices to give an answer to the above question for classical groups of high rank; a treatment of the bounded rank case is needed for a good upper bound on the constant \(c\).
The paper is structured as follows: in SS2 we fix notation, we collect results on conjugacy classes in algebraic groups which are relevant to marked diagrams and we recall a factorization of \(G\) which is needed in the sequel. In SS3 we introduce the notion of a marked diagram and the associated monoids. In SS4 we describe the link between marked diagrams and unipotent elements. The proofs of Propositions A and B and of Theorem C are given in SS5.
## 2. Preliminaries
In this paper \(G\) denotes a simple algebraic group of rank \(r=\operatorname{rk}(G)\) defined over an algebraically closed field \(F\) of characteristic \(p>0\). We fix a Borel subgroup \(B\) with unipotent radical \(U\) and maximal torus \(T\). We let \(\Phi\) denote the roots of \(G\) with respect to \(T\), the set of positive roots \(\Phi^{+}\) is taken with respect to \(U\) and \(\Delta\) denotes the set of simple roots of \(\Phi\) in \(\Phi^{+}\). We denote by \(U^{-}\) the radical of the Borel subgroup opposite to \(B\), i.e. \(U^{-}=U^{\dot{w}_{0}}\) for some representative \(\dot{w}_{0}\in N_{G}(T)\) of the longest element (with respect to \(\Delta\)) of the Weyl group \(N_{G}(T)/T\).
For each root \(\alpha\in\Phi\) let \(u_{\alpha}:F\to U_{\alpha}\) be an isomorphism from the additive group of the ground field \(F\) onto the root subgroup \(U_{\alpha}\). For each \(\alpha\in\Phi\) we denote by \(\alpha^{\vee}:F^{\times}\to T\) the cocharacter corresponding to the root \(\alpha\). Then
\[{}^{\alpha^{\vee}(t)}u_{\beta}(x)=\alpha^{\vee}(t)u_{\beta}(x)\alpha^{\vee}( t)^{-1}=u_{\beta}(\beta(\alpha^{\vee}(t))x)=u_{\beta}(t^{\langle\beta,\alpha \rangle}x) \tag{1}\]
for all \(\alpha,\beta\in\Phi\), \(t\in F^{\times}\), \(x\in F\) (see [14, II SS1.3] and [3, Ch.7]).
Any element \(g\in B\) has a unique factorization of the form \(g=s\prod_{\alpha\in\Phi^{+}}u_{\alpha}(x_{\alpha})\) for some \(s\in T\), \(x_{\alpha}\in F\) and where the product is in a fixed (but arbitrary) ordering of \(\Phi^{+}\) (see for example [20, Theorem 11.1]). The projections \(g\mapsto u_{\alpha}(x_{\alpha})\) depend in general on the ordering of \(\Phi^{+}\). However, the projections on simple root groups
do not depend on this order as can be seen from the commutator relations (see for example [20, Theorem 11.8]). We point out that \([U,U]\subseteq\prod_{\alpha\in\Phi^{+}-\Delta}U_{\alpha}\) (this can be deduced from [20, Proposition 11.5] and the commutator relations). In what follows we make use of this fact without further notice. For a unipotent element \(u=\prod_{\alpha\in\Phi^{+}}u_{\alpha}(x_{\alpha})\) we denote by \(\operatorname{supp}(u)\) the set of simple roots \(\alpha\) with the property that \(x_{\alpha}\neq 0\). For an element \(s\in T\) we denote by \(\operatorname{supp}(s)\) the set of simple roots \(\alpha\) with the property that \(\alpha(s)\neq 1\).
For an element \(g\in G\) we denote by \(g_{s}\) and \(g_{u}\) the semisimple and the unipotent part in the Jordan decomposition of \(g\) respectively: \(g=g_{s}g_{u}=g_{u}g_{s}\). In any algebraic group, all Borel subgroups, respectively all maximal tori are conjugate and any element in \(G\) is conjugate to an element in \(B\). Thus, conjugating if necessary, we may assume that \(g\) lies in \(B\) and that \(g_{s}\) lies in \(T\): we may conjugate \(B\) and \(T\) by the same element in \(G\), or, when \(g\) is a representative of a conjugacy class, we may replace \(g\) by a \(G\)-conjugate with the above property.
For a set of roots \(I\subseteq\Phi\), let \(\Phi_{I}\) be the root subsystem generated by \(I\), i.e. \(\Phi_{I}=\mathbb{Z}I\cap\Phi\). We denote by \(L_{I}\) the subgroup \(\langle T,U_{\alpha}:\alpha\in\Phi_{I}\rangle\) of \(G\). If the roots in \(I\) are simple then \(L_{I}\) is a standard Levi subgroup. In this case, we denote by \(P_{I}\) the standard parabolic subgroup with Levi factor \(L_{I}\). When we need to specify the ambient group \(G\), we write \(L_{I}^{G}\) or \(P_{I}^{G}\). Notice that the notation \(L_{I}^{G}\) and \(P_{I}^{G}\) makes sense in the more general case of a reductive algebraic group \(G\). In the particular case of \(I=\{\alpha\}\subseteq\Delta\) we denote by \(P_{\alpha}\) the parabolic subgroup \(P_{I}\) and by \(G_{\alpha}\) the subgroup generated by \(U_{\pm\alpha}\).
### Semisimple conjugacy classes
For an element \(g\in G\) we have \(C_{G}(g)=C_{C_{G}(g_{s})}(g_{u})\). Hence, describing the conjugacy class of \(g\) entails two parts: the description of \(C_{G}(g_{s})\) and the description of unipotent conjugacy classes in \(C_{G}(g_{s})\). The structure of the centralizer of a semisimple element in \(G\) is known. In the following theorem we extract a combinatorial description which we use in the description of Jordan classes in Section 2.3. Recall that the connected components of the centralizers of semisimple elements in \(G\) are called pseudo-Levi subgroups [22]. The Levi-envelope of a pseudo-Levi \(H\) is the minimal Levi subgroup \(L\) of \(G\) containing \(H\) such that \(Z(H)^{\circ}=Z(L)^{\circ}\) (see [2, SS3]).
**Theorem 1** (Centralizers of semisimple elements).: _Let \(G\) be a simple algebraic group and let \(s\) be a semisimple element contained in the maximal torus \(T\). There is a subset \(I\subseteq\Delta\) such that exactly one of the following holds:_
1. \(C_{G}(s)^{\circ}\) _is the Levi-subgroup_ \(L_{I}\)_, or_
2. _there is a root_ \(\gamma\notin\Delta\) _and a simple root_ \(\beta\in\Delta\) _such that_ \(C_{G}(s)^{\circ}\) _is the proper pseudo-Levi subgroup_ \(L_{I\cup\{\gamma\}}\) _with Levi-envelope_ \(L_{I\cup\{\beta\}}\)_._
Proof.: The subgroup \(M_{s}=C_{G}(s)^{\circ}\) is the pseudo-Levi subgroup given by \(\langle T,U_{\alpha}:\alpha(g_{s})=1\rangle\)[26, II SS4.1]. Let \(Z_{s}\) denote the center of \(M_{s}\). Then \(L_{s}=C_{G}(Z_{s}^{\circ})\) is a Levi subgroup, the Levi envelope of \(M_{s}\) (see [2, SS3]). Conjugating, we may assume that it is a standard Levi subgroup, i.e. \(L_{s}=L_{J}\) for some \(J\subseteq\Delta\).
The pseudo-Levi \(M_{s}\) is a subgroup of \(L_{s}\) and the torus \(Z_{s}^{\circ}\) is a maximal central torus of \(M_{s}\) and of \(L_{s}\)[2, Lemma 3.7]. Factoring we obtain the semisimple subgroup \(M_{s}/Z_{s}^{\circ}\) of the semisimple group \(L_{s}/Z_{s}^{\circ}\). Under the projection \(L_{s}\to L_{s}/Z_{s}^{\circ}\), \(x\mapsto\bar{x}\), the centralizer of \(\bar{s}\) in \(\bar{L}_{s}\) is \(\bar{M}_{s}\).
Since \(Z_{s}^{\circ}\) is contained in the maximal torus \(T\) which lies in \(M_{s}\) and \(L_{s}\), the projection \(x\mapsto\bar{x}\) induces a bijection \(\alpha\mapsto\bar{\alpha}\) on the roots of \(L_{s}\) w.r.t. \(T\) and
the roots of \(\bar{L}_{s}\) w.r.t. \(\bar{T}\). The root system \(\bar{\Phi}_{J}\) decomposes into a direct sum of irreducible root systems \(\bar{\Phi}_{i}\) with \(0\leq i\leq m\) for some integer \(m\). Since \(G\) is a simple algebraic group and \(L_{s}\) is a standard Levi subgroup of \(G\), at most one of the \(\bar{\Phi}_{i}\) is not of type \(A\). Renumbering, we may assume that \(\bar{\Phi}_{0}\) has this property.
Conjugating if necessary, we may assume that the root system of \(\bar{M}_{s}\) with respect to \(\bar{T}\) has a basis \(\bar{K}\subseteq\bar{J}\cup\{\bar{\gamma}\}\) where \(-\bar{\gamma}\) is the highest root of \(\bar{\Phi}_{0}\)[22, Proposition 30].
The rank of \(\bar{M}_{s}\) is at most that of \(\bar{L}_{s}\), i.e. \(|\bar{K}|\leq|\bar{J}|\). We claim that \(|\bar{K}|=|\bar{J}|\). If this is not the case, then \(\bar{M}_{s}\) has rank at most \(|\bar{J}|-1\), i.e. a maximal torus of \(\bar{M}_{s}\) has dimension at most \(|\bar{J}|-1\). However the \(|\bar{J}|\)-dimensional torus \(\bar{T}\) lies in \(\bar{M}_{s}\), which is a contradiction.
It follows that \(\bar{K}\) is either \(\bar{J}\) or \(\{\bar{\gamma}\}\cup(\bar{J}-\{\bar{\beta}\})\) for some \(\bar{\beta}\in\bar{J}\cap\bar{\Phi}_{0}\). If \(\bar{K}=\bar{J}\) then (1) holds with \(I=K\). If \(\bar{K}=\{\bar{\gamma}\}\cup(\bar{J}-\{\bar{\beta}\})\) then (2) holds with \(I=J-\{\beta\}\).
For subsets \(I\subseteq\Delta\) and \(I^{\prime}\subseteq\Phi\) the pair \((I,I^{\prime})\) is called _of proper pseudo-Levi type_ if \([L_{I^{\prime}},L_{I^{\prime}}]\) is a proper maximal rank subsystem subgroup of \([L_{I},L_{I}]\). The pair \((I,I^{\prime})\) is called _of proper Levi type_ if \(I^{\prime}=I\), in which case \(L_{I^{\prime}}=L_{I}\) is a Levi subgroup. The pair \((I,I^{\prime})\) is called _of pseudo-Levi type_ if it is of proper pseudo-Levi type or of proper Levi type. Examples can be constructed with the Borel-de Siebenthal algorithm (see for example [20, SS13.2]). In particular for type \(B_{r}\) one may choose \(I=\Delta\) and \(I^{\prime}=\Delta\setminus\{\alpha_{r}\}\cup\{-\alpha_{0}\}\) where \(\alpha_{0}\) is the highest root, in order to obtain \([L_{I^{\prime}},L_{I^{\prime}}]\) of type \(D_{r}\). Notice also that for type \(A_{r}\) the semisimple conjugacy classes are of proper Levi type.
### Unipotent conjugacy classes
For our purposes, we use the Bala-Carter-Pommerening classification of unipotent conjugacy classes [1, 23] (see also [4, Theorem 5.9.6 and SS5.11]). For this, we require the characteristic of the ground field to be good for \(G\), i.e. \(p\neq 2\) if \(G\) is not of type \(A\), \(p\neq 3\) if \(G\) is of exceptional type and \(p\neq 5\) if \(G\) is of type \(E_{8}\). The classification of unipotent classes in the case where the characteristic of the ground field is bad for \(G\) was achieved through the contribution of many authors. The state of the art for unipotent conjugacy classes is available in [16].
For an element \(g\in G\), the group \(C_{G}(g_{s})^{\circ}\) is connected reductive, hence a central product of simple algebraic groups and a central torus with no non-trivial unipotent elements in the center. Therefore, the conjugacy class of \(g_{u}\) is a product of unipotent conjugacy classes in the simple factors of \(C_{G}(g_{s})^{\circ}\). The following theorem is well known and translates directly to the case where \(G\) is a connected reductive algebraic group. The statement is implicit in [16, Theorem 1].
Recall that a unipotent element \(u\) is distinguished if \(C_{G}(u)^{\circ}\) is unipotent. For a parabolic subgroup \(P=LQ\) with Levi factor \(L\) and unipotent radical \(Q\) we have \(\dim L\geq\dim(Q/[Q,Q])\). The group \(P\) is a distinguished parabolic subgroup if \(\dim L=\dim(Q/[Q,Q])\) [16, SS2.5-6]. An element \(g\) of a parabolic subgroup \(P\) is called a Richardson element of \(P\) if the \(P\)-conjugacy class of \(g\) intersects \(Q\) in an open subset of \(Q\).
**Theorem 2** (Conjugacy classes of unipotent elements).: _Let \(G\) be a simple algebraic group defined over an algebraically closed field of good characteristic. There is a bijective correspondence between unipotent conjugacy classes of \(G\) and \(G\)-classes of pairs \((L,P)\), where \(L\) is a Levi subgroup of \(G\) and \(P\) is a distinguished parabolic
subgroup of \([L,L]\). The \(G\)-class of \((L,P)\) corresponds to the \(G\)-conjugacy class containing a Richardson element of \(P\)._
For two subsets of simple roots \(K\subseteq J\subseteq\Delta\), the pair \((J,K)\) is called _distinguished_ if \(P_{K}^{[L_{J},L_{J}]}\) is a distinguished parabolic subgroup of \([L_{J},L_{J}]\).
### Jordan classes
For algebraic groups, Jordan classes were introduced in [2] inspired by the similar notion for the adjoint action of a group on its Lie algebra (see [2, SS4]). Conjugacy classes in \(G\) can be grouped together as follows. Two conjugacy classes are equivalent [2, SS4] if they have representatives \(g\) and \(h\) respectively, such that \(C_{G}(h_{s})^{\circ}=C_{G}(g_{s})^{\circ}\), \(h_{s}\in g_{s}Z(C_{G}(g_{s})^{\circ})^{\circ}\) and \(h_{u}\) is conjugate to \(g_{u}\) in \(C_{G}(g_{s})^{\circ}\). The unions of elements in the corresponding equivalence classes are called Jordan classes. The set of Jordan classes is denoted by \(\mathcal{J}\). They partition \(G\) into a finite number of irreducible normal subsets and the conjugacy classes in a Jordan class have the same dimension.
In view of the previous sections we assume in this section that the characteristic of \(F\) is good for \(G\) and consider the set
\[\mathcal{I}=\Big{\{}(I,I^{\prime},J,K):(I,I^{\prime})\text{ is of pseudo-Levi type},J\subseteq I^{\prime},(J,K)\text{ is distinguished}\Big{\}}.\]
We define a map
\[\phi:\mathcal{I}\to\mathcal{J}\]
as follows. Fix \((I,I^{\prime},J,K)\in\mathcal{I}\). By [22, Proposition 32], since the characteristic of \(F\) is good for \(G\), for a given pair \((I,I^{\prime})\) of pseudo-Levi type, there exists a semisimple element \(s\) in \(G\) such that \(C_{G}(s)^{\circ}\) is \(L_{I^{\prime}}\). By [24, Proposition 1] there exists a unipotent element \(u\in C_{G}(s)^{\circ}\) such that \(u\) is a Richardson element of \(P_{K}^{[L_{J},L_{J}]}\). Define \(\phi(I,I^{\prime},J,K)\) to be the Jordan class of \(su\). For any pair \((s,u)\) as above, we say that \(su\) _realizes the data_ \((I,I^{\prime},J,K)\).
**Corollary 3**.: _The map \(\phi:\mathcal{I}\to\mathcal{J}\) is well-defined and surjective._
Proof.: Fix an element \((I,I^{\prime},J,K)\in\mathcal{I}\). If \(s_{1}\) and \(s_{2}\) are semisimple elements such that \(C_{G}(s_{1})^{\circ}=L_{I^{\prime}}=C_{G}(s_{2})^{\circ}\) then, by definition, for any \(u_{1},u_{2}\in L_{I^{\prime}}\), the elements \(s_{1}u_{1}\) and \(s_{2}u_{2}\) are in the same Jordan class if and only if \(u_{1}\) and \(u_{2}\) are in the same \(L_{I^{\prime}}\)-conjugacy class. This is the case since both \(u_{1}\) and \(u_{2}\) are Richardson elements of \(P_{K}^{[L_{J},L_{J}]}\). Hence \(\phi\) is well defined.
Let \(g\) be a representative of some Jordan class. We may assume that \(g_{s}\) lies in \(T\). By Theorem 1 there is a pair \((I,I^{\prime})\) of pseudo-Levi type such that \(C_{G}(g_{s})^{\circ}=L_{I^{\prime}}\). By [26, III SS1.14], since we assume that the characteristic of \(F\) is good for \(G\) we have \(g_{u}\in L_{I^{\prime}}\). By Theorem 2, \(g_{u}\) is a Richardson element of some distinguished parabolic subgroup \(P_{K}^{[L_{J},L_{J}]}\). Hence \(\phi\) is surjective.
### Unipotent factorization of \(G\)
The following result is due to Vavilov, Smolensky and Sury [28]. A proof can also be obtained with the method used in [6] for finite groups of Lie type. We refer to [6] and the references therein for more background on this result.
**Theorem 4**.: _Let \(G\) be a simple algebraic group, \(B\subseteq G\) a Borel subgroup with maximal torus \(T\) and unipotent radical \(U\). We have \(U\cdot U^{\dot{w}_{0}}\cdot U\cdot U^{\dot{w}_{0}}=G\) where \(\dot{w}_{0}\) is a representative in \(N_{G}(T)\) of the longest element of the Weyl group \(N_{G}(T)/T\)._
**Corollary 5**.: _For any simple algebraic group \(G\) we have \(\mathcal{U}^{3}=G\) where \(\mathcal{U}\) denotes the variety of unipotent elements in \(G\)._
Proof.: We have \(G=U\cdot U^{\dot{w}_{0}}\cdot U\cdot U^{\dot{w}_{0}}\subseteq U^{U^{\dot{w}_{0}}}\cdot U^{U^{\dot{w}_{0}}}\cdot U^{\dot{w}_{0}}\subseteq\mathcal{U}^{3}\) by Theorem 4.
## 3. Marked diagrams
The _marked diagram_ of an element \(g\in B\) is the Dynkin diagram of \(G\) where we mark the nodes corresponding to the simple roots \(\alpha\) for which the projection of \(g\) on the root subgroup \(U_{\alpha}\) is not \(1\). More precisely, any \(g\in B\) has a factorization of the form \(s\prod_{\alpha\in\Phi^{+}}u_{\alpha}(x_{\alpha})\) with \(s\in T\) and \(x_{\alpha}\in F\), and we mark the nodes corresponding to those \(\alpha\in\Delta\) for which \(x_{\alpha}\neq 0\). We denote this diagram with \(\mathcal{D}(g)\) and write \(\mathcal{D}^{\circ}\) for the marked diagram having all nodes marked. We denote by \(\operatorname{supp}(D)\) the set of simple roots corresponding to marked nodes of a diagram \(D\). Notice that for \(s\in T\) and \(u\in U\), we have \(\operatorname{supp}(\mathcal{D}(su))=\operatorname{supp}(u)\). In what follows we mention a few examples of marked diagrams.
**Unipotent elements.** With the obvious choice of \(T\) and \(B\), the unipotent element of \(\operatorname{SL}_{5}(F)\)
\[\begin{bmatrix}1&1&0&0&0\\ 0&1&1&0&0\\ 0&0&1&0&0\\ 0&0&0&1&1\\ 0&0&0&0&1\end{bmatrix}\]
has as marked diagram the \(A_{4}\) Dynkin diagram in which the nodes corresponding to \(\alpha_{1}\), \(\alpha_{2}\) and \(\alpha_{4}\) are marked and the node corresponding to \(\alpha_{3}\) is not.
Let \(g\) be a unipotent element of \(G=\operatorname{SL}_{n}(F)\) in Jordan form, i.e. \(g=\bigoplus_{i}J_{i}\) where the \(J_{i}\) are Jordan blocks. Again, with the obvious choice of \(T\) and \(B\), the number of connected components of \(\mathcal{D}(g)\), obtained after removing the non-marked nodes, equals the number of Jordan blocks \(J_{i}\) and each connected component \(D_{i}\) corresponds to a Jordan block \(J_{i}\) such that the number of nodes in \(D_{i}\) is one less than the length of \(J_{i}\).
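To make the previous paragraph concrete in type \(A_{n-1}\), here is a small computational sketch; it is ours and not part of the original text, it assumes the element is given as an upper unitriangular matrix with respect to the standard Borel subgroup (so that the projection onto the simple root group \(U_{\alpha_{i}}\) is the entry in position \((i,i+1)\)), and the function name is hypothetical.

```python
# Sketch (illustrative): read off the marked diagram of an upper unitriangular
# matrix in SL_n with respect to the standard Borel subgroup (type A_{n-1}).
# A node alpha_i is marked exactly when the (i, i+1) entry is non-zero.

def marked_diagram(g):
    """Return the indices i of the marked simple roots alpha_i."""
    n = len(g)
    return [i + 1 for i in range(n - 1) if g[i][i + 1] != 0]

# The SL_5 example above: Jordan blocks of sizes 3 and 2.
g = [
    [1, 1, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 0, 1],
]
print(marked_diagram(g))  # [1, 2, 4]: two components, of sizes 2 and 1
```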
If \(g\in B\) is a distinguished unipotent element, then \(\mathcal{D}(g)\) is obtained from the labeled Dynkin diagram of \(G\) by marking the nodes with label '2'. For a list of these diagrams see [4, SS5.9].
**Semisimple elements.** If \(g\in T\), then \(g\) is conjugate to an element \(g^{\prime}\in B\) such that \(\mathcal{D}(g^{\prime})\) has those simple roots \(\alpha\) marked for which \(\alpha(g)\neq 1\), i.e. \(\operatorname{supp}(\mathcal{D}(g^{\prime}))=\operatorname{supp}(g)\) (see Proposition 9).
**Regular elements.** A regular element is conjugate to an element which has fully marked diagrams (this follows from Proposition 9).
The _marked diagrams_ of a normal subset \(N\) of \(G\) are the marked diagrams of elements in \(N\cap B\) and we denote this set by \(\mathcal{D}(N)\). Note that \(\mathcal{D}(N)\) is the union of \(\mathcal{D}(C)\) where \(C\) runs over the conjugacy classes in \(N\). Notice also that, by Corollary 3, any conjugacy class in the Jordan class \(\phi(I,I^{\prime},J,K)\) has a marked diagram \(D\) with \(\operatorname{supp}(D)=(J-K)\cap\Delta\). In Proposition 13 we show that any conjugacy class in the Jordan class \(\phi(I,I^{\prime},J,K)\) has a marked diagram \(D\) with \(\operatorname{supp}(D)=(I-\Delta)\cup((J-K)\cap\Delta)\).
### Monoids corresponding to marked diagrams
For an abelian monoid \(M\) we use additive notation and we denote by \(M\Delta\) the free monoid \(M^{|\Delta|}\) with basis \(\Delta\). We may view marked diagrams as elements of \(M\Delta\) by identifying the basis element \(\alpha\) with the marked diagram having only the node corresponding to \(\alpha\) marked. We call the elements of \(M\Delta\) marked diagrams.
Denote by \(\mathbb{N}_{\mathrm{OR}}=(\{0,1\},\boxplus)\) the monoid with two elements where the operation is bitwise OR. With \(\mathbb{N}_{\mathrm{OR}}\Delta\) one can keep track of whether nodes were marked. Example:
With \(\mathbb{N}\Delta\) we take into account multiplicities of marked nodes. For two diagrams \(D_{1}=\sum_{\alpha\in\Delta}n_{\alpha}\alpha\) and \(D_{2}=\sum_{\alpha\in\Delta}m_{\alpha}\alpha\), the sum is \(D_{1}+D_{2}=\sum_{\alpha\in\Delta}(n_{\alpha}+m_{\alpha})\alpha\). Example:
Note that we have a partial ordering on \(\mathbb{N}\Delta\): if \(x=\sum n_{\alpha}\alpha\) and \(y=\sum m_{\alpha}\alpha\) then \(x\geq y\) if and only if \(n_{\alpha}-m_{\alpha}\geq 0\) for all \(\alpha\in\Delta\).

An element \(\sum_{\alpha\in\Delta}n_{\alpha}\alpha\) of \(M\Delta\) is called _regular_ if \(n_{\alpha}\neq 0\) for all \(\alpha\). The fully marked diagram \(\mathcal{D}^{\circ}\) is regular.
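As an illustration of the two monoids and of the partial order, here is a small sketch; the encoding of diagrams as tuples indexed by \(\Delta\), the toy rank \(4\) and all function names are our own assumptions made for the example, not notation from the text.

```python
# Sketch (illustrative): marked diagrams encoded as tuples of non-negative
# integers indexed by the simple roots in Delta.

def box_plus(d1, d2):
    """Operation of N_OR Delta on 0/1 vectors: a node is marked iff it is
    marked in d1 or in d2 (bitwise OR)."""
    return tuple(1 if (a or b) else 0 for a, b in zip(d1, d2))

def plus(d1, d2):
    """Operation of N Delta: add the multiplicities of the marked nodes."""
    return tuple(a + b for a, b in zip(d1, d2))

def geq(d1, d2):
    """Partial order on N Delta: d1 >= d2 iff it holds coefficient-wise."""
    return all(a >= b for a, b in zip(d1, d2))

def is_regular(d):
    """A diagram is regular iff every node is marked (n_alpha != 0 for all alpha)."""
    return all(a != 0 for a in d)

def psi(d):
    """The morphism of Lemma 6, N Delta -> N_OR Delta."""
    return tuple(1 if a != 0 else 0 for a in d)

D_circ = (1, 1, 1, 1)                 # fully marked diagram for a rank 4 group
d1, d2 = (1, 1, 0, 0), (0, 0, 2, 1)
print(plus(d1, d2), geq(plus(d1, d2), D_circ))      # (1, 1, 2, 1) True
print(box_plus(psi(d1), psi(d2)), is_regular(d1))   # (1, 1, 1, 1) False
```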
While the monoid \(\mathbb{N}_{\mathrm{OR}}\Delta\) is better suited for the description of conjugacy classes which appear in a product of conjugacy classes (as in Proposition 19), the monoid \(\mathbb{N}\Delta\) is used to approximate the number of occurences of a regular conjugacy class in a product of conjugacy classes (as in Theorem C). It is clear from the context which monoid is used. A connection between the two is given by the following lemma.
**Lemma 6**.: _There is a morphism of monoids \(\psi:\mathbb{N}\Delta\to\mathbb{N}_{\mathrm{OR}}\Delta\) such that \(\psi(D)\) is regular if and only if \(D\) is regular._
Proof.: For \(D=\sum_{\alpha\in\Delta}n_{\alpha}\alpha\in\mathbb{N}\Delta\) define \(\psi(D)\) to be \(\sum_{\alpha\in\Delta}m_{\alpha}\alpha\in\mathbb{N}_{\mathrm{OR}}\Delta\) with \(m_{\alpha}=1\) if and only if \(n_{\alpha}\neq 0\). The claims follow directly from the definitions.
The following lemma is needed for the proof of Proposition A.
**Lemma 7**.: _Let \(r=|\Delta|\) and let \(D_{1},\ldots,D_{k}\) be marked diagrams such that \(\sum_{i=1}^{k}D_{i}\geq mr\mathcal{D}^{\circ}\) for some integer \(m\geq 1\). There is a partition of \(\{1,\ldots,k\}\) into \(m\) subsets \(I_{1},\ldots,I_{m}\) such that \(\sum_{i\in I_{j}}D_{i}\geq\mathcal{D}^{\circ}\) for all \(j\) with \(1\leq j\leq m\)._
Proof.: We prove the lemma by induction on \(m\). The case \(m=1\) is trivial. Assume that \(m\geq 2\). Since \(\sum_{i=1}^{k}D_{i}\geq mr\mathcal{D}^{\circ}\), there is a subset \(I\) of \(\{1,2,\ldots,k\}\) such that \(\sum_{i\in I}D_{i}\geq\mathcal{D}^{\circ}\). Renumbering the diagrams, we may assume that \(I=\{1,2,\ldots,k_{0}\}\) where \(k_{0}=|I|\). Let \(I\) be minimal with this property, i.e. for any \(j\in I\), \(S_{j}:=\sum_{i\in I-\{j\}}D_{i}\ngeq\mathcal{D}^{\circ}\). Since \(\mathcal{D}^{\circ}=\sum_{\alpha\in\Delta}\alpha\), for every \(j\in I\) there is at least one \(\alpha_{j}\in\Delta\) such that the coefficient of \(\alpha_{j}\) in \(S_{j}\) is zero. In other words, \(D_{j}\) is the only diagram in \(\{D_{i}\}_{i\in I}\) which has the node corresponding to \(\alpha_{j}\) marked. This gives an injective map \(I\to\Delta:j\mapsto\alpha_{j}\), hence \(|I|\leq|\Delta|=r\). It follows that \(\sum_{i\in I}D_{i}\leq r\mathcal{D}^{\circ}\) and that \(\sum_{i=k_{0}+1}^{k}D_{i}\geq(m-1)r\mathcal{D}^{\circ}\). The claim follows by induction.
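The inductive argument above is effectively a procedure; the following sketch (ours) mirrors it for diagrams of group elements, i.e. \(0/1\) vectors indexed by \(\Delta\), under the hypothesis \(\sum_{i}D_{i}\geq mr\mathcal{D}^{\circ}\). The encoding and the function names are hypothetical.

```python
# Sketch (illustrative) of the partition procedure in the proof of Lemma 7.
# Diagrams are 0/1 tuples of length r = |Delta|, and the total sum is assumed
# to dominate m * r * D_circ coefficient-wise.

def vec_sum(ds):
    return tuple(map(sum, zip(*ds)))

def covers(d, e):
    return all(a >= b for a, b in zip(d, e))

def partition_into_regular_groups(diagrams, m):
    r = len(diagrams[0])
    D_circ = (1,) * r
    remaining = list(range(len(diagrams)))
    groups = []
    for _ in range(m - 1):
        group = list(remaining)                      # certainly covers D_circ
        changed = True
        while changed:                               # shrink to a minimal cover
            changed = False
            for i in list(group):
                rest = [j for j in group if j != i]
                if rest and covers(vec_sum([diagrams[j] for j in rest]), D_circ):
                    group, changed = rest, True
                    break
        groups.append(group)                         # minimal, hence at most r members
        remaining = [i for i in remaining if i not in group]
    groups.append(remaining)                         # still covers D_circ, as in the lemma
    return groups

# Example: rank 2, m = 2, six diagrams summing to (4, 4) >= 2 * 2 * (1, 1).
ds = [(1, 0), (1, 0), (0, 1), (0, 1), (1, 1), (1, 1)]
print(partition_into_regular_groups(ds, 2))
```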
## 4. Marked diagrams and unipotent elements
### A map from marked diagrams to \(U\)
Marked diagrams attach combinatorial data to conjugacy classes by means of elements in \(B\). For the reverse direction we consider the map from marked diagrams to unipotent elements given by
\[u:M\Delta\to U,\quad u(D):=\prod_{\alpha\in\mathrm{supp}(D)}u_{\alpha}(1). \tag{2}\]
It is easy to see that \(\mathcal{D}(u(D))=D\) for any marked diagram \(D\).
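In type \(A_{n-1}\) the map (2) can be realised explicitly; the following sketch (ours, with hypothetical function names) builds \(u(D)\) as the upper unitriangular matrix with a \(1\) in position \((i,i+1)\) exactly at the marked nodes, and reading the superdiagonal back recovers \(D\), illustrating \(\mathcal{D}(u(D))=D\).

```python
# Sketch (illustrative): the map u(D) of (2) in type A_{n-1},
# with D a 0/1 tuple of length n-1 encoding the marked simple roots.

def u_of_D(D):
    n = len(D) + 1
    g = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for i, marked in enumerate(D):
        if marked:
            g[i][i + 1] = 1          # the factor u_{alpha_{i+1}}(1)
    return g

def diagram_of(g):
    n = len(g)
    return tuple(1 if g[i][i + 1] != 0 else 0 for i in range(n - 1))

D = (1, 0, 1, 1)
assert diagram_of(u_of_D(D)) == D    # D(u(D)) = D in this encoding
```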
**Lemma 8**.: _Let \(\Delta^{\prime}\) be a subset of \(\Delta\). If \(u\in U\) is such that \(\operatorname{supp}(u)=\Delta^{\prime}\) then there is a \(T\)-conjugate of \(u\) in \(\bigl{(}\prod_{\alpha\in\Delta^{\prime}}u_{\alpha}(1)\bigr{)}\,[U,U]\)._
Proof.: Let \(u=\prod_{\alpha\in\Phi^{+}}u_{\alpha}(x_{\alpha})\) with \(x_{\alpha}\in F\). Consider the parabolic subgroup \(P_{\Delta^{\prime}}\) with Levi factor \(L_{\Delta^{\prime}}\) and unipotent radical \(Q\). It follows that \(u=\tilde{u}\hat{u}\) with \(\tilde{u}\in[L_{\Delta^{\prime}},L_{\Delta^{\prime}}]\) and \(\hat{u}\in Q\). For a simple root \(\alpha\) we have \(x_{\alpha}\neq 0\) if and only if \(\alpha\in\Delta^{\prime}\). Thus \(\hat{u}\in Q\cap[U,U]\). Since \(T\) contains a maximal torus \(T^{\prime}\) of \([L_{\Delta^{\prime}},L_{\Delta^{\prime}}]\) which normalizes \(Q\cap[U,U]\), it suffices to show that \(\tilde{u}\) is \(T^{\prime}\)-conjugate to an element in \(\bigl{(}\prod_{\alpha\in\Delta^{\prime}}u_{\alpha}(1)\bigr{)}\,[U^{\prime},U ^{\prime}]\) where \(U^{\prime}=U\cap L_{\Delta^{\prime}}\).
The subgroup \([L_{\Delta^{\prime}},L_{\Delta^{\prime}}]\) is a semisimple algebraic group, i.e. a central product of simple algebraic groups. Its maximal torus \(T^{\prime}\) contains a maximal torus of each simple factor, hence we may assume that \([L_{\Delta^{\prime}},L_{\Delta^{\prime}}]\) is a simple algebraic group. In other words we may assume that \(\Delta=\Delta^{\prime}\) and \(T=T^{\prime}\).
Let \(\bar{u}\) be the projection of \(u\) on \(\prod_{\alpha\in\Delta}U_{\alpha}\) and denote by \(v\) the element \(\prod_{\alpha\in\Delta}u_{\alpha}(1)\). By [27, SS3.7 Theorem 1], the elements \(\bar{u}\) and \(v\) are regular. Since \(\dim C_{T}(\bar{u})=0\) the orbit \(\bar{u}^{T}\subseteq\prod_{\alpha\in\Delta}U_{\alpha}\) has dimension \(\dim T=\dim\prod_{\alpha\in\Delta}U_{\alpha}\), hence it is open in \(\prod_{\alpha\in\Delta}U_{\alpha}\). Similarly for \(v\). Since \(\bigl{(}\prod_{\alpha\in\Delta}U_{\alpha}\bigr{)}\times\bigl{(}\prod_{\alpha \in\Phi-\Delta}U_{\alpha}\bigr{)}\) is a factorization of \(U\) into algebraic sets it follows that \(\bar{u}^{T}[U,U]=(\bar{u}[U,U])^{T}=(u[U,U])^{T}\) and \(v^{T}[U,U]=(v[U,U])^{T}\) are open in \(U\). Hence the two sets have a non-trivial intersection and the claim follows.
**Proposition 9**.: _Let \(g\in B\) with \(g_{s}\in T\) and let \(\Delta_{s}\subseteq\operatorname{supp}(g_{s})\). There is an element \(\tilde{u}\in[U,U]\) such that \(g\) is conjugate by an element of \(B\) to \(g_{s}u(D)\tilde{u}\) where \(D\) is the diagram with \(\operatorname{supp}(D)=\Delta_{s}\cup\operatorname{supp}(g_{u})\)._
Proof.: Let \(D_{s}\) be the marked diagram with marked nodes corresponding to the simple roots in \(\Delta_{s}\) and note that \(\operatorname{supp}(D)=\Delta_{s}\cup\operatorname{supp}(g_{u})\) is a disjoint union since \([g_{s},g_{u}]=1\).
Conjugating by an element of \(T\) we may assume that the projection of \(g_{u}\) on \(U_{\alpha}\) is \(u_{\alpha}(1)\) for all simple roots \(\alpha\in\operatorname{supp}(\mathcal{D}(g_{u}))\) (by Lemma 8), i.e. we may assume that \(g_{u}=u(\mathcal{D}(g_{u}))\) modulo \([U,U]\). Thus \(u(D)=g_{u}u(D_{s})\) modulo \([U,U]\). Denote \(u(D_{s})\) by \(\tilde{u}\) and notice that \(g_{s}u(D)=g_{s}g_{u}\tilde{u}\hat{u}=g\tilde{u}\hat{u}\) for some element \(\hat{u}\in[U,U]\).
By [12, SS2.4], the element \(g\tilde{u}\) is conjugate by an element \(v\in U\) to \(g_{s}u^{\prime}\) with \(u^{\prime}\in C_{U}(g_{s})\). The closed connected group \(U\) factors as a variety: \(U\cong C_{U}(g_{s})\times Q\) where \(Q\) is a closed connected subvariety of \(U\) and where the isomorphism is given by the product in \(G\). Hence \(v=v_{0}v_{1}\) with \(v_{0}\in C_{U}(g_{s})\) and \(v_{1}\in Q\). It follows that there exists \(q\in Q\) such that
\[g_{s}u^{\prime}=(g\tilde{u})^{v}=(g_{s}^{v_{0}v_{1}})(g_{u}\tilde{u})^{v}=(g_{s }^{v_{1}})(g_{u}\tilde{u})^{v}\in g_{s}qg_{u}\tilde{u}[U,U].\]
Therefore
\[u^{\prime}\in qg_{u}\tilde{u}[U,U]=g_{u}q\tilde{u}[U,U],\quad\text{and thus} \quad g_{u}^{-1}u^{\prime}\in q\tilde{u}[U,U].\]
Since \(g_{u}^{-1}u^{\prime}\in C_{U}(g_{s})\) the projection of \(g_{u}^{-1}u^{\prime}\) on \(U_{\alpha}\) is \(1\) for all \(\alpha\in\operatorname{supp}(g_{s})\). Hence the projection of \(q\tilde{u}[U,U]\) on \(U_{\alpha}\) is \(1\) for all \(\alpha\in\operatorname{supp}(g_{s})\). In other words \(q\tilde{u}\in C_{U}(g_{s})\) modulo \([U,U]\). But \(q,\tilde{u}\in Q\) hence \(q\tilde{u}\in[U,U]\). It follows that \(g_{u}^{-1}u^{\prime}\in[U,U]\) and so \(w:=\hat{u}^{-1}((u^{\prime})^{-1}g_{u})^{v^{-1}}\in[U,U]\). Hence
\[(g_{s}u(D)w)^{v}=(g\tilde{u}((u^{\prime})^{-1}g_{u})^{v^{-1}})^{v}=(g\tilde{u})^{v}(u^{\prime})^{-1}g_{u}=g_{s}u^{\prime}(u^{\prime})^{-1}g_{u}=g.\qed\]
**Corollary 10**.: _Any two conjugacy classes in the same Jordan class have the same set of marked diagrams._
Proof.: For a conjugacy class \(C\) fix a diagram \(D\in\mathcal{D}(C)\) and let \(g\in B\cap C\) be such that \(D=\mathcal{D}(g)\). Let \(g=su\) be the factorization of \(g\) with \(s\in T\) and \(u\in U\). By [12, SS2.4] we may conjugate \(g\) by an element of \(U\) to obtain \(\tilde{g}=s\tilde{u}\) with \(\tilde{u}\in C_{G}(s)\). Since we conjugated by an element in \(U\), \(\operatorname{supp}(D)=\Delta_{s}\cup\operatorname{supp}(\tilde{u})\) for some \(\Delta_{s}\subseteq\operatorname{supp}(s)\).
Any other conjugacy class \(C^{\prime}\) in the same Jordan class is represented by \(h=h_{s}h_{u}\) with \(h_{u}=\tilde{u}\) and \(h_{s}\in sZ(C_{G}(s)^{\circ})^{\circ}\) with \(C_{G}(h_{s})^{\circ}=C_{G}(s)^{\circ}\) (see SS2.3). In particular \(\operatorname{supp}(h_{s})=\operatorname{supp}(s)\) and \(\operatorname{supp}(h_{u})=\operatorname{supp}(\tilde{u})\). By Proposition 9, \(h\) is conjugate to an element \(\tilde{h}\) with \(\operatorname{supp}(\mathcal{D}(\tilde{h}))=\Delta_{s}\cup\operatorname{supp }(\tilde{u})\), i.e. \(C^{\prime}\) contains the element \(\tilde{h}\) with \(\mathcal{D}(\tilde{h})=D\).
**Lemma 11**.: _Let \(\Delta^{\prime}\) be a subset of \(\Delta\). Consider the standard parabolic subgroup \(P_{\Delta^{\prime}}=L_{\Delta^{\prime}}Q\) with Levi factor \(L_{\Delta^{\prime}}\) and unipotent radical \(Q\). Any open subset \(V\) of \(Q\) contains an element \(u\) with \(\operatorname{supp}(u)=\Delta-\Delta^{\prime}\)._
Proof.: Let \(\Phi_{Q}\) denote the set of roots \(\Phi^{+}-\Phi_{\Delta^{\prime}}\). We have \(Q=\prod_{\alpha\in\Phi_{Q}}U_{\alpha}\) as subgroups of \(G\). By uniqueness of expression, this is a direct product of algebraic sets. The subset \(\tilde{V}=\prod_{\alpha\in\Phi_{Q}}(U_{\alpha}-\{1\})\) is open in \(Q\), hence \(\tilde{V}\cap V\neq\emptyset\).
**Lemma 12**.: _For a semisimple element \(s\in T\) and any element \(u\in\prod_{\alpha\in\operatorname{supp}(s)}U_{\alpha}\) we have \(\operatorname{supp}(u)=\operatorname{supp}([s,u])\)._
Proof.: An element \(u\) in \(\prod_{\alpha\in\operatorname{supp}(s)}U_{\alpha}\) is determined by the scalars \(x_{\alpha}\in F\) for which \(u=\prod_{\alpha\in\operatorname{supp}(s)}u_{\alpha}(x_{\alpha})\). We have
\[[s,u]=(u^{-1})^{s}u=\left(\prod_{\alpha\in\operatorname{supp}(s)}u_{\alpha}( -\alpha(s)x_{\alpha}+x_{\alpha})\right)\tilde{u}\]
for some \(\tilde{u}\in[U,U]\). For \(\alpha\in\operatorname{supp}(s)\) we have \(\alpha(s)\neq 1\). Thus \(-\alpha(s)x_{\alpha}+x_{\alpha}\neq 0\) whenever \(x_{\alpha}\neq 0\) and the claim follows.
**Proposition 13**.: _Any conjugacy class in the Jordan class \(\phi(I,I^{\prime},J,K)\) has a marked diagram \(D\) with \(\operatorname{supp}(D)=(\Delta-I)\cup((J-K)\cap\Delta)\)._
Proof.: Let \(g\in G\) be an element which realizes the data \((I,I^{\prime},J,K)\) of the Jordan class which it represents (see SS2.3). The \([L_{J},L_{J}]\)-conjugacy class of \(u\) intersects the unipotent radical of \(P_{K}^{[L_{J},L_{J}]}\) in an open set. Since \(L_{J}\subseteq L_{I^{\prime}}=C_{G}(g_{s})^{\circ}\), by Lemma 11, we may assume that the projections of \(g_{u}\) on the root groups \(U_{\alpha}\) with \(\alpha\in J-K\) are non-trivial. We therefore have \(\operatorname{supp}(g)=J-K\). Since \(C_{G}(g_{s})^{\circ}\subseteq L_{I}\), the simple roots \(\Delta-I\) are in \(\operatorname{supp}(g_{s})\). Let \(u=\prod_{\alpha\in\Delta-I}u_{\alpha}(1)\). By Lemma 12, \(\operatorname{supp}([g_{s}^{-1},u])=\operatorname{supp}(u)=\Delta-I\). Moreover \(g^{u^{-1}}=(g_{s}g_{u})^{u^{-1}}=g_{s}[g_{s}^{-1},u](g_{u})^{u^{-1}}\in g_{s}[ g_{s}^{-1},u]g_{u}[U,U]\). Hence, since \(\operatorname{supp}(g_{s})\cap\operatorname{supp}(g_{u})=\emptyset\) and \(\operatorname{supp}(u)\subseteq\operatorname{supp}(g_{s})\), it follows that \(\mathcal{D}(g^{u^{-1}})\) has those nodes marked which belong to \((\Delta-I)\cup((J-K)\cap\Delta)\).
### \(U\)-regular elements
An element of \(G\) is called \(U\)_-regular_ if it is conjugate to an element of the form \(su\) with \(s\in T\) and \(u\in U\) a regular unipotent element. A normal subset is called \(U\)-regular if it contains a \(U\)-regular element.
**Proposition 14**.: _A normal subset \(N\subseteq G\) is \(U\)-regular if and only if \(\mathcal{D}(N)\) contains \(\mathcal{D}^{\circ}\)._
Proof.: By [27, SS3.7 Theorem 1] an element \(u\in U\) is regular if and only if it has non-trivial projections on \(U_{\alpha}\) for all \(\alpha\in\Delta\). This is equivalent to \(su\) having non-trivial projections on \(U_{\alpha}\) for all \(\alpha\in\Delta\) whenever \(s\) is an element of \(T\). By definition, this is equivalent to \(\mathcal{D}(su)=\mathcal{D}^{\circ}\) for any \(s\in T\).
**Proposition 15**.: _Regular conjugacy classes are \(U\)-regular._
Proof.: Let \(g\) be a representative of a regular conjugacy class such that \(g_{s}\in T\) and \(g_{u}\in U\). We may assume that \(g\) realizes the data \((I,I^{\prime},J,K)\) of the Jordan class which it represents. The element \(g\in G\) is regular if and only if \(g_{u}\) is regular in \(C_{G}(g_{s})^{\circ}\)[27, SS3.5 Proposition 5]. We have \(\operatorname{supp}(g_{s})=\Delta-I^{\prime}\). Since \(g_{u}\) is regular in \(C_{G}(g_{s})\) we also have that \(J=I^{\prime}\) and \(K=\emptyset\)[27, SS3.7 Theorem 1]. Thus \(\operatorname{supp}(g_{u})=(J-K)\cap\Delta=I^{\prime}\cap\Delta\). It follows from Proposition 9 that there is an element \(\tilde{u}\in[U,U]\) such that \(g\) is conjugate to \(g_{s}u(D)\tilde{u}\) where \(D\) is the diagram with \(\operatorname{supp}(D)=\operatorname{supp}(g_{s})\cup\operatorname{supp}(g_ {u})=\Delta\).
**Proposition 16**.: \(U\)_-regular conjugacy classes of proper Levi type are regular._
Proof.: Let \(h=su\) be a representative of a \(U\)-regular conjugacy class with \(s\in T\) and \(u\) a regular element in \(U\). The element \(h\) is \(U\)-conjugate to \(g\) with \(g_{s}=s\in T\) and \(g_{u}\in C_{U}(g_{s})\)[12, SS2.4]. Conjugating in \(C_{G}(g_{s})\), we may assume that \(g\) realizes the data \((I,I^{\prime},J,K)\) of the Jordan class which it represents. Since \(h\) is \(U\)-conjugate to \(g\), we have \(\operatorname{supp}(g_{u})=\operatorname{supp}(u)\cap I\). Since the class is of proper Levi type, we have \(I=I^{\prime}\), hence \(\operatorname{supp}(g_{u})=\operatorname{supp}(u)\cap I^{\prime}\). Since \(g\) is \(U\)-regular, \(\operatorname{supp}(g_{u})=\operatorname{supp}(u)\cap I=I\), hence \(g_{u}\) is regular in the Levi subgroup \(C_{G}(g_{s})^{\circ}\)[27, SS3.7 Theorem 1]. The proposition follows from [27, SS3.5 Proposition 5].
**Proposition 17**.: _If \(C_{1},C_{2},\ldots,C_{6}\) are \(U\)-regular conjugacy classes then their product contains all regular semisimple conjugacy classes of \(G\)._
Proof.: We may partition the set of simple roots \(\Delta\) into two sets, \(\Delta_{1}\) and \(\Delta_{2}\), such that \(\alpha\) and \(\beta\) are orthogonal for any two distinct roots \(\alpha,\beta\in\Delta_{1}\) or \(\alpha,\beta\in\Delta_{2}\). Thus, for any two such roots \([G_{\alpha},G_{\beta}]=1\). Let \(L_{1}\) be the standard Levi subgroup \(T\prod_{\alpha\in\Delta_{1}}G_{\alpha}\) and let \(Q_{1}\) be the unipotent radical of the parabolic subgroup \(L_{1}U\).
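Such a partition exists for every \(G\): two distinct simple roots are orthogonal exactly when they are not joined in the Dynkin diagram, and the Dynkin diagram is a tree, hence bipartite. In type \(A_{r}\), for example, one may take \(\Delta_{1}=\{\alpha_{1},\alpha_{3},\ldots\}\) and \(\Delta_{2}=\{\alpha_{2},\alpha_{4},\ldots\}\).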
For \(i\in\{1,\ldots,6\}\), since \(C_{i}\) is \(U\)-regular, it is represented by an element \(s_{i}u_{i}\) with \(s_{i}\in T\) and \(u_{i}\in U\) regular. Moreover, \(u_{i}=q_{i}v_{i}\) with \(v_{i}=\prod_{\alpha\in\Delta_{1}}v_{i,\alpha}\) for some \(q_{i}\in Q_{1}\) and \(v_{i,\alpha}\in U_{\alpha}=G_{\alpha}\cap U\). Conjugating by \(L_{1}\), and since the simple factors of \(L_{1}\) commute, we may choose the elements \(v_{i,\alpha}\) to be any regular unipotent element in \(G_{\alpha}\). The product \(C_{1}C_{2}C_{3}\) contains
\[(s_{1}q_{1}v_{1})(s_{2}q_{2}v_{2})(s_{3}q_{3}v_{3})=s_{1}s_{2}s_{3}q_{1}^{s_{2}s_{3}}q_{2}^{(v_{1}^{s_{2}})^{-1}s_{3}}q_{3}^{(v_{1}^{s_{2}s_{3}})^{-1}(v_{2}^{s_{3}})^{-1}}v_{1}^{s_{2}s_{3}}v_{2}^{s_{3}}v_{3}.\]
Since the set of non-trivial unipotent elements in \(G_{\alpha}\) is stable by \(T\)-conjugation, the product \(v_{1}^{s_{2}s_{3}}v_{2}^{s_{3}}v_{3}\) ranges over all products of triples of unipotent elements in \(\prod_{\alpha\in\Delta_{1}}G_{\alpha}\). By Corollary 5, and since \([G_{\alpha},G_{\beta}]=1\) for distinct roots \(\alpha,\beta\in\Delta_{1}\), for any element \(b\in\prod_{\alpha\in\Delta_{1}}G_{\alpha}\) there are \(v_{1}\), \(v_{2}\) and \(v_{3}\) such that \(b=v_{1}^{s_{2}s_{3}}v_{2}^{s_{3}}v_{3}\). Since \(Q_{1}\) is stable under conjugation by \(L_{1}\), the element \(q_{1}^{s_{2}s_{3}}q_{2}^{(v_{1}^{s_{2}})^{-1}s_{3}}q_{3}^{(v_{1}^{s_{2}s_{3}})^{-1}(v_{2}^{s_{3}})^{-1}}\) lies in \(Q_{1}\). In particular, for each \(t\in\prod_{\alpha\in\Delta_{1}}(T\cap G_{\alpha})\) there is \(q_{t}\in Q_{1}\) such that
\[s_{1}s_{2}s_{3}tq_{t}\in C_{1}C_{2}C_{3}.\]
Similarly, for each \(t\in\prod_{\alpha\in\Delta_{2}}(T\cap G_{\alpha})\) there is \(q_{t}\in U\) such that
\[s_{4}s_{5}s_{6}tq_{t}\in C_{4}C_{5}C_{6}.\]
Since \(T=\prod_{\alpha\in\Delta}(T\cap G_{\alpha})\), it follows that for any \(t\in T\) the normal subset \(C_{1}\cdots C_{6}\) contains \(tq_{t}\) for some \(q_{t}\in U\). In particular, if \(t\in T=s_{1}s_{2}s_{3}s_{4}s_{5}s_{6}T\) is regular, then \(tq_{t}\) is conjugate to \(t\), hence \(C_{1}\cdots C_{6}\) contains all regular semisimple elements of \(T\).
## 5. Marked diagrams and products of conjugacy classes
**Lemma 18**.: _Let \(\Delta^{\prime}\) be a subset of \(\Delta\). If \(u_{1},u_{2}\in U\) are such that \(\operatorname{supp}(u_{1})=\operatorname{supp}(u_{2})=\Delta^{\prime}\) then_
\[u_{1}^{T}u_{2}^{T}[U,U]=\left(\prod_{\alpha\in\Delta^{\prime}}U_{\alpha} \right)[U,U].\]
Proof.: The sets \(u_{1}^{T}u_{2}^{T}\) and \(\prod_{\alpha\in\Delta^{\prime}}U_{\alpha}\) lie in the subsystem subgroup \([L_{\Delta^{\prime}},L_{\Delta^{\prime}}]\). As in the proof of Lemma 8, it suffices to prove the claim for \(\Delta^{\prime}=\Delta\). Again, as in the proof of Lemma 8, \(u_{i}^{T}[U,U]\) is open in \(U\). Since the product of two open sets in a connected algebraic group equals the whole group,
\[U=(u_{1}^{T}[U,U])(u_{2}^{T}[U,U])=u_{1}^{T}u_{2}^{T}[U,U].\qed\]
**Proposition 19**.: _Let \(C_{1}\) and \(C_{2}\) be two conjugacy classes of \(G\) represented by \(g_{1}\) and \(g_{2}\) respectively. If \(g_{1},g_{2}\in B\), then_
\[\mathcal{D}(g_{1})\boxplus\mathcal{D}(g_{2})\in\mathcal{D}(C_{1}C_{2}).\]
Proof.: For \(i=1,2\) let \(g_{i}=s_{i}u_{i}\) with \(s_{i}\in T\) and \(u_{i}\in U\) be the factorization of the element \(g_{i}\) in \(B=TU\). Note that \(C_{i}\) contains \((s_{i}u_{i})^{T}=s_{i}(u_{i})^{T}\). Let \(u_{i}^{1}\) be the projection of \(u_{i}\) on \(\prod_{\alpha\in\Delta}U_{\alpha}\) and \(u_{i}^{0}\in[U,U]\) be such that \(u_{i}=u_{i}^{1}u_{i}^{0}\). Then \(C_{i}\) contains \(s_{i}(u_{i}^{1})^{T}(u_{i}^{0})^{T}\) hence, by the commutator relations, the product \(C_{1}C_{2}\) contains \(s_{1}(u_{1}^{1})^{T}(u_{1}^{0})^{T}s_{2}(u_{2}^{1})^{T}(u_{2}^{0})^{T}\) which lies in \(s_{1}s_{2}(u_{1}^{1})^{T}(u_{2}^{1})^{T}[U,U]\).
Let \(\Delta_{i}=\operatorname{supp}(u_{i}^{1})\), let \(v_{i}\) be the projection of \(u_{i}^{1}\) on \(\prod_{\alpha\in\Delta-(\Delta_{1}\cap\Delta_{2})}U_{\alpha}\) and let \(w_{i}\) be the projection of \(u_{i}^{1}\) on \(\prod_{\alpha\in\Delta_{1}\cap\Delta_{2}}U_{\alpha}\). We have
\[s_{1}s_{2}(u_{1}^{1})^{T}(u_{2}^{1})^{T}[U,U]=s_{1}s_{2}(v_{1})^{T}(w_{1})^{T }(w_{2})^{T}(v_{2})^{T}[U,U]\]
and, by Lemma 18, this further equals
\[s_{1}s_{2}(u_{1}^{1})^{T}(u_{2}^{1})^{T}[U,U]=s_{1}s_{2}(v_{1})^{T}\left(\prod _{\alpha\in\Delta_{1}\cap\Delta_{2}}U_{\alpha}\right)(v_{2})^{T}[U,U].\]
Since \(\operatorname{supp}(v_{1})\), \(\operatorname{supp}(v_{2})\) and \(\operatorname{supp}(w_{1})=\operatorname{supp}(w_{2})\) are pairwise disjoint, the above set contains an element with non-trivial projections on \(U_{\alpha}\) exactly when \(\alpha\in\Delta_{1}\cup\Delta_{2}\), i.e. it contains an element with diagram \(\mathcal{D}(g_{1})\boxplus\mathcal{D}(g_{2})\).
**Corollary 20**.: _Let \(N_{1},\ldots,N_{k}\) be normal subsets of \(G\). If \(\sum_{i=1}^{k}\mathcal{D}(N_{i})\) contains a regular diagram of \(\mathbb{N}\Delta\), then the normal subset \(N_{1}\cdots N_{k}\) is \(U\)-regular._
Proof.: Let \(D_{i}\in\mathcal{D}(N_{i})\) be diagrams such that \(D=\sum_{i=1}^{k}D_{i}\) is a regular diagram. By Lemma 6, \(D\) is regular if and only if \(\psi(D)=\boxplus_{i=1}^{k}\psi(D_{i})\) is regular. Let \(C_{i}\subseteq N_{i}\) be conjugacy classes such that \(\psi(D_{i})\in\mathcal{D}(C_{i})\). By Proposition 19, \(\psi(D_{1})\boxplus\psi(D_{2})\) is a diagram in \(\mathcal{D}(C_{1}C_{2})\). By induction \(\mathcal{D}(C_{1}\cdots C_{k})\) contains \(\psi(D)\). By Proposition 14, the normal subset \(C_{1}\cdots C_{k}\) is \(U\)-regular, and hence so is \(N_{1}\cdots N_{k}\).
Proof of Proposition A.: As before \(r=\operatorname{rk}(G)\). Let \(D_{i}\in\mathcal{D}(N_{i})\) be diagrams such that \(\sum_{i=1}^{k}D_{i}\geq 12r\mathcal{D}^{\circ}\). Let \(C_{i}\subseteq N_{i}\) be a conjugacy class such that \(D_{i}\in\mathcal{D}(C_{i})\). It is enough to show that \(\prod_{i=1}^{k}C_{i}=G\). By Lemma 7, there is a partition of \(\{1,\ldots,k\}\) into subsets \(I_{1},\ldots,I_{12}\) such that \(\sum_{i\in I_{j}}D_{i}\geq\mathcal{D}^{\circ}\) for all \(1\leq j\leq 12\). By Corollary 20, the normal subset \(\prod_{i\in I_{j}}C_{i}\) is \(U\)-regular, i.e. it contains a \(U\)-regular conjugacy class \(\tilde{C}_{j}\). By Proposition 17, the product \(\tilde{C}_{1}\cdots\tilde{C}_{12}\) contains \(S^{2}\) where \(S\) is the union of all regular semisimple conjugacy classes. Since \(S\) is open in \(G\)[27, SS3.5 Corollary], we have \(S^{2}=G\).
Proof of Proposition B.: Let \(D_{i}\in\mathcal{D}(N_{i})\) be diagrams such that \(\{D_{i}:1\leq i\leq k\}\) is of type \((r,6)\) with respect to \(D_{i_{1}},\ldots,D_{i_{r}}\) and such that \(\sum_{j=1}^{r}D_{i_{j}}\geq\mathcal{D}^{\circ}\). Let \(C_{i}\subseteq N_{i}\) be a conjugacy class such that \(D_{i}\in\mathcal{D}(C_{i})\). After reindexing if necessary, we may assume that \(N_{1}\cdots N_{k}\) contains
\[\overbrace{\underbrace{C_{1}C_{2}\cdots C_{6}}_{D_{1}\leq D_{2},\ldots,D_{6}}}^{\tilde{N}_{1}}\ \overbrace{\underbrace{C_{7}C_{8}\cdots C_{12}}_{D_{7}\leq D_{8},\ldots,D_{12}}}^{\tilde{N}_{7}}\cdots\overbrace{\underbrace{C_{k}C_{k+1}\cdots C_{k+6}}_{D_{k}\leq D_{k+1},\ldots,D_{k+6}}}^{\tilde{N}_{k}}\cdots\]
where \(D_{6j+1}=D_{i_{j}}\). Let \(\Delta_{j}=\operatorname{supp}(D_{i_{j}})\). As in the proof of Proposition 17, \(\tilde{N}_{i}\) contains the torus \(T_{j}=T\cap G(\Delta_{j})\). Thus, the product \(\tilde{N}_{1}\cdots\tilde{N}_{k}\) contains \(T_{1}\cdots T_{k}\) and since \(\sum_{j=1}^{r}D_{i_{j}}\geq\mathcal{D}^{\circ}\), the torus \(T_{1}\cdots T_{k}\) is a maximal torus. Hence \(\tilde{N}_{1}\cdots\tilde{N}_{k}\) contains an open set of semisimple elements.
In what follows we use the standard numbering of the simple roots in \(\Delta\); see for instance [16, p.10] or [20, p.67].
**Lemma 21**.: _Let \(G\) be of type \(A_{r}\). If \(\Delta=\{\alpha_{1},\ldots,\alpha_{r}\}\) are the simple roots ordered as above then the torus \(\prod_{i=1}^{\lceil r/2\rceil}\alpha_{2i-1}^{\vee}(k^{\times})\) is regular._
Proof.: Let \(\tilde{T}=\prod_{i=1}^{\lceil r/2\rceil}\alpha_{2i-1}^{\vee}(k^{\times})\) and let \(\tilde{\Delta}=\{\alpha_{1},\alpha_{3},\ldots\alpha_{2\lceil r/2\rceil-1}\}\). Since \(\tilde{T}\subseteq T\), by [26, II SS4.1] we have \(C_{G}(\tilde{T})^{\circ}=\langle T,U_{\alpha}:\alpha(t)=1\) for all \(t\in\tilde{T}\rangle\). For a root \(\alpha\in\Phi\) and \(\beta\in\tilde{\Delta}\) we have \(\alpha(\beta^{\vee}(t))=t^{\langle\alpha,\beta\rangle}\) for any \(t\in k^{\times}\) as in (1). Hence, \(\alpha(t)=1\) for all \(t\in\tilde{T}\) if and only if \(\langle\alpha,\beta\rangle=0\) for all \(\beta\in\tilde{\Delta}\), i.e. if and only if \(\alpha\) is orthogonal to \(\tilde{\Delta}\). One checks, for example with [13, SS2.10], that no root \(\alpha\) is orthogonal to \(\tilde{\Delta}\). Hence \(C_{G}(\tilde{T})^{\circ}=T\), i.e. \(\tilde{T}\) is a regular torus.
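For example, for \(G=\mathrm{SL}_{4}\) (type \(A_{3}\)) with the usual diagonal maximal torus, this is the torus \(\tilde{T}=\{\operatorname{diag}(t,t^{-1},s,s^{-1}):t,s\in k^{\times}\}\); a generic element of \(\tilde{T}\) has four distinct eigenvalues, so \(C_{G}(\tilde{T})^{\circ}=T\).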
**Lemma 22**.: _Let \(\tilde{T}\subseteq T\) be a regular torus of \(G\). For any \(\tilde{z}\in T\), the set \(\tilde{z}\tilde{T}\) contains an open subset of regular elements._
Proof.: Fix \(\alpha\in\Phi\). Since \(\tilde{T}\) and \(\tilde{z}\tilde{T}\) lie in \(T\), the root subgroup \(U_{\alpha}\) is stable by conjugation under the elements of these closed subsets. Since \(\tilde{T}\) is a regular torus, it has an open orbit on \(U_{\alpha}\). Thus, restricting the conjugation map to \(\tilde{z}\tilde{T}\) one also obtains an open orbit on \(U_{\alpha}\). Hence \(\dim(C_{G}(U_{\alpha})\cap\tilde{z}\tilde{T})\leq\dim(z\tilde{T})\), i.e. \(z\tilde{T}\setminus C_{G}(U_{\alpha})\) is an open subset of \(\tilde{z}\tilde{T}\). Consider \(V=\tilde{z}\tilde{T}\setminus\cup_{\alpha\in\Phi}C_{G}(U_{\alpha})=\cap_{ \alpha\in\Phi}\tilde{z}\tilde{T}\setminus C_{G}(U_{\alpha}).\) Since \(|\Phi|\) is finite, \(V\) is an open subset of \(\tilde{z}\tilde{T}\). Since the ground field is algebraically closed, \(V\) is non-empty. For any \(t\in V\) and any root subgroup \(U_{\alpha}\) we have \([t,U_{\alpha}]\neq 1\), thus, by [26, II SS4.1], all elements of \(V\) are regular.
**Lemma 23**.: _Let \(G\) be of type \(A_{r}\). Let \(\tilde{\Delta}=\{\alpha_{2i-1}:1\leq i\leq\lceil r/2\rceil\}\) and let \(\tilde{T}=\prod_{i=1}^{\lceil r/2\rceil}\alpha_{2i-1}^{\vee}(k^{\times})\). Let \(N_{1},\ldots,N_{k}\) be normal subsets of \(G\) and let \(D_{i}\in\mathcal{D}(N_{i})\)_
_for all \(1\leq i\leq k\). If \(\sum_{i=1}^{k}D_{i}\geq 2\mathcal{D}^{\circ}\), then \(\prod_{i=1}^{k}N_{i}\) contains an open subset of \(\tilde{T}\tilde{z}\) for some \(\tilde{z}\in T\). In particular, Theorem C holds for \(G\) of type \(A_{r}\)._
Proof.: Let \(D_{i}\in\mathcal{D}(N_{i})\) be diagrams such that \(\sum_{i=1}^{k}D_{i}\geq 2\mathcal{D}^{\circ}\). Let \(C_{i}\subseteq N_{i}\) be a conjugacy class such that \(D_{i}\in\mathcal{D}(C_{i})\). It is enough to show that \(\prod_{i=1}^{k}C_{i}\) contains a regular semisimple element. Let \(g_{i}\in C_{i}\) be such that \(\mathcal{D}(g_{i})=D_{i}\).
Notice that \(\tilde{T}\) is a maximal torus of the derived group \([L_{\tilde{\Delta}},L_{\tilde{\Delta}}]\) of the Levi subgroup \(L_{\tilde{\Delta}}\) of type \(A_{1}^{\lceil r/2\rceil}\). Write \(L=L_{\tilde{\Delta}}\) and let \(Q\) be the unipotent radical of the parabolic subgroup \(P_{\tilde{\Delta}}=LQ\). Thus, the representatives factor as \(g_{i}=l_{i}q_{i}\) for some \(l_{i}\in L\) and \(q_{i}\in Q\). Moreover \(l_{i}=(\prod_{j=1}^{\lceil r/2\rceil}l_{i,j})z_{i}\) for some \(l_{i,j}\in G_{\alpha_{2j-1}}\cap B\) and \(z_{i}\in Z(L)\). For \(x\in L\) we have
\[\prod_{i=1}^{k}g_{i}^{x}=\prod_{i=1}^{k}(\prod_{j=1}^{\lceil r/2\rceil}l_{i,j} ^{x})(z_{i}q_{i})^{x}=\left[\prod_{i=1}^{k}\prod_{j=1}^{\lceil r/2\rceil}l_{i, j}^{x}\right]\tilde{z}\tilde{q}=\left[\prod_{j=1}^{\lceil r/2\rceil}\prod_{i=1}^{k}l_ {i,j}^{x}\right]\tilde{z}\tilde{q}\]
for some \(\tilde{z}\in Z(L)\) and \(\tilde{q}\in Q\). Since \(\sum_{i=1}^{k}D_{i}\geq 2\mathcal{D}^{\circ}\), for each \(j\) there are at least \(2\) elements in \(\{l_{i,j}:1\leq i\leq k\}\) which are not central in \(G_{\alpha_{2j-1}}\). Then \(\prod_{i=1}^{k}l_{i,j}^{L}\) contains an open subset of \(G_{\alpha_{2j-1}}\subseteq[L,L]\) by [8, Theorem 2]. Thus \(\{\prod_{j=1}^{\lceil r/2\rceil}\prod_{i=1}^{k}l_{i,j}^{x}:x\in L\}\) contains an open subset of \(\tilde{T}\). Notice that, since \(z_{i}\in Z(L)\), \(\tilde{z}\) depends only on the choice of representatives \(g_{i}\) and not on \(x\in L\). By Lemma 22, there is an open subset of regular (semisimple) elements \(V\subseteq\tilde{T}\tilde{z}\). Thus, for any \(\tilde{t}\tilde{z}\in V\), \(\tilde{t}\tilde{z}\tilde{q}\) is conjugate to \(\tilde{t}\tilde{z}\). Hence, the normal subset \(\prod_{i=1}^{k}N_{i}\) contains \(V\). Since the ground field is algebraically closed, \(V\) is non-empty.
Proof of Theorem C.: Let \(D_{i}\in\mathcal{D}(N_{i})\) be diagrams such that \(\sum_{i=1}^{k}D_{i}\geq 16\mathcal{D}^{\circ}\). Let \(C_{i}\subseteq N_{i}\) be a conjugacy class such that \(D_{i}\in\mathcal{D}(C_{i})\). It is enough to show that \(\prod_{i=1}^{k}C_{i}\) contains a regular semisimple element. Let \(g_{i}\in C_{i}\) be such that \(\mathcal{D}(g_{i})=D_{i}\). Notice that \(k\geq 16\) with equality if and only if all \(D_{i}\) are regular. If \(k=16\) the claim follows from Proposition 17. Let \(k\geq 17\). If \(\operatorname{rk}(G)\leq 8\) then \(\prod_{i=1}^{k}C_{i}\) contains an open subset of \(G\) by [8, Theorem 2] and the claim follows.
Suppose \(r=\operatorname{rk}(G)>8\) and let \(\Delta_{1}=\{\alpha_{1},\ldots,\alpha_{r-4}\}\) and \(\Delta_{2}=\{\alpha_{r-2},\alpha_{r-1},\alpha_{r}\}\). Consider the parabolic subgroup \(P_{\Delta_{1}\cup\Delta_{2}}\) with Levi factor \(L\) and unipotent radical \(Q\). Then \(g_{i}=l_{i}q_{i}\) for some \(l_{i}\in L\) and \(q_{i}\in Q\). Moreover, \(L\) is a connected reductive group with a simple factor \(H_{1}\) of type \(A_{r-4}\) corresponding to \(\Delta_{1}\) and a simple factor \(H_{2}\) corresponding to \(\Delta_{2}\). Thus
\[g_{i}=g_{i,1}g_{i,2}z_{i}q_{i}\]
for some \(g_{i,1}\in H_{1}\), \(g_{i,2}\in H_{2}\) and \(z_{i}\in Z(L)\). For \(x\in L\) we have
\[\prod_{i=1}^{k}g_{i}^{x}=\prod_{i=1}^{k}g_{i,1}^{x}g_{i,2}^{x}(z_{i}q_{i})^{x}= \left[\prod_{i=1}^{k}g_{i,1}^{x}g_{i,2}^{x}\right]\tilde{z}\tilde{q}=\left[ \prod_{i=1}^{k}g_{i,1}^{x}\right]\left[\prod_{i=1}^{k}g_{i,2}^{x}\right]\tilde {z}\tilde{q}\]
for some \(\tilde{q}\in Q\) and \(\tilde{z}\in Z(L)\). Notice that, since \(z_{i}\in Z(L)\), \(\tilde{z}\) depends only on the choice of representatives \(g_{i}\) and not on \(x\in L\).
Let \(\Delta_{1}^{\prime}=\{\alpha_{2i-1}:1\leq i\leq\lceil r/2-2\rceil\}\), let \(\tilde{T}=\prod_{i=1}^{\lceil r/2-2\rceil}\alpha_{2i-1}^{\vee}(k^{\times})\) and let \(T_{\Delta_{2}}=T\cap H_{2}\). Since \(\sum_{i=1}^{k}D_{i}\geq 16\mathcal{D}^{\circ}\), the elements \(g_{i,1}\) and \(g_{i,2}\) are non-central in \(H_{1}\) and \(H_{2}\) respectively. Moreover \(k\geq 17\). Since \(H_{1}\) and \(H_{2}\) commute, by Lemma 21 and [8, Theorem 2], \(\prod_{i=1}^{k}C_{i}\) contains \(\tilde{T}T_{\Delta_{2}}\) up to multiplication on the right by elements in \(\tilde{z}Q\).
We claim that the torus \(\tilde{T}T_{\Delta_{2}}\) is regular. If this is the case then, by Lemma 22, there is a regular element \(\tilde{t}\tilde{z}\in\tilde{T}T_{\Delta_{2}}\tilde{z}\), thus, all elements in \(\tilde{t}\tilde{z}Q\) are regular semisimple and the proof is finished.
For the proof of the claim, let \(\Delta^{\perp}=\{\alpha\in\Phi:\alpha\perp\beta\text{ for all }\beta\in\Delta^{\prime}_{1}\cup\Delta_{2}\}\). We have \(C_{G}(\tilde{T}T_{\Delta_{2}})^{\circ}=\langle T,U_{\alpha}:\alpha\in\Delta^ {\perp}\rangle\) and need to show that \(\Delta^{\perp}=\emptyset\). For this we use the explicit construction of root systems as described in [13, SS2.10]. If \(G\) is of type \(A\), let \(\alpha=\varepsilon_{i}-\varepsilon_{j}\) (\(1\leq i\neq j\leq r+1\)) then either \(\beta=\varepsilon_{i-1}-\varepsilon_{i}\) or \(\beta=\varepsilon_{i}-\varepsilon_{i+1}\) is an element of \(\Delta^{\prime}_{1}\cup\Delta_{2}\), hence \(\Delta^{\perp}=\emptyset\). If \(G\) is of type \(B\) and \(\alpha\) is the short root \(\pm\varepsilon_{i}\) (\(1\leq i\leq r\)) then either \(\beta=\varepsilon_{i-1}-\varepsilon_{i}\) or \(\beta=\varepsilon_{i}-\varepsilon_{i+1}\) or \(\beta=\varepsilon_{i}\) is an element of \(\Delta^{\prime}_{1}\cup\Delta_{2}\), hence \(\alpha\not\perp\Delta^{\prime}_{1}\cup\Delta_{2}\). If \(\alpha\) is the long root \(\pm\varepsilon_{i}\pm\varepsilon_{j}\) (\(1\leq i<j\leq r\)), again \(\beta=\varepsilon_{i-1}-\varepsilon_{i}\) or \(\beta=\varepsilon_{i}-\varepsilon_{i+1}\) or \(\beta=\varepsilon_{i}\) is an element of \(\Delta^{\prime}_{1}\cup\Delta_{2}\), hence \(\alpha\not\perp\Delta^{\prime}_{1}\cup\Delta_{2}\). If \(G\) is of type \(C\) or \(D\), the argument is similar and the claim follows.
|
2309.02700 | The Rhodes semilattice of a biased graph | We reinterpret the Rhodes semilattices $R_n(\mathfrak{G})$ of a group
$\mathfrak{G}$ in terms of gain graphs and generalize them to all gain graphs,
both as sets of partition-potential pairs and as sets of subgraphs, and for the
latter, further to biased graphs. Based on this we propose four different
natural lattices in which the Rhodes semilattices and its generalizations are
order ideals. | Michael J. Gottstein, Thomas Zaslavsky | 2023-09-06T04:24:18Z | http://arxiv.org/abs/2309.02700v3 | # The Rhodes semilattice of a biased graph
###### Abstract.
We reinterpret the Rhodes semilattices \(R_{n}(\mathfrak{G})\) of a group \(\mathfrak{G}\) in terms of gain graphs and generalize them to all gain graphs, both as sets of partition-potential pairs and as sets of subgraphs, and for the latter, further to biased graphs. Based on this we propose four different natural lattices in which the Rhodes semilattices and its generalizations are order ideals.
Key words and phrases:Rhodes semilattice of a group, gain graph, biased graph, partition-potential pair, balanced closed subgraph 2010 Mathematics Subject Classification: Primary 06A12, Secondary 05B35, 05C22, 06C10
## 1. Introduction
The Rhodes semilattice \(R_{n}(\mathfrak{G})\) of a group \(\mathfrak{G}\), introduced by John Rhodes for semigroup theory (we refer to [2]), is a partial ordering of pairs consisting of a partition and a potential system on subsets of a finite set. We reinterpret the Rhodes semilattices as semilattices of subgraphs in complete link gain graphs, which are graphs with edges labeled from the group \(\mathfrak{G}\). From this point of view the elements of the Rhodes semilattice are closed, balanced subgraphs and the ordering is by gain-graph inclusion. We use this reinterpretation to generalize Rhodes semilattices to semilattices of closed, balanced subgraphs of any gain graph and even more generally any biased graph. (All these concepts are defined below.) Based on our graphic interpretation we propose three natural lattices that contain the Rhodes semilattice and its generalizations as order ideals.
The graphical interpretation of \(R_{n}(\mathfrak{G})\) was initiated by the second author in a referee's report on [2] and is mentioned in the published version; see [2, Remark 6.1]. Here we provide the full generalization suggested by that insight, with proof. Our partition-potential generalization of \(R_{n}(\mathfrak{G})\) is newly developed by the first author.
## 2. A new perspective on the Rhodes semilattice
We begin with basic definitions, many of which are from [2], [3], and [5].
A _partial partition_\(\pi=\{\pi_{1},\ldots,\pi_{j}\}\) of a set \(X\) is a partition of a subset of \(X\). The _support_\(\operatorname{supp}\pi\) is the union of all blocks of \(\pi\). The set of partial partitions of an \(n\)-element set is denoted by \(\Pi^{\natural}_{n}\), which we partially order by refinement, i.e., \(\tau\leq\pi\) if every block of \(\tau\) is contained in a block of \(\pi\).
A _gain graph_\(\Phi=(\Gamma,\phi,\mathfrak{G})\) is a graph \(\Gamma\) equipped with a _gain function_\(\phi\), defined on the edges of the graph with values in the group \(\mathfrak{G}\), such that reversing the direction of an edge inverts the gain. We write \(\phi(e;v,w)\) in order to indicate the sense in which the gain is measured; thus \(\phi(e;w,v)=\phi(e;v,w)^{-1}\). The gain of a path is its edge gain product; thus a path \(P=v_{0}e_{1}v_{1}e_{2}\cdots e_{k}v_{k}\) has gain \(\phi(P)=\phi(e_{1};v_{0},v_{1})\phi(e_{2};v_{1},v_{2})\cdots\phi(e_{k};v_{k-1},v_{k}).\) A circle is called _balanced_ if its gain (considered as a closed path) is \(1\); this property is independent of its representation as a closed path. A subgraph \(\Upsilon\) of a gain graph \(\Phi\) is balanced
if every circle in \(\Upsilon\) is balanced. It is _closed and balanced_ if it is balanced and whenever there is a balanced circle \(C\subseteq\Phi\) such that \(C\setminus e\in\Upsilon\), then \(e\in E(\Upsilon)\).
**Lemma 2.1**.: _In a balanced gain graph, the gain of a path depends only on its initial and final vertices._
Proof.: This is implicit in the proof of [3, Lemma 5.3].
A _potential function_ for a balanced gain graph \(\Phi\) is a function \(\theta:V(\Phi)\to\mathfrak{G}\) such that \(\phi(e;v,w)=\theta(v)^{-1}\theta(w)\) for each edge \(e\in\Gamma\).
**Lemma 2.2**.: _Let \(\Phi\) be a gain graph. There is a potential function \(\theta\) for \(\Phi\) if and only if \(\Phi\) is balanced. Every potential function in \([\theta]_{\pi(\Phi)}\) defines the same gains on \(\Phi\), and every potential function defining the gains of \(\Phi\) is in \([\theta]_{\pi(\Phi)}\)._
Proof.: Balance is clear by computing the gain of a circle.
Conversely, we can obtain a potential function for a balanced gain graph \(\Phi\) by choosing a root node \(r\) for each component of \(\Phi\) and defining \(\theta(v)=\phi(P_{rv})\) where \(P_{rv}\) is an \(rv\)-path in a component of \(\Phi\). This is well defined because the gain function in a balanced gain graph is path independent.
Suppose \(\phi(e)=\theta(v)^{-1}\theta(w)\) and \(\eta\) is another potential function of \(\Phi\), then (because the gain function in a balanced subgraph is path independent) for any path \(P_{vw}\) in \(\Phi\), \((\theta(v))^{-1}\theta(w)=\phi(P_{vw})=(\eta(v))^{-1}\eta(w)\). This is true for every \(v,w\) in a block of \(\pi(\Phi)\), so by definition \(\eta\in[\theta]_{\pi(\Phi)}\). Conversely, if \(\eta\in[\theta]_{\pi(\Phi)}\) then \((\theta(v))^{-1}\theta(w)=(\eta(v))^{-1}\eta(w)\) for any \(v,w\) in a block of \(\pi(\Phi)\), so \(\phi(e)=(\theta(v))^{-1}\theta(w)=(\eta(v))^{-1}\eta(w)\).
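For example, suppose \(\Phi\) is a triangle on vertices \(v_{1},v_{2},v_{3}\) whose edges \(e_{12},e_{23},e_{13}\) have gains \(\phi(e_{12};v_{1},v_{2})=a\), \(\phi(e_{23};v_{2},v_{3})=b\) and \(\phi(e_{13};v_{1},v_{3})=ab\) for some \(a,b\in\mathfrak{G}\). The unique circle has gain \(ab(ab)^{-1}=1\), so \(\Phi\) is balanced, and \(\theta(v_{1})=1\), \(\theta(v_{2})=a\), \(\theta(v_{3})=ab\) is a potential function for it; replacing \(\theta\) by \(v\mapsto g\theta(v)\) for a fixed \(g\in\mathfrak{G}\) gives another potential function defining the same gains.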
The _group expansion_ of a graph \(\Gamma\) is denoted by \(\mathfrak{G}\cdot\Gamma=(V(\Gamma),\mathfrak{G}\times E(\Gamma),\phi)\), where \(\nu_{\mathfrak{G}\cdot\Gamma}(g,e)=\nu_{\Gamma}(e)\). To define the gain of \((g,e)\) we must take account of the endpoints, \(\nu_{\mathfrak{G}\cdot\Gamma}(g,e)=\{v,w\}\). For the sake of notation, arbitrarily pick an orientation, \((e;v,w)\), and define \(\phi(g,e;v,w)=g\), \(\phi(g,e;w,v)=g^{-1}\) for each \(g\). In this notation, \((g,e)\) and \((g^{-1},e)\) are different edges (unless \(g=g^{-1}\)) whose gains are
\[\phi(g,e;v,w)=g,\ \phi(g,e;w,v)=g^{-1},\] \[\phi(g^{-1},e;v,w)=g^{-1},\ \phi(g^{-1},e;w,v)=g.\]
**Definition 2.3**.: The _graphic Rhodes semilattice of \(\mathfrak{G}\cdot K_{n}\)_, denoted by \(R^{\mathrm{b}}(\mathfrak{G}\cdot K_{n})\), is the family of closed and balanced subgraphs in \(\mathfrak{G}\cdot K_{n}\), ordered by inclusion.
Its meet operation is intersection.
**Theorem 2.4**.: _The partition-potential Rhodes semilattice \(R_{n}(\mathfrak{G})\) is isomorphic to the graphic Rhodes semilattice \(R^{b}(\mathfrak{G}\cdot K_{n})\)._
Proof.: In the natural correspondence between \(R_{n}(\mathfrak{G})\) and \(R^{\mathrm{b}}(\mathfrak{G}\cdot K_{n})\), the pair \((\pi,[\theta]_{\pi})\in R_{n}(\mathfrak{G})\) corresponds to the subgraph of \(\mathfrak{G}\cdot K_{n}\) whose components are complete subgraphs in each vertex set \(\pi_{i}\) with gains given by any
potential function in \([\theta]_{\pi}\). We correspond a closed and balanced subgraph \(B\) of \(\mathfrak{G}\cdot K_{n}\) to the pair \((\pi(B),[\theta]_{\pi(B)})\), where \(\theta\) is a potential function defining the gains of \(B\). In Theorem 3.6 we prove (in more generality) that this correspondence is an isomorphism.
Theorem 2.4 shows us how to generalize the Rhodes semilattice to gain and biased graphs, which we do in the next section.
## 3. Generalization to gain graphs
Let \(\Phi\) be a gain graph with gain group \(\mathfrak{G}\) and vertex set \(X\). Let \(\pi=\{\pi_{1},\ldots,\pi_{j}\}\) be a partial partition of \(X\) and \([\theta]_{\pi}\) a potential system for \(\pi\). We define a function \(\mathbf{B}\) from partition-potential pairs to subgraphs of \(\Phi\) by
\[\mathbf{B}(\pi,[\theta]_{\pi}):=(\operatorname{supp}\pi,\{e\in E(\Phi\colon\pi ):(\exists i)\;\nu_{\Gamma}(e)=\{v,w\}\subseteq\pi_{i},\;\phi(e)=\theta^{-1}( v)\theta(w)\}).\]
This subgraph is well defined, by Lemma 2.2.
We say \((\pi,[\theta]_{\pi})\) is a \(\Phi\)_-connected_ partition-potential pair if the subgraph of \(\mathbf{B}(\pi,[\theta]_{\pi})\) induced by \(\pi_{i}\) is connected for each \(\pi_{i}\in\pi\).
**Definition 3.1**.: Let \(\Phi\) be a gain graph with vertex set \(X\) and group \(\mathfrak{G}\). The _partition-potential Rhodes semilattice of \(\Phi\)_, denoted by \(R(\Phi)\), is the set of all \(\Phi\)-connected partition-potential pairs of \(\Phi\). The meet operation is the same as it is for \(R_{n}(\mathfrak{G})\).
We can see that \(R_{n}(\mathfrak{G})=R(\mathfrak{G}\cdot K_{n})\), so the partition-potential Rhodes semilattice of a gain graph is a generalization of the original Rhodes semilattice.
**Definition 3.2**.: Let \(\Phi\) be a gain graph. The _graphic Rhodes semilattice of \(\Phi\)_, denoted by \(R^{\mathrm{b}}(\Phi)\), is the family of closed and balanced subgraphs, ordered by inclusion. The meet operation is gain graph intersection.
**Lemma 3.3**.: _Let \(\Phi\) be a gain graph, \(\pi\) a partial partition of \(X\), and \((\pi,[\theta]_{\pi})\in R(\Phi)\). Then \(\mathbf{B}(\pi,[\theta]_{\pi})\) is a closed and balanced subgraph of \(\Phi\)._
Proof.: \(\mathbf{B}(\pi,[\theta]_{\pi})\) is balanced because it has gains defined by a potential function. It is closed because if \(v,w\) are in a component of \(\mathbf{B}(\pi,[\theta]_{\pi})\), then the edge \(e\) with \(\nu_{\Gamma}(e)=\{v,w\}\) and gain \(\phi(e;v,w)=\theta^{-1}(v)\theta(w)\), if it exists in \(\Phi\), is in \(\mathbf{B}(\pi,[\theta]_{\pi})\) by the definition of \(\mathbf{B}\).
**Lemma 3.4**.: _Let \(\Phi\) be a gain graph. \(\mathbf{B}\) is a surjection onto the closed and balanced subgraphs of \(\Phi\)._
Proof.: By Lemma 3.3 we know the image of \(\mathbf{B}\) is contained in the set of closed and balanced subgraphs.
Let \(B\) be a closed and balanced subgraph. By Lemma 2.2 there is a potential function \(\theta\) defining its gains. If \(e\in B\) then its vertices are in the same block of \(\pi(B)\); it follows that \(e\) is in \(\mathbf{B}(\pi(B),[\theta]_{\pi(B)})\), which implies that \(B\subseteq\mathbf{B}(\pi(B),[\theta]_{\pi(B)})\).
Now let \(e\) be an edge of \(\mathbf{B}(\pi(B),[\theta]_{\pi(B)})\). The vertices of \(e\) are in one block of \(\pi(B)\) and its gain \(\phi(e;v,w)=\theta(v)^{-1}\theta(w)\). Since \(B\) is closed and \(\theta\) is a potential function for \(B\), \(e\) is in \(B\); therefore \(B=\mathbf{B}(\pi(B),[\theta]_{\pi(B)})\).
**Lemma 3.5**.: _Let \(\Phi\) be a gain graph. If we restrict the domain of \(\mathbf{B}\) to \(R(\Phi)\), then \(\mathbf{B}\) is injective._
Proof.: Suppose \((\tau,[\eta]_{\tau})\in R(\Phi)\) and \(\mathbf{B}(\tau,[\eta])=B\). We showed in the previous proof that \(B=\mathbf{B}(\pi(B),[\theta]_{\pi(B)})\). We now prove that, if \((\tau,[\eta]_{\tau})\neq(\pi(B),[\theta]_{\pi(B)})\), then \((\tau,[\eta]_{\tau})\notin R(\Phi)\).
First we observe that every block of \(\pi(B)\) must be contained in a block of \(\tau\). Thus, \(\pi(B)\leq\tau\). If two blocks \(\pi_{i},\pi_{j}\in\pi(B)\) are contained in the same block \(\tau_{k}\), or if some \(\pi_{i}\subset\tau_{k}\), then \(\tau_{k}\) does not induce a connected subgraph of \(B\); such a partition-potential pair cannot be in \(R(\Phi)\). Thus, the blocks of \(\pi(B)\) are in distinct blocks of \(\tau\) and are equal to those blocks; that is, \(\tau=\pi(B)\).
Now Lemma 2.2 implies that \([\theta]_{\tau}=[\theta]_{\pi(B)}\), completing the proof.
**Theorem 3.6**.: _Let \(\Phi\) be a gain graph. Then \(R(\Phi)\) is isomorphic to \(R^{b}(\Phi)\)._
Proof.: We have shown \(\mathbf{B}\) is a bijection. Now we prove it preserves order. By the definition \((\tau,[\eta]_{\tau})\leq(\pi,[\theta]_{\pi})\) if and only if each block \(\tau_{i}\in\tau\) is contained in some block of \(\pi\) and \(\mathfrak{G}\cdot\theta|_{\tau_{i}}=\mathfrak{G}\cdot\eta|_{\tau_{i}}\). Equivalently, each component of \(\mathbf{B}(\tau,[\eta]_{\tau})\) is contained in a component of \(\mathbf{B}(\pi,[\theta]_{\pi})\); that is, \(\mathbf{B}(\tau,[\eta]_{\tau})\subseteq\mathbf{B}(\pi,[\theta]_{\pi})\).
**Example 3.7** (Group expansions).: If \(\Phi\) is a group expansion we can simplify the description of the graphic Rhodes semilattice. Suppose \(\Gamma\) is a simple graph and \(\mathfrak{G}\) is a group. A subgraph \(\Psi\subseteq\Gamma\) is _closed in \(\Gamma\)_ if, whenever \(e\) is an edge of \(\Gamma\) such that \(\Psi\cup\{e\}\) contains a circle that contains \(e\), then \(e\) is in \(\Psi\).
For a subgraph \(B\) of \(\mathfrak{G}\cdot\Gamma\), by \(p(B)\) we mean the projection of \(B\) onto the underlying graph \(\Gamma\). If \(B\) is a closed and balanced subgraph, then \(p(B)\) is a closed subgraph in \(\Gamma\). Conversely, if \(C\) is closed in \(\Gamma\) and \([\theta]_{\pi(C)}\) is a potential system on \(\pi(C)\), then \(\mathbf{B}(\pi(C),[\theta]_{\pi(C)})\) is a closed and balanced subgraph of \(\mathfrak{G}\cdot\Gamma\).
This shows that the properties of closure and balance for elements of \(R^{\mathrm{b}}(\mathfrak{G}\cdot\Gamma)\) can be split into closure of the underlying base graph in \(\Gamma\) and an arbitrary choice of potential for that base graph. If in particular \(\Gamma=K_{n}\), the closed subgraphs correspond bijectively to the partial partitions of the vertex set of \(K_{n}\).
## 4. Generalization to biased graphs
A _biased graph_ is a graph together with a class of circles (edge sets or graphs of simple closed paths), called _balanced circles_, such that no theta subgraph contains exactly two balanced circles. We denote the graph along
with the set of balanced circles by \(\Omega=(\Gamma,\mathscr{B})\). We define closed and balanced subgraphs of a biased graph exactly the same as we did for gain graphs. A subgraph \(\Upsilon\) of a biased graph \(\Omega\) is balanced if every circle in \(\Upsilon\) is balanced and is _closed and balanced_ if in addition whenever there is a balanced circle \(C\in\Omega\) such that \(C\setminus e\in\Upsilon\), \(C\) is in \(\Upsilon\). A gain graph \(\Phi\) with underlying graph \(\Gamma\) gives rise to the biased graph \(\langle\Phi\rangle=(\Gamma,\mathscr{B}(\Phi))\)[3, Section 5].
The definition of the graphic Rhodes semilattice of a gain graph depends on the subgraphs and the balanced circles in the subgraph. Since the balanced circles of a gain graph define a biased graph, we can readily generalize the definition of the graphic Rhodes semilattice to biased graphs.
**Definition 4.1**.: Let \(\Omega\) be a biased graph. The _Rhodes semilattice of \(\Omega\)_, denoted by \(R^{\mathrm{b}}(\Omega)\), is the family of closed and balanced subgraphs in \(\Omega\) ordered by inclusion.
A difference between bias and gains is that we cannot state a partition-potential description of balanced subgraphs of a biased graph. This is not a trivial difference, since not all biased graphs can be given gains; see [3, Example 5.8].
## 5. The four lattices
In this section we have a biased graph \(\Omega\). We treat a gain graph \(\Phi\) as the biased graph \(\langle\Phi\rangle\). We propose to embed the Rhodes semilattice as an order ideal of a lattice. The simplest such lattice is the first.
**Definition 5.1**.: The _trivial Rhodes lattice_ of \(\Omega\), denoted by \(\widehat{R}(\Omega)\), is \(R(\Omega)\) with an added top element \(\hat{1}\). \(\widehat{R}(\mathfrak{G}\cdot K_{n})\) is the Rhodes lattice \(\widehat{R}_{n}(\mathfrak{G})\) defined in [2]. (This lattice is trivial only from the viewpoint of partially ordered sets; we do not mean it is useless.)
With the benefit of our subgraph interpretation we propose more substantial kinds of Rhodes lattices.
The frame matroid of \(\Omega\) is a matroid \(\mathbf{F}(\Omega)\) on the edge set \(E(\Omega)\)[4, Section 2]. We regard each edge set \(S\) as the spanning subgraph \((V(\Omega),S)\). From this viewpoint the flats of \(\Omega\) become frame-closed spanning subgraphs of \(\Omega\).
**Definition 5.2**.: The _frame Rhodes lattice_ of \(\Omega\), denoted by \(R^{\mathbf{F}}(\Omega)\), is the family of all frame-closed subgraphs of \(\Omega\); that is, the set of all frame-closed spanning subgraphs of all induced subgraphs of \(\Omega\).
The lift matroid of \(\Omega\) is a matroid \(\mathbf{L}(\Omega)\) on the set \(E(\Omega)\)[4, Section 4]. We regard each edge set \(S\) as a spanning subgraph of \(\Omega\), \((V(\Omega),S)\). From this viewpoint the flats of \(\Omega\) become spanning subgraphs of \(\Omega\) whose edge sets are closed in \(\mathbf{L}(\Omega)\). There is a natural one-point extension of the lift matroid, called the _complete_ or _extended lift matroid_, whose balanced flats are the same as those of \(\mathbf{L}(\Omega)\); the following definition extends to it but we omit the formal definition.
**Definition 5.3**.: The _lift Rhodes lattice_, denoted by \(R^{\mathbf{L}}(\Omega)\), is the family of all lift-closed subgraphs of \(\Omega\); that is, the set of all lift-closed spanning subgraphs of all induced subgraphs of \(\Omega\).
A subgraph \(\Upsilon\) is called _balance-closed_ (which does not mean balanced and does not mean closed in a matroid) if, whenever there is a balanced circle \(C\in\Omega\) such that \(C\setminus e\in\Upsilon\), then \(C\subseteq\Upsilon\).
**Definition 5.4**.: The _balance-closed Rhodes lattice_, denoted by \(R^{\mathbf{BC}}(\Omega)\), is the family of balance-closed subgraphs of all induced subgraphs of \(\Omega\), ordered by inclusion.
The graphic Rhodes semilattice is the order ideal of balanced subgraphs in each of these proposed Rhodes lattices.
We plan to study these lattices in order to draw conclusions about the structure of the original Rhodes semilattice and the generalizations to gain and biased graphs.
|
2310.17508 | Energy levels of mesonic helium in quantum electrodynamics | On the basis of variational method we study energy levels of pionic helium
$(\pi-e-He)$ and kaonic helium $(K-e-He)$ with an electron in ground state and
a meson in excited state with principal and orbital quantum numbers $n\sim
l+1\sim 20$. Variational wave functions are taken in the Gaussian form. Matrix
elements of the basic Hamiltonian and corrections to vacuum polarization and
relativism are calculated analytically in a closed form. We calculate some
bound state energies and transition frequencies which can be studied in the
experiment. | V. I. Korobov, A. V. Eskin, A. P. Martynenko, F. A. Martynenko | 2023-10-26T16:01:33Z | http://arxiv.org/abs/2310.17508v2 | # Energy levels of mesonic helium in quantum electrodynamics
###### Abstract
On the basis of variational method we study energy levels of pionic helium (\(\pi-e-He\)) and kaonic helium (\(K-e-He\)) with an electron in ground state and a meson in excited state with principal and orbital quantum numbers \(n\sim l+1\sim 20\). Variational wave functions are taken in the Gaussian form. Matrix elements of the basic Hamiltonian and corrections to vacuum polarization and relativism are calculated analytically in a closed form. We calculate some bound state energies and transition frequencies which can be studied in the experiment.
Kaonic helium, pionic helium, variational method, quantum electrodynamics pacs: 36.10.Gv, 12.20.Ds, 14.40.Aq, 12.40.Vv
## I Introduction
One of the directions in the development of the theory of fundamental interactions is connected with a study of bound states of particles. In addition to usual stable atoms and molecules that exist in our world, there are exotic bound states (muonium, positronium, positronium ion, muonic hydrogen, and others), which have attracted the attention of both experimenters and theoreticians for decades [1; 2; 3]. Although they have a short lifetime, nevertheless, by studying various energy intervals in the energy spectrum of such systems, as well as their decay widths, year after year it was possible to obtain from these studies more accurate information about the values of fundamental parameters of the Standard Model. A number of such exotic systems has been growing in recent years. For example, in [4; 5], it was proposed to study by laser spectroscopy method pionic helium atoms, which consist of a negative pion, an electron, and a helium nucleus. From a measurement of pion transitions between states with large values of the principal and orbital quantum numbers (\((n,l)=(17.16)\rightarrow(17.15)\)) one can try to obtain a more accurate value of the pion mass than can be done by other methods. In [6; 7], a successful experiment has already been carried out for nearly circular orbits \(n\sim l+1\), which gave a transition frequency value of 183760 MHz. To find a more accurate value of the pion mass from these measurements, it is also necessary to take into account systematic effects such as collision induced shift, broadening of the transition lines and others [8; 9; 10]. The work in this direction is in an active phase. Along with the atoms of pionic helium, other atoms can be proposed and studied, for example, kaonic helium, setting as the goal of research a more accurate determination of the mass of the \(K^{-}\) meson. It will be useful to note that there are other approaches to clarifying a value of the \(\pi\) meson mass. Thus, the study carried out in [11] demonstrates the potential of crystal spectroscopy of curved crystals in the field of exotic atoms. In this work, \(5g-4f\)
transitions in pionic nitrogen and muonic oxygen were measured simultaneously in a gaseous nitrogen-oxygen mixture. Knowing the muon mass, the muon line can be used to energy calibrate the pion transition. The mass value of negatively charged pion was obtained, which is 4.2 ppm higher than the current world average \(139.57077\pm 0.00017\) MeV [12].
Mesonic atoms are formed as a result of a replacement of an orbital electron by a negatively charged meson. After that, laser spectroscopy of such atoms is carried out, which will make it possible to measure transition frequencies and determine the reduced mass of a system and hence a mass of the meson. To reduce the influence of strong interaction between a meson and a nucleus, the meson's orbit is raised by increasing its orbital momentum. The long lifetime of a meson atom is determined by the state with a large value of orbital momentum \(l=(16\div 20)\) in which a meson is formed in the atom. Its transition to the ground state with \(l=0\) is strongly suppressed. The lifetime of such an atom is several nanoseconds.
The study of energy levels of three-particle systems can be carried out with high accuracy within the framework of the variational method. There are some differences in the use of a variational method to find the energy levels of three-particle systems. They are connected with a choice of coordinates and representation of the Hamiltonian to describe the system, with a choice of basis wave functions. Thus, in [4] an exponential basis was used, and the coordinates of the electron and meson are determined with respect to the nucleus. In works [13; 14; 15], when calculating the energy levels of mesomolecules of hydrogen, muonic helium, etc., we use the Jacobi coordinates. The purpose of this work is to calculate the energy levels in pionic and kaonic helium atoms, as well as transition frequencies between levels in which the meson is in an excited state with a large orbital quantum number.
## II General formalism
Different approaches have been developed for a study of three-particle systems. There is an analytical method of perturbation theory, which makes it possible to analytically investigate both the Lamb shift and the hyperfine structure of the spectrum [16; 17; 18; 19; 20; 21]. Another methods that are used for many-particle systems are the variational method and method of hyperspherical coordinates, which allow one to find energy levels and wave functions with very high accuracy [22; 23; 24; 25; 26; 27; 28; 29; 30]. Since for mesonic helium the states of an atom with large values of orbital moments of the meson are considered so that the electron and meson are at the same distance from the nucleus, it is virtually impossible to use a method of analytical perturbation theory. Therefore, further we study this system on the basis of the variational method. The Gaussian basis is used as the basis set of wave functions.
To find the energy levels of a three-particle system, we introduce the Jacobi coordinates \(\mathbf{\rho}\), \(\mathbf{\lambda}\), which are related to the particle radius vectors \(\mathbf{r}_{1}\) (nucleus), \(\mathbf{r}_{2}\) (meson), \(\mathbf{r}_{3}\) (electron) as follows:
\[\mathbf{\rho}=\mathbf{r}_{2}-\mathbf{r}_{1},\quad\mathbf{\lambda}=\mathbf{r}_{3}- \frac{m_{1}\mathbf{r}_{1}+m_{2}\mathbf{r}_{2}}{m_{1}+m_{2}}, \tag{1}\]
where \(m_{1}\), \(m_{2}\), \(m_{3}\) are the masses of \(He\) nucleus, \(\pi^{-}\) (\(K^{-}\))-meson and electron.
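In numerical work the change of variables (1) is immediate; the following minimal sketch (with an illustrative function name, assuming the particle positions are given as NumPy arrays) also collects the reduced masses \(\mu_{1}\), \(\mu_{2}\) used below:

```python
import numpy as np

def jacobi_coordinates(r1, r2, r3, m1, m2, m3):
    """Jacobi coordinates of Eq. (1) and the reduced masses mu1, mu2."""
    rho = r2 - r1                                  # meson relative to the nucleus
    lam = r3 - (m1 * r1 + m2 * r2) / (m1 + m2)     # electron relative to the (nucleus+meson) center of mass
    mu1 = m1 * m2 / (m1 + m2)                      # reduced mass conjugate to rho
    mu2 = (m1 + m2) * m3 / (m1 + m2 + m3)          # reduced mass conjugate to lambda
    return rho, lam, mu1, mu2
```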
To solve the variational problem, we choose the ground state trial basis wave functions in the form of superposition of the Gaussian exponents:
\[\Psi(\mathbf{\rho},\mathbf{\lambda},A)=\sum_{i=1}^{K}C_{i}\psi_{i}(\mathbf{\rho},\mathbf{ \lambda},A^{i}),\quad\psi_{i}(\mathbf{\rho},\mathbf{\lambda},A^{i})=e^{-\frac{1}{2} \left(A^{i}_{11}\mathbf{\rho}^{2}+2A^{i}_{12}\mathbf{\rho}\mathbf{\lambda}+A^{i}_{22}\mathbf{ \lambda}^{2}\right)}, \tag{2}\]
where \(C_{i}\) are linear variational parameters, \(A^{i}\) is the matrix of nonlinear variational parameters, K is the basis size.
In nonrelativistic approximation the Hamiltonian of a three-particle atom in the Jacobi coordinates can be presented as
\[\hat{H}_{0}=-\frac{1}{2\mu_{1}}\nabla^{2}_{\mathbf{\rho}}-\frac{1}{2\mu_{2}}\nabla^{ 2}_{\mathbf{\lambda}}+\frac{e_{1}e_{2}}{|\mathbf{\rho}|}+\frac{e_{1}e_{3}}{|\mathbf{\lambda }+\frac{m_{2}}{m_{12}}\mathbf{\rho}|}+\frac{e_{2}e_{3}}{|\mathbf{\lambda}-\frac{m_{1}} {m_{12}}\mathbf{\rho}|}, \tag{3}\]
where \(m_{12}=m_{1}+m_{2}\), \(\mu_{1}=\frac{m_{1}m_{2}}{m_{1}+m_{2}}\), \(\mu_{2}=\frac{(m_{1}+m_{2})m_{3}}{m_{1}+m_{2}+m_{3}}\), \(e_{1}\), \(e_{2}\), \(e_{3}\) are the particle charges.
For arbitrary states of the meson and electron with orbital angular momenta \(l_{1}\) and \(l_{2}\), a convenient basis for the expansion of functions depending on two directions is given by bipolar spherical harmonics [31]:
\[[Y_{l_{1}}(\theta_{\rho},\phi_{\rho})\otimes Y_{l_{2}}(\theta_{\lambda},\phi_ {\lambda})]_{LM}=\sum_{m_{1},m_{2}}C^{LM}_{l_{1}m_{1}l_{2}m_{2}}Y_{l_{1}m_{1}} (\theta_{\rho},\phi_{\rho})Y_{l_{2}m_{2}}(\theta_{\lambda},\phi_{\lambda}), \tag{4}\]
where \(\theta_{\rho},\phi_{\rho}\) and \(\theta_{\lambda},\phi_{\lambda}\) are spherical angles that determine the direction of the vectors \(\mathbf{\rho}\), \(\mathbf{\lambda}\). Since the \(\pi^{-}\) or \(K^{-}\) meson is in an orbitally excited state \(l\) in pionic (kaonic) helium, and the electron is in the ground state, the variational wave function of the system is chosen for such states in the form:
\[\Psi_{lm}(\mathbf{\rho},\mathbf{\lambda},A)=\sum_{i=1}^{K}C_{i}Y_{lm}(\theta_{\rho}, \phi_{\rho})\rho^{l}e^{-\frac{1}{2}\left(A^{i}_{11}\mathbf{\rho}^{2}+2A^{i}_{12} \mathbf{\rho}\mathbf{\lambda}+A^{i}_{22}\mathbf{\lambda}^{2}\right)}, \tag{5}\]
where spherical function \(Y_{lm}(\theta_{\rho},\phi_{\rho})\) describes the angular part of the orbital motion of a pion (kaon).
Within the framework of the variational approach, the solution of the Schrödinger equation is reduced to the following matrix problem for the coefficients \(C_{i}\):
\[H\cdot C=EB\cdot C, \tag{6}\]
where the matrix elements of the Hamiltonian \(H_{ij}\) and the normalizations \(B_{ij}\) can be calculated analytically in the basis of Gaussian wave functions. Thus, the normalization of the wave function (5) is determined by the following expression:
\[<\Psi|\Psi>=\sum_{i,j=1}^{K}C_{i}C_{j}2^{l+2}\pi^{3/2}\Gamma\left(l+\frac{3}{2 }\right)\frac{B^{l}_{22}}{(detB)^{l+\frac{3}{2}}},\quad B_{kn}=A^{i}_{kn}+A^{ j}_{kn}, \tag{7}\]
where \(\Gamma(l+3/2)\) is the Euler gamma function.
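The reduction to the generalized matrix eigenvalue problem (6) is straightforward to automate. The following short sketch (our illustration, in Python rather than the Matlab program mentioned below) assembles the overlap matrix from Eq. (7) and solves Eq. (6) with SciPy; the Hamiltonian routine `h_element`, which should implement the matrix elements of Eqs. (8)-(13), and the concrete values of the nonlinear parameters \(A^{i}\) are left to the user.

```python
# Minimal variational solver sketch: overlap matrix from Eq. (7) and the
# generalized eigenvalue problem (6).  Illustrative only.
import numpy as np
from scipy.linalg import eigh
from scipy.special import gamma

def overlap_element(Ai, Aj, l):
    """Normalization integral <psi_i|psi_j> for orbital momentum l, Eq. (7)."""
    B = Ai + Aj                                   # B_kn = A^i_kn + A^j_kn
    detB = np.linalg.det(B)
    return 2.0**(l + 2) * np.pi**1.5 * gamma(l + 1.5) * B[1, 1]**l / detB**(l + 1.5)

def solve_variational(A_list, l, h_element):
    """Build H and B and solve H C = E B C; returns the lowest level."""
    K = len(A_list)
    H = np.empty((K, K))
    Bmat = np.empty((K, K))
    for i in range(K):
        for j in range(K):
            Bmat[i, j] = overlap_element(A_list[i], A_list[j], l)
            H[i, j] = h_element(A_list[i], A_list[j], l)    # Eqs. (8)-(13)
    H = 0.5 * (H + H.T)                           # enforce symmetry numerically
    E, C = eigh(H, Bmat)                          # generalized symmetric problem
    return E[0], C[:, 0]
```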
Consider further analytical results for the matrix elements of the Hamiltonian. The kinetic energy operator contains two terms. The matrix element from the Laplace operator with respect to \(\mathbf{\lambda}\) has the form:
\[<\Psi|\nabla^{2}_{\mathbf{\lambda}}|\Psi>=\sum_{i,j=1}^{K}C_{i}C_{j}2^{l+2}\pi^{ \frac{3}{2}}\Gamma\left(l+\frac{3}{2}\right)\frac{B^{l-1}_{22}}{(detB)^{l+\frac {5}{2}}}\times \tag{8}\]
\[\left[3A^{i}_{22}(A^{i}_{22}-B_{22})detB+(2l+3)(A^{i}_{22}B_{12}-A^{i}_{12}B_ {22})^{2}\right].\]
A similar matrix element with the Laplace operator in \(\mathbf{\rho}\) is also expressed in terms of the nonlinear variational parameters as follows:
\[<\Psi|\nabla^{2}_{\mathbf{\rho}}|\Psi>=\sum_{i,j=1}^{K}C_{i}C_{j}2^{l+1} \pi^{\frac{3}{2}}\Gamma\left(l+\frac{1}{2}\right)\frac{B_{22}^{l-1}}{(detB)^{l+ \frac{5}{2}}}\times \tag{9}\]
\[\left[(2l+1)detB(-(2l+3)A_{11}^{i}B_{22}+3(A_{12}^{i})^{2}+2lA_{12}^{i}B_{12}) +(2l+1)(2l+3)(A_{12}^{i}B_{12}-A_{11}^{i}B_{22})^{2}\right].\]
The potential energy operator in the nonrelativistic Hamiltonian consists of the pairwise Coulomb interactions \(U_{ij}\) (i, j = 1, 2, 3). The convenience of using the Gaussian basis in this case also lies in the possibility of an analytical representation of the matrix elements of the potential energy (in electronic atomic units):
\[<\Psi|U_{12}|\Psi>=-Z\sum_{i,j=1}^{K}C_{i}C_{j}2^{l+\frac{3}{2}}\pi^{\frac{3}{2 }}\Gamma\left(l+1\right)\frac{B_{22}^{l-1}}{(detB)^{l+1}}, \tag{10}\]
\[<\Psi|U_{13}|\Psi>=-Z\sum_{i,j=1}^{K}C_{i}C_{j}2^{l+\frac{5}{2}}\pi\Gamma\left(l+\frac{3}{2}\right)\frac{B_{22}^{l+\frac{1}{2}}}{(detB)^{l+\frac{3}{2}}}\,{}_{2}F_{1}\left(\frac{1}{2},l+\frac{3}{2},\frac{3}{2},-\frac{(F_{2}^{23})^{2}}{detB}\right), \tag{11}\]
\[<\Psi|U_{23}|\Psi>=\sum_{i,j=1}^{K}C_{i}C_{j}2^{l+\frac{5}{2}}\pi\Gamma\left(l+\frac{3}{2}\right)\frac{B_{22}^{l+\frac{1}{2}}}{(detB)^{l+\frac{3}{2}}}\,{}_{2}F_{1}\left(\frac{1}{2},l+\frac{3}{2},\frac{3}{2},-\frac{(F_{2}^{13})^{2}}{detB}\right), \tag{12}\]
\[F_{2}^{13}=B_{12}+\frac{m_{1}}{m_{12}}B_{22},\quad F_{2}^{23}=B_{12}-\frac{m_{ 2}}{m_{12}}B_{22}, \tag{13}\]
where \({}_{2}F_{1}(\alpha,\beta;\gamma;x)\) is the hypergeometric function.
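As an illustration of how these closed-form expressions are used, the following fragment (ours, with SciPy's `hyp2f1`; the charge \(Z\), the masses and the matrices \(A^{i}\), \(A^{j}\) are assumed to be given) evaluates the contribution of a single pair \((i,j)\) of basis functions to the matrix element of Eq. (11):

```python
# Pair (i, j) contribution to <Psi|U_13|Psi>, Eq. (11), with F_2^{23} from Eq. (13).
import numpy as np
from scipy.special import gamma, hyp2f1

def u13_pair(Ai, Aj, l, Z, m1, m2):
    B = Ai + Aj
    detB = np.linalg.det(B)
    F2_23 = B[0, 1] - m2 / (m1 + m2) * B[1, 1]            # Eq. (13)
    return (-Z * 2.0**(l + 2.5) * np.pi * gamma(l + 1.5)
            * B[1, 1]**(l + 0.5) / detB**(l + 1.5)
            * hyp2f1(0.5, l + 1.5, 1.5, -F2_23**2 / detB))
```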
For \(l=1\) the expressions (11)-(13) coincide with previously obtained results [32]. Using the matrix elements of the Hamiltonian \(\hat{H}_{0}\), some energy levels of the \(\pi^{-}\)-meson and \(K^{-}\)-meson atoms are calculated in the Matlab system. The calculations are carried out with our program, which was previously used to calculate the energy levels of various muonic atoms in quantum electrodynamics. The calculation of the energy levels of the \(\pi^{-}\)-meson atom serves to test the operation of the program. The calculation results are shown in Table 1.
To improve the accuracy of the calculation, we consider some important corrections to the Hamiltonian \(\hat{H}_{0}\). The pair electromagnetic interaction between particles in quantum electrodynamics is determined by the Breit potential [33]. Among the various terms in this potential, let us single out those terms that have the greatest numerical value. These include relativistic corrections, contact interaction and corrections for vacuum polarization.
The relativistic corrections are defined in the energy spectrum by the following terms in electronic atomic units:
\[\Delta U_{rel}=-\frac{\alpha^{2}}{8}\left(\frac{{\bf p}_{1}^{4}}{m_{1}^{3}}+ \frac{{\bf p}_{2}^{4}}{m_{2}^{3}}+\frac{{\bf p}_{3}^{4}}{m_{3}^{3}}\right). \tag{14}\]
The term of leading order in (14) is related to the motion of the electron. The value of the matrix element of \(\Delta U_{rel}^{e}\) can be obtained in exactly the same way as (8) in terms of the variational parameters:
\[<\Psi|-\frac{\alpha^{2}}{8}\nabla_{\mathbf{\lambda}}^{4}|\Psi>=- \frac{\alpha^{2}}{8}\sum_{i,j=1}^{K}C_{i}C_{j}2^{l+1}\pi^{\frac{1}{2}}\Gamma \left(l+\frac{3}{2}\right)\frac{B_{22}^{l-2}}{(detB)^{l+\frac{7}{2}}}\Big{[}15 (A_{22}^{i})^{2}(detB)^{2}(A_{22}^{i}-B_{22})^{2}+ \tag{15}\]
\[10(2l+3)A^{i}_{22}(A^{i}_{22}-B_{22})detB(A^{i}_{22}B_{12}-A^{i}_{12}B_{22})^{2}+ (2l+3)(2l+5)(A^{i}_{22}B_{12}-A^{i}_{12}B_{22})^{4}\biggr{]}.\]
Let us also take into account the vacuum polarization effects in the energy spectrum. Since, both for the electron and for the meson in a highly excited state, the electron Compton wavelength is much smaller than the radius of the Bohr orbit, we can use the following expression for the vacuum polarization potential in electronic atomic units:
\[\Delta U_{vp}=\Delta U_{vp}(r_{13})+\Delta U_{vp}(r_{23})=-\frac{4}{15}\alpha^ {2}(Z\alpha)\delta(\mathbf{\lambda}+\frac{m_{2}}{m_{12}}\mathbf{\rho})+\frac{4}{15} \alpha^{2}(Z\alpha)\delta(\mathbf{\lambda}-\frac{m_{1}}{m_{12}}\mathbf{\rho}). \tag{16}\]
The matrix elements of such potentials are calculated analytically in a closed form:
\[<\Psi|\Delta U_{vp}(r_{13})|\Psi>=-\frac{4}{15}\alpha^{2}(Z\alpha)\sum_{i,j=1}^{K}C_{i}C_{j}2^{l+\frac{1}{2}}\Gamma\left(l+\frac{3}{2}\right)\frac{1}{(F_{1}^{13})^{l+\frac{3}{2}}}, \tag{17}\]
\[<\Psi|\Delta U_{vp}(r_{23})|\Psi>=-\frac{4}{15}\alpha^{2}(Z\alpha)\sum_{i,j=1}^{K}C_{i}C_{j}2^{l+\frac{1}{2}}\Gamma\left(l+\frac{3}{2}\right)\frac{1}{(F_{1}^{23})^{l+\frac{3}{2}}}, \tag{18}\]
\[F_{1}^{13}=B_{11}+\frac{m_{2}^{2}}{m_{12}^{2}}B_{22}-2\frac{m_{2}}{m_{12}}B_{ 12},\quad F_{1}^{23}=B_{11}+\frac{m_{1}^{2}}{m_{12}^{2}}B_{22}+2\frac{m_{1}}{ m_{12}}B_{12}. \tag{19}\]
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
State & \(E_{nr}\)(Exp) & \(E_{nr}\)(G) & \(-\frac{\alpha^{2}}{8}\mathbf{p}_{e}^{4}\) & \(\Delta U_{vp}\) & \(\Delta U_{cont}\) \\ \hline
\multicolumn{6}{|c|}{(\({}^{3}_{2}He-\pi^{-}-e\)) atom} \\ \hline
(17,16) & -2.64312261030188(2)[4] & -2.6423822152 & -0.0000568853 & -0.0000003596 & 0.0000021185 \\ \hline
(17,15) & -2.6709980910(1)[4] & -2.6698284795 & -0.0000578381 & -0.0000003646 & 0.0000021477 \\ \hline
\multicolumn{6}{|c|}{(\({}^{4}_{2}He-\pi^{-}-e\)) atom} \\ \hline
(17,16) & -2.65751243850171[4] & -2.6567689659 & -0.0000560957 & -0.0000003549 & 0.0000020904 \\ \hline
(17,15) & -2.68542722(2)[4] & -2.6842422023 & -0.0000571739 & -0.0000003606 & 0.0000021242 \\ \hline
\multicolumn{6}{|c|}{(\({}^{3}_{2}He-K^{-}-e\)) atom} \\ \hline
(20,19) & -4.6685977528 & -4.6806222136 & -0.0000194273 & -0.000001272 & 0.0000007495 \\ \hline
(20,18) & -4.6786693864 & -4.6932218265 & -0.0000178752 & -0.000001159 & 0.000006829 \\ \hline
(21,20) & -4.3199030879 & -4.3133610685 & -0.0000238657 & -0.000001466 & 0.000008638 \\ \hline
(21,19) & -4.3285605454 & -4.3238629798 & -0.0000230792 & -0.000001498 & 0.0000008827 \\ \hline
\multicolumn{6}{|c|}{(\({}^{4}_{2}He-K^{-}-e\)) atom} \\ \hline
(20,19) & -4.8266672526 & -4.8328936568 & -0.0000190789 & -0.0000001231 & 0.0000007252 \\ \hline
(20,18) & -4.8520548999 & -4.8578478878 & -0.0000172623 & -0.000001161 & 0.0000006552 \\ \hline
(21,20) & -4.4573174498 & -4.4499843007 & -0.0000207687 & -0.0000001490 & 0.000008777 \\ \hline
(21,19) & -4.4653630953 & -4.4601685694 & -0.0000219706 & -0.0000001409 & 0.000008299 \\ \hline
\end{tabular}
\end{table}
Table 1: Energy levels of the meson atom obtained in nonrelativistic approximation with the Gaussian and exponential basis and values of main corrections in the energy spectrum in electron atomic units (e.a.u.).
The contact interaction potential, as well as (16), is expressed through the \(\delta\)-functions in the form (in electronic atomic units):
\[\Delta U_{cont}=\frac{\pi Z\alpha^{2}}{2}\delta(\mathbf{\lambda}+\frac{m_{2}}{m_{12}} \mathbf{\rho})-\frac{\pi\alpha^{2}}{2}\delta(\mathbf{\lambda}-\frac{m_{1}}{m_{12}}\bm {\rho}). \tag{20}\]
In Table 1 we present the results of calculating the energy values with the Hamiltonian \(\hat{H}_{0}\) and the values of the matrix elements (15), (17), (18), (20).
The probability distribution densities over the radial variables \(\rho\) and \(\lambda\) and the mean square values of the particle coordinates are determined by the expressions:
\[W(\rho)=\frac{(2\pi)^{3/2}}{<\Psi|\Psi>}\sum_{i,j=1}^{K}\frac{C_{i}C_{j}}{B_{22}^ {3/2}}\rho^{(2l+2)}e^{-\frac{1}{2}\frac{detB}{B_{22}}\rho^{2}}, \tag{21}\]
\[W(\lambda)=\frac{2^{l+\frac{5}{3}}\pi}{<\Psi|\Psi>}\sum_{i,j=1}^{K}\frac{C_{i} C_{j}\Gamma\left(l+\frac{3}{2}\right)}{B_{11}^{l+\frac{3}{2}}}\lambda^{2}e^{- \frac{1}{2}B_{22}\lambda^{2}}{}_{1}F_{1}\left(l+\frac{3}{2},\frac{3}{2},\frac {B_{12}^{2}\lambda^{2}}{2B_{11}}\right), \tag{22}\]
\[W(\rho,\lambda)=\frac{4\pi}{<\Psi|\Psi>}\sum_{i,j=1}^{K}\frac{C_{i}C_{j}}{B_{12 }}\rho^{2l+1}\lambda e^{-\frac{1}{2}[B_{11}\rho^{2}+B_{22}\lambda^{2}]}sh(B_{ 12}\rho\lambda),\ B_{lk}=A_{lk}^{i}+A_{lk}^{j}, \tag{23}\]
\[<\rho^{2}>=\frac{\pi^{\frac{3}{2}}2^{l+3}\Gamma\left(l+\frac{5}{2}\right)}{< \Psi|\Psi>}\sum_{i,j=1}^{K}C_{i}C_{j}\frac{B_{22}^{l+1}}{(detB)^{l+5/2}}, \tag{24}\]
\[<\lambda^{2}>=\frac{\pi^{\frac{3}{2}}2^{l+2}\Gamma\left(l+\frac{3}{2}\right)} {<\Psi|\Psi>}\sum_{i,j=1}^{K}C_{i}C_{j}\frac{B_{22}^{l-1}}{(detB)^{l+\frac{5} {2}}}(3B_{11}B_{22}+2B_{12}^{2}l). \tag{25}\]
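Once the linear coefficients \(C_{i}\), the nonlinear matrices \(A^{i}\) and the normalization \(\langle\Psi|\Psi\rangle\) of Eq. (7) are available, densities such as (21) are easily tabulated; the following short sketch (ours, not part of the original program) does this for \(W(\rho)\):

```python
# Tabulate the radial density W(rho) of Eq. (21) on a grid of rho values (e.a.u.).
import numpy as np

def W_rho(rho, C, A_list, l, norm):
    """C: linear coefficients, A_list: matrices A^i, norm: <Psi|Psi> from Eq. (7)."""
    w = np.zeros_like(rho, dtype=float)
    for i, Ai in enumerate(A_list):
        for j, Aj in enumerate(A_list):
            B = Ai + Aj
            detB = np.linalg.det(B)
            w += (C[i] * C[j] / B[1, 1]**1.5
                  * rho**(2 * l + 2) * np.exp(-0.5 * detB / B[1, 1] * rho**2))
    return (2.0 * np.pi)**1.5 / norm * w
```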
The radial distribution densities are presented in Figs. 1, 2 in the case of pionic helium and kaonic helium. These plots show the presence of characteristic distances in the particle systems \((He-\pi^{-}-e)\) and \((He-K^{-}-e)\). It also follows from these plots that in the considered states the meson turns out to be located at the same distances from the nucleus as the electron, or slightly closer to the nucleus. The distribution densities for the two radial variables \(\rho\) and \(\lambda\) provide a more complete picture of the characteristic distances in a given system of three particles. They are shown in the two graphs in Figs. 5, 6.

Figure 4: The radial distribution densities \(W(\rho)\), \(W(\lambda)\) for \(({}^{3}_{2}He-K^{-}-e)\) for the state \((22,20)\). The variable values \(\rho\) and \(\lambda\) are taken in electron atomic units.
The fine splitting in a three-particle atom, which is determined by the interaction of the electron spin with the large orbital angular momentum of the meson, is not considered here.
## III Discussion of the results
This paper examines the energy levels of pionic and kaonic helium for states in which the meson has such a large orbital momentum that it is located approximately at the same distance from the nucleus as the electron. The calculations are performed in leading order within the framework of the variational method with the Gaussian basis, and a number of basic corrections determined by the Breit Hamiltonian (relativistic corrections, vacuum polarization and contact interaction) are calculated in the first order of perturbation theory. Since the electron is in the \(1S\) state, the notation \((n,l)\) is used for a state of three particles in Table 1, where \(l\) is the orbital momentum of the meson, and \(n\) is the principal quantum number for the subsystem \((\pi^{-}He^{2+})\), \((K^{-}He^{2+})\).
The Rydberg states in atoms play an important role in refining the values of fundamental constants. Thus, based on the spectroscopy of the Rydberg states in the hydrogen atom, the measurement of the Rydberg constant has been improved [34; 35]. In that problem, working with the Rydberg states makes it possible to eliminate the contributions of the nuclear structure. In the case of mesonic atoms, the use of the Rydberg states makes it possible to reduce the influence of the strong interaction on the energy spectrum.
Spectroscopy of various exotic molecules can provide new information about the nature of fundamental interactions and the values of fundamental parameters of the Standard Model. Several years ago, the PiHe collaboration at the Paul Scherrer Institute performed laser spectroscopy of the infrared transition in three-body pionic helium atoms [4; 5]. Such atoms were created in a superfluid (He-II) helium target. Similar measurements in antiprotonic helium atoms embedded in liquid helium were carried out by the CERN ASACUSA collaboration [36]. The antiproton-to-electron mass ratio was determined as \(m_{p}/m_{e}\)=1836.1526734(15) [36]. The mass of the \(\pi\) meson can be determined by comparing the experimental transition frequencies in pionic helium with the results of the QED calculation [4]. Although the transition frequency \((17,16)\rightarrow(17,15)\) has already been measured for pionic helium [5], the analysis of the experimental data to extract the pion mass is still ongoing.

Figure 6: The radial distribution density \(W(\rho,\lambda)\) for \(({}^{3}_{2}He-K^{-}-e)\) in the states (21,20) and (21,19). The variable values \(\rho\) and \(\lambda\) are taken in electron atomic units.
Our study of the energy levels of both pionic and kaonic helium is carried out on the basis of the variational approach, which was developed in the work [28]. In contrast to the work [4], to describe the three-particle system we use the Jacobi coordinates \(\rho\), \(\lambda\), in which the original Hamiltonian has the form (3). The second difference between our calculation and [4] is the use of a Gaussian rather than an exponential basis within the variational method. In such a basis, all matrix elements of the Hamiltonian are obtained in a closed analytical form. Finally, the third difference between our calculations and [4] is that in [4], within the framework of the variational approach, the method of complex coordinate rotation is used, while we work with a real Hamiltonian and solve the eigenvalue problem (6). The obtained numerical results for the leading-order contribution to the energy of the system and the corrections to it are presented in Table 1. Comparing these results with the calculation in [4], it is necessary to note a slight difference in the results, which appears in the third digit after the decimal point (second column in Table 1). For the \((17,16)\rightarrow(17,15)\) transition frequency in pionic helium-4, which has been measured, our result 180772 GHz is slightly different (near one per cent) from the result \(183681.5\pm 0.5\) GHz obtained in [4] and from the experimental value, which is 183760(6)(6) GHz. Our result for the similar transition frequency in pionic helium-3 is equal to 180594 GHz. We also present here the results for the transition frequencies \((21,20)\rightarrow(21,19)\) in the case of kaonic helium. They are equal to 69094 GHz \((K-e-{}^{3}_{2}He)\) and 67017 GHz \((K-e-{}^{4}_{2}He)\). In general, our results obtained with the Gaussian trial functions (third column of Table 1) are consistent with the calculations with an exponential basis in [4] (second column of Table 1) in the case of pionic helium. In the case of kaonic helium the obtained results are new. The difference in the results is due, in our opinion, to the differences in the variational approaches used in this work and in [4] and in the bases for the variational wave functions.
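The transition frequencies quoted in this paragraph follow from the entries of Table 1 by a simple unit conversion (1 Hartree corresponds to about \(6.5797\times 10^{6}\) GHz). As a cross-check, the few lines below (ours) reproduce the 180772 GHz value for pionic helium-4 from the Gaussian-basis energies and the corrections listed in the table:

```python
# Transition frequency (17,16) -> (17,15) in pionic helium-4 from Table 1.
HARTREE_TO_GHZ = 6.5796839e6   # 1 Hartree expressed in GHz

# total level energy = E_nr(G) + relativistic + vacuum-polarization + contact terms
E_17_16 = -2.6567689659 - 0.0000560957 - 0.0000003549 + 0.0000020904
E_17_15 = -2.6842422023 - 0.0000571739 - 0.0000003606 + 0.0000021242

nu = (E_17_16 - E_17_15) * HARTREE_TO_GHZ
print(f"nu = {nu:.0f} GHz")    # about 180772 GHz
```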
A study of the characteristic distances at which the nucleus, meson and electron are located relative to each other is shown in Figs. 1-6 for some of the states for which the binding energies are calculated. The meson is in an excited state with a large orbital momentum \(l\). The key parameter with which one can estimate its distance to the nucleus is determined by the expression \(\sqrt{\mu_{1}/m_{3}}\), where \(\mu_{1}\) is the reduced mass of the meson-nucleus system and \(m_{3}\) is the electron mass. When the principal quantum number \(n=\sqrt{\mu_{1}/m_{3}}\approx 16\) for the \((\pi^{-}\,{}^{4}_{2}He)\) or \((\pi^{-}\,{}^{3}_{2}He)\) subsystems, the motion of the \(\pi^{-}\) meson occurs at approximately the same distances from the nucleus and with the same binding energy as for the electron. In the case of kaonic helium, the value of the principal quantum number increases due to the increase in the meson mass and reaches the value \(n\approx 29\). This parameter determines the order of the principal quantum number \(n\) at which the meson and electron have close orbits. But in this work we have so far considered slightly smaller values \(n\approx 20\), so that the \(K^{-}\) meson is located a little closer to the nucleus. It follows from Figs. 1-6 that in the case of the considered Rydberg states of the \(\pi^{-}\) (\(K^{-}\)) meson, the characteristic distances along \(\rho\) and \(\lambda\) have close values. So, for example, the root mean square value \(\sqrt{\langle\lambda^{2}\rangle}\) for the state (17,16) in \((\pi-e-{}^{4}_{2}He)\) is 60050 fm, and the root mean square value \(\sqrt{\langle\rho^{2}\rangle}\) for the same state is equal to 37210 fm. This means that the use of an analytical method for calculating the energy levels as in [13; 21] is difficult, since the characteristic series in the parameter \(M_{e}/M_{\mu}\) from [13; 21] is not rapidly converging.
When calculating the relativistic effects, we take into account only the corresponding correction for the electron, since the electron is the lightest particle in this system; with an increase in the principal quantum number \(n\), the orbital velocity is determined by the formula \(v=Z\alpha/n\). Therefore, for a meson in the circular Rydberg states this correction is suppressed by the factor \(n\).
In Table 1 we limited ourselves to presenting the numerical results of calculating the energies of the bound states of three particles only for a certain number of states \((n,l)\). But the obtained general analytical formulas for the matrix elements of the Hamiltonian of the system make it possible to carry out the corresponding numerical calculations for other states \((n,l)\), which may be more important for the experiment. For the principal quantum number \(n=29\), the binding energy of kaonic helium in the state \((n,l)=(29,28)\) is equal to -2.8001942461 e.a.u., and in the state \((n,l)=(29,27)\) it has the value -2.9152046696 e.a.u., which ultimately gives the transition frequency between these levels \(\nu=756732\) GHz.
###### Acknowledgements.
This work is supported by Russian Science Foundation (grant No. RSF 23-22-00143).
|
2306.12396 | Derived equivalences of upper-triangular ring spectra via lax limits | We extend a theorem of Ladkani concerning derived equivalences between
upper-triangular matrix rings from ordinary rings to ring spectra. Our result
also extends an analogous theorem of Maycock for differential graded algebras. | Gustavo Jasso | 2023-06-21T17:33:47Z | http://arxiv.org/abs/2306.12396v3 | # Derived equivalences of upper-triangular ring spectra via Lax limits
###### Abstract.
We extend a theorem of Ladkani concerning derived equivalences between upper-triangular matrix rings to ring spectra. Our result also extends an analogous theorem of Maycock for differential graded algebras. We illustrate the main result with certain canonical equivalences determined by a smooth or proper ring spectrum.
Key words and phrases:Upper-triangular matrix ring; derived equivalences; reflection functors; ring spectrum 2020 Mathematics Subject Classification: 18G80 The purpose of this short article is to extend the following theorem of Ladkani [1] from ordinary rings to ring spectra in the sense of stable homotopy theory; we note that this theorem was extended to differential graded algebras by Maycock [11]. Recall that to rings \(R\) and \(S\) and an \(S\)-\(R\)-bimodule \(M\) one associates the upper-triangular matrix ring
\[\left(\begin{smallmatrix}S&M\\ 0&R\end{smallmatrix}\right)=\left\{\left(\begin{smallmatrix}s&m\\ 0&r\end{smallmatrix}\right)|\,r\in R,\ s\in S,\ m\in M\right\}\]
with sum and product operations given the corresponding matrix operations. We denote the (triangulated) derived category of right modules over a ring \(R\) by \(\operatorname{D}(\operatorname{Mod}(R))\) and recall than an object \(X\in\operatorname{D}(\operatorname{Mod}(R))\) is _compact_ if the functor
\[\operatorname{Hom}_{R}\left(X,-\right):\,\operatorname{D}(\operatorname{Mod }(R))\longrightarrow\operatorname{Ab}\]
preserves small coproducts.
**Theorem** (Ladkani).: _Let \(R\) and \(S\) be rings. Suppose given an \(S\)-\(R\) bimodule \(M\) such that \(M_{R}\) is compact as an object of \(\operatorname{\mathcal{D}}(\operatorname{Mod}(R))\) and an \(R\)-module \(T\) such that the functor_
\[-\otimes_{E}^{\operatorname{L}}T\colon\,\operatorname{D}(\operatorname{Mod}( E))\stackrel{{\sim}}{{\longrightarrow}}\operatorname{D}( \operatorname{Mod}(R))\]
_is an equivalence of triangulated categories, where \(E=\operatorname{Hom}_{R}\left(T,T\right)\) is the ring of endomorphisms of \(T\). Suppose, moreover, that \(\operatorname{Ext}_{R}^{>0}\left(M,T\right)=0\). Then, there is an equivalence of triangulated categories_
\[\operatorname{D}(\operatorname{Mod}\left(\begin{smallmatrix}S&M\\ 0&R\end{smallmatrix}\right))\simeq\operatorname{D}\bigl{(}\operatorname{Mod} \left(\begin{smallmatrix}E&\operatorname{Hom}_{R}(M,T)\\ 0&S\end{smallmatrix}\right)\bigr{)}\,.\]
As Ladkani explains in _loc. cit._, interesting equivalences of derived categories are obtained from appropriate choices of \(R\), \(S\), \(M\) and \(T\). The main focus of this article is to illustrate how formal properties of a higher-categorical upper-triangular gluing construction yield a simple and conceptual proof of (a vast generalisation of) the above theorem.
We use freely the theory of \(\infty\)-categories developed by Joyal, Lurie and others; our main references are [15, 15, 15]. Here we only recall that an \(\infty\)-category \(\mathcal{C}\) is stable if it is pointed, admits finite colimits and the suspension functor \(\Sigma\colon\mathcal{C}\longrightarrow\mathcal{C},\ X\longmapsto 0\amalg_{X}0\), is an equivalence [15, Corollary 1.4.2.27]. The homotopy category of a stable \(\infty\)-category is additive (in the usual sense) and is canonically triangulated in the sense of Verdier [15, Theorem 1.1.2.14]. Working with \(\infty\)-categories rather than with triangulated categories permits us to construct the (homotopy) limit of a diagram of exact functors between stable \(\infty\)-categories, a construction that is not available in the realm of triangulated categories. We also mention that the gluing construction that we utilise below is used by Ladkani
in [1] to glue (abelian) module categories; notwithstanding, our proof of the main theorem is different in the case of ordinary rings and of differential graded algebras in that it does not rely on explicit computations.
Let \(\mathbf{k}\) be an \(\mathbb{E}_{\infty}\)-ring spectrum, for example the sphere spectrum \(\mathbb{S}\) or the Eilenberg-Mac Lane spectrum of an ordinary commutative ring [17, Theorem 7.1.2.13]. The presentable stable \(\infty\)-category \(\mathcal{D}(\mathbf{k})\) of \(\mathbf{k}\)-module spectra is a (closed) symmetric monoidal \(\infty\)-category [17, Proposition 7.1.2.7]. Below we work within the symmetric monoidal \(\infty\)-category \(\mathrm{PrSt}^{\mathrm{L}}_{\mathbf{k}}\) of \(\mathbf{k}\)-linear presentable stable \(\infty\)-categories and \(\mathbf{k}\)-linear colimit-preserving functors between them [17, Variants D.1.5.1 and D.2.3.3]. Thus, an object of \(\mathrm{PrSt}^{\mathrm{L}}_{\mathbf{k}}\) is a presentable (stable) \(\infty\)-category equipped with an action of \(\mathcal{D}(\mathbf{k})\). The \(\infty\)-category \(\mathrm{PrSt}^{\mathrm{L}}_{\mathbf{k}}\) admits small limits and these are preserved by the forgetful functor \(\mathrm{PrSt}^{\mathrm{L}}_{\mathbf{k}}\to\mathrm{Pr}^{\mathrm{L}}\) to the \(\infty\)-category of presentable \(\infty\)-categories and colimit-preserving functors between them, see [17, Remark D.1.6.4] and [17, Corollary 4.2.3.3]. Limits of presentable stable \(\infty\)-categories along colimit-preserving functors can be computed using [17, Proposition 5.5.3.13 and Corollary 3.3.3.2] since the limit of a diagram of stable \(\infty\)-categories and exact functors is itself stable [17, Theorem 1.1.4.4], see also [17, Propositions 1.1.4.1 and 4.8.2.18].
Let \(\mathcal{C}\) and \(\mathcal{D}\) be \(\mathbf{k}\)-linear presentable stable \(\infty\)-categories and \(F\colon\mathcal{C}\to\mathcal{D}\) a \(\mathbf{k}\)-linear colimit-preserving functor. Define \(\mathcal{L}_{*}(F)\) via the the pullback square
in the \(\infty\)-category \(\mathrm{PrSt}^{\mathrm{L}}_{\mathbf{k}}\); an object of the \(\infty\)-category \(\mathcal{L}_{*}(F)\) is a pair \((c,f\colon F(c)\to d)\) where \(c\in\mathcal{C}\) and \(f\colon F(c)\to d\) is a morphism in \(\mathcal{D}\). The above pullback is well defined since the \(\infty\)-category \(\mathrm{Fun}\left(\Delta^{1},\mathcal{D}\right)\) is presentable [17, Proposition 5.5.3.6] and stable [17, 1.1.3.1] and inherits a \(\mathbf{k}\)-linear structure from \(\mathcal{D}\) via the equivalence of \(\infty\)-categories
\[\mathrm{Fun}\left(\Delta^{1},\mathcal{D}\right) \simeq\mathrm{Fun}\left((\Delta^{1})^{\mathrm{op}},\mathcal{D}^{ \mathrm{op}}\right)^{\mathrm{op}}\] \[\simeq\mathrm{LFun}\left(\mathrm{Fun}\left(\Delta^{1},\mathcal{S} \right),\mathcal{D}^{\mathrm{op}}\right)^{\mathrm{op}}\] \[\simeq\mathrm{RFun}\left(\mathcal{D}^{\mathrm{op}},\mathrm{Fun} \left(\Delta^{1},\mathcal{S}\right)\right)\simeq\mathcal{D}\otimes\mathrm{Fun }\left(\Delta^{1},\mathcal{S}\right).\]
Above, \(\mathcal{S}\) denotes the \(\infty\)-category of spaces, \(\mathrm{LFun}\left(-,-\right)\) (resp. \(\mathrm{RFun}\left(-,-\right)\)) denotes the \(\infty\)-category of functors that admit a right adjoint (resp. a left adjoint), and the symbol \(\otimes\) denotes Lurie's tensor product of presentable \(\infty\)-categories [17, Propositions 4.8.1.15 and 4.8.1.17] (see also [17, Theorem 5.1.5.6 and Proposition 5.2.6.2]). Similarly, the restriction functor
\[0^{*}\colon\,\mathrm{Fun}\left(\Delta^{1},\mathcal{D}\right)\longrightarrow \mathrm{Fun}\left(\Delta^{0},\mathcal{D}\right)\simeq\mathcal{D}\]
has a canonical \(\mathbf{k}\)-linear structure. When the right adjoint \(G\colon\mathcal{D}\to\mathcal{C}\) of \(F\), which exists by [17, Corollary 5.5.2.9], is also colimit-preserving we may also form the pullback square
in the \(\infty\)-category \(\mathrm{PrSt}^{\mathrm{L}}_{\mathbf{k}}\)[17, Remark D.1.5.3]. There is a canonical equivalence of \(\mathbf{k}\)-linear presentable stable \(\infty\)-categories
\[\mathcal{L}_{*}(F)\stackrel{{\sim}}{{\longrightarrow}}\mathcal{ L}^{*}(G)\,,\qquad(c,f\colon F(c)\to d)\longmapsto(d,\overline{f}\colon c \to G(d)), \tag{1}\]
stemming from the fact that both \(\infty\)-categories \(\mathcal{L}_{*}(F)\) and \(\mathcal{L}^{*}(G)\) are equivalent to the \(\infty\)-category of sections of the biCartesian fibration over \(\Delta^{1}\) classified by the adjunction \(F\dasharrow G\), see [17, Lemma 5.4.7.15].
We also remind the reader of the equivalence of \(\mathbf{k}\)-linear presentable stable \(\infty\)-categories [1, Lemma 1.3]
\[\mathcal{L}^{*}(F)\stackrel{{\sim}}{{\longrightarrow}}\mathcal{L}_{* }(F)\,,\qquad(d,f\colon c\to F(d))\longmapsto(d,F(d)\to\operatorname{cofib}(f)), \tag{2}\]
induced by the passage from a morphism to its cofibre, that we regard as a very general version of the Bernstein-Gel\({}^{\prime}\)fand-Ponomarev reflection functors [1]. The gluing operation \(F\mapsto\mathcal{L}_{*}(F)\) is an example of a lax limit [1] and is also considered in the setting of differential graded categories, see for example [10].
For a given \(\mathbf{k}\)-algebra spectrum \(R\), that is an \(\mathbb{E}_{1}\)-algebra object of the symmetric monoidal \(\infty\)-category \(\mathcal{D}(\mathbf{k})\), we denote the \(\mathbf{k}\)-linear stable \(\infty\)-category of (right) \(R\)-module spectra by \(\mathcal{D}(R)\), see also [12, Remark 7.1.3.7]. The underlying stable \(\infty\)-category of \(\mathcal{D}(R)\) is compactly generated by the regular representation of \(R\)[12, Corollary D.7.6.3]. We identify the \(\mathbf{k}\)-linear stable \(\infty\)-category of _left_\(R\)-module spectra with \(\mathcal{D}(R^{\mathrm{op}})\), where \(R^{\mathrm{op}}\) denotes the opposite \(\mathbf{k}\)-algebra spectrum of \(R\)[12, Remark 4.1.1.7]. If \(M\) and \(N\) are \(R\)-module spectra, we denote by \(\underline{\operatorname{Map}}_{R}(M,N)\) the \(\mathbf{k}\)-module spectrum of morphisms \(M\to N\)[12, Example D.7.1.2].
Let \(R\) and \(S\) be \(\mathbf{k}\)-algebra spectra. We identify the \(\infty\)-category of \(S\)-\(R\)-bimodule spectra with the \(\infty\)-category \(\mathcal{D}(S^{\mathrm{op}}\otimes_{\mathbf{k}}R)\)[12, Proposition 4.6.3.15]. The \(\mathbf{k}\)-linear variant of the Eilenberg-Watts Theorem [12, Proposition 7.1.2.4 and p. 738] yields an equivalence of \(\mathbf{k}\)-linear presentable stable \(\infty\)-categories
\[\mathcal{D}(S^{\mathrm{op}}\otimes_{\mathbf{k}}R)\stackrel{{ \sim}}{{\longrightarrow}}\operatorname{LFun}_{\mathbf{k}}\left( \mathcal{D}(S)\,,\mathcal{D}(R)\right),\qquad M\longmapsto-\otimes_{S}M,\]
where \(\operatorname{LFun}_{\mathbf{k}}\left(\mathcal{D}(S)\,,\mathcal{D}(R)\right)\) is the \(\infty\)-category of \(\mathbf{k}\)-linear colimit-preserving functors \(\mathcal{D}(S)\to\mathcal{D}(R)\).
Given a bimodule spectrum \(M\in\mathcal{D}(S^{\mathrm{op}}\otimes_{\mathbf{k}}R)\), we denote the right adjoint to the tensor product functor \(-\otimes_{S}M\) by \(\underline{\operatorname{Map}}_{R}(M,-)\). We also introduce the \(\mathbf{k}\)-linear presentable stable \(\infty\)-category
\[\mathcal{D}(\begin{smallmatrix}S&M\\ 0&R\end{smallmatrix})=\mathcal{L}_{*}(-\otimes_{S}M)\,.\]
The notation \(\mathcal{D}(\begin{smallmatrix}S&M\\ 0&R\end{smallmatrix})\) is justified by the Recognition Theorem of Schwede and Shipley [12, Corollary D.7.6.3] (see also [12, Theorem 7.1.2.1]). Indeed, a standard argument using the recollement
(recollement diagram relating \(\mathcal{D}(R)\), \(\mathcal{L}_{*}(-\otimes_{S}M)\) and \(\mathcal{D}(S)\); its functors \(i\) and \(p_{L}\) are used below)
described in [1, Remark 1.4] shows that the object \(X=i(R)\oplus p_{L}(S)\) is a compact generator of the stable \(\infty\)-category \(\mathcal{L}_{*}(-\otimes_{S}M)\) whose \(\mathbf{k}\)-algebra spectrum of endomorphisms decomposes as the direct sum of \(\mathbf{k}\)-module spectra
\[\begin{aligned}S&\simeq\underline{\operatorname{Map}}\left(p_{L}(S),p_{L}(S)\right),&\qquad\underline{\operatorname{Map}}\left(i(R),p_{L}(S)\right)&\simeq M,\\ 0&\simeq\underline{\operatorname{Map}}\left(p_{L}(S),i(R)\right),&\qquad\underline{\operatorname{Map}}\left(i(R),i(R)\right)&\simeq R,\end{aligned}\]
since \(i_{R}p_{L}(S)\simeq S\otimes_{S}M\). Upper-triangular ring spectra are considered for example in [14].
We are ready to state and prove the main result in this article.
**Theorem**.: _Let \(R\), \(S\) and \(E\) be \(\mathbf{k}\)-algebra spectra. Suppose given a bimodule spectrum \(M\in\mathcal{D}(S^{\mathrm{op}}\otimes_{\mathbf{k}}R)\) such that the \(R\)-module spectrum \(M_{R}=S\otimes_{S}M\) is compact and a bimodule spectrum \(T\in\mathcal{D}(E^{\mathrm{op}}\otimes_{\mathbf{k}}R)\) such that the functor_
\[-\otimes_{E}T\colon\mathcal{D}(E)\stackrel{{\sim}}{{ \longrightarrow}}\mathcal{D}(R)\]
_is an equivalence. Then, there is an equivalence of \(\mathbf{k}\)-linear presentable stable \(\infty\)-categories_
\[\mathcal{D}(\begin{smallmatrix}S&M\\ 0&R\end{smallmatrix})\simeq\mathcal{D}(\begin{smallmatrix}E&N\\ 0&S\end{smallmatrix})\,,\]
_where \(N=\underline{\operatorname{Map}}_{R}(M,T)\)._
Proof.: The commutative square
in which the left vertical functor is an equivalence by assumption, induces an equivalence of \(\mathbf{k}\)-linear presentable stable \(\infty\)-categories
\[\mathcal{L}_{*}\!\left(\underline{\operatorname{Map}}_{R}(M,-)\right)\simeq \mathcal{L}_{*}\!\left(\underline{\operatorname{Map}}_{R}(M,-\otimes_{E}T) \right). \tag{3}\]
Since \(\mathcal{D}(S)\) is generated under filtered colimits by the compact \(S\)-modules [17, Definition 7.2.4.1 and Proposition 7.2.4.2], the assumption that the \(R\)-module spectrum \(M_{R}=S\otimes_{S}M\) is compact is equivalent to the requirement that the (exact) functor
\[\underline{\operatorname{Map}}_{R}(M,-):\mathcal{D}(R)\longrightarrow\mathcal{ D}(S)\]
preserves small colimits [17, Proposition 1.1.4.1 and 1.4.4.1]. Hence, in view of the Eilenberg-Watts Theorem, the \(\mathbf{k}\)-linear colimit-preserving functors
\[\underline{\operatorname{Map}}_{R}(M,-\otimes_{E}T):\mathcal{D}(E) \longrightarrow\mathcal{D}(S)\quad\text{and}\quad-\otimes_{E}\underline{ \operatorname{Map}}_{R}(M,T):\mathcal{D}(E)\longrightarrow\mathcal{D}(S)\]
are equivalent. Consequently, there is an equivalence of \(\mathbf{k}\)-linear presentable stable \(\infty\)-categories
\[\mathcal{L}_{*}\!\left(\underline{\operatorname{Map}}_{R}(M,-\otimes_{E}T) \right)\simeq\mathcal{L}_{*}\!\left(-\otimes_{E}\underline{\operatorname{Map }}_{R}(M,T)\right). \tag{4}\]
We conclude the proof by considering the following composite of equivalences of \(\mathbf{k}\)-linear presentable stable \(\infty\)-categories (recall that \(N=\underline{\operatorname{Map}}_{R}(M,T)\)):
\[\mathcal{D}(\begin{smallmatrix}S&M\\ 0&R\end{smallmatrix}) =\mathcal{L}_{*}(-\otimes_{S}M)\] \[\overset{(1)}{\simeq}\mathcal{L}^{*}\!\left(\underline{ \operatorname{Map}}_{R}(M,-)\right)\] \[\overset{(2)}{\simeq}\mathcal{L}_{*}\!\left(\underline{ \operatorname{Map}}_{R}(M,-)\right)\] \[\overset{(3)}{\simeq}\mathcal{L}_{*}\!\left(\underline{ \operatorname{Map}}_{R}(M,-\otimes_{E}T)\right)\] \[\overset{(4)}{\simeq}\mathcal{L}_{*}\!\left(-\otimes_{E} \underline{\operatorname{Map}}_{R}(M,T)\right)=\mathcal{D}(\begin{smallmatrix} E&N\\ 0&S\end{smallmatrix})\,.\qed\]
_Remark_.: When \(\mathbf{k}\) is the Eilenberg-Mac Lane spectrum of the ordinary ring of integers, Ladkani's theorem is recovered from the previous theorem by considering the case where the underlying spectra of \(R\), \(S\), \(M\) and \(T\) are discrete, that is, their stable homotopy groups vanish in non-zero degrees. The assumptions in Ladkani's theorem are sufficient to guarantee that the upper-triangular ring spectra in the statement of the previous theorem are both discrete. Ladkani's theorem then follows from the fact that the \(\infty\)-category of module spectra over a discrete ring spectrum \(A\) is equivalent to the derived \(\infty\)-category of modules over the ordinary ring \(\pi_{0}(A)\), see [17, Remark 7.1.1.16]. Maycock's extension of Ladkani's theorem to differential graded algebras corresponds to the case where \(\mathbf{k}\) is the Eilenberg-Mac Lane spectrum of an ordinary commutative ring, see [17, Proposition 7.1.4.6].
_Example_.: Let \(R=S=E\) be arbitrary \(\mathbf{k}\)-algebra spectra and \(M=T=R\) with its canonical \(R\)-bimodule structure. The functors \(-\otimes_{R}R\) and \(-\otimes_{R}\underline{\operatorname{Map}}_{R}(R,R)\) are both equivalent to the identity functor of \(\mathcal{D}(R)\) and the equivalence in the main theorem reduces to the (non-trivial) equivalence of \(\mathbf{k}\)-linear presentable stable \(\infty\)-categories
\[\mathcal{D}(\begin{smallmatrix}R&R\\ 0&R\end{smallmatrix})\simeq\operatorname{Fun}\left(\Delta^{1},\mathcal{D}(R) \right)\overset{\sim}{\longrightarrow}\operatorname{Fun}\left(\Delta^{1}, \mathcal{D}(R)\right)\simeq\mathcal{D}(\begin{smallmatrix}R&R\\ 0&R\end{smallmatrix})\]
given by the passage from a morphism in \(\mathcal{D}(R)\) to its cofibre.
We conclude this article by describing certain canonical equivalences attached to an algebra spectrum (or, more generally, a morphism between such) that satisfies suitable finiteness/dualisability conditions. The bimodule spectra that arise play a central role in the study of right/left Calabi-Yau structures [11, 12] and their relative variants [13, 14], see [15, 16, 17, 18, 19, 20]. Given a \(\mathbf{k}\)-algebra spectrum \(A\), we write \(A^{e}=A\otimes_{\mathbf{k}}A^{\mathrm{op}}\) and recall that \(A\) can be viewed either as a right or as a left \(A^{e}\)-module spectrum [16, Construction 4.6.3.7 and Remark 4.6.3.8]. We also make implicit use of the canonical equivalences between the \(\mathbf{k}\)-linear \(\infty\)-category of \(A\)-bimodule spectra and those of \(A^{e}\)-\(\mathbf{k}\)-bimodule spectra and of \(\mathbf{k}\)-\(A^{e}\)-bimodule spectra, see [16, Proposition 4.6.3.15] and the discussion succeeding it.
1. Let \(A\) be a proper \(\mathbf{k}\)-algebra spectrum, that is, the underlying \(\mathbf{k}\)-module spectrum of \(A\) is compact; equivalently, \(A\) is a right dualisable object of the \(\infty\)-category of \(A^{e}\)-\(\mathbf{k}\)-bimodule spectra, see [16, Definition 4.6.4.2] and [16, Example D.7.4.2 and Remark D.7.4.3]. We write \[DA=\underline{\mathrm{Map}}_{\mathbf{k}}(A,\mathbf{k})\] for the \(\mathbf{k}\)-linear dual of \(A\). Setting \(R=E=\mathbf{k}\), \(S=A^{e}\), \(M=A\) and \(T=\mathbf{k}\), the main theorem affords an equivalence of \(\mathbf{k}\)-linear presentable stable \(\infty\)-categories \[\mathcal{D}\big{(}\begin{smallmatrix}A^{e}&A\\ 0&\mathbf{k}\end{smallmatrix}\big{)}\stackrel{{\sim}}{{\longrightarrow}}\mathcal{D}\big{(}\begin{smallmatrix}\mathbf{k}&DA\\ 0&A^{e}\end{smallmatrix}\big{)}\] between the derived \(\infty\)-category of the 'one-point extension' of \(A^{e}\) by the diagonal \(A\)-bimodule spectrum and that of the 'one-point coextension' of \(A^{e}\) by \(DA\) (this terminology originates in the representation theory of algebras [15]).
2. Let \(A\) be a smooth \(\mathbf{k}\)-algebra spectrum, that is, \(A\in\mathcal{D}(A^{e})\) is a compact object [16, Definition 11.3.2.1]; equivalently, \(A\) is a left dualisable object of the \(\infty\)-category of \(A^{e}\)-\(\mathbf{k}\)-bimodule spectra, see [16, Definition 4.6.4.13] and [16, Remark 11.3.2.2]. The \(A\)-bimodule spectrum \(\Omega_{A}=\underline{\operatorname{Map}}_{A^{e}}(A,A^{e})\) is called the inverse dualising \(A\)-bimodule (not to be confused with the based-loops functor on \(\mathcal{D}(A)\)). Setting \(R=E=A^{e}\), \(S=\mathbf{k}\), \(M=A\) and \(T=A^{e}\), the main theorem yields an equivalence of \(\mathbf{k}\)-linear presentable stable \(\infty\)-categories \[\mathcal{D}\big{(}\begin{smallmatrix}\mathbf{k}&A\\ 0&A^{e}\end{smallmatrix}\big{)}\stackrel{{\sim}}{{\longrightarrow}}\mathcal{D}\big{(}\begin{smallmatrix}A^{e}&\Omega_{A}\\ 0&\mathbf{k}\end{smallmatrix}\big{)}\,.\]
3. Let \(A\) be a smooth and proper \(\mathbf{k}\)-algebra spectrum. In this case there are mutually-inverse equivalences of \(\mathbf{k}\)-linear presentable stable \(\infty\)-categories \[-\otimes_{A}\Omega_{A}\colon\mathcal{D}(A)\stackrel{{\sim}}{{ \longleftrightarrow}}\mathcal{D}(A)\,:-\otimes_{A}DA,\] see [16, Proposition 4.6.4.20] where \(DA\) is called the Serre \(A\)-bimodule [16, Definition 4.6.4.5] and \(\Omega_{A}\) is called the dual Serre \(A\)-bimodule [16, Definition 4.6.4.16] (the fact that \(DA\) and \(\Omega_{A}\) are the right and left duals of \(A\) in the \(\infty\)-category of \(A^{e}\)-\(\mathbf{k}\)-bimodule spectra in the sense of [16, Definition 4.6.2.3] follows from [16, Proposition 4.6.2.1 and Remark 4.6.2.2]). Setting \(R=E=S=A\), \(M=A\) and \(T=DA\) or \(T=\Omega_{A}\), the main theorem provides equivalences of \(\mathbf{k}\)-linear presentable stable \(\infty\)-categories \[\mathcal{D}(\begin{smallmatrix}A&A\\ 0&A\end{smallmatrix})\stackrel{{\sim}}{{\longrightarrow}} \mathcal{D}(\begin{smallmatrix}A&DA\\ 0&A\end{smallmatrix})\stackrel{{\sim}}{{\longrightarrow}}\mathcal{D} \big{(}\begin{smallmatrix}A&\Omega_{A}\\ 0&A\end{smallmatrix}\big{)}\,,\] where we use that \(\underline{\mathrm{Map}}_{A}(A,DA)\simeq DA\) and \(\underline{\mathrm{Map}}_{A}(A,\Omega_{A})\simeq\Omega_{A}\) as \(A\)-bimodule spectra.
4. Let \(f\colon B\to A\) be a morphism of \(\mathbf{k}\)-algebra spectra that is not necessarily unital. By the Eilenberg-Watts Theorem, the counit of the induced adjunction \[-\otimes_{B}A\simeq f_{!}\colon\mathcal{D}(B)\longleftrightarrow\mathcal{D}( A)\,:f^{*}\]
can be interpreted as a morphism of \(A\)-bimodule spectra
\[\varepsilon\colon A\otimes_{B}A\longrightarrow A.\]
Suppose that \(A\) is smooth and that \(f^{*}(A)\) is compact as a \(B\)-module spectrum, so that the source and target of the morphism \(\varepsilon\) are compact \(A\)-bimodule spectra and, consequently, so is its cofibre. The \(A\)-bimodule spectrum
\[\Omega_{A,B}=\underline{\operatorname{Map}}_{A^{e}}(\operatorname{cofib}( \varepsilon),A^{e})\]
is called the relative inverse dualising \(A\)-bimodule [20]. Setting \(R=E=A^{e}\), \(S=\mathbf{k}\), \(M=\operatorname{cofib}(\varepsilon)\) and \(T=A^{e}\), the main theorem yields an equivalence of \(\mathbf{k}\)-linear presentable stable \(\infty\)-categories
\[\mathcal{D}\Big{(}\begin{smallmatrix}\mathbf{k}&\operatorname{cofib}( \varepsilon)\\ 0&A^{e}\end{smallmatrix}\Big{)}\stackrel{{\sim}}{{\longrightarrow}} \mathcal{D}\Big{(}\begin{smallmatrix}A^{e}&\Omega_{A,B}\\ 0&\mathbf{k}\end{smallmatrix}\Big{)}\]
that specialises to the equivalence in item (2) when \(B=0\).
### Acknowledgements
The main result in this article was presented as part of a lecture series delivered during the conference 'Two Weeks of Silting' that took place in Stuttgart, Germany, in August 2019; the author is grateful to the organisers for the opportunity of speaking at the conference. The author thanks Peter Jorgensen for informing him of Maycock's article [14] as well as the anonymous referee who suggested to include further applications of the main theorem. The author's research was supported by the Deutsche Forschungsgemeinschaft (German Research Foundation) under Germany's Excellence Strategy - GZ 2047/1, Projekt- ID 390685813, and partially supported by the Swedish Research Council (Vetenskapsradet) Research Project Grant 'Higher structures in higher-dimensional homological algebra.'
|
2304.02924 | The Governance of Physical Artificial Intelligence | Physical artificial intelligence can prove to be one of the most important
challenges of the artificial intelligence. The governance of physical
artificial intelligence would define its responsible intelligent application in
the society. | Yingbo Li, Anamaria-Beatrice Spulber, Yucong Duan | 2023-04-06T08:26:38Z | http://arxiv.org/abs/2304.02924v1 | # The Governance of Physical Artificial Intelligence
###### Abstract
Physical artificial intelligence can prove to be one of the most important challenges of the artificial intelligence. The governance of physical artificial intelligence would define its responsible intelligent application in the society.
Physical artificial intelligence can prove to be one of the most important challenges of the artificial intelligence. The governance of physical artificial intelligence would define its responsible intelligent application in the society.
## Introduction
Artificial Intelligence (AI) has grown to be the fundamental technology in today's world. Over the last few years, not only has AI been popularly applied in typical AI applications of information and signal processing such as Natural Language Processing (NLP), but it has also empowered all the other industries such as healthcare and robotics. Miriyev and Kovac[1] proposed to define the AI used in Robotics as Physical Artificial Intelligence (PAI) because PAI interacts with the physical world, contrary to the notion of traditional Digital Artificial Intelligence (DAI) applied in digital information processing. From this perspective, we propose to extend the notion of PAI to a much wider domain to also include Internet of Things (IoT), or automatic driving cars. To the best of our knowledge, most research on AI governance is limited to the domain of DAI, so in the present paper we propose to outline the governance framework of PAI.
## The application of PAI
In the proposed concept of PAI by Miriyev and Mirko [1], PAI refers to the typical robot system. While, we propose to extend the concept of PAI to cover all potential applications with the built-in AI perceiving and interacting between the cyberspace and the physical world. Besides the robot system with the AI working in an integrated and limit physical environment, the distributed intelligent system with AI capability is the typical Distributed PAI. As shown in Fig. 1, PAI could be applied in and include multiple distributed industries, such as IoT, self-driving cars, agriculture, healthcare and logistics.
We propose to classify PAI into two overlapped kinds as shown in Fig. 1: Independent PAI and Distributed PAI. Independent PAI refers to the intelligent device and the robot[1]. Distributed PAI becomes more and more popular when the edge computing[2] is mature and every device is connected to the network in the wider space. IoT and edge computing are typical Distributed PAI subdomains. Since it is popular for every intelligent system to be online and individual units in Distributed PAI have strong computing capabilities now, Independent PAI and Distributed PAI will overlap in multiple applications[3].
The IoT is a typical distributed system with a spatial distribution that ranges from a small space such as a room to a wider area such as a city. The IoT is formed of various sensors that capture the signals and changes in the physical world. Its AI power could happen in both server side and the edge side. Based on the AI analysis, IoT could directly or indirectly make predictions in the cyberspace to influence the physical world. For example, a self-driving car needs to first perceive real time road situations and connect to the Internet for navigation, then adjust the driving behavior. The agriculture is one of the most successful PAI applications. The sensors in the agriculture including cameras, temperature meter, hygrometer, etc, monitor the growth of the plant and predict, for example, the optimal pesticide intervention and the best harvesting time. In the healthcare industry, families, nursing homes or hospitals could use the biological sensors and the chemical sensors to monitor a patient and predict potential risks such as falling or unstable situations through a monitoring center. The "last mile" is the expensive and hard problem in the logistic industry. A typical distributed AI application could help with delivery tasks through delivery robots and drones connected to and commanded by the center server. Another example is the automatic sorting robot that has been used in the sorting center of the logistics. The general framework of Distributed PAI is described in Fig. 2.
DAI mimics the brain capability of logical thinking and induction in human brain, to process the data and signals perceived by human eyes and ears. The human brain is only responsible for processing the signals and transmitting commands to other parts of the body such as movement, vision perception, sound perception, digestion and etc. By comparison, Individual PAI is like an individual human body, while Distributed PAI further extends the AI capabilities just like the human society is composed
of multiple humans.
Due to the probabilistic data processing of the current DAI, decisions from DAI are unsubstantial, not being able to reduce the uncertainty of applications. On the other side of promoting the application of DAI, the explainable property of current DAI poses increasing governance challenges against negative and malicious practice of DAI, including data biases and AI frauds, etc. Explainable AI tries to explain and understand the internal operating mechanism of AI. Distributed PAI interacts with the physical world with a much larger spatial area and consequently accumulates Big Data AI footprints which includes much longer interaction trajectories crossing cyberspace and physical world. So explainable AI applied in Distributed PAI has the advantage to reveal the internal mechanism of DAI and the integrated human-cyber-physical social phenomena.
PAI needs to combine multiple streams of information including materials, temperature, vision, sound, etc, crossing multiple modals from multiple sensors as shown in Fig. 1. Through the mix of the multimodal information, PAI builds competitive capability that uses multiple types of information which allows it to make better decision and better precision, in the context of imprecise data collection, inconsistent information and incomplete knowledge scattering over various abstraction levels. The various sources of data and information bring multiple kinds of data, which outperform a single source of data, to make real-time decisions and predictions. This is a significant feature of PAI.
## The governance of PAI
The governance of DAI has been challenged by researchers from a fairness to social impact perspective. DAI has been facing the challenges of risk and governance [4] from, but not limited to, the following aspects: 1) The storage and transfer security; 2) The fake data; 3) The social privacy; 4) The bias of the sex, gender, and race because of the limited training datasets.
As a consequence to the multi-source perception and multi-dimension interaction in a much larger space, PAI, especially Distributed PAI, brings more uncertainty and risk from the social impact to the technology influence:
* The existence problem. PAI especially Distributed PAI such as IoT needs multiple kinds of sensors to interact with the physical world. If PAI is distributed in a limited space such as a factory, it will not encounter challenging regulation problems because it is in an internal space. However, if the space is extended to a larger space such as a city which is not under one unique regulation, PAI will face problems of social regulations.
* The data organization problem. The multiple sources of data from the physical world of a wider space will increase the structural construction and integration complexity of the data and information. The Knowledge Graph could be the potential solution for the information organization in the hierarchical structure.
Figure 1: The applications of Distributed PAI
* Cannikin Law. The development of PAI depends on at least 5 disciplines of materials science, mechanical engineering, chemistry, biology and computer science. Therefore, the slower development of one discipline will cause the problem of Cannikin law and prohibit the development of PAI.
* The social acceptance. Similar to the dilemma of DAI, the ubiquitous application of PAI will cause the worry of the society regarding the increase of unemployment, broadening of the gap in income, the shrinking of privacy space, etc. The acceptance from multiple aspects of the law and society will influence the application of PAI in the research, the industry and the society. So, the acceptance of the society and the corresponding legislation is a potential factor for PAI.
We illustrates above governance problems of PAI in Figure 3. The development of PAI has to resolve these four problems.
## Conclusion
We have suggested extending the notion of PAI to a larger physical space with the distributed applications with the notion of Distributed PAI. The spatial variety of Distributed PAI could vary from a room space to a city space, while its applications include IoT, agriculture, and so forth. Since PAI, especially Distributed PAI, perceives and interacts with different physical entities in different spaces, the governance issues including the existence problem has been challenged. We have put forward a
Figure 2: Distributed PAI
framework of governance problems of PAI however this is open to further discussions since it is a research topic applying not only to the research but also the development of the whole human society.
|
2303.17356 | The ESSnuSB design study: overview and future prospects | ESSnuSB is a design study for an experiment to measure the CP violation in
the leptonic sector at the second neutrino oscillation maximum using a neutrino
beam driven by the uniquely powerful ESS linear accelerator. The reduced impact
of systematic errors on sensitivity at the second maximum allows for a very
precise measurement of the CP violating parameter. This review describes the
fundamental advantages of measurement at the 2nd maximum, the necessary
upgrades to the ESS linac in order to produce a neutrino beam, the near and far
detector complexes, the expected physics reach of the proposed ESSnuSB
experiment, concluding with the near future developments aimed at the project
realization. | ESSnuSB Collaboration, A. Alekou, E. Baussan, A. K. Bhattacharyya, N. Blaskovic Kraljevic, M. Blennow, M. Bogomilov, B. Bolling, E. Bouquerel, F. Bramati, A. Branca, O. Buchan, A. Burgman, C. J. Carlile, J. Cederkall, S. Choubey, P. Christiansen, M. Collins, E. Cristaldo Morales, L. D'Alessi, H. Danared, D. Dancila, J. P. A. M. de André, J. P. Delahaye, M. Dracos, I. Efthymiopoulos, T. Ekelöf, M. Eshraqi, G. Fanourakis, A. Farricker, E. Fernandez-Martinez, B. Folsom, T. Fukuda, N. Gazis, B. Gålnander, Th. Geralis, M. Ghosh, A. Giarnetti, G. Gokbulut, L. Halić, M. Jenssen, R. Johansson, A. Kayis Topaksu, B. Kildetoft, B. Kliček, M. Kozioł, K. Krhač, Ł. Łacny, M. Lindroos, A. Longhin, C. Maiano, S. Marangoni, C. Marrelli, C. Martins, D. Meloni, M. Mezzetto, N. Milas, M. Oglakci, T. Ohlsson, M. Olvegård, T. Ota, M. Pari, J. Park, D. Patrzalek, G. Petkov, P. Poussot, F. Pupilli, S. Rosauro-Alcaraz, D. Saiang, J. Snamina, A. Sosa, G. Stavropoulos, M. Stipčević, B. Szybiński, R. Tarkeshian, F. Terranova, J. Thomas, T. Tolba, E. Trachanas, R. Tsenov, G. Vankova-Kirilova, N. Vassilopoulos, E. Wildner, J. Wurtz, O. Zormpa, Y. Zou | 2023-03-30T13:12:12Z | http://arxiv.org/abs/2303.17356v3 | # The ESSnuSB design study: overview and future prospects
###### Abstract
ESnuSB is a design study for an experiment to measure the CP violation in the leptonic sector at the second neutrino oscillation maximum using a neutrino beam driven by the uniquely powerful ESS linear accelerator. The reduced impact of systematic errors on sensitivity at the second maximum allows
for a very precise measurement of the CP violating parameter. This review describes the fundamental advantages of measurement at the 2\({}^{\rm nd}\) maximum, the necessary upgrades to the ESS linac in order to produce a neutrino beam, the near and far detector complexes, the expected physics reach of the proposed ESSnuSB experiment, concluding with the near future developments aimed at the project realization.
neutrino; oscillation; long baseline; CP violation; second maximum; precision +
Footnote †: journal: Physics Letters B
## 1 Introduction
It was widely believed in the first half of 20\({}^{\rm th}\) century that all physical laws must be invariant to spatial translation (implying the conservation of momentum), spatial rotation (implying the conservation of angular momentum), time translation (implying the conservation of energy), parity transformation and time inversion. Parity transformation effectively maps our world to its mirror image: invariance of physics to parity transformation means that for any physical process there exist a mirrored version of it. By the 1950s, parity invariance was experimentally verified with a high degree of precision for the strong and electromagnetic interactions, but there was no conclusive evidence for weak interactions [1]. A landmark experiment was then performed by observing beta decay of polarized cobalt-60 nuclei, which has found that parity is in fact maximally violated in that process [2].
Shortly thereafter it was proposed that the true symmetry of the universe is actually charge-parity (CP) symmetry [3], where charge transformation C maps each particle to its antiparticle and vice-versa. That is, laws of physics remain invariant to the parity inversion if one also swaps all particles and antiparticles. The CP symmetry violation (CPV) would then imply a fundamental difference in behaviour between particles and antiparticles.
The CPV has been observed in the hadron sector, first by measuring decays of neutral kaons [4], and later by measuring decays of heavier neutral mesons [5; 6; 7; 8]. The size of the CPV in the hadron sector has turned out to be quite small, not enough to explain the matter-antimatter asymmetry that we observe in the Universe today [9; 10].
At the time of writing of this document there is no conclusive evidence of CPV in the lepton sector; there are hints from the T2K [11; 12; 13] and NOvA [14] experiments, though, that leptonic CP might actually be maximally violated. The sensitivity of these two experiments in not expected to reach the discovery level of 5 \(\sigma\). Therefore, next generation facilities need to be constructed which will have larger target mass and more intense neutrino beam required to collect a statistical sample significant enough to resolve the existence of the leptonic CP violation. Two such experiments are currently in a construction phase: the Hyper-Kamiokande (HyperK) [15] and the Deep Underground Neutrino Experiment (DUNE) [16; 17; 18; 19]. They are expected to have the ability to reject the no-CPV hypothesis with a significance of more than 5 \(\sigma\) for a large fraction of parameter space [20; 21; 22; 23; 24].
ESSnuSB will be a next-to-next generation CPV experiment focusing on the precise measurement of CPV parameters in the lepton sector. The unprecedented precision will be achieved by measuring neutrino oscillations in the second oscillation maximum - in which the CPV effect is about 2.7 times larger than in the first one - making the experiment less sensitive to the systematic errors. This will be made possible using a very intense neutrino beam produced by the uniquely powerful ESS linear accelerator together with the very large neutrino far detectors.
This review starts with a discussion of the fundamental benefits of measurement at the 2\({}^{\rm nd}\) neutrino oscillation maximum. It proceeds with the description of the proposed ESSnuSB experiment: neutrino beam production, near and far neutrino detectors and the physics reach of the proposed experiment. It concludes with a brief description of the future developments towards the realization of the project.
## 2 CP violation measurement at the 2\({}^{\rm nd}\) oscillation maximum
At fundamental level, ESSnuSB aims to measure the CPV by observing the difference in oscillation probabilities between neutrinos and antineutrinos in \(\nu_{\mu}\to\nu_{e}\) and \(\overline{\nu}_{\mu}\to\overline{\nu}_{e}\) appearance channels, respectively.
### Oscillations in vacuum
The probability of neutrino oscillations in vacuum assuming a plane-wave approximation is given by (see Sec 14.4 in [25])
\[P_{\nu_{\alpha}\to\nu_{\beta}}=\delta_{\alpha\beta}-4\sum_{i>j}{\rm Re}\Big{(} A_{ij}^{\alpha\beta}\Big{)}\sin^{2}\frac{\Delta m_{ij}^{2}L}{4E}\pm 2\sum_{i>j}{ \rm Im}\Big{(}A_{ij}^{\alpha\beta}\Big{)}\sin\frac{\Delta m_{ij}^{2}L}{2E}\, \tag{1}\]
where \(\alpha\) is the initial neutrino flavour, \(\beta\) is the oscillated flavour, indices \(i\) and \(j\) are in the range 1-3, \(A_{ij}^{\alpha\beta}=U_{\alpha i}^{*}U_{\beta i}U_{\alpha j}U_{\beta j}^{*}\) is a quadrilinear product of elements of the unitary PMNS [26; 27; 28; 29; 30] mixing matrix \(U\), \(\Delta m_{ij}^{2}=m_{i}^{2}-m_{j}^{2}\) is a difference of squared neutrino masses \(m_{i}\) and \(m_{j}\), \(E\) is the energy of the neutrino and \(L\) is the distance between neutrino creation and interaction points; the upper sign \((+)\) in front of the third term corresponds to neutrinos, the lower one \((-)\) to antineutrinos.
The difference between oscillation probabilities of neutrinos and antineutrinos is then given by the expression
\[\mathcal{A}_{\rm CP}^{\alpha\to\beta}=P_{\nu_{\alpha}\to\nu_{\beta}}-P_{\overline {\nu}_{\alpha}\to\overline{\nu}_{\beta}}=4\sum_{i>j}{\rm Im}\Big{(}A_{ij}^{ \alpha\beta}\Big{)}\sin\frac{\Delta m_{ij}^{2}L}{2E}. \tag{2}\]
It follows from the properties of the 3-generation mixing matrix that the term \({\rm Im}\Big{(}A_{ij}^{\alpha\beta}\Big{)}\) is constant up to a sign if \(i\neq j\) and \(\alpha\neq\beta\), and zero otherwise [31]. That is:
\[{\rm Im}\Big{(}A_{ij}^{\alpha\beta}\Big{)}\equiv\pm J\, \tag{3}\]
where \(J\) is called the Jarlskog invariant.
Assuming the commonly used parametrization (see Sec. 14.3 in [25]) of the mixing matrix
\[U=\begin{pmatrix}c_{12}c_{13}&s_{12}c_{13}&s_{13}e^{-i\delta_{\rm CP}}\\ -s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta_{\rm CP}}&c_{12}c_{23}-s_{12}s_{23} s_{13}e^{i\delta_{\rm CP}}&s_{23}c_{13}\\ s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta_{\rm CP}}&-c_{12}s_{23}-s_{12}c_{23} s_{13}e^{i\delta_{\rm CP}}&c_{23}c_{13}\end{pmatrix}\, \tag{4}\]
where \(c_{ij}=\cos\theta_{ij}\) and \(s_{ij}=\sin\theta_{ij}\) are sine and cosine of a mixing angle \(\theta_{ij}\), and \(\delta_{\rm CP}\) is a CPV phase, the Jarlskog invariant can be written as
\[J=s_{12}c_{12}s_{23}c_{23}s_{13}c_{13}^{2}\sin\delta_{\rm CP}. \tag{5}\]
The CPV amplitude for the ESSnuSB oscillation channel can then be written as
\[\mathcal{A}_{\rm CP}^{\mu\to e}=P_{\nu_{\mu}\to\nu_{e}}-P_{\overline{\nu}_{ \mu}\to\overline{\nu}_{e}}=-16J\sin\frac{\Delta m_{31}^{2}L}{4E}\sin\frac{ \Delta m_{32}^{2}L}{4E}\sin\frac{\Delta m_{21}^{2}L}{4E}. \tag{6}\]
As an illustration, the dependence of \(\mathcal{A}_{\rm CP}^{\mu\to e}\) on \(L/E\), using central values of parameters from Table 1 and \(\delta_{\rm CP}=-\pi/2\), is shown in Fig. 1.
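As a complementary numerical illustration (ours, not part of the original analysis), the size of the Jarlskog factor implied by the central values of the mixing angles in Table 1 can be evaluated directly from Eq. (5); since \(\delta_{\rm CP}\) is unknown, the short sketch below evaluates \(J/\sin\delta_{\rm CP}\).

```
import numpy as np

# Central values of the mixing angles from Table 1; delta_CP is unknown, so the
# unknown sin(delta_CP) factor of Eq. (5) is left out.
s12, c12 = np.sqrt(0.304), np.sqrt(1 - 0.304)
s13, c13 = np.sqrt(0.02246), np.sqrt(1 - 0.02246)
s23c23 = 0.5 * np.sqrt(0.9898)                  # s23*c23 = sin(2*theta23)/2

print(f"J / sin(delta_CP) ~ {s12 * c12 * s13 * c13**2 * s23c23:.4f}")   # ~0.033
```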
By denoting \(x_{\rm max}^{(1)}\) and \(x_{\rm max}^{(2)}\) as the values of \(L/E\) at the first and second maximum respectively, one obtains the expression for the ratio between CPV violation at the \(2^{\rm nd}\) and \(1^{\rm st}\) maximum:
\[\frac{\mathcal{A}_{\rm CP}^{\mu\to e}\Big{(}x_{\rm max}^{(2)}\Big{)}}{\mathcal{A}_{\rm CP}^{\mu\to e}\Big{(}x_{\rm max}^{(1)}\Big{)}}=\frac{\sin\frac{\Delta m_{31}^{2}x_{\rm max}^{(2)}}{4}\,\sin\frac{\Delta m_{32}^{2}x_{\rm max}^{(2)}}{4}\,\sin\frac{\Delta m_{21}^{2}x_{\rm max}^{(2)}}{4}}{\sin\frac{\Delta m_{31}^{2}x_{\rm max}^{(1)}}{4}\,\sin\frac{\Delta m_{32}^{2}x_{\rm max}^{(1)}}{4}\,\sin\frac{\Delta m_{21}^{2}x_{\rm max}^{(1)}}{4}}\,. \tag{7}\]
The ratio between CPV in second and first maximum does not depend on the PMNS mixing angles, only on the neutrino mass splittings. Plugging in the values for mass splittings from Table 1, together with \(x_{\rm max}^{(1)}\) and \(x_{\rm max}^{(2)}\), one obtains
\[\frac{\mathcal{A}_{\rm CP}^{\mu\to e}\Big{(}x_{\rm max}^{(2)} \Big{)}}{\mathcal{A}_{\rm CP}^{\mu\to e}\Big{(}x_{\rm max}^{(1)}\Big{)}}\approx 2.7. \tag{8}\]
That is, the difference between neutrino and antineutrino oscillation probabilities due to CP violation is about three times larger at the second maximum than at the first one.
\begin{table}
\begin{tabular}{l c}
**Parameter** & **Best-fit value \(\pm 1\sigma\) range** \\ \hline \(\sin^{2}\theta_{12}\) & \(0.304\pm 0.012\) \\ \(\sin^{2}\theta_{13}\) & \(0.02246\pm 0.00062\) \\ \(\sin^{2}2\theta_{23}\) & \(0.9898\pm 0.0077\) \\ \(\Delta m_{21}^{2}\) & \((7.42\pm 0.21)\times 10^{-5}\) eV\({}^{2}\) \\ \(\Delta m_{31}^{2}\) & \((2.510\pm 0.027)\times 10^{-3}\) eV\({}^{2}\) \\ \hline \end{tabular}
\end{table}
Table 1: The best-fit values and \(1\sigma\) allowed regions of the oscillation parameters used throughout this review, as given in [32]. Reprinted from [33].
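The value quoted in Eq. (8) can be checked numerically using only the central values of the mass splittings in Table 1. The sketch below (an illustrative cross-check, not part of the original analysis) scans \(L/E\), locates the first two maxima of the amplitude in Eq. (6) and takes their ratio; the conversion factor 1.267 (for \(\Delta m^{2}\) in eV\({}^{2}\), \(L\) in km and \(E\) in GeV) and the scan range are assumptions of the sketch.

```
import numpy as np

# Central values of the mass splittings from Table 1 (eV^2)
dm21, dm31 = 7.42e-5, 2.510e-3
dm32 = dm31 - dm21

def cpv_amplitude(x):
    """|A_CP| of Eq. (6) up to the constant 16*J, with x = L/E in km/GeV.
    The assumed factor 1.267 converts dm^2 [eV^2] * L [km] / E [GeV] into dm^2*L/(4E)."""
    return np.abs(np.sin(1.267 * dm31 * x) *
                  np.sin(1.267 * dm32 * x) *
                  np.sin(1.267 * dm21 * x))

x = np.linspace(1.0, 1800.0, 200_000)      # scan L/E up to just past the 2nd maximum
a = cpv_amplitude(x)

# local maxima; keep the two most prominent ones, ordered in L/E
peaks = np.where((a[1:-1] > a[:-2]) & (a[1:-1] > a[2:]))[0] + 1
top2 = sorted(sorted(peaks, key=lambda i: a[i])[-2:])

print(f"1st maximum: L/E ~ {x[top2[0]]:.0f} km/GeV, 2nd maximum: L/E ~ {x[top2[1]]:.0f} km/GeV")
print(f"CPV amplitude ratio (2nd/1st) ~ {a[top2[1]] / a[top2[0]]:.2f}")     # ~2.7-2.8
```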
### Matter effects
When neutrinos propagate through matter instead of vacuum, the oscillation probability functions change. This effect is well understood and described in the literature [34; 35; 36; 37]. The fundamental reason for this is the effective potential induced by forward elastic scattering of neutrinos with matter. The matter potential seen by electron neutrinos is different from the one seen by muon and tau neutrinos: forward elastic scattering can proceed through neutral-current (NC) interactions for all three neutrino flavours, while there is an additional contribution for \(\nu_{e}\) only through charged-current (CC) interactions with orbital electrons.
Matter effects may mimic the vacuum CPV signal1. This needs to be carefully taken into account in long baseline oscillation experiments, since there the neutrino beam propagates through the Earth's crust. This effect is illustrated in Fig. 2 for the distance of 360 km at which the far detector of the ESSnuSB experiment is going to be located.
Footnote 1: One may argue that introducing matter (and not antimatter) into vacuum breaks the CP symmetry in itself.
The first appearance maxima in Fig. 2 are located roughly in the neutrino energy region 0.65-0.85 GeV and the second in the 0.25-0.35 GeV region. The exact positions of the maxima depend on the choice of value for \(\delta_{\rm CP}\) and on whether the matter effects are taken into account. The oscillation probability is significantly altered in the presence of matter around the 1st maximum, while at the 2nd maximum the probabilities in matter and in vacuum are similar. In fact, it can be shown that at neutrino energies of currently operating and proposed long baseline experiments and terrestrial matter densities, the matter effects around the 2nd maximum have a minimal contribution to the probability functions [38; 39].
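For readers who wish to reproduce the qualitative behaviour of Fig. 2, a minimal constant-density three-flavour calculation is sketched below. It is our own illustration rather than the treatment used in [33]: it assumes \(Y_{e}=0.5\), a crust density of 2.8 g/cm\({}^{3}\), the first-octant solution for \(\theta_{23}\) and \(\delta_{\rm CP}=-\pi/2\).

```
import numpy as np
from scipy.linalg import expm

# Assumed inputs: Table 1 central values, first-octant theta23, delta_CP = -pi/2,
# constant matter density 2.8 g/cm^3 with Y_e = 0.5.
s12, s13 = np.sqrt(0.304), np.sqrt(0.02246)
s23 = np.sqrt(0.5 * (1.0 - np.sqrt(1.0 - 0.9898)))      # from sin^2(2*theta23)
c12, c13, c23 = np.sqrt(1 - s12**2), np.sqrt(1 - s13**2), np.sqrt(1 - s23**2)
dm21, dm31 = 7.42e-5, 2.510e-3                          # eV^2
dcp = -np.pi / 2

def pmns(d):
    e = np.exp(1j * d)
    return np.array([
        [ c12 * c13,                        s12 * c13,                        s13 * np.conj(e)],
        [-s12 * c23 - c12 * s23 * s13 * e,  c12 * c23 - s12 * s23 * s13 * e,  s23 * c13       ],
        [ s12 * s23 - c12 * c23 * s13 * e, -c12 * s23 - s12 * c23 * s13 * e,  c23 * c13       ]])

def prob_mu_to_e(E, L, rho=2.8, antinu=False):
    """P(nu_mu -> nu_e) for energy E [GeV] after L [km] of matter with density rho [g/cm^3]."""
    U = pmns(-dcp if antinu else dcp)                    # antineutrinos: U -> U*
    M = U @ np.diag([0.0, dm21, dm31]) @ U.conj().T      # mass-squared matrix, flavour basis [eV^2]
    A = 7.63e-5 * rho * 0.5 * E                          # 2*sqrt(2)*G_F*N_e*E in eV^2 (Y_e = 0.5)
    M[0, 0] += -A if antinu else A
    H = 2.534 * M / E                                    # phase per km; 2.534 = 2 * 1.267
    S = expm(-1j * H * L)
    return abs(S[0, 1]) ** 2                             # amplitude <nu_e| S |nu_mu>

E, L = 0.28, 360.0                                       # GeV, km (2nd-maximum region)
print("matter :", round(prob_mu_to_e(E, L), 4), " vacuum:", round(prob_mu_to_e(E, L, rho=0.0), 4))
```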
### Conclusion
The advantages of measurement at the 2nd oscillation maximum over the 1st one are a significantly larger difference between neutrino and antineutrino oscillation probabilities and a minimal dependence of the oscillation functions on matter effects. The downside is that the 2nd maximum is about three times further away from the source than the 1st maximum for a
Figure 2: \(\nu_{\mu}\to\nu_{e}\) and \(\overline{\nu}_{\mu}\to\overline{\nu}_{e}\) oscillation probabilities as a function of neutrino energy at the fixed distance of 360 km. The oscillation probabilities are shown for \(\delta_{\rm CP}=0\) and \(\delta_{\rm CP}=-\pi/2\). Full lines correspond to oscillations in vacuum and dashed lines to oscillations in matter.
given neutrino energy, implying a reduction of neutrino flux - consequently the number of expected neutrino interactions - by a factor of nine. However, given the larger CPV amplitude (8) and assuming that the background and systematic error are comparable at the 1st and 2nd maximum, the measurement at the 2nd maximum is expected to be more significant and precise if a large enough statistical sample can be accumulated [40; 41; 42; 43; 44; 45; 22; 46]. To accumulate a significant statistical sample at the 2nd maximum, a very intense neutrino beam is required. To achieve this, the ESSnuSB project foresees using the uniquely powerful ESS proton linear accelerator currently under construction near Lund in Sweden.
## 3 Neutrino beam
Neutrino beam for the ESSnuSB experiment will be produced using the ESS [47] proton linear accelerator.
The basic idea behind the neutrino beam production is to create a beam of pions which is allowed to decay in-flight via the process \(\pi^{+}\to\mu^{+}+\nu_{\mu}\) (and its charge conjugate). The pions are produced by shooting a proton beam onto a thin target: this produces a number of hadronic species which immediately decay via strong and electromagnetic interactions, leaving only weak-stable particles called secondaries. Secondaries are composed of pions and heavier mesons (e.g. kaons, strange and charmed mesons), the number of latter rising with the proton beam energy. An electromagnetic horn envelops the target and is used to simultaneously focus the particles of a selected charge sign and defocus those of the opposite sign. By selecting the sign of the focused particles, one can choose between producing a beam of neutrinos and a beam of antineutrinos - decays of positive mesons mostly produce neutrinos, while decays of negative ones mostly produce antineutrinos. While charged pion decays produce almost exclusively a muon (anti)neutrino beam, heavier mesons (like kaons) have decay channels that include electron (anti)neutrinos which contribute to the \(\nu_{e}\) and/or \(\overline{\nu}_{e}\) component of the produced beam2.
Footnote 2: Decays of heavier mesons like \(D_{s}\) may have tau neutrinos in the final state as well, but their energy production threshold is well above the ESS proton energies.
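As an aside (our own illustration, not from the original text), the two-body kinematics of the in-flight decay fixes the neutrino energy once the pion energy and the laboratory emission angle are given, which is what shapes the flux shown later in Fig. 4.

```
import numpy as np

m_pi, m_mu = 0.13957, 0.10566                  # GeV

def enu_from_pion(E_pi, theta):
    """Neutrino energy [GeV] from pi -> mu + nu decay in flight, for a pion of total
    energy E_pi [GeV] and a neutrino emitted at laboratory angle theta [rad]."""
    p_pi = np.sqrt(E_pi**2 - m_pi**2)
    return (m_pi**2 - m_mu**2) / (2.0 * (E_pi - p_pi * np.cos(theta)))

# forward-going neutrinos from sub-GeV pions populate the few-hundred-MeV region of Fig. 4
for E_pi in (0.4, 0.6, 0.8):
    print(f"E_pi = {E_pi:.1f} GeV -> E_nu(theta = 0) = {enu_from_pion(E_pi, 0.0):.2f} GeV")
```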
Since the CPV measurement will be performed by observing electron (anti)neutrino interactions from \(\nu_{\mu}\to\nu_{e}\) and \(\overline{\nu}_{\mu}\to\overline{\nu}_{e}\) oscillation channels, the prompt \(\nu_{e}\) and \(\overline{\nu}_{e}\) beam components constitute the background to this measurement. The electron neutrino component of the beam coming from decays of heavier mesons is difficult to model precisely due to the quantum chromodynamical (QCD) nature of meson production, which induces an additional systematic error on the CPV measurement. An advantage of using a relatively low energy proton beam such as ESS at 2.5 GeV is that secondaries will contain a very small amount of flavoured baryons due to their mass production threshold, which in turn makes the resulting muon (anti)neutrino beam quite clean.
Since the ESS accelerator was designed for the production of spallation neutrons, a number of modifications will be required to enable production of a neutrino beam in parallel with the neutron programme. The proposed changes are shown in Fig. 3.
The ESS will operate using 2.86 ms long proton (or H\({}^{-}\) ion) pulses, which would produce neutrino beam pulses of a comparable duration. It has been shown that such "long" pulses cannot be used for CPV measurement (see Section 6.3.4 in [33]) due to the atmospheric neutrino background at the ESSnuSB far detectors: the number of atmospheric neutrino interactions during the "long" pulse would be so high that its statistical fluctuations would be larger than the number of expected beam \(\nu_{e}\) interactions, completely drowning the CPV signal. To solve this problem, the ESS pulses will be compressed to 1.3 \(\upmu\)s, which will effectively eliminate the atmospheric background in the far detectors.
The proton accumulator ring of 384 m in length will be used to compress the ESS pulses. This will be done by filling the accumulator ring over many of its periods of circulation and then discharging it towards the neutrino targets in one period, creating a 1.3 \(\upmu\)s proton pulse. In order to efficiently fill the ring, \(\mathrm{H}^{-}\) ions will be accelerated by the ESS instead of protons, and their two electrons will be stripped at the moment they enter the accumulator ring: this will avoid space charge issues which would arise from the electromagnetic repulsion between the protons already circulating within the ring and those being injected. The long ESS \(\mathrm{H}^{-}\) pulse will be chopped into four subpieces in the low energy part of the ESS linac, and each of the subpieces will be compressed to 1.3 \(\upmu\)s one after another. Each of the four compressed pulses will be transferred to a separate neutrino target using the switchyard system located between the accumulator ring and the target station.
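A quick back-of-the-envelope check (our own, assuming 2.5 GeV proton kinetic energy) shows that the 1.3 \(\upmu\)s compressed pulse is simply one revolution period of the 384 m ring, and gives the number of injection turns per batch.

```
import numpy as np

# Assumed inputs: 2.5 GeV proton kinetic energy, 384 m ring circumference,
# 2.86 ms linac pulse split into 4 batches.
m_p, T = 0.938272, 2.5                         # GeV
gamma = 1.0 + T / m_p
beta = np.sqrt(1.0 - 1.0 / gamma**2)
t_rev = 384.0 / (beta * 299_792_458.0)         # one revolution of the accumulator ring

print(f"revolution period ~ {t_rev * 1e6:.2f} us")            # ~1.3 us, i.e. the compressed pulse
print(f"injection turns per batch ~ {2.86e-3 / 4 / t_rev:.0f}")
```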
In order to withstand the very high power of the ESS linac, the ESSnuSB target station will consist of four identical target/horn systems, each receiving a quarter (1.25 MW) of the nominal average 5 MW ESS beam power. The targets will have a cylindrical shape of 78 cm in length and 3 cm in diameter, and will consist of closely packed titanium spheres of 3 mm mean
Figure 3: Layout of the ESS accelerator including ESSnuSB modifications required for neutrino beam production and detection. The proposed modifications are show in color: the transfer line, the accumulator ring, the switchyard, the neutrino target station and the near detector. Reprinted from [33].
diameter, cooled by gaseous helium. The secondaries will be focused using a pulsed magnetic horn driven by 350 kA pulses of 100 \(\upmu\)s duration and will be led into a 50 m long decay tunnel. The shape of the horn and length of the decay tunnel have been optimized for maximum CPV measurement sensitivity using a genetic algorithm.
The neutrino beam energy distribution using this setup is shown in Fig. 4.
The neutrino flux is dominated by the \(\nu_{\mu}(\overline{\nu}_{\mu})\) component in neutrino(antineutrino) mode, which makes up 97.6 % (94.8 %) of the flux. The \(\overline{\nu}_{\mu}(\nu_{\mu})\) component in neutrino(antineutrino) mode makes up 1.7 % (4.7 %) and comes from wrong-sign pion decays (i.e. imperfect charge
Figure 4: The expected neutrino flux components and their energy distribution at the 100 km distance from the source, in absence of the neutrino oscillations. Reprinted from [33].
selection of secondaries) and from decays of tertiary muons (muons produced in the secondary pion decay). The \(\nu_{e}(\overline{\nu}_{e})\) component in neutrino(antineutrino) mode makes up 0.67 % (0.43 %) of the flux and comes dominantly from the tertiary muon decays, with a small component from three-body kaon decay. The very small \(\overline{\nu}_{e}(\nu_{e})\) component in neutrino(antineutrino) mode, making up 0.03 % (0.03 %) of the flux, comes primarily from wrong-sign tertiary muon decays.
It should be noted that modelling tertiary muon production is much more precise than modeling secondary kaon production. This, together with the fact that most of the electron (anti)neutrino component comes from tertiary muon decays, makes it possible to model this component with less systematic uncertainty than would be the case for more energetic proton beams which produce a larger number of kaon secondaries.
## 4 Neutrino detectors
The ESSnuSB experiment will consist of a suite of near detectors and very large water Cherenkov far detectors. The far detectors will be used to measure the oscillated neutrino spectrum and hence the signal for CP violation. Measurements from the near detectors will be used to constrain the prompt neutrino flux and interaction cross-sections. A detailed description of the ESSnuSB detectors can be found in the ESSnuSB conceptual design report [33]; this section provides a brief overview of the basic ideas.
### Near detectors
Knowledge of neutrino interaction cross-sections, both inclusive and differential, will play a crucial role in the ESSnuSB experiment. An a-priori uncertainty on the cross-sections directly translates into an uncertainty on the expected event rate at the far detectors, which in turn decreases the CPV discovery potential and the precision of the \(\delta_{\rm CP}\) phase measurement. It turns out that this uncertainty (assuming it stays within reasonable bounds) does not affect the 5 \(\sigma\) \(\delta_{\rm CP}\) coverage too much (see Fig. 8) - mostly due to the very intense neutrino beam, large far detectors and the fact that we are measuring at the second oscillation maximum. However, it does play a crucial role in the expected precision of the \(\delta_{\rm CP}\) measurement (see Fig. 10). Since ESSnuSB is designed to be a next-to-next generation CPV experiment with a focus on precision, the near detectors have been designed with a focus on cross-section measurement. Additionally, separate cross-section measurement campaigns are foreseen in the construction and commissioning phase of the ESSnuSB experiment (see Sec. 6).
Near detector suite of the ESSnuSB project will consist of three different detectors, in upstream to downstream order: emulsion detector with water as a target, the SFGD-like granulated scintillator detector and a water Cherenkov detector. The near detector hall is schematically shown in Fig. 5.
The emulsion near detector (named \(\nu\)iking) will have a 1 t of water target mass, and will be used to precisely measure the final state topology of the neutrino-water interactions. This will make it possible to discriminate between different neutrino interaction modes - e.g. quasi-elastic scattering (QES), inelastic scattering like resonant (RES) and deep-inelastic (DIS), and meson exchange current (MEC) scattering. The measurement of MEC contribution to the total cross-section will be of special importance since it is expected to have a significant contribution at ESSnuSB neutrino energies and yet this channel is currently not adequately explored. The signature of this interaction in neutrino mode is a charged lepton and two protons in the final state. Usage of emulsion technology will enable direct measurement of the two protons, which will make it possible to very precisely tag and measure kinematics of these interactions. In antineutrino mode there will be two neutrons instead of protons in the final state which makes this kind of measurement significantly more difficult.
Immediately downstream of the emulsion detector, an SFGD-like [48] magnetized granulated scintillator detector will be installed. It will feature a fiducial target mass of 1 t and a dipole magnetic field of up to 1 T perpendicular to the beam direction. Having a magnetic field, it will be able to discriminate between positively and negatively charged leptons and therefore between neutrinos and antineutrinos. Its granular design will enable both muon momentum measurement and calorimetric measurement of the final state particles in neutrino interactions. Hence, it will feature the best neutrino energy reconstruction of the three near detectors. The downside is that the target material will be composed of carbohydrates (CH) instead of water; a theoretical model can be used to go from CH to water neutrino interaction cross-section, but this will introduce additional systematic uncertainties. The additional purposes of this detector will be: _(i)_ to provide timing information to the emulsion detector, and _(ii)_ to provide muon charge and momentum information to the near WC detector for those events in which the charged lepton in the final state traverses both detectors - this will allow additional calibration of the WC detector.
Figure 5: Schematic view of the ESSnuSB near detector hall.
Downstream of the scintillator detector, a 0.77 kt fiducial mass water Cherenkov detector will be situated. Due to its large mass, it is expected to record the bulk of neutrino interactions among the three detectors. It will be used to collect a high-statistics sample of \(\nu_{\mu}\) interactions and a significant sample of \(\nu_{e}\) interactions. Additionally, a sample of neutrino - orbital electron scattering events \(\nu+e^{-}\rightarrow\nu+e^{-}\) will be isolated. The interaction cross-section for this process is precisely known, so it can be used to directly measure the neutrino flux. This measurement, in turn, will be an input to the measurement of the neutrino-nucleus cross-section - the main goal of the near detector setup. It should be noted that having the same target material and detection technology as the far detector, it will be possible to correlate systematic uncertainties to some extent between the two detectors using a dedicated analysis.
### Far Detectors
The far detector site will consist of two identical large water Cherenkov detectors. Each detector will be placed in a cavern in the shape of a standing cylinder with a height of 78 m and a base diameter of 78 m, with extra room on top for access and for housing the required infrastructure (see Fig. 6). The design with two caverns was chosen due to the extreme technical challenge of excavating a single one large enough to contain the required water volume. A cylindrical structure, 76 m high and with a 76 m base diameter, will be constructed in the cavern to house the photomultipliers (PMTs). The entire cavern will be filled with ultra-pure water. Assuming a 2 m fiducial cut inward from the walls of the PMT structure, each detector will contain 270 kt fiducial mass of water, for a total of 540 kt fiducial mass.
The PMT-holding structure will feature inwards pointing 20 inch PMTs whose purpose will be to detect Cherenkov light from charged particles produced in neutrino interactions, and
Figure 6: Technical drawing of a single far detector cavern. Reprinted from [33].
the outward-facing 8 inch PMTs that will be used as a veto. The inner detector will have PMT coverage (fraction of the area covered by PMTs) of 30 %.
In the ESSnuSB energy range (see Fig. 4), most of the CC neutrino interactions are expected to be of the QES type. Since the only final state particle above the Cherenkov threshold in this type of interaction is a charged lepton (electron for \(\nu_{e}\) and muon for \(\nu_{\mu}\)), high purity \(\nu_{\mu}\) and \(\nu_{e}\) event samples can be isolated with high efficiency; the selection algorithm does not need to achieve high purity and efficiency in the higher energy region containing a significant contribution from RES and DIS with complicated final states, for the simple reason that not many ESSnuSB neutrino interactions will happen there. The resulting selection efficiency as a function of energy for different neutrino flavours is shown in Fig. 7.
## 5 Physics reach
The expected significance of the leptonic CPV discovery and the precision of the \(\delta_{\rm CP}\) measurement have been studied (see Section 8 in [33]) assuming the described ESSnuSB setup. The total run-time is assumed to be 10 y, out of which 5 y will be in neutrino mode and 5 y in antineutrino mode. The fact that the measurement will be conducted in the \(L/E\) range covering both the 2\({}^{\rm nd}\) and 1\({}^{\rm st}\) oscillation maxima results in a very high expected discovery potential and an unprecedented precision of the CPV phase \(\delta_{\rm CP}\) measurement. These results have been shown to be very robust to different assumptions on the systematic uncertainties.
The effects of three types of systematic error have been studied, each of them distorting the expected measured neutrino spectrum. These are: _(i)_ normalization uncertainty, an uncertainty on the overall normalization of the expected neutrino spectrum, _(ii)_ energy calibration uncertainty, assuming a fully correlated error on the neutrino energy reconstruction, and _(iii)_ bin-to-bin uncorrelated uncertainty, modelling the distortion of the shape of the expected spectrum. All these systematics have been applied independently for each measurement channel; a single measurement channel corresponds to an oscillation channel like \(\nu_{\mu}\rightarrow\nu_{e}\) and \(\nu_{e}\rightarrow\nu_{e}\), or to the NC neutrino interactions.
The CPV discovery potential is defined as the expected significance with which the experiment will rule out the non-CPV values \(\delta_{\rm CP}=0,\pi\). Out of the three studied systematic error types, it turns out that the CPV discovery potential is most sensitive to the normalization uncertainty. This effect is shown in Fig. 8.
Figure 7: The efficiency to correctly select a neutrino flavour as a function of neutrino energy. Full lines correspond to different neutrino flavours. Dashed line is efficiency of the fiducial cut. Reprinted from [33].
The extreme robustness of the ESSnuSB experiment to the systematic error is demonstrated by the fact that the CPV discovery potential is competitive even with a very pessimistic assumption on the normalization error of 25 %. Unless otherwise noted, throughout the remainder of the text a smaller - but still conservative - normalization error of 5 % will be used. The coverage of the \(\delta_{\rm CP}\) range for which the discovery potential is more than 5 \(\sigma\) as a function of run-time is shown in Fig. 9.
Figure 8: CPV discovery potential as a function of true \(\delta_{\rm CP}\) value, assuming the baseline of 360 km (Zinkgruvan mine) and run-time of 5y in \(\nu\) mode and 5y in \(\overline{\nu}\) mode. Different lines correspond to different normalization uncertainty assumptions. Reprinted from [33].
Figure 9: Coverage of the \(\delta_{\rm CP}\) range for which the discovery potential is larger than 5 \(\sigma\) as a function of run-time, assuming equal time in neutrino mode and antineutrino mode.
The strength of the ESSnuSB experiment will come from the unprecedented precision of the \(\delta_{\rm CP}\) measurement. The bin-to-bin uncorrelated systematic error has the largest effect on the \(\delta_{\rm CP}\) precision. The 1 \(\sigma\) precision as a function of the true value of \(\delta_{\rm CP}\) is shown in Fig. 10.
It can be seen from Fig. 10 that the precision on the \(\delta_{\rm CP}\) value is expected to be less than \(9^{\circ}\) for all true values, when using a conservative assumption on the systematic uncertainty of 5 % on normalization and 5 % bin-to-bin uncorrelated. The actual systematic uncertainties are expected to be smaller than that by the start of the project, further improving the precision. If the value of \(\delta_{\rm CP}\) is roughly known, the precision can be improved even further by adjusting the time spent taking measurements in neutrino mode versus antineutrino mode, while keeping the total run-time constant.
It is expected that by the time the ESSnuSB starts taking data, the existence of the CPV in the lepton sector will be either confirmed or excluded in a large part of the parameter space by the HyperK [15] and/or DUNE [16; 17; 18; 19] experiments. In either case, ESSnuSB will be an important next-to-next generation experiment which will be able to precisely measure the amplitude of CPV or to conduct a more precise scan of the parameter space in search of its existence.
It should be noted that, apart from the CPV measurement, ESSnuSB will have an ability to discriminate between the normal and inverted neutrino mass hierarchy with a significance of more than 5 \(\sigma\)[49].
## 6 Future developments
The study performed in the ESSnuSB CDR [33] and described so far in this review has mainly focused on the CPV measurement using the ESS neutrino beam. The construction of the large far detector facility will require a significant amount of time and resources: an intermediate step is therefore foreseen, which will focus on the neutrino interaction cross-section measurement, neutrino production target station R&D and study of the additional physics potential of ESSnuSB near and far detectors. These topics will be studied within the ESSnuSB+ [50] project, together with the study of the civil engineering of both ESS upgrades
Figure 10: The expected 1 \(\sigma\) precision for the measurement of the CPV parameter \(\delta_{\rm CP}\) as a function of the true value of \(\delta_{\rm CP}\), assuming the baseline of 360 km (Zinkgruvan mine) and run-time of 5y in \(\nu\) mode and 5y in \(\overline{\nu}\) mode. Different lines correspond to different bin-to-bin uncorrelated errors. A normalization error of 5 % is applied on top of the bin-to-bin error. Reprinted from [33].
and the far detector site and production of conceptual CAD drawings of the infrastructure. The additional facilities at the ESS site proposed by the ESSnuSB+ project are shown in Fig. 11.
As described in Sec. 3, the full ESSnuSB neutrino production target station will consist of four identical target/horn systems due to the high power of the ESS beam. The ESSnuSB+ proposes to build an R&D target station which will contain only one ESSnuSB target/horn operating at 1.25 MW beam power, i.e. at 1/4 of the full ESS power. The expertise obtained in design, construction and operation of this target station will directly apply to running a full-power ESSnuSB neutrino beam production facility. In addition, the pions from the secondary hadron beam produced in the R&D target station will be used to feed the low energy muon storage ring (LEnuSTORM), similar to the nuSTORM project [51].
A transfer line will be designed for the secondary particles - composed of hadrons (nucleons, pions and a small fraction of kaons) - exiting the R&D target station; the pions will be led into a straight part of the LEnuSTORM racetrack ring in which they will decay through the process \(\pi^{+}\rightarrow\mu^{+}+\nu_{\mu}\) (or its charge conjugate version for \(\pi^{-}\) mode of operation) to produce additional muons. The bend at the end of the straight section will be designed to kinematically select muons3 to keep them in the ring, while the pions will be led into the beam
Figure 11: Layout of the proposed upgrades to the ESS linear accelerator including those from ESSnuSB and ESSnuSB+. The ESSnuSB+ upgrades include a special target station, a muon storage racetrack ring (LEnuSTORM), a low energy monitored neutrino beam (LEMNB) line, and a new near detector to be used both for LEnuSTORM and LEMNB. The ESSnuSB near detector will be used as a far detector for LEnuSTORM and LEMNB. The image has been reprinted from the ESSnuSB+ proposal.
dump. The circulating muons will gradually decay through the process \(\mu^{+}\to e^{+}+\nu_{e}+\overline{\nu}_{\mu}\) (or its charged conjugate version for \(\pi^{-}/\mu^{-}\) mode); decays occurring in the straight section will produce a neutrino beam containing equal parts muon and electron neutrinos (with additional muon neutrinos coming from the pion decay in the first straight portion of the ring during the filling). This beam will have a significant \(\nu_{e}\) (or \(\overline{\nu}_{e}\)) component which will be used to measure the electron (anti)neutrino interaction cross-section.
A low energy monitored neutrino beam line, inspired by the ENUBET project [52], will be situated parallel to the LEnuSTORM ring. The basic idea behind this facility is to have an instrumented decay tunnel in which pions and muons decay to produce a neutrino beam. The walls of the decay tunnel will be instrumented by an iron-scintillator calorimeter which will be used to reconstruct the energy and direction of the charged decay products of pions and muons (muons for pion decay and electrons for muon decay). This information will be used to constrain the expected energy spectrum of neutrinos exiting the tunnel. The neutrinos will be detected in a detector shared with the LEnuSTORM. This will make it possible to precisely measure the interaction cross-section of muon neutrinos and possibly electron neutrinos. Given the very high expected number of decays in the tunnel, the LEMNB prefers to have a proton pulse as long as possible. Therefore, it will operate directly using "long" ESS pulses, bypassing the accumulator ring. This will require static focusing of the secondaries since electromagnetic horns are not able to withstand long high-current pulses required to hold the magnetic field for 3 ms. The feasibility of such static focusing has already been demonstrated in the ENUBET project [53].
Additionally, a study will be performed on the effect of gadolinium doping of the proposed ESSnuSB water Cherenkov detectors on significance and precision of proposed measurements. Gadolinium has a large cross section for neutron absorption, after which it emits a gamma ray. This makes it possible to detect neutrons present in the final state of neutrino interactions occurring in a WC detector. Since neutrinos tend to produce protons in the final state, and antineutrinos tend to produce neutrons, by detecting the delayed gamma ray signal from neutron capture one can have a degree of discrimination between neutrino and antineutrino interactions even in absence of a magnetic field.
Apart from the main CPV measurement programme, the large far detectors of the ESSnuSB project have the potential for additional notable measurements. In the neutrino sector, they will be able to measure interactions of atmospheric neutrinos, solar neutrinos, and supernova neutrinos; additionally, they may be sensitive to more difficult to measure neutrino sources such as cosmic neutrinos, diffuse supernova neutrinos and geoneutrinos. Given the large water fiducial mass, far detectors will have a high sensitivity to proton decay as well.
Along with their main purpose of measuring neutrino interaction cross-sections, the proposed neutrino facilities at ESS - LEnuSTORM, LEMNB, and the ESSnuSB target station as neutrino sources, in conjunction with the ESSnuSB and LEnuSTORM/LEMNB near detectors - will be used for additional short-baseline neutrino physics measurements. One of the main topics of study will be sterile neutrino oscillations driven by a 1 eV\({}^{2}\) neutrino mass-squared difference. Additional topics will include studies of non-standard neutrino interactions and constraints on new physics scenarios by studying neutrino-electron scattering.
## 7 Conclusions
The ESSnuSB is designed to be a next-to-next generation neutrino oscillation experiment to precisely measure the CP violation phase \(\delta_{\rm CP}\) at the 2\({}^{\rm nd}\) oscillation maximum by employing the uniquely powerful ESS accelerator as a proton driver for neutrino beam production. It is expected to start taking data after HyperK and DUNE have already conducted their measurement and either confirmed the existence of leptonic CPV or excluded it in their sensitivity regions. If the existence of CPV is confirmed, ESSnuSB will start the precision era of leptonic
CPV measurement; if not, it will have an equally important mission of searching for CPV in the part of parameter space inaccessible to its two predecessor experiments.
The conceptual design for the ESSnuSB experiment has been published [33], and the new project ESSnuSB+ is underway to design intermediate facilities for R&D and measurement of neutrino interaction cross-section, perform civil engineering conceptual studies and explore the additional physics possibilities of the proposed infrastructure.
This project has been supported by the COST Action EuroNuNet: "Combining forces for a novel European facility for neutrino-antineutrino symmetry-violation discovery". It has also received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 777419. We acknowledge further support provided by the following research funding agencies: Centre National de la Recherche Scientifique and Institut National de Physique Nucleaire et de Physique des Particules, France; Deutsche Forschungsgemeinschaft, Germany, Projektnummer 423761110; Agencia Estatal de Investigacion through the grants IFT Centro de Excelencia Severo Ochoa, Spain, contract No. CEX2020-001007-S and PID2019-108892RB funded by MCIN/AEI/10.13039/501100011033; Polish Ministry of Science and Higher Education, grant No. W129/H2020/2018, with the science resources for the years 2018-2021 for the realisation of a co-funded project; Ministry of Science and Education of Republic of Croatia grant No. KK.01.1.01.0001; as well as support provided by the universities and laboratories to which the authors of this report are affiliated, see the author list on the first page. Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.
The following abbreviations are used in this manuscript:
\begin{tabular}{l l} CPV & Charge-parity violation \\ ESS & European Spallation Source \\ ESSnuSB & European Spallation Source Neutrino Super Beam \\ PMNS & Pontecorvo-Maki-Nakagawa-Sakata \\ NC & Neutral-current \\ CC & Charged-current \\ QCD & Quantum chromodynamics \\ QES & Quasi-elastic scattering \\ RES & Resonant scattering \\ DIS & Deep-inelastic scattering \\ MEC & Meson-exchange current \\ SFGD & Super fine-grained detector \\ CH & Carbohydrates \\ PMT & Photomultiplier tube \\ R\&D & Research and development \\ LEnuSTORM & Low Energy neutrinos from SToRed Muons \\ LEMNB & Low energy monitored neutrino beam \\ WC & Water Cherenkov \\ \end{tabular}
|
2302.00503 | Tracking People in Highly Dynamic Industrial Environments | To date, the majority of positioning systems have been designed to operate
within environments that have long-term stable macro-structure with potential
small-scale dynamics. These assumptions allow the existing positioning systems
to produce and utilize stable maps. However, in highly dynamic industrial
settings these assumptions are no longer valid and the task of tracking people
is more challenging due to the rapid large-scale changes in structure. In this
paper we propose a novel positioning system for tracking people in highly
dynamic industrial environments, such as construction sites. The proposed
system leverages the existing CCTV camera infrastructure found in many
industrial settings along with radio and inertial sensors within each worker's
mobile phone to accurately track multiple people. This multi-target
multi-sensor tracking framework also allows our system to use cross-modality
training in order to deal with the environment dynamics. In particular, we show
how our system uses cross-modality training in order to automatically keep
track of environmental changes (i.e. new walls) by utilizing occlusion maps. In
addition, we show how these maps can be used in conjunction with social forces
to accurately predict human motion and increase the tracking accuracy. We have
conducted extensive real-world experiments in a construction site showing
significant accuracy improvement via cross-modality training and the use of
social forces. | Savvas Papaioannou, Andrew Markham, Niki Trigoni | 2023-02-01T15:17:12Z | http://arxiv.org/abs/2302.00503v1 | # Tracking People in Highly Dynamic Industrial Environments
###### Abstract
To date, the majority of positioning systems have been designed to operate within environments that have long-term stable macro-structure with potential small-scale dynamics. These assumptions allow the existing positioning systems to produce and utilize stable maps. However, in highly dynamic industrial settings these assumptions are no longer valid and the task of tracking people is more challenging due to the rapid large-scale changes in structure. In this paper we propose a novel positioning system for tracking people in highly dynamic industrial environments, such as construction sites. The proposed system leverages the existing CCTV camera infrastructure found in many industrial settings along with radio and inertial sensors within each worker's mobile phone to accurately track multiple people. This multi-target multi-sensor tracking framework also allows our system to use cross-modality training in order to deal with the environment dynamics. In particular, we show how our system uses cross-modality training in order to automatically keep track of environmental changes (i.e. new walls) by utilizing occlusion maps. In addition, we show how these maps can be used in conjunction with social forces to accurately predict human motion and increase the tracking accuracy. We have conducted extensive real-world experiments in a construction site showing significant accuracy improvement via cross-modality training and the use of social forces.
Wireless Sensor Networks, Positioning
## Introduction
In today's large and complex industrial environments, such as construction sites, advanced planning and scheduling, careful coordination, efficient communication and reliable activity monitoring are essential for productivity and safety. Accurate and cost effective positioning and identification are the two main key requirements in order to meet all the above goals. Although positioning technologies have reached a significant level of maturity over the last years, there is still no adequate solution for providing accurate positioning services across large and complex industrial settings.
More specifically, tracking the workers in a construction site is much more challenging than indoor positioning, mainly due to the many moving parts and the fast large-scale changes that occur in these complex environments. For instance, in an indoor environment, the positions of walls and floors remain constant over time, whereas positions of furniture vary little from day to day. Existing indoor positioning systems leverage this environmental stability to provide accurate location services with the use of stable maps. In contrast, the construction site evolves rapidly from day to day, precluding the use of systems which rely on stable, long-term maps for positioning. Currently, there is no system that allows for workers to be tracked reliably and robustly during all phases of construction. As a case in point, consider the challenges in a unified positioning system that works equally as well during deep foundation excavation through to an almost complete multi-storey building. At different points in time, the performance of different techniques alters, with some improving and some degrading.
In this paper, we propose a multi-sensor tracking system which makes use of visual, radio and inertial measurements in order to tackle the problem of accurate localization and identification in construction sites which are characterized by rapid large-scale changes in structure. For example, Fig. (1) shows the effect of a wall being installed in the middle of one of our tracking experiments. The received signal strength of a worker's smartphone from one of the access points dropped considerably after the installation of the wall, in a matter of minutes. The field of view of the camera also changed, not allowing us to directly visually track the people behind the wall. In addition to these short-term changes, during our experiments we observed much more
Fig. 1: The WiFi signal strength received by the worker in circle is affected by the installation of a new wall (a) Before the installation, there are direct WiFi signals from the access point (shown as triangle) to the worker, (b) The worker is blocked by the new wall, which affects the propagation properties of the WiFi signals as shown in the graph above.
significant long term changes (Fig. (2)); within periods of a few weeks, the scene changed dramatically, staircases or entire floors were added, obfuscating the view to the first floors and creating additional layers where people needed to be tracked. Moreover, the radio and magnetic maps proved unstable with the movement of large structures and the uniforms that people wear for safety make them very hard to distinguish visually, necessitating the use of a multi-sensor tracking framework.
Our aim is to provide a system that can monitor the location of workers to indicate working hazards (e.g. red and green zones), which can be individually tailored. For example, a steel-worker has the training to operate in areas which might not yet be poured with concrete whilst forming the steel rebar. Conversely, a general construction worker should not venture into regions where steel-work has not been completed. This level of safety requires positioning precision beyond the majority of indoor positioning solutions, with desired sub-meter accuracy.
In essence, we are exploiting the fact that different sensing technologies have uncorrelated failure modes to provide a robust, adaptive positioning framework. To summarize, the major contributions of this work are as follows:
1. We are investigating the problem of tracking in highly dynamic industrial settings and we are proposing a positioning framework explicitly designed for these rapidly changing environments. Our particle-filter based multi-hypothesis tracking framework utilizes three different sensor modalities (i.e. vision, radio and inertial) to allow for accurate tracking in challenging conditions and environments such as construction sites which are characterized by rapid large-scale changes in structure.
2. A technique for cross-modal sensor parameter learning. The proposed system is able to automatically tune the parameters of its sub-systems (e.g. radio model, visual detector, step-length) by making use of the tracking output and a subset of sensor modalities.
3. We demonstrate the impact of applying the social force model to improve tracking in dynamic environments. In a construction site the environment changes rapidly with the addition of new walls, corridors, etc. These changes define the walkable area by restricting human motion in certain locations. In this work we show how to take advantage of these environmental changes with social forces to significantly increase the tracking accuracy.
4. We have conducted extensive experiments in a real construction site with the help and guidance of our industrial partners.
## 2 Problem Definition
In this paper we tackle the problem of tracking people in environments equipped with one or more stationary calibrated cameras. We assume that people that desire to be tracked carry a mobile device, such as a smartphone or customized worker safety equipment, and move freely in and out of the field of view (FOV). We divide time into short time intervals, and at each time \(t\) we receive a number of camera detections of the moving objects denoted as \(C_{t}=\{c_{t}^{1},c_{t}^{2},...,c_{t}^{j},...\},\ 1\leq j\leq|C_{t}|\). A camera detection \(c_{t}^{j}\) represents the bounding box of the \(j_{\text{th}}\) object generated by a foreground detector. Note that at time \(t\) we could be receiving camera detections not only from people but also from other moving objects (i.e. vehicles); false positive detections are also received due to illumination changes, shadows, etc. In order to reduce the number of false positive detections and concentrate on detecting only people we apply a head detector to the output of a foreground detector. A camera detection \(c_{t}^{j}\) is projected into the ground plane via a projective transformation which will be denoted as \(\hat{c}_{t}^{j}\) in this paper.
At time \(t\) we also receive a collection of radio measurements \(R_{t}=\{r_{t}^{k}\},\ 1\leq k\leq K\) where \(K\) is the total number of people with mobile devices who wish to be tracked and \(r_{t}^{k}=[\text{rss}^{1},...,\text{rss}^{m}]_{t}^{k}\) is a vector of received signal strength (RSS) measurements of the \(k_{\text{th}}\) device from \(m\) access points. Additionally, we assume that each mobile device is equipped with an inertial measurement unit (IMU) containing an accelerometer and a magnetometer. This allows us to generate at time \(t\) a collection of inertial measurements denoted as \(S_{t}=\{s_{t}^{k}\}\) where \(s_{t}^{k}=[b_{t}^{k},d_{t}^{k},\theta_{t}^{k}]\) is a vector that contains the step indicator, step-length and heading of the \(k_{\text{th}}\) person respectively. Each index \(k\) uniquely identifies a person and corresponds to a unique MAC address of the mobile device.
The problem to solve is the following: _Given anonymous camera detections \(C_{1:t}\), id-linked radio measurements \(R_{1:t}\) and id-linked inertial measurements \(S_{1:t}\) estimate the trajectories of all users carrying mobile devices and moving inside the camera FOV_.
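To make the notation above concrete, the following sketch shows one possible way to organise the per-time-step observations in code; the container names and fields are ours and are not taken from the paper.

```
from dataclasses import dataclass
from typing import List, Tuple

# Illustrative containers for the per-time-step observations defined above
# (names and fields are ours, not taken from the paper).

@dataclass
class CameraDetection:            # anonymous: c_t^j
    bbox: Tuple[float, float, float, float]   # bounding box in image coordinates
    ground_xy: Tuple[float, float]            # ground-plane projection (c-hat_t^j)

@dataclass
class RadioMeasurement:           # id-linked: r_t^k
    device_id: str                # MAC address identifying person k
    rss: List[float]              # RSS from each of the m access points

@dataclass
class StepMeasurement:            # id-linked: s_t^k = [b, d, theta]
    device_id: str
    step_taken: bool
    step_length: float            # metres
    heading: float                # radians

@dataclass
class Frame:                      # everything received at time t
    t: float
    camera: List[CameraDetection]
    radio: List[RadioMeasurement]
    steps: List[StepMeasurement]
```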
## 3 System Overview
An overview of the proposed system architecture is shown in Fig. (3). The _Positioning and Identification filter_ obtains anonymous camera detections, radio and inertial measurements from multiple people and is responsible for solving three problems. Firstly, it establishes the correspondences of camera detections across frames, that is, it links together anonymous camera detections that correspond to the same person. Secondly, it finds the mapping between anonymous camera detections and id-linked smartphone (radio and step) measurements. Finally, it identifies and estimates the positions of multiple targets.
The _Adaptive Learner_ uses the output of the filter in combination with the input observations, and performs cross-modality training. Specifically, it configures the foreground detector's internal parameters taking into account available motion measurements. In addition, it tunes the step-length estimation method by leveraging reliable camera measurements. Finally, it exploits camera measurements to learn the radio model; radio, magnetic and occlusion
Fig. 2: We have conducted tracking experiments in a construction site setting: (a) the construction site on day 1, (b) the construction site on day 36. The site changes rapidly from day to day, precluding the use of positioning systems which rely on stable, long-term maps.
maps can also be learned which can be used to further improve the system's accuracy. The remaining components of the system are existing modules which pre-process raw sensor data and transform them to camera, step and radio measurements.
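As an illustration of what one cross-modality training step could look like (the concrete model and the numbers below are our own assumptions, not the paper's), the radio model can be refreshed by fitting a log-distance path-loss curve to (distance, RSS) pairs, where the distances come from camera-based position estimates of an already-identified worker.

```
import numpy as np

# A sketch of one possible cross-modality learning step (model choice and numbers are ours):
# fit a log-distance path-loss model RSS = P0 - 10*n*log10(d) to (distance, RSS) pairs,
# where the distances come from camera-based positions of an already-identified worker.
rng = np.random.default_rng(0)
d = rng.uniform(2.0, 40.0, size=200)                                  # distances to one AP [m]
rss = -40.0 - 10 * 2.2 * np.log10(d) + rng.normal(0.0, 3.0, d.size)   # synthetic RSS [dBm]

X = np.column_stack([np.ones_like(d), np.log10(d)])                   # rss ~ P0 + slope*log10(d)
(p0, slope), *_ = np.linalg.lstsq(X, rss, rcond=None)
print(f"P0 ~ {p0:.1f} dBm, path-loss exponent n ~ {-slope / 10:.2f}")
```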
## 4 Multiple Target Tracking
In this section we provide a brief overview of previous work on multiple target tracking (MTT). A more detailed description of MTT algorithms can be found in [1].
### _Introduction to Multiple Target Tracking_
Under the general MTT setup a number of indistinguishable targets are assumed to move freely inside the field of view; they can enter and exit the FOV at random times. The system periodically receives sensor data about the positions of the targets; these data are noisy, include false alarm measurements (i.e. background noise or clutter) and occur with some detection probability. Each target follows a sequence of states (e.g. positions) during its lifetime called a _track_. The main objective of MTT is to collect sensor data containing multiple potential targets of interest and to then find the tracks of all targets and filter out the false alarm measurements. If the sequence of measurements associated with each target is known (i.e. id-linked measurements) then MTT reduces to a state estimation problem (e.g. distinct Kalman/particle filters can be used to follow each target). However, when the target-to-measurements association is unknown (for example, anonymous measurements from cameras, radars and sonars are used) the data association problem must be solved in addition to state estimation. Essentially, the data association problem seeks to find which measurements correspond to each target.
### _Rao-Blackwellized Particle Filtering_
The main idea of Rao-Blackwellized particle filtering (RBPF) [2, 3] is to reduce the number of variables that are sampled by evaluating some parts of the filtering equations analytically. This reduction makes RBPF computationally more efficient than the standard particle filter, especially in high dimensional state-spaces.
The Rao-Blackwellized Monte Carlo Data Association filter (RBMCDA) [4, 5] is a sequential Monte Carlo MTT method that uses Rao-Blackwellized particle filtering (RBPF) to estimate the posterior distribution of states and data associations efficiently. More specifically, instead of using a pure particle representation of the joint posterior distribution of states and data associations, RBMCDA proceeds by decomposing the problem into two parts: a) estimation of the data-association posterior distribution and b) estimation of the posterior distribution of target states. The first part is estimated by particle filtering and the second part is computed analytically using Kalman filtering. The aforementioned decomposition is possible, since in RBMCDA the dynamic and measurement models of the targets are modeled as linear Gaussian conditioned on the data association and can thus be handled efficiently by the Kalman filter.
```
1:Input:\(N\) particles, a measurement vector \(y_{t}\).
2:Output:\(p(x_{t},\lambda_{t}|y_{1:t})\): the joint distribution of target states and target-to-measurement associations at time \(t\) given measurements up to time \(t\).
3:for each particle \(i\in(1..N)\)do
4: For all targets run Kalman filter prediction step.
5: Form the importance distribution as: For all association events \(j\) calculate the unnormalized association probabilities: \(\hat{\pi}_{j}^{(i)}=\hat{p}(y_{t}|\lambda_{t}^{(i)}=j,y_{1:t-1},\lambda_{1:t- 1}^{(i)})p(\lambda_{t}^{(i)}=j|\lambda_{1:t-1}^{(i)})\)
6: Normalize the importance distribution.
7: Draw new \(\lambda_{t}^{(i)}\) from the importance distribution.
8: Update target \(\lambda_{t}^{(i)}\) with \(y_{t}\) using Kalman correction step.
9: Update particle weight.
10:endfor
11: Normalize particle weights.
12: Resample.
13: Approximate \(p(x_{t},\lambda_{t}|y_{1:t})\) as: \(p(x_{t},\lambda_{t}|y_{1:t})\approx\sum_{i=1}^{N}w_{t}^{(i)}\delta(\lambda_{t }-\lambda_{t}^{(i)})\mathcal{N}(x_{t}|M_{t}^{(i)},P_{t}^{(i)})\) where \((M_{t}^{(i)},P_{t}^{(i)})\) are the means and covariances of the target states of the \(i_{\text{th}}\) particle.
```
**Algorithm 1** A high-level description of the RBMCDA filter
A high level overview of the RBMCDA algorithm is shown in Alg. (1). The algorithm maintains a set of \(N\) particles and each particle corresponds to a possible association of anonymous measurements (\(y_{t}\)) to tracks. Each particle maintains for each target its current state \(x_{t}\) (e.g. location) and state uncertainty (i.e. posterior distribution \(p(x_{t}|y_{1:t})\)). In the first step (line 4), a Kalman filter is used to predict the next state of a target based on its previous state (\(p(x_{t}|y_{1:t-1})\)). Then, the algorithm considers associating each anonymous measurement with each one of the targets in the particle and estimates the probability of each candidate association event (lines 5-6). The association events are modeled with the association indicator \(\lambda_{t}\) (e.g. \((\lambda_{t}=0)\implies\) clutter association at time \(t\), \((\lambda_{t}=j)\implies\) target \(j\) association at time \(t\), etc). The association probability \(\hat{r}_{t}\) for target \(j\) is computed from the measurement likelihood \(\hat{p}(y_{t}|\lambda_{t})\) and the prior probability of data associations \(p(\lambda_{t}|\lambda_{t-1})\). By sampling the resulting importance distribution, the algorithm selects only one of the candidate associations (line 7) and updates the state of the respective target with the anonymous measurement (line 8). This is repeated for each anonymous measurement (e.g. for each camera detection in the camera frame). The particle's weight is then updated taking into account its previous weight and the probabilities of selected associations (line 9). Once all particles have been updated and their weights normalized (line 11), they are re-sampled based on their normalized weights (line 12). At the end of each iteration, the positions of the targets are estimated as a weighted average (i.e. mixture of Gaussians) across all particles
Fig. 3: Overview of the proposed system architecture.
(line 13). Note that the algorithm above allows us to enforce data association constraints. For instance, we can express that each track is updated by at most one visual measurement, by suitably modeling association priors in line 5. The existing RBMCDA algorithm is designed to work with anonymous observations. In the next section we point out how we extend it in order to exploit radio and inertial observations that are inherently linked to unique device IDs (i.e. MAC addresses).
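For concreteness, a heavily simplified sketch of the particle loop of Algorithm 1 is given below. It assumes 2-D targets with random-walk dynamics, direct position measurements and uniform clutter, and it omits track management and the id-linked radio/inertial observations discussed in the next section; it illustrates the structure of the filter rather than the implementation used in this work.

```
import numpy as np

rng = np.random.default_rng(0)

# Heavily simplified RBMCDA-style particle loop (our own sketch of Algorithm 1):
# 2-D targets, random-walk dynamics, direct position measurements, uniform clutter.
N_PART, N_TARG = 100, 2
Q = 0.05 * np.eye(2)          # process noise
R = 0.10 * np.eye(2)          # measurement noise
AREA = 100.0                  # clutter is uniform, density 1/AREA
P_CLUTTER = 0.1               # prior probability that a measurement is clutter

def gauss(y, m, S):
    d = y - m
    return np.exp(-0.5 * d @ np.linalg.solve(S, d)) / (2 * np.pi * np.sqrt(np.linalg.det(S)))

# each particle keeps a (mean, covariance) pair per target, plus an importance weight
particles = [{"kf": [(np.zeros(2), np.eye(2)) for _ in range(N_TARG)], "w": 1.0 / N_PART}
             for _ in range(N_PART)]

def step(particles, measurements):
    for p in particles:
        # line 4: Kalman prediction (random walk: mean unchanged, covariance grows)
        p["kf"] = [(m, P + Q) for (m, P) in p["kf"]]
        for y in measurements:
            # lines 5-6: importance distribution over associations (clutter or one target)
            probs = [P_CLUTTER / AREA]
            probs += [(1 - P_CLUTTER) / N_TARG * gauss(y, m, P + R) for (m, P) in p["kf"]]
            probs = np.asarray(probs)
            j = rng.choice(len(probs), p=probs / probs.sum())     # line 7: sample an association
            p["w"] *= probs.sum()                                 # line 9: weight update (marginal likelihood)
            if j > 0:                                             # line 8: Kalman correction of chosen target
                m, P = p["kf"][j - 1]
                K = P @ np.linalg.inv(P + R)
                p["kf"][j - 1] = (m + K @ (y - m), (np.eye(2) - K) @ P)
    # lines 11-12: normalise weights and resample
    w = np.array([p["w"] for p in particles]); w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return [{"kf": list(particles[i]["kf"]), "w": 1.0 / len(particles)} for i in idx]

# one illustrative update with two anonymous detections
particles = step(particles, [np.array([0.2, -0.1]), np.array([3.0, 2.5])])
```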
## 5 Proposed Approach
We are now in a position to describe how we extend the RBMCDA framework to address the identification and tracking problem in a construction site setting. The key difference here is that we introduce id-linked observations in addition to the anonymous camera observations. This impacts a number of steps of the algorithm above, as explained in this section.
### _State Prediction and Update_
As in the original algorithm, each particle uses a set of Kalman filters to track targets; however, in our case, we are not interested in tracking all targets within the FOV; we only track people equipped with mobile devices and we continue to do so when they temporarily come out of the FOV. We extend the framework in [4, 5] in order to use id-linked observations in the prediction and correction steps of the Kalman filter. In particular, we use inertial sensor measurements to predict the next state of a person (instead of only relying on the previous state as in line 4). Furthermore, we use WiFi/BTLE and camera measurements to correct the person's state (instead of only anonymous camera measurements as in line 8). More specifically, the target's dynamics in our system are modeled by the following linear equation:
\[x_{t}=x_{t-1}+B_{t}\begin{bmatrix}d_{\Delta t}\cos(\theta_{\Delta t})\\ d_{\Delta t}\sin(\theta_{\Delta t})\end{bmatrix}+w_{t} \tag{1}\]
where \(t\) denotes the time index, \(x_{t}=[x,y]^{\text{T}}\) is the system state, i.e. a 2-D vector of the target's position on the ground plane, and the pair (\(d_{\Delta t}\), \(\theta_{\Delta t}\)) represents the target's step-length and heading, respectively, calculated within the tracker's cycle time (\(\Delta t\)). Finally, \(B_{t}\) is a control input indicating whether a step has been taken or not and \(w_{t}\) is the process noise which is assumed to be normally distributed with mean zero and covariance matrix \(\Lambda\) (i.e. \(w_{t}\sim\mathcal{N}(0,\Lambda)\)). In order to calculate the step-length of a person we use an empirical model that takes into account the step frequency obtained from the accelerometer data (see Section 6). In addition, the control input \(B_{t}\) is the output of an HMM-based step classifier which takes as input the accelerometer data from the user's device and returns a step indicator that shows whether a step has been taken or not. A low-pass Butterworth filter (8th order) is used to smooth out the accelerometer data prior to the step classification step. We should also note here that the aforementioned step classifier has a classification error of 8.4% for our dataset. As we already mentioned in Section 2, our objective is to track all people that carry mobile devices. Thus, once we associate a camera measurement to a person ID (i.e. device ID), Eqn. (1) is used as the predictive distribution of a Kalman filter to model the motion of the identified person using his/her inertial measurements.
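As an illustration of Eqn. (1), the short sketch below dead-reckons a target's 2-D ground-plane position from the step indicator, step-length and heading; the numerical values in the example are hypothetical and the sketch omits the measurement-update side of the filter.

```python
import numpy as np

def pdr_predict(x_prev, P_prev, step_taken, step_length, heading, Q):
    """Prediction step of Eqn. (1) for a single target.

    x_prev, P_prev : previous state mean (2,) and covariance (2, 2)
    step_taken     : control input B_t from the HMM-based step classifier (0 or 1)
    step_length    : d_dt estimated from the step frequency (metres)
    heading        : theta_dt from the device's inertial sensors (radians)
    Q              : process noise covariance Lambda
    """
    u = step_taken * step_length * np.array([np.cos(heading), np.sin(heading)])
    x_pred = x_prev + u          # identity state transition plus the PDR control input
    P_pred = P_prev + Q          # uncertainty grows by the process noise
    return x_pred, P_pred

# Example: one predicted step of 0.7 m towards the north-east (hypothetical values).
x, P = pdr_predict(np.zeros(2), 0.1 * np.eye(2), 1, 0.7, np.pi / 4, 0.3**2 * np.eye(2))
```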
Compared with existing techniques (i.e. [6]) that use heuristics to model human motion, we will show in the evaluation section that the use of inertial measurements in our approach results in more accurate tracking. In addition, we have observed that in a construction site workers do not walk regularly; instead, they often make big, small and irregular steps depending on the task performed. This makes motion prediction even more challenging, since some of the steps become harder for the step detector/classifier to detect correctly. It is worth noting here that the proposed system can correct these step misclassification errors in many situations with the help of visual observations. For instance, when our step classifier wrongly predicts that a step has been taken for a specific target, our system can still correct the final estimated position using the location of the camera measurement. Under the assumption of unambiguous tracks the proposed technique can handle similar situations very efficiently. Figure (4) illustrates the scenario discussed above.
Fig. 4: Fusing camera and inertial measurements. The dotted circles show the predicted location using inertial measurements (i.e. a step classifier, shown in the top picture, indicates if a step has been taken or not). Square boxes indicate a camera detection (i.e. the location of a person). When a step is classified correctly the predicted location is collocated with the camera detection (picture on the left). The tracking accuracy can be decreased significantly when the step detector misclassifies a step. However, in the proposed system, the fusion with camera measurements allows us to navigate towards the right path in cases where we have unambiguous trajectories (picture on the right).

Unlike the original RBMCDA filter that only uses anonymous observations to update the target's state (line 8), in our system a measurement \(y_{t}\) at time \(t\) is a vector containing an anonymous location measurement (2D image coordinates transformed to the world plane via a projective transformation [7]) from the camera system and multiple id-linked radio signal strength measurements from people's mobile devices. More formally the measurement vector is defined as \(y_{t}=[\hat{c}_{t},\;\;\mathit{rss}_{t}^{1},\;\;\mathit{...},\;\mathit{rss}_{t}^{m}]^{\text{T}}\) where \(\hat{c}_{t}\) is a camera observation which contains the 2D target coordinates on the ground plane and \(\mathit{rss}_{t}^{1},\;\mathit{...},\mathit{rss}_{t}^{m}\) denote the received radio signal measurements from \(m\) access-points of a particular mobile device. Thus, the state vector \(x_{t}\) of a target is related to the system measurements \(y_{t}\) according to the following model:
\[y_{t}=f(x_{t})+v_{t}=\left[\begin{array}{c}x_{t}\\ \text{RSS}_{1}\left(x_{t}\right)\\ \text{RSS}_{2}\left(x_{t}\right)\\ \vdots\\ \text{RSS}_{m}\left(x_{t}\right)\end{array}\right]+v_{t} \tag{2}\]
where \(f\) is a non-linear function that translates the true system state vector to the measurement domain and \(v_{t}\) is the measurement noise which follows a normal distribution with zero mean and covariance matrix \(R\) (\(v_{t}\sim\mathcal{N}(0,R)\)). The function \(\text{RSS}_{i}\) is given by:
\[\text{RSS}_{i}(x_{t})=P_{i}-10\,n_{i}\log_{10}\|A_{i}-x_{t}\|_{2}\,,\ i\in[1..m] \tag{3}\]
where \(m\) is the total number of WiFi/BTLE access points and \(\text{RSS}_{i}(x_{t})\) is the expected signal strength at location \(x_{t}\) with respect to transmitter \(A_{i}\). \(P_{i}\) is the received power at the reference distance of 1 meter and \(n_{i}\) is the path loss exponent. In order to meet the requirements of the RBMCDA filter, i.e. calculate analytically the posterior distribution of the target states with a Kalman filter, Eqn. (2) must be linear Gaussian. The non-linearity of the measurement model in our case is handled via the unscented transformation [8]. Thus, the state estimation can be computed analytically using the unscented Kalman filter (UKF) and each particle contains a bank of UKFs; one filter for each target.
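For illustration, a minimal version of the measurement function \(f\) of Eqns. (2)-(3) can be written as below; the access-point positions and path-loss parameters are placeholders, and in the actual filter this non-linear function is evaluated through the unscented transformation inside each UKF rather than being linearized by hand.

```python
import numpy as np

def expected_measurement(x, ap_positions, P_ref, n_exp):
    """f(x) of Eqn. (2): predicted camera location plus expected RSS at position x.

    x            : 2-D ground-plane position of the target
    ap_positions : (m, 2) array of access-point locations A_i
    P_ref        : (m,) received power at the 1 m reference distance for each AP
    n_exp        : (m,) path-loss exponents
    """
    d = np.linalg.norm(ap_positions - x, axis=1)                  # distances ||A_i - x||
    rss = P_ref - 10.0 * n_exp * np.log10(np.maximum(d, 1e-3))    # Eqn. (3)
    return np.concatenate([x, rss])

# Hypothetical example with two access points.
aps = np.array([[0.0, 0.0], [10.0, 5.0]])
y_hat = expected_measurement(np.array([3.0, 2.0]), aps,
                             P_ref=np.array([-40.0, -42.0]),
                             n_exp=np.array([2.0, 2.2]))
```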
```
1:Input:\(N\) particles, camera (\(C_{t}\)), radio (\(R_{t}\)) and inertial (\(S_{t}\)) measurements.
2:Output:\(p(x_{t},\lambda_{t}|y_{1:t})\).
3:Apply Eqn. (4) to \(C_{t}\) and \(R_{t}\) to create \(y_{t}\).
4:for each measurement \(m\in(1..|y_{t}|)\)do
5:for each particle \(i\in(1..N)\)do
6: For all targets in \(i\) run prediction step (Eqn. (1)).
7: Form the importance distribution and draw new association event (\(\lambda_{t}^{(i)}\)).
8: Update target \(\lambda_{t}^{(i)}\) with \(m\) using UKF correction step. Update particle weight.
9:endfor
10:endfor
11:Normalize particle weights.
12:Resample.
13:Approximate \(p(x_{t},\lambda_{t}|y_{1:t})\) as in Algorithm 1
```
**Algorithm 2**A high-level work-flow of the proposed system.
### _Tracking and Identification_
In this section, we show how we modified the association steps in lines 5-7 to leverage id-linked measurements.
Suppose for instance that at time \(t\) we receive camera detections \(C_{t}=\{c_{t}^{j}\},\ 1\leq j\leq|C_{t}|\) and radio measurements \(R_{t}=\{r_{t}^{k}\},\ 1\leq k\leq K\) where \(K\) is the number of people with a mobile device. Each one of the \(|C_{t}|\) anonymous camera detections could be one of the following three types: (a) a person with a device, (b) a person without a device or (c) clutter (e.g. a false camera detection caused by illumination changes). Our objective is to associate the type (a) camera detections with the correct radio measurements. In order to do that we proceed as follows. We enumerate all possible combinations \(\Omega=|C_{t}|\times K\) between the camera detections and the id-linked measurements and we create new measurements \(y_{t}^{i},i\in[1..\Omega]\) with the following structure:
\[y_{t}^{i}=\{\hat{c}_{t}^{m},r_{t}^{j}\},\ m\in[1..|C_{t}|],\ j\in[1..K] \tag{4}\]
where \(\hat{c}_{t}^{m}\) is the camera measurement \(c_{t}^{m}\) projected into the ground plane. Now, a measurement \(y_{t}^{i}\) which contains a correct association will have the following property \(\text{RSS}(\hat{c}_{t}^{m})\approx r_{t}^{j}\) for the correct \((m,j)\) pair, where \(\text{RSS}()\) is the function in Eqn. (3). In other words, if a person is detected by the camera, then his/her radio measurements (i.e. received signal strength) at that location should match the predicted radio measurements at the same location. Camera detections of type (b) and (c) would normally not exhibit the same property. From our experiments in a real construction site, we have observed that the radio measurements are reasonably stable but only for short periods of time depending on the environmental dynamics. As we discuss in Section (6) by periodically re-learning the radio model, we make our system adaptive to the changing environment and thus we can use the procedure above to track and identify the people in the scene.
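The pairing of anonymous camera detections with id-linked radio measurements described above can be sketched as follows. The sketch simply scores RSS agreement with a Gaussian kernel (the noise scale `sigma` is an assumed value); in the full filter each candidate measurement is instead evaluated through the UKF likelihood and the association prior.

```python
import numpy as np

def candidate_pairs(cam_detections, device_rss, ap_positions, P_ref, n_exp, sigma=4.0):
    """Enumerate camera-detection x device pairs (Eqn. (4)) and score RSS agreement.

    cam_detections : list of 2-D ground-plane positions (already projected via H)
    device_rss     : dict {device_id: (m,) measured RSS vector}
    Returns a list of (detection_index, device_id, score) sorted by decreasing score.
    """
    scored = []
    for idx, c in enumerate(cam_detections):
        d = np.linalg.norm(ap_positions - np.asarray(c), axis=1)
        rss_pred = P_ref - 10.0 * n_exp * np.log10(np.maximum(d, 1e-3))  # RSS(c), Eqn. (3)
        for dev, rss_meas in device_rss.items():
            # High score when the RSS predicted at the detection's location
            # matches the RSS actually reported by this device.
            score = np.exp(-0.5 * np.sum((rss_pred - rss_meas) ** 2) / sigma ** 2)
            scored.append((idx, dev, score))
    return sorted(scored, key=lambda t: -t[2])
```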
Moreover, the proposed algorithm can handle the creation and termination of tracks. For instance when a new person (i.e. a new mobile device) is entering the FOV, we initiate a new track by initializing the system state with the camera location that best matches the received radio measurements. Additionally, we allow a target to die when for a fixed period of time no camera observation has been used to update its state. The above procedure runs continuously thus new tracks are created and others are terminated dynamically as people are entering and leaving the FOV.
As we have already mentioned the association probability is computed as the product of the measurement likelihood and association prior. The measurement likelihood of associating \(y_{t}^{i}\) with target \(j\), \(\hat{p}(y_{t}^{i}|\lambda_{t}=j)\) is computed as \(\hat{p}(y_{t}^{i}|\lambda_{t}=j)=\mathcal{N}(y_{t}^{i};\hat{y}_{t},V_{t})\) where \(\hat{y}_{t}\) is the expected measurement of target \(j\) at the predicted state and \(V_{t}\) is the innovation covariance obtained from the UKF.
Given \(m\) simultaneous measurements within a scan the predictive distribution of data associations can be defined as an \(m_{\text{th}}\) order Markov-chain \(p(\lambda_{t}^{m}|\lambda_{t}^{m-1},...,\lambda_{t}^{1})\) which allows us to enforce certain association restrictions. In our system this predictive distribution is defined (i.e. assigns zero probability to unwanted events) so that the following conditions are met:
1. A track can be updated with at most one measurement.
2. A measurement can only be used to update at most one track.
3. An already established track (with a specific sensor ID) can only be updated with a measurement of the same sensor ID.
4. Once a camera detection is assigned to a track all other measurements which include the latter camera detection are classified as clutter.
5. A new target is not born if there is an existing target with the same sensor ID as the newborn target. This means that each particle maintains only targets with unique sensor IDs.
Some of the above restrictions can be relaxed depending on the application scenario. For instance, when two people are close to each other they can be detected as one object. In this case the 4th restriction can be relaxed in order to allow two tracks (i.e.
two people with different sensor IDs) to be updated with the same camera detection.
To summarize, a particle represents states only for people carrying mobile devices - not for all people in the field of view. Inertial data of each person's device are used to predict their next state. Anonymous camera data are associated with a person's track only if they _agree_ with both their inertial and radio data.

At first a foreground detector is used to detect the moving people in the scene and then the 2D image coordinates of the detected people are projected into the world plane (i.e. ground plane) via a projective transformation (i.e. homography). Given a set of points \(p_{i}\) in the projective plane \(\mathrm{I\!P}^{2}\) and a corresponding set of points \(\hat{p}_{i}\) likewise in \(\mathrm{I\!P}^{2}\) we would like to compute the projective transformation that takes each \(p_{i}\) to \(\hat{p}_{i}\). In our case we consider a set of point correspondences \(p_{i}\leftrightarrow\hat{p}_{i}\) between the image plane and the world ground plane and we need to compute the projective transformation \(H_{3\times 3}\) such that \(Hp_{i}=\hat{p}_{i}\) for each \(i\). The matrix \(H\) can be computed using the Direct Linear Transformation (DLT) algorithm [7] which requires at least 4 point correspondences. Additional points can improve the estimation by minimizing a suitable cost function such as the geometric distance between where the homography maps a point and where the point's correspondence was originally found, i.e. we would like to find the matrix \(H\) which minimizes \(\sum_{i}d(\hat{p}_{i},Hp_{i})^{2}\) where \(d(.,.)\) is the Euclidean distance between the two points.
Once we calculate \(H\) we can use it to project the targets from the image plane into the ground plane and obtain their location on the ground plane. We can then use inertial and radio data using Eqns. (1) and (2) as explained earlier in this section.
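A minimal sketch of the DLT estimation mentioned above is given below (basic linear solve via SVD, without the geometric-distance refinement or normalization refinements one would normally add):

```python
import numpy as np

def estimate_homography(img_pts, world_pts):
    """Direct Linear Transformation: solve H p_i ~ p_i' from >= 4 correspondences."""
    A = []
    for (x, y), (u, v) in zip(img_pts, world_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)        # null-space vector gives the homography entries
    return H / H[2, 2]

def to_ground_plane(H, pt):
    """Project an image point to the ground plane using the estimated homography."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```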
Finally, we should note here that when, at some time-step, a particular target does not receive radio measurements, then: if the target is a new target, the identification and creation of a new track is postponed until radio measurements are available; otherwise, if the target is an existing target, tracking proceeds by only considering the motion model of the target (Eqn. (1)). A high-level work-flow of the proposed technique is shown in Alg. (2).
## 6 Cross-modality Learning
In this section we will show how our framework is capable of cross-modality learning, i.e. how a subset of sensor modalities is used by the _Adaptive Learner_ (Fig. (3)) to train the internal parameters of the system.
### _Track Quality Estimation_
As we have briefly mentioned in the introduction, the output (i.e. track) of our _Positioning and Identification filter_ can be used to learn the parameters of various internal components of our system. Once we have identified a track (i.e. we have linked a visual trajectory with radio and inertial measurements), we can use it to learn, for example, the radio propagation model since this track contains all the necessary information (i.e. location-RSS data points) for this purpose. In a similar manner we can learn radio and magnetic maps, train the foreground detector and improve the step-length estimation. All of these will be discussed in more detail later in this section. However, in order to achieve all of the above objectives, we first need to assess the quality of output tracks to make sure that they qualify for the training process. Thus, the goal of the _Track Quality Estimation_ phase is to find candidate tracks which can be used for cross-modality training.
Let us assume that at time-step (or scan) \(t\) we receive \(m\) measurements \(\{y_{t}^{1},y_{t}^{2},...,y_{t}^{m}\}\). In addition \(y_{t}^{0}\) is defined for each time-step to be a dummy variable indicating the possibility of a missed detection. Then the incremental quality score of a track \(j\) during this time-step is defined as:
\[\Delta L_{t}^{j}=\begin{cases}\log\left(\frac{\hat{p}(y_{t}^{i}|\lambda_{t}=j)p_{d}}{\hat{p}(y_{t}^{i}|\lambda_{t}=0)}\right)&,\text{ if }\exists i\in[1..m]\text{ s.t. }\lambda_{t}=j\\ \log\left(1-p_{d}\right)&,\text{ otherwise}\end{cases}\]
where the quantity \(\hat{p}(y_{t}^{i}|\lambda_{t}=j)\) is the likelihood of the measurement assigned to track \(j\). The term \(\hat{p}(y_{t}^{i}|\lambda_{t}=0)=p(clutter)\) is the likelihood of the measurement originating from clutter which has a uniform probability density over the measurement space of volume \(V\) (i.e. \(p(clutter)=V^{-1}\)) and finally \(p_{d}\) is the probability of detection. Then, the cumulative quality score of track \(j\) is given by:
\[Q_{j}=\sum_{t=1}^{T}\Delta L_{t}^{j} \tag{5}\]
where \(T\) is the total length of the track. As we can see the quality score \(Q\) of a track penalizes the non-assignments due to missing detections while favoring the correct measurement-to-track associations. Fig. (5) shows that the quality score is negatively correlated with the root mean square error. Finally, in order to mark a track as a high confidence track that _qualifies_ for cross-modal training its quality score is tested against a pre-determined threshold \(Q_{\text{Th}}\). If \(Q_{j}\geq Q_{\text{Th}}\) then the track is qualified (i.e. _high quality track_) and it can be used for cross-modality training, otherwise the track is rejected (Fig. (5)).
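The incremental and cumulative quality scores above take only a few lines of code. The sketch below assumes the assigned-measurement likelihoods have already been recorded at each time-step (with `None` marking a missed detection); the detection probability, clutter volume and qualification threshold are placeholder values.

```python
import numpy as np

def track_quality(assigned_likelihoods, p_d=0.9, clutter_volume=100.0, q_threshold=300.0):
    """Cumulative quality score Q_j of Eqn. (5) and the qualification decision.

    assigned_likelihoods : per-step likelihood p(y | lambda = j) of the measurement
                           assigned to the track, or None when no measurement
                           was assigned (missed detection).
    """
    p_clutter = 1.0 / clutter_volume              # uniform clutter density over volume V
    Q = 0.0
    for lik in assigned_likelihoods:
        if lik is None:
            Q += np.log(1.0 - p_d)                # penalty for a missed detection
        else:
            Q += np.log(lik * p_d / p_clutter)    # reward for a measurement-to-track association
    return Q, Q >= q_threshold                    # score and whether the track qualifies

score, qualifies = track_quality([0.2, 0.5, None, 0.4] * 50)
```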
### _Foreground Detector Training_
The mixture of Gaussians (MoG) [9] foreground detection which is used by our system is one of the most popular approaches for detecting moving targets from a static camera. This approach maintains a statistical representation of the background and can handle multi-modal background models and slow varying illumination changes.
Fig. 5: Track quality estimation: The figure shows the quality score of 16 tracks along with their RMSE. Tracks with quality score above the horizontal dotted line are considered qualifying and can be used for cross-modality training.
In the original algorithm the history of each pixel is modeled by a mixture of \(K\) (typically 3-5) Gaussian distributions with parameters \((\beta_{k},\mu_{k},\sigma_{k}I)\) for the mixture weight, mean and covariance matrix of the \(k_{\text{th}}\) Gaussian component. In order to find the pixels that belong to the background, the Gaussian distributions are ordered in decreasing order according to the ratio \((\beta_{k}/\sigma_{k})\); background pixels exhibit higher weights and lower variances than the foreground moving pixels. The background model is obtained as \(B^{*}=\arg\min_{B}\left(\sum_{k=1}^{B}\beta_{k}>P_{b}\right)\) where \(P_{b}\) is the prior probability of the background. The remaining \(K-B^{*}\) distributions represent the foreground model.
On the arrival of a new frame each pixel is tested against the Gaussian mixture model and if a match is found the pixel is classified as background or foreground depending on which Gaussian component it was matched with. If no match is found the pixel is classified as foreground and it is added to the mixture model by evicting the component with the lowest weight. When a pixel is matched, the weight of that \(k_{\text{th}}\) Gaussian component is updated using an exponential weighting scheme with learning rate \(\alpha\) as \(\beta_{t+1}=(1-\alpha)\beta_{t}+\alpha\), and the weights of all other components are changed to \(\beta_{t+1}=(1-\alpha)\beta_{t}\). A similar procedure is used to update the mean and covariance of each component in the mixture.
The learning rate (\(\alpha\)) controls the adaptation rate of the algorithm to changes (i.e. illumination changes, speed of incorporating static targets into the background) and is the most critical parameter of the algorithm. Fast learning rates will give greater weight to recent changes and make the algorithm more responsive to sudden changes. However, this can cause the MoG model to become quickly dominated by a single component which affects the algorithm's stability. On the other hand slow learning rates will cause a slower adaptation change which often results in pixel misclassification. Over the years many improvements have been suggested by the research community that allow for automatic initialization and better maintenance of the MoG parameters [10]. More recent techniques [11, 12] address challenges like sudden illumination variations, shadow detection and removal, automatic parameter selection, better execution time, etc.
In this section we propose a novel method for obtaining the optimum learning rate \(\alpha^{*}\) of the foreground detector using the _high-quality_ tracks of our filter. Suppose we are given a track \(X_{1:T}^{j}=\{x_{1}^{j},x_{2}^{j},...,x_{T}^{j}\}\) of length \(T\) where \(x_{t}^{j},t\in[1..T]\) denotes the state of the track at time \(t\). Since both camera and inertial measurements could have been used to estimate track \(X_{1:T}^{j}\), its states \(x_{t}^{j},t\in[1..T]\) are of two types: type (a) states that have been estimated using camera and inertial measurements and type (b) states that have been estimated only using inertial measurements. A high-quality track ensures that \(X_{1:T}^{j}\) contains the right mixture of type (a) and type (b) states and thus does not deviate significantly from the ground truth trajectory. This is possible since propagating a track by only using inertial measurements is accurate enough for short periods of time. This key property of the inertial measurements allows us to use a high quality track as if it was the ground truth trajectory to train the learning rate of the foreground detector. In other words, the type (b) states of a high quality track tell us that the target is moving to specific locations and the foreground detector does not detect any target at those locations.
The quality score of tracks (Eqn. (5)) can be used to find the optimum learning rate by solving the following optimization problem: _Given a time window \(\mathcal{T}\) find a learning rate \(\alpha^{*}\) so that the cumulative quality score (CQS) \(\sum_{j}Q_{j}\) of all high quality tracks \(j\in\mathcal{T}\) is maximized_.
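In practice this optimization can be carried out as a simple grid search over candidate learning rates, as in the sketch below. The function `run_tracker_and_score` is only named here for illustration: it stands for re-running the foreground detector and the filter over the window \(\mathcal{T}\) with a given \(\alpha\) and summing the quality scores of the qualifying tracks.

```python
import numpy as np

def select_learning_rate(frames, run_tracker_and_score,
                         candidates=np.logspace(-4, -1, 16)):
    """Grid search for the MoG learning rate alpha* that maximizes the
    cumulative quality score (CQS) of high-quality tracks over a time window."""
    best_alpha, best_cqs = None, -np.inf
    for alpha in candidates:
        cqs = run_tracker_and_score(frames, alpha)   # sum of Q_j over qualifying tracks
        if cqs > best_cqs:
            best_alpha, best_cqs = alpha, cqs
    return best_alpha, best_cqs
```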
### _Optimizing the Step Length Estimation_
Similar to the foreground detector training procedure, _high quality_ tracks can also be used to learn the step-length model of each person being tracked. More specifically, the step-length of a user can be obtained from the universal model proposed in [13] as:
\[s=h(a^{\prime}f_{step}+b^{\prime})+c^{\prime} \tag{6}\]
where \(s\) is the estimated step-length, \(h\) denotes the user's height, \(f_{step}\) is the step frequency obtained from the device's accelerometer and \((a^{\prime},b^{\prime},c^{\prime})\) are the model parameters. The model above describes a linear relationship between step-length and step frequency weighted by the user's height. Since the heights of the people that we need to track are not known a priori, every time a new track is initialized that contains a sensor ID which has not been recorded before, the step-length estimator uses Eqn. (6) to provide an initial estimate of the target's step-length. At this point the height value is set to the country's average for men of ages between 25 and 34 years old. The parameters \((a^{\prime},b^{\prime},c^{\prime})\) have been pre-computed with a training set of 8 people of known heights using foot-mounted IMUs.
Fig. 6: The figure shows the occlusion maps learned during a period of 10 minutes for each map. (a) Areas that appear to have no human activity are marked as occlusions, (b) As the construction site evolves new occlusions are created. In this case the installation of a new wall creates a new occlusion. These changes are detected automatically by our system and are used to improve the tracking accuracy via the use of social forces.

As the tracking process proceeds, high quality tracks are obtained periodically for each target. From these tracks the following IMU data are extracted for each step: a) step frequency, b) step start-time and c) step end-time. The start/end times of each step obtained from the IMU data are then matched to camera detections in order to obtain the position of the target during those times, which are essentially the step-lengths measured from the camera system. Thus, for each target we obtain a collection of \(n\) calibration points \(\{Sv^{i},f^{i}_{step}\}_{i=1}^{n}\) where \(Sv^{i}\) is the visual step-length of the \(i_{\text{th}}\) step and \(f^{i}_{step}\) its frequency obtained from the IMU. The calibration set of each target is then used to train a personal step-length model of the form \(S\mathrm{v}=\rho_{1}f_{\text{step}}+\rho_{0}\) using least squares fitting. Finally, the step-length estimator can switch to the trained model once the least squares goodness of fit \(\left(R^{2}=1-\frac{\text{residual sum squares}}{\text{total sum squares}}\right)\) exceeds a pre-defined threshold.
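The personal model can be fitted with ordinary least squares from the camera-derived calibration points, switching away from the universal model of Eqn. (6) once the fit is good enough; the sketch below uses the empirical parameters reported in Table (1) for the universal model, while the \(R^{2}\) threshold shown is an assumed value.

```python
import numpy as np

def universal_step_length(height, f_step, a=0.1244, b=0.066, c=0.2):
    """Initial estimate from Eqn. (6) using the empirical parameters (a', b', c')."""
    return height * (a * f_step + b) + c

def fit_personal_model(step_freqs, visual_step_lengths, r2_threshold=0.8):
    """Least-squares fit of Sv = rho1 * f_step + rho0 from the calibration points."""
    f = np.asarray(step_freqs)
    sv = np.asarray(visual_step_lengths)
    A = np.vstack([f, np.ones_like(f)]).T
    (rho1, rho0), *_ = np.linalg.lstsq(A, sv, rcond=None)
    pred = rho1 * f + rho0
    r2 = 1.0 - np.sum((sv - pred) ** 2) / np.sum((sv - sv.mean()) ** 2)
    return (rho1, rho0), r2, r2 >= r2_threshold   # switch to the model only if R^2 passes
```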
### _Radio Model/Maps Learning_
_High quality tracks_ are also being used in order to learn the parameters of the radio propagation model which our system uses as explained in Section 5. More specifically, from a high quality track \(X_{1:T}^{j}=\{x_{1}^{j},x_{2}^{j},...,x_{T}^{j}\}\) of length \(T\), the type (a) states are extracted. Let us call a type (a) state as \(\tilde{x}_{t}^{j}\); this state has been estimated using camera, radio and inertial measurements. Thus a collection of type (a) states \(S=\{\tilde{x}_{t}^{j}:j\in K,t\in\mathcal{T}\}_{n}\) of length \(n\) where \(K\) is the total number of people with smartphones and \(\mathcal{T}\) is the running time of our filter, contains \(n\) pairs of (location, RSS) measurements. Now, this collection of (location, RSS) points can be used to estimate the parameters of the log-normal radio propagation model [14] given by Eqn. (3) for each access point using least squares fitting. At regular intervals we re-estimate the radio model parameters based on the most recent portion of collected data. We should note here that the parameters of the radio model are initialized empirically based on a number of studies for different environments [14].
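Because Eqn. (3) is linear in the unknowns \((P_{i},n_{i})\) once the distances are known, the per-access-point parameters can be re-estimated by linear least squares on the (location, RSS) pairs extracted from the type (a) states; a minimal sketch (with hypothetical calibration data) follows.

```python
import numpy as np

def fit_radio_model(locations, rss_values, ap_position):
    """Least-squares fit of the log-distance model RSS = P - 10 n log10(d) for one AP."""
    d = np.linalg.norm(np.asarray(locations) - np.asarray(ap_position), axis=1)
    X = np.vstack([np.ones_like(d), -10.0 * np.log10(np.maximum(d, 1e-3))]).T
    (P_ref, n_exp), *_ = np.linalg.lstsq(X, np.asarray(rss_values), rcond=None)
    return P_ref, n_exp

# Hypothetical re-calibration from a high-quality track.
locs = [[1.0, 1.0], [3.0, 2.0], [6.0, 4.0], [8.0, 1.0]]
rss = [-45.0, -52.0, -58.0, -61.0]
P_ref, n_exp = fit_radio_model(locs, rss, ap_position=[0.0, 0.0])
```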
Additionally, we can follow a similar procedure to learn radio, magnetic and occlusion maps. The radio and the magnetic maps can be combined and used for localization in situations where the camera is occluded by an obstacle or they can be used in conjunction with the radio model to improve the system's accuracy. Additionally, the occlusion map, which is derived from the camera detections, provides statistics about the environment (i.e. frequently visited areas, inaccessible areas, etc.) which our system can use to improve its performance. For instance, suppose that a particular person is not detected by the camera for some time and our filter reverts to IMU tracking; the occlusion map can help us filter out impossible trajectories.
In order to learn the occlusion map we use the following procedure: We first discretize the world plane creating a 2D grid. During a time-window we then project the camera detections into the world plane and we count the number of hits in each cell creating a 2D histogram. The normalized histogram is then thresholded and the cells that are found to be below a predefined threshold are marked as occlusions/obstacles; this is shown in Fig. (6). The set of occlusions found \(O=\{o_{j}\}_{j=1}^{N_{a}}\) are also used to model repulsive forces exerted from the environment onto people; this will be discussed in Section (7).
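The occlusion-map procedure amounts to a thresholded 2-D histogram of the projected detections; the sketch below assumes a cell size and threshold for illustration (the random input simply stands in for real projected detections).

```python
import numpy as np

def learn_occlusion_map(ground_points, area_size, cell=0.5, threshold=0.01):
    """Mark grid cells with too little detection activity as occlusions/obstacles.

    ground_points : (n, 2) detections projected onto the ground plane (metres)
    area_size     : (width, height) of the monitored area in metres
    """
    nx, ny = int(area_size[0] / cell), int(area_size[1] / cell)
    hist, _, _ = np.histogram2d(ground_points[:, 0], ground_points[:, 1],
                                bins=[nx, ny],
                                range=[[0, area_size[0]], [0, area_size[1]]])
    hist = hist / max(hist.sum(), 1.0)          # normalize the hit counts
    return hist < threshold                     # True marks an occluded/obstacle cell

occluded = learn_occlusion_map(np.random.rand(5000, 2) * [11.0, 9.0], area_size=(11.0, 9.0))
```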
## 7 Integration of Social Forces
In this section we describe how we have modified our system to make use of the Social Force Model (SFM) [15, 16] for accurate motion prediction. More specifically, the Social Force Model assumes that human motion is affected by the motion of other people and also by obstacles from the environment. Thus the SFM aims to describe and predict human motion with the introduction of repulsive forces exerted on people, by modeling the interactions between people and the influence of the environment on human motion. As we have already mentioned in the previous section, our system is able to automatically learn the occlusion map which contains the location of obstacles and other environmental constraints. This occlusion map is now integrated into the social force model, which helps us improve the prediction of human motion.
### _The Social Force Model_
More formally in the Social Force Model a person \(p_{i}\) with mass \(m_{i}\) aims to move with a certain desired speed \(\hat{\nu}_{i}\) in a desired direction \(\hat{\epsilon}_{i}\). In our system the desired direction is taken from the IMU measurements (i.e. heading) so that \(\hat{\epsilon}_{i}=\theta^{i}\) and the desired speed \(\hat{\nu}_{i}\) is calculated as \(d_{\Delta t}/\Delta t\) where \(d_{\Delta t}\) is the step-length from the IMU and \(\Delta t\) the tracker's cycle time. At each time step the motion of people is described by the superposition of repulsive and physical forces exerted from other people and the environment.
#### 7.1.1 Repulsive Forces
As we already mentioned, human motion is affected by environmental constraints (i.e. obstacles) and by the motion of other people. Thus in the presence of other people or obstacles a person might not be able to keep the desired direction and speed. These disturbances are described by repulsive forces which prevent a person from moving along the desired direction. More specifically the repulsive force \(F_{i}^{R}\) is modeled as the sum of social forces \(f_{i,j}^{\text{soc}}\) exerted by other people or obstacles according to:
\[F_{i}^{\text{R}}=\sum_{j\in P\backslash\{i\}}f_{i,j}^{\text{soc}}+\sum_{j\in O }f_{i,j}^{\text{soc}} \tag{7}\]
where \(P=\{p_{j}\}_{j=1}^{N_{p}}\) is the set of all people (i.e. tracks) and \(O=\{o_{j}\}_{j=1}^{N_{a}}\) is the set of all environmental constraints (i.e. obstacles). The above social repulsive forces are described as:
\[f_{i,j}^{\text{soc}}=a_{j}\,e^{\left(\frac{r_{i,j}-d_{i,j}}{b_{j}}\right)}n_{i,j}\,\gamma(\lambda,\phi_{i,j}) \tag{8}\]
Fig. 7: Illustrative example showing the position estimate with and without social forces. The figure shows that repulsive physical forces from the environment improve the position estimate by taking into account obstacles and other environmental constraints.

where \(j\in P\cup O\) and \(a_{j}\), \(b_{j}\) denote the magnitude and range of the force, respectively. People and obstacles are assumed to be circular objects with certain radii, thus \(r_{i,j}\) denotes the sum of radii of entities \(i\) and \(j\) and \(d_{i,j}\) is the Euclidean distance between their centers. The term \(n_{i,j}\) describes the direction of the force (a normalized vector) pointing from entity \(j\) to entity \(i\). Finally, the social forces are limited to the field of view of humans, therefore the anisotropic factor \(\gamma(\lambda,\phi_{i,j})\) is added to the model and is given by:
\[\gamma(\lambda,\phi_{i,j})=\lambda+(1-\lambda)\frac{1+\text{cos}(\phi_{i,j})}{2} \tag{9}\]
where \(\lambda\) denotes the strength of the anisotropic factor and \(\text{cos}(\phi_{i,j})=-n_{i,j}\cdot\hat{\epsilon}_{i}\) is the cosine of the angle between the desired direction and the direction of the force.
#### 7.1.2 Physical Forces
Finally, environmental constraints (i.e. walls, obstructions, etc) define the walkable area by restricting human motion in certain locations. These hard constraints can be modeled as physical forces exerted from the environment onto people and can be defined as follows:
\[F_{i}^{\text{phys}}= \sum_{j\in O}f_{i,j}^{\text{phys}} \tag{10a}\] \[f_{i,j}^{\text{phys}}= c_{j}g(r_{i,j}-d_{i,j})n_{i,j} \tag{10b}\]
where \(c_{j}\) denotes the magnitude of the force and \(g(x)\) is defined as \(g(x)=x\) if \(x\geq 0\) and \(0\) otherwise, making \(g(x)\) a contact force. We should note here that physical forces can also be applied between people if desired (i.e. so that different people would not occupy the same space). This can be done by adding an additional term in Eqn. (10a) to account for forces between people as we did in Eqn. (7).
### _Social Forces for Motion Prediction_
The total force \(F_{i}^{\text{tot}}\) exerted on a particular person \(p_{i}\) is the superposition of all repulsive and physical forces given by:
\[F_{i}^{\text{tot}}=F_{i}^{\text{R}}+F_{i}^{\text{phys}} \tag{11}\]
We can now incorporate \(F_{i}^{\text{tot}}\) into our motion model (Eqn. (1)) by making use of Newton's second law, \(F_{i}^{\text{tot}}=m_{i}\frac{d\nu_{i}}{dt}\), so that Eqn. (1) becomes:
\[x_{t}=x_{t-1}+B_{t}\begin{bmatrix}d_{\Delta t}\cos(\theta_{\Delta t})\\ d_{\Delta t}\sin(\theta_{\Delta t})\end{bmatrix}+\frac{1}{2}\frac{F^{\text{tot}}}{m}\Delta t^{2}+w_{t} \tag{12}\]
As we can see from Eqn. (12), the predicted motion of a person is calculated by taking into account the previous position, inertial measurements (i.e. step-length and heading), and the forces exerted on this person by other people and the environment. Equation (12) can now be used in our tracking framework as the predictive distribution of the Kalman filter. This predictive distribution is given by:
\[p(x_{t}|x_{t-1},S_{t}^{\prime},P_{t},O_{t})=\\ \mathcal{N}(x_{t};\psi(x_{t-1},S_{t}^{\prime},P_{t},O_{t}),J_{ \psi}\Sigma_{t-1}J_{\psi}^{\text{T}}+\Lambda) \tag{13}\]
where \(S_{t}^{\prime}=[d_{\Delta t}\cos(\theta_{\Delta t}),\ d_{\Delta t}\sin(\theta_{\Delta t})]^{\text{T}}\) is a vector that contains the step-length and heading at time \(t\), \(P_{t}\) is the set of all people tracked and \(O_{t}\) is the set of all obstacles from the environment. The function \(\psi(x_{t-1},S_{t}^{\prime},P_{t},O_{t})=x_{t-1}+B_{t}S_{t}^{\prime}+\frac{1}{2}\frac{F^{\text{tot}}}{m}\Delta t^{2}\) is the mean of the predicted location, \(\Sigma_{t-1}\) is the covariance matrix of the estimate, \(\Lambda\) is the covariance matrix of the process noise and \(J_{\psi}=\frac{\partial\psi}{\partial x}\) is the Jacobian of \(\psi(\cdot)\).
With the above motion model, in each time step in addition to the inertial measurements we can now use repulsive and physical forces exerted on targets in order to improve the predicted location estimates (Fig. (7)). We will show in the evaluation section that the integration of social forces in our motion model allows us to make better motion predictions and improve the accuracy of our tracking system.
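As a concrete illustration of Eqns. (7)-(12), the sketch below computes the total force on one person (people and obstacles treated as points with a single combined radius, a simplification) and the resulting SFM-augmented position prediction; the parameter values are those reported in Table (1) and the text (mass 70 kg, radii of 0.2 m), not a general recommendation.

```python
import numpy as np

def social_force(x_i, e_i, others, obstacles, a=50.0, b=0.5, c=250.0,
                 lam=0.5, r_sum=0.4):
    """Total force of Eqn. (11) on person i at position x_i with desired direction e_i."""
    F = np.zeros(2)
    # Social (repulsive) forces from other people and obstacles, Eqns. (7)-(9).
    for x_j in list(others) + list(obstacles):
        diff = x_i - np.asarray(x_j)
        d = np.linalg.norm(diff)
        if d < 1e-6:
            continue
        n_ij = diff / d                                   # direction from j to i
        aniso = lam + (1 - lam) * (1 - np.dot(n_ij, e_i)) / 2   # Eqn. (9), cos(phi) = -n_ij.e_i
        F += a * np.exp((r_sum - d) / b) * n_ij * aniso         # Eqn. (8)
    # Physical (contact) forces from the environment, Eqn. (10b).
    for x_j in obstacles:
        diff = x_i - np.asarray(x_j)
        d = np.linalg.norm(diff)
        F += c * max(r_sum - d, 0.0) * (diff / max(d, 1e-6))
    return F

def predict_with_sfm(x_prev, step_taken, step_len, heading, F_tot, mass=70.0, dt=1.0):
    """Predicted position of Eqn. (12): PDR displacement plus force-induced displacement."""
    u = step_taken * step_len * np.array([np.cos(heading), np.sin(heading)])
    return x_prev + u + 0.5 * (F_tot / mass) * dt ** 2
```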
## 8 System Evaluation
### _Experimental Setup_
In order to evaluate the performance of the proposed approach we have conducted two real world experiments in a construction site (Fig. (2)). In both experiments we placed two cameras with non-overlapping FOV at approximately 8 meters above the ground facing down. In the first experiment the two cameras were covering an area of approximately 11m \(\times\) 9m each and in the second experiment an area of 14m \(\times\) 4m each. The duration of each of the experiments was approximately 45 minutes with the cameras recording video at 30fps with a resolution of 960 \(\times\) 720 px. We should also mention here that each camera was processed separately (i.e. we do not consider the multi-camera system scenario). The area of the site was outfitted with 12 WiFi and 8 BTLE access points and 5 workers were supplied with smartphone devices. The total number of people in the scene was varying from 3 to 12 as workers were entering and exiting the field of view. The objective of the experiment was to identify and track the workers who were carrying a smartphone device using camera, radio and inertial measurements. The radio measurements were obtained by their smartphones receiving WiFi and BTLE beacons at 1Hz and 10Hz respectively. The inertial measurements (i.e accelerometer and magnetometer) obtained from their smartphones had a sampling rate of 100Hz.
To obtain the ground truth of people's trajectories we followed the same approach proposed in [6]. We supplied all people to be tracked with helmets of different colors and their ground truth trajectories were obtained using a mean-shift tracker [17] to track the colored helmets. We have decided to use the procedure above for obtaining the ground truth trajectories since with GPS we could not get the required accuracy (i.e. GPS achieved a room-level accuracy during our experiments at the construction site) for this specific task.
In our implementation we have used RGB images as input to the MoG foreground detector, however we have not used any color features for people identification and our filter tracks only the position of targets. Any target detector which outputs target coordinates can be used with the proposed technique without any changes to the algorithm. It is also worth mentioning that the proposed system can also be extended to utilize visual features (i.e. color) for target identification, however these features are not always available and therefore cannot be relied on to uniquely identify the workers. Finally, Table (1) shows all the empirical values and thresholds that we have used in our implementation. These values have been obtained experimentally unless otherwise stated.
### _Results_
**Accuracy and learning:** The first set of experiments evaluates the tracking accuracy of our system (i.e. how well we can identify and track people with smartphone devices among all people in the
FOV). Moreover, we examine what is the effect of cross-modal training on the performance of our system. Our performance metric in this experiment is the root mean square error (RMSE) between the ground-truth and the estimated trajectory. In all the experiments shown here we have used 100 particles. In addition, instead of using line 13 of Alg. (1) to estimate the filtering distribution, in each step the location of each target is estimated using the particle with the highest weight. For this test we used 30 minutes worth of data running our filter on time-windows of one minute (i.e. 1800 frames). Figure (8a) shows the error CDF over this period over all targets for different settings. More specifically, our approach achieves a 90 percentile error of 2.0m when the system is untrained, which improves to 1.8m when the foreground detector is trained. The error decreases further as the parameters of the radio propagation model are learned, achieving a 90 percentile error of 1 meter. Finally, once the optimum step-length of each person is learned the accuracy increases further to approximately 0.8 meters. As we can see the error decreases significantly once both the foreground detector and the radio model are learned. This is expected since our system requires both camera and radio measurements in order to determine the correct measurement to track association and update the target states. In the case of excessive missing camera detections, the trajectory of a target is estimated only by inertial measurements which is the main cause of the low accuracy. On the other hand, if the radio model was not trained, camera detections would not be able to be linked with radio measurements, which would also cause identification and tracking errors. Once the foreground detector and the radio model are trained Fig. (8a) does not show any significant improvement after learning the step-length model. This is reasonable since, in this case most of the time the targets are updated with camera observations which are used to correct the predicted by the IMU states. However, from our experiments we have observed that once the camera becomes unavailable, the difference in accuracy between a trained and a universal step-length model is significant.
Figure (9) shows how our approach can find the optimum learning rate \((\alpha^{*})\) of the foreground detector by solving the optimization problem discussed in Section 6.2. In the example above we used 5 minutes of data, running the foreground detector for different values of \((\alpha)\) and calculating the cumulative quality score (CQS) for that period. Our intuition is that the optimum learning rate will reduce the number of missing detections, thus increasing the number of high quality tracks as well as their quality score. This is shown in Fig. (9) where the optimum learning rates achieve a high CQS, also evident by the low RMSE.
**Comparison with other techniques:** In our second test we compare the proposed approach with the original RBMCDA algorithm (referred to as the vision-only tracker in this section) which uses only visual observations for tracking. In this test we used the same experimental setup as described in the previous paragraph. Both techniques use the same foreground detector settings and in addition the proposed method uses a learned radio model. Figure (8b) shows the error CDF for the two methods. As we can observe the proposed technique achieves a 90 percentile error of 1 meter as opposed to vision-only tracking which has a 90 percentile error of 1.8 meters. The main source of error for the vision-only tracking is data association ambiguities, which the proposed technique reduces significantly with the help of radio and inertial measurements. Moreover, the proposed technique supports target identification which is not possible when pure visual tracking techniques are used. In addition, Figure (8b) shows how the proposed technique stacks up against WiFi fingerprinting. For comparison we have implemented the continuous space estimator of the Horus [18] fingerprinting system (termed as Radio only) by taking into account the 12 WiFi access points in the construction site environment. Figure (8b) gives us a good idea of how the WiFi fingerprinting approach performs compared to the proposed system. The 90 percentile error of the radio-only technique is approximately 2.5 meters compared to the 1 meter accuracy that the proposed technique achieves.

TABLE I: Empirical Values and Thresholds used in our Implementation

| Symbol | Description | Values, [Units] |
| --- | --- | --- |
| \(\Lambda\) | Process noise covariance | \(\text{diag}\{0.3^{2},0.3^{2}\}\) |
| \(R\) | Measurement noise covariance | \(\text{diag}\{0.2^{2},0.2^{2},3.2^{2},\ldots\}\), [m\(^{2}\), m\(^{2}\), dBm\(^{2}\), ...] |
| \(\alpha\) | MoG learning rate | 0.0032 (learned) |
| \(P_{\text{b}}\) | MoG prior probability of background | 0.82 |
| \(K\) | MoG number of Gaussians | 5 |
| \(Q_{\text{Th}}\) | Quality score threshold | 300 |
| \((a^{\prime},b^{\prime},c^{\prime})\) | Step-length empirical model | (0.1244, 0.066, 0.2000) |
| \((a_{j},b_{j},c_{j})\) | Social forces parameters | (50, 0.5, 250), [N, m, N/m] |

Fig. 8: (a) Cumulative distribution function of RMSE for different learning settings. (b) Accuracy comparison of the proposed approach and the original RBMCDA (vision only) algorithm.

Fig. 9: The figure shows the cumulative quality score (CQS) over a period of time as a function of the foreground detector learning rate (\(\alpha\)). The optimum learning rate according to RMSE maximizes CQS, thus this metric can be used to train the foreground detector.
The next step is to compare our technique with the recently proposed RAVEL system [6] which is also a multiple hypothesis tracking and identification system. RAVEL, which is discussed in more detail in Section 9, exploits the smoothness of motion and radio signal strength data in order to track and identify targets. Unlike our technique, RAVEL is more of a reconstruction technique (i.e. performs off-line tracking) as it requires observing all measurements over a time window (\(W\)) in order to provide the trajectories of each target. We have tested RAVEL using time windows of sizes 30 and 60 seconds over a period of 10 minutes and we have compared it with the proposed online system. Both systems are capable of learning the radio model parameters, thus we performed these tests using the learned radio model for both systems. In Fig. (10), _RAVEL(30s)_ and _RAVEL(60s)_ show the accuracy of RAVEL for window sizes of 30 and 60 seconds respectively. _Proposed_ denotes the proposed system with a learned radio model, _PropTr1_ is the proposed system optimized one level further, i.e. with foreground detector training, and _PropTr2_ denotes the proposed approach when the step-length model is also learned. Fig. (10) shows that the average error of RAVEL decreases from 1.2m to 0.9m as we increase the window size. Our approach with a trained radio model is slightly worse than RAVEL(60). However, once our system trains the foreground detector, the average error decreases significantly and continues to decrease as the step-length model is also learned. Unlike our system, RAVEL estimates the trajectory of a target using only visual data, thus it becomes easily susceptible to errors due to missing camera detections. Even without this additional training, our system achieves similar performance, but in real time.
**Robustness:** This set of experiments aims to demonstrate the robustness of the proposed technique. First we wanted to see how our technique performs on difficult trajectories (i.e. various amounts of occlusions and missing detections). In order to simulate occlusions we remove a specific area of the field of view (FOV) by disabling the camera detections inside that area. More specifically, we generated occlusions at random locations that occupy a rectangular area of specific size inside the FOV. Then we evaluated the accuracy of the proposed approach compared to the vision-only tracker on 50 trajectories of variable length generated from our ground truth data. Fig. (11a) shows the RMSE over all trajectories between the proposed system and the vision-only tracker for different configurations of occlusions (i.e. shown as the percentage of occluded FOV). For each configuration we ran the test 10 times; each time the occlusion was positioned at a different location. The two methods achieve a comparable performance when there are no occlusions. However, the proposed approach significantly outperforms the vision-only tracking in scenarios with long-term occlusions and large amounts of missing detections. In the presence of long-term occlusions the constant velocity/acceleration motion model utilized by most visual tracking techniques fails and cannot be used to reliably model the inherently complex human motion. On the other hand Fig. (11a) shows that the use of inertial measurements by the proposed technique provides a more accurate model of human motion. An illustrative example is shown in Fig. (11b).

Fig. 10: Tracking accuracy between the proposed approach and RAVEL. _Proposed_ denotes our approach where the foreground detector and step-length model are not trained. _PropTr1_ is our approach after the foreground detector has been trained and further in _PropTr2_ the step-length model is also trained. _RAVEL(30s)_ and _RAVEL(60s)_ denote the competing technique evaluated at window sizes of 30 and 60 seconds respectively.

Fig. 11: (a) The figure shows the RMSE between the proposed technique and the vision-only tracking for different amounts of occlusion. The use of inertial measurements by the proposed technique improves tracking significantly in noisy scenarios. (b) Illustrative example showing the difference between vision-only tracking (red line) and the proposed approach (blue line) in the presence of occlusions (gray area). In cases of prolonged missing camera detections (green squares) the constant velocity model of the vision-only tracker is not sufficient to maintain tracking. On the other hand the proposed technique with the aid of inertial measurements is capable of closely following the target despite the presence of long-term occlusions.

Fig. 12: The RMSE of the proposed technique under different amounts of injected heading error.
Additionally, in order to study how our approach copes with variable noise from the inertial sensors, we followed a similar procedure as in the previous paragraph and generated 50 trajectories from our ground truth data. At each time-step and for each trajectory we inject a random bias error into the heading estimator. More specifically, we sample a heading error uniformly from a specific range of the form \([a..b]\ degrees\) and we add it to the output of the heading estimator. By doing this we can get an idea of how our approach performs in environments with disturbed magnetic fields. Fig. (12) illustrates the results of this experiment for different amounts of injected noise. As we can see, the proposed technique can cope with moderate amounts of inertial noise, achieving sub-meter accuracy for biases of up to 30 degrees.
Moreover, we wanted to see how the number of people in the scene affects the performance of our system and, in addition, what the impact of visual noise is on the tracking accuracy. In order to study this, we used 10 minutes worth of data (i.e. 18000 frames) from our construction site dataset. For each frame in this dataset we have superimposed visual objects from future timestamps in order to increase the visual noise and the number of people in the scene. We have split the dataset into windows of 1 minute each (i.e. 1800 frames) and recorded the RMS error for different numbers of visual objects, as shown in Fig. (13). It is worth noting that the number of people that we track includes only the people who carry mobile devices (i.e. 5 people). As we can see from Fig. (13), as we increase the number of visual objects in the scene the accuracy drops. More specifically, when we have a relatively small number of objects in the scene (i.e. 3-4 per frame) the error is approximately 0.7 meters, and it increases to approximately 1.9 meters when the number of objects increases to 13-15 per frame. The reason is that WiFi cannot distinguish closely-spaced targets and, despite the use of IMU data for motion prediction, a track can incorrectly be updated with the wrong visual observation (i.e. visual noise). A possible solution to this problem is to consider the evolution of the WiFi signal over multiple frames, as opposed to the on-line filtering approach that we have currently implemented. Additionally, multiple overlapping cameras can also help but this would increase the system's cost and complexity.
**Impact of social forces:** The last set of experiments aims to investigate the impact of social forces on the performance of our system. For this experiment we used our improved motion model given by Eqn. (12) that takes into account the influences of people and the environment on human motion. Two tests were conducted: first we investigated the scenario where the visual detector (i.e. foreground detector in our case) does not perform optimally and so the missing camera detection rate is high. The second test deals with a trained visual detector. In the first case, our system will rely mostly on inertial measurements. Our intuition is that the addition of social forces will improve the motion prediction, thus increasing the overall tracking accuracy. Social forces essentially help us avoid predictions through obstacles (i.e. walls) and also help us model the interactions between people. Figure (14a) shows the results of this test, which is based on 15 minutes worth of data, where we compare the impact of social forces on two different settings of the foreground detector (i.e. not trained and trained). In this test we assume that people have a mass of 70Kg and a radius of 0.2m. The rest of the SFM parameters are as follows: \(a_{j}=50\)N, \(b_{j}=0.5\)m, \(\lambda=0.5\) and \(c_{j}=250\)N/m. As we can see, the social forces improve the overall accuracy by approximately 20% on a non-trained foreground detector and the improvement on a trained foreground detector is roughly 10%. The reason behind these improvements is the more accurate motion prediction, which allows a person to move more accurately in the environment even without camera detections. As the environment is populated with more constraints (i.e. walls, corridors) the gain of using the SFM increases. A second reason for these improvements is that the predicted locations are now better aligned with the actual observations, which improves the final position estimates and in addition reduces the data association errors.
Finally, we should note that selecting the parameters of the social force model correctly is very important if the SFM is to be beneficial and improve the tracking accuracy. Figure (14b) shows the impact of the force magnitude (\(c_{j}\)) from Eq. (10a) on the accuracy of the system in the case of an erroneous obstacle (i.e. we have incorrectly estimated the presence of an obstacle, when in fact it does not exist). More specifically, in this example we assume that an erroneous obstacle is blocking the trajectory of a person. This obstacle exerts physical forces on this person in order to restrict his motion. During time steps 1 to 5 the obstacle is far away and the social force has no effect on the human motion. However, when the person is close enough (e.g. time step 7), the social force exerted onto him is opposite to the direction of his motion (this is to prevent a person from going through the obstacle). As we increase the force magnitude (\(c_{j}\)) the error from the ground truth (i.e. \(c_{j}=0\) N/m) increases since this increasing force is pushing the person further away. Now, for some values of \(c_{j}\) (e.g. 150-250 N/m) the acting force has the right magnitude and allows a person to go through the obstacle in cases where we have measurements on and beyond the obstacle area. However, when the force is too large (e.g. 450 N/m), a person cannot go through the obstacle and in the scenario of an erroneous obstacle the correct path (i.e. \(c_{j}=0\) N/m) cannot be recovered, as shown in the graph. We have found experimentally that the SFM works best if it is tuned so that it points towards the right direction but without causing significant repulsion. This strategy allows us to have improved location predictions that align better with the actual observations but also allows targets to go through obstacles/occlusions in cases of incorrect obstacle inference.

Fig. 13: The RMSE of the proposed technique under different amounts of visual noise. (a) Camera snapshot without visual noise where we track the people in red rectangles. (b) Visual noise is injected in the scene (i.e. objects in blue rectangles). (c) Additional visual noise is injected in the scene. (d) The impact of visual noise on the performance of the proposed approach.
## 9 Related Work
A variety of positioning systems have been proposed by the research community over the past ten years. Recent surveys outlining the different techniques and their accuracies can be found in [19, 20]. In this section we will give a brief overview of the most recent positioning systems that make use of radio, inertial and visual sensing (i.e. using a stationary camera) to track multiple people. The positioning systems to be described here can be divided into two categories: a) systems that combine visual and radio measurements and b) those that combine visual and inertial measurements.
**Vision+Radio positioning systems:** The Radio And Vision Enhanced Localization (RAVEL) system [6] fuses anonymous visual detections captured by a stationary camera with WiFi readings to track multiple people moving inside an area with CCTV coverage. The WiFi measurements of each person are used to add context to the trajectories obtained by the camera in order to resolve visual ambiguities (e.g. split/merge paths) and increase the accuracy of visual tracking. RAVEL operates in two phases namely tracklet generation and WiFi-aided tracklet merging. In the first phase visual detections collected over a period of time are used to form unambiguous small trajectories (i.e. tracklets). In the second phase, RAVEL uses the aforementioned tracklets to create tracklet trees for each person (i.e. probable trajectory hypotheses). Then, the WiFi measurements of each person are used to search through the tracklet tree in order to find their most likely trajectory. The most likely trajectory is the one that agrees the most with the WiFi measurements. Unlike our technique, RAVEL performs off-line tracking, i.e. the trajectory of each person is reconstructed after all camera detections and WiFi measurements for a period of time have been observed. In addition, RAVEL does not make use of inertial measurements and thus it is more susceptible to positioning errors due to missing detections (i.e. static people that become part of the background).
In a similar setting the EV-Loc system [21] estimates the position of multiple people using both WiFi and camera measurements. More specifically, EV-Loc estimates the distance of each person from a number of access points first using camera measurements and then using WiFi readings. The Hungarian algorithm [22, 23] is then used to find the best mapping between camera and WiFi measurements. After this optimization problem is solved, the camera and WiFi locations of each person are fused to form a weighted average final location. Unlike our work, EV-Loc concentrates on the problem of finding the best matching between camera and WiFi traces (i.e. the matching process is performed after the visual tracking is completed ) and does not provide a general tracking framework that incorporates multiple sensor modalities. The more recent RGB-W system [24] also uses wireless signals emitted by people's mobile phones in combination with cameras to track and identify people. The authors show how the wireless signals can be used as a rough proxy for depth information which allows them to achieve better localization accuracy.
Mandeljc et al. presented in [25, 26] a fusion scheme that extends the probabilistic occupancy map (POM) [27] with radio measurements. In [25] the POM is extended so that the cell occupancy probabilities are estimated using ultra-wideband (UWB) radio sensors in addition to the cameras. This additional radio information increases the accuracy and robustness of the algorithm. Later in [26], the POM is extended further so that the anonymous camera detections are augmented with identity information from radio tags. The augmentation of anonymous detections with identity information is done on a frame-by-frame basis where at each time instant the optimal assignment between radio and camera locations is obtained using the Hungarian algorithm. The fusion scheme of [25, 26] was evaluated using only UWB radios which exhibit sub-meter accuracy and there is no indication of how this method will perform with radios of lower accuracy (i.e. WiFi). Finally, in [28] Goller et al. present a hybrid RFID and computer vision system for localization and tracking of RFID tags. The authors show increased accuracy by combining the two complementary sensor modalities in a probabilistic manner.
**Vision+Inertial positioning systems:** Instead of using radio measurements for identification the methods in this category use inertial measurements. For instance, the system in [29] fuses motion traces obtained from one stationary camera mounted on the ceiling and facing down with motion information from wearable accelerometer nodes to uniquely identify multiple people in the FOV using their accelerometer node IDs. Background subtraction is used to detect people from the video footage and then their floor-plane acceleration is extracted by double differentiation. The camera acceleration traces are then compared against the overall body acceleration obtained from the accelerometer nodes using the Pearson's correlation coefficient. The acceleration correlation scores among all possible combinations of camera-accelerometer pairs are then used to form an assignment matrix. Finally, the assignment problem is solved using the Hungarian algorithm. The initial algorithm of [29] is extended in [30] to allow for better path disambiguation based on people's acceleration patterns by keeping track of multiple trajectory hypotheses.
Fig. 14: (a) Impact of Social Forces on the performance of our system. (b) Tuning the parameters of the social force model. The graph shows the impact of the force magnitude (\(c_{j}\)) from Eq. (10a) on the accuracy of the system.
## 10 Future Work
In this paper we have presented a novel tracking system that uses three different sensor modalities (i.e. visual, radio and inertial) to accurately track and identify multiple people in a construction site setting. In addition we have developed learning techniques that make the proposed system able to adapt to the highly dynamic environment of the construction site. So far in our system we used a single stationary camera in order to monitor and track the people in the scene. Our next step is to extend our system to use multiple cameras in order to provide location services to larger areas.
Since the proposed technique is able not only to track but also to identify the people a simple approach would be to replicate and deploy the existing system in different areas (i.e. each deployment would use a single camera). However, we believe that better performance can be achieved by considering the collaboration between different cameras. The next step is to extend the proposed technique to a multi-camera multi-target tracking system by taking into account transition probabilities between multiple non-overlapping cameras. So far we have covered the case of 2D tracking in large unconstrained/open areas. As a future step we will also consider extending the current system to cover tracking in 3D.
Furthermore, building a stable network that can support such a system is also a challenging task. We need to think about the required network bandwidth, efficient communication between the different sub-systems and synchronization. However, all the above are going to be investigated in our future work.
## 11 Conclusion
In this paper we proposed a multi-modal positioning system for highly dynamic environments. We showed that it is possible to adapt Rao-Blackwellised particle filters - traditionally used to discern tracks using anonymous measurements - in order to both identify and track people being monitored by CCTV and holding mobile devices. We further showed that there is significant scope for automatically training the various sensor modalities, and this proved particularly useful in rapidly changing environments. Additionally, we showed that the use of social forces in dynamic industrial environments is highly beneficial and improves the tracking accuracy. Our experiments showed that even without training, our online approach achieves similar positioning accuracy to the existing offline RAVEL approach; with training the positioning error is decreased by a further 50%. We also showed that the proposed technique is robust in scenarios with visual and inertial noise. Lastly, with the integration of social forces we improved the accuracy by 10-20%.
## 12 Acknowledgments
We would like to thank Laing O'Rourke for allowing us to conduct our experiments in their construction site and also for funding this research.
|
2304.13197 | A phase-field model for hydraulic fracture nucleation and propagation in
porous media | Many geo-engineering applications, e.g., enhanced geothermal systems, rely on
hydraulic fracturing to enhance the permeability of natural formations and
allow for sufficient fluid circulation. Over the past few decades, the
phase-field method has grown in popularity as a valid approach to modeling
hydraulic fracturing because of the ease of handling complex fracture
propagation geometries. However, existing phase-field methods cannot
appropriately capture nucleation of hydraulic fractures because their
formulations are solely energy-based and do not explicitly take into account
the strength of the material. Thus, in this work, we propose a novel
phase-field formulation for hydraulic fracturing with the main goal of modeling
fracture nucleation in porous media, e.g., rocks. Built on the variational
formulation of previous phase-field methods, the proposed model incorporates
the material strength envelope for hydraulic fracture nucleation through two
important steps: (i) an external driving force term, included in the damage
evolution equation, that accounts for the material strength; (ii) a properly
designed damage function that defines the fluid pressure contribution on the
crack driving force. The comparison of numerical results for two-dimensional
(2D) test cases with existing analytical solutions demonstrates that the
proposed phase-field model can accurately model both nucleation and propagation
of hydraulic fractures. Additionally, we present the simulation of hydraulic
fracturing in a three-dimensional (3D) domain with various stress conditions to
demonstrate the applicability of the method to realistic scenarios. | Fan Fei, Andre Costa, John E. Dolbow, Randolph R. Settgast, Matteo Cusini | 2023-04-25T23:39:50Z | http://arxiv.org/abs/2304.13197v1 | # A phase-field model for hydraulic fracture nucleation and propagation in porous media
###### Abstract
Many geo-engineering applications, _e.g.,_ enhanced geothermal systems, rely on hydraulic fracturing to enhance the permeability of natural formations and allow for sufficient fluid circulation. Over the past few decades, the phase-field method has grown in popularity as a valid approach to modeling hydraulic fracturing because of the ease of handling complex fracture propagation geometries. However, existing phase-field methods cannot appropriately capture nucleation of hydraulic fractures because their formulations are solely energy-based and do not explicitly take into account the strength of the material. Thus, in this work, we propose a novel phase-field formulation for hydraulic fracturing with the main goal of modeling fracture nucleation in porous media, _e.g.,_ rocks. Built on the variational formulation of previous phase-field methods, the proposed model incorporates the material strength envelope for hydraulic fracture nucleation through two important steps: (i) an external driving force term, included in the damage evolution equation, that accounts for the material strength; (ii) a properly designed damage function that defines the fluid pressure contribution on the crack driving force. The comparison of numerical results for two-dimensional (2D) test cases with existing analytical solutions demonstrates that the proposed phase-field model can accurately model both nucleation and propagation of hydraulic fractures. Additionally, we present the simulation of hydraulic fracturing in a three-dimensional (3D) domain with various stress conditions to demonstrate the applicability of the method to realistic scenarios.
keywords: phase-field methods, hydraulic fracturing, fracture nucleation, fracture propagation, strength envelope
## 1 Introduction
Hydraulic fracturing consists in enhancing the permeability of a natural formation by injecting a fracturing fluid (_e.g.,_ water) at a high pressure. This technique has been widely used in many geo-engineering applications, such as unconventional oil and gas production [1; 2; 3] and enhanced geothermal systems (EGS) [4; 5; 6], in which rock masses are nearly impermeable. For EGS, the effectiveness of the stimulation treatment
determines the performance of a target site. Thus, being able to thoroughly understand the hydraulic fracturing process is crucial to operate such systems safely and effectively. As a consequence, there has been a growing interest in the development of numerical approaches to model hydraulic fracturing.
A popular modeling choice is to consider fractures as sharp interfaces, represented by lower dimensional manifolds embedded in a higher dimensional domain (_e.g.,_ surfaces in a 3D domain). To this end, there exist several approaches to include discontinuities in both finite element (FE) and finite volume (FV) methods for flow, mechanics, and poromechanics models. These approaches can be grouped into two main classes: (i) conforming methods, in which fractures are represented by 2D elements that coincide with the boundaries of 3D cells, _e.g.,_[7; 8]; (ii) embedded approaches, in which fractures are meshed independently of the rock matrix domain and the formulation is enriched to account for their effects, _e.g.,_[9; 10; 11; 12]. Both these approaches suffer from some limitations and involve several practical challenges when employed to model hydraulic fracturing. For example, conforming approaches only allow fractures to propagate along element boundaries (faces in 3D), which prevents fractures from growing in an arbitrary direction. While embedded approaches can be used to overcome such limitation, they often face challenges in defining general methods to identify the direction of fracture propagation and handling complex geometries (_e.g.,_ crack branching).
Over the past few decades, the phase-field model for fractures has been identified as a promising alternative to sharp interface approaches to model fracture propagation in rocks and rock-like materials [13; 14; 15; 16]. The fracture, instead of being explicitly represented as an interface, is approximated by a diffuse variable (_i.e.,_ damage). The main advantage of this diffuse crack representation is the simplicity of representing complex geometries. There have also been several efforts to extend the phase-field method to model hydraulic fracturing [17; 18; 19; 20; 21] in poroelastic media. Hybrid methods that combine sharp and diffuse crack representations to leverage the strengths of both approaches have also been proposed recently [22; 23].
Most of the methods developed so far, based on either sharp or diffuse crack representations, have focused on the propagation of preexisting hydraulic fractures, while little attention has been given to modeling fracture nucleation in bulk materials. However, understanding hydraulic fracture nucleation in the near-wellbore region can provide crucial information about the in-situ stress [24] and drive the design of more efficient hydraulic fracturing operations [25]. Existing phase-field methods for hydraulic fracturing are mostly based on a variational formulation that casts the fracture propagation problem in terms of the minimization of total potential energy of the system [17; 20; 21; 26; 27]. This formulation can accurately model fracture propagation, in good agreement with the classic fracture mechanics theory as originally described in [28]. However, casting the problem solely in terms of energy minimization completely neglects the material strength and how fractures may nucleate in the bulk due to a stress-induced failure. Additionally, this purely energetic formulation results in a dependency of the material strength on the phase-field regularization length, which is a parameter that governs the width of the diffuse fracture region. This length dependency issue has been thoroughly discussed in the literature (see _e.g.,_[15; 29; 30; 31]). As a consequence, the regularization length becomes a material-specific parameter that should be calibrated based on the tensile and compressive strengths of a given material [14; 32]. This length dependency issue has been addressed in the recent works on phase-field models for quasi-brittle materials [15; 29; 30], which are inspired by a gradient damage model introduced by Lorentz and Godard [33]. Unfortunately, these models are still derived through an energy minimization process, and as such damage nucleation is governed by a threshold that is energetic. In a recent work, Kumar _et al._[34] have proposed a novel phase-field formulation to address these issues. In essence, they have included a well-designed external driving force term in the phase-field formulation to account for the assumption of a strength envelope governing nucleation. When correctly designed, this additional term allows the resulting phase-field model to approximate the strength surface of the material. To date, the work of Kumar _et al._[34] has been restricted to traction-free crack surfaces. An analogous model that accounts for pressurized cracks and nucleation in porous media has yet to be developed.
In this work, we present a novel phase-field formulation to model nucleation and propagation of hydraulic fractures in porous materials. Specifically, we extend the formulation of the previous phase-field model for hydraulic fracturing by including the external driving force term proposed by Kumar _et al._[34]. To ensure that the model correctly reproduces the material strength even in the presence of fluid pressure, we devise a special damage function for the pressure terms in the phase-field formulation. The proposed model accurately predicts hydraulic fracture nucleation in an intact porous material while retaining the ability to model fracture propagation.
The paper is organized as follows. In Section 2, we review the variational phase-field method and its formulation for hydraulic fracturing. In Section 3 we extend the phase-field formulation presented in Section 2 by adding an external driving force and devising the proper damage function for pressure-dependent terms. The discretization and the solution strategy are described in Section 4 whereas, in Section 5, we present a series of two-dimensional (2D) and three-dimensional (3D) numerical examples to demonstrate the model correctly represents hydraulic fracture nucleation and propagation. The paper is concluded in Section 6.
## 2 The phase-field model for hydraulic fracture propagation
The goal of this section is to review the phase-field formulation for hydraulic fracturing in saturated porous materials (_e.g._, rocks).
### Problem statement
Let us consider a porous medium \(\Omega\in\mathbb{R}^{n_{\text{dim}}}\) in Figure 1, with \(n_{\text{dim}}\) denoting the spatial dimension. The external boundary, \(\partial\Omega\), is divided into non-overlapping portions \(\partial_{u}\Omega\cup\partial_{t}\Omega=\partial_{p}\Omega\cup\partial_{q}\Omega=\partial\Omega\), identifying where Dirichlet and Neumann boundary conditions for mechanics and flow problems will be applied. The continuous domain encloses a set of fractures identified by the lower dimensional domain \(\Gamma\). For simplicity, we assume that fractures do not intersect the external boundary. Both the porous medium and fractures are fully saturated by a single-phase Newtonian fluid. Given the initial state, the goal is to model the system evolution in terms of fluid pressure (\(p\)), mechanical deformation (\(\mathbf{u}\)), and fracture geometry (\(\Gamma\)) over the time domain \(\mathbb{T}=(0,t_{\text{max}}]\). Based on the variational approach for fractures in poroelastic media [20; 27], the system evolves such that the total potential energy of the system is minimized. The total potential energy is a function of the deformation field, the fluid pressure in both rock matrix and fractures, and the fracture geometry, _i.e._,
\[\Psi(\mathbf{u},p,\Gamma)=\Psi^{e}(\mathbf{\varepsilon},\Gamma)+\Psi^{p}(\mathbf{u},p, \Gamma)+\Psi^{d}(\Gamma)-\Psi^{s}(\mathbf{u},\Gamma). \tag{1}\]
Here, \(\Psi^{e}\) is the elastic strain energy of the bulk solid, \(\Psi^{p}\) is the energy stored in the fluid, \(\Psi^{d}\) is the fracture dissipation, and \(\Psi^{s}\) is the work of external forces. The minimization of the total potential energy is coupled with the mass balance equation describing single-phase flow in a porous medium.
### The phase-field regularization
The phase-field method approximates the fracture geometry, \(\Gamma\), by introducing a continuous variable, the damage (\(d\)), bounded between 0 (intact rock) and 1 (fully fractured) as shown in Figure 2, and a regularization length, \(L\), that defines the width of the diffuse region. As such, the total energy of the system is approximated by
\[\Psi(\mathbf{u},p,\Gamma)\approx\bar{\Psi}(\mathbf{u},p,d)=\bar{\Psi}^{e}(\mathbf{\varepsilon },d)+\bar{\Psi}^{p}(\mathbf{u},p,d)+\bar{\Psi}^{d}(d)-\bar{\Psi}^{s}(\mathbf{u},d). \tag{2}\]
Given the phase-field approximation, the elastic strain energy \(\Psi^{e}\) reads
\[\Psi^{e}(\mathbf{\varepsilon},\Gamma)\approx\bar{\Psi}^{e}(\mathbf{\varepsilon},d)= \int_{\Omega}g(d)W^{e}(\mathbf{\varepsilon})\,\mathrm{d}V. \tag{3}\]
Figure 1: Problem geometry.
Figure 2: Phase-field approximation of hydraulic fractures.
Here, \(W^{e}(\mathbf{\varepsilon})\) is the strain energy of the intact material, _i.e.,_
\[W^{e}(\mathbf{\varepsilon})=\frac{1}{2}\mathbf{\varepsilon}:\mathbb{C}^{e}:\mathbf{ \varepsilon}, \tag{4}\]
where \(\mathbf{\varepsilon}:=\mathbf{\nabla}^{\rm s}\mathbf{u}\) is the strain tensor and \(\mathbb{C}^{e}\) is a fourth-order tensor of elastic moduli. \(g(d)\) is, instead, a smooth function that has the effect of degrading the material stiffness whenever damage is non-zero. In this work, we employ a quadratic form of \(g(d)\), as done in several other works [35, 36, 37], _i.e.,_
\[g(d)=(1-d)^{2}. \tag{5}\]
The potential energy of the fluid \(\Psi^{p}\) can be approximated by a volume integral [20, 27], _i.e.,_
\[\Psi^{p}(\mathbf{u},p,\Gamma)\approx\bar{\Psi}^{p}(\mathbf{u},p,d)=\int_{\Omega}-m(d)pb \,\nabla\cdot\mathbf{u}\ \mathrm{d}V, \tag{6}\]
where \(b\) is the Biot's coefficient and \(m(d)\) is a function that satisfies the following constraints,
\[m(0)=1;\,m(1)=0;\,m^{\prime}(1)=0;\,m^{\prime}(d)<0\,\,\,\text{when}\,\,0<d<1. \tag{7}\]
The energy dissipated by the fracture \(\Psi^{d}\) is defined as the integral of the critical fracture energy \(\mathcal{G}_{c}\) over the fracture surface, and we approximate it as follows in the phase-field model [15, 29, 30],
\[\Psi^{d}=\int_{\Gamma}\mathcal{G}_{c}\ \mathrm{d}A\approx\bar{\Psi}^{d}=\int_{ \Omega}\mathcal{G}_{c}\frac{1}{c_{0}L}\left[\omega(d)+L\mathbf{\nabla}d\cdot\mathbf{ \nabla}d\right]\ \mathrm{d}V\ \text{with}\ c_{0}=4\int_{0}^{1}\sqrt{\omega(s)}\ \mathrm{d}s, \tag{8}\]
where \(\omega(d)\) is a local dissipation function. In most existing phase-field models for hydraulic fracturing, the local dissipation function is \(\omega(d)=d^{2}\), _e.g.,_[17, 19, 20, 26, 27, 38, 39]. However, this choice leads to damage growth as soon as the strain energy is non-zero. Therefore, given the goal of modeling strength-based fracture nucleation, we choose a linear form \(\omega(d)=d\), as introduced in Pham _et al._[40] and previously employed in other phase-field studies [15, 30].
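The normalization constant \(c_{0}\) in Eq. (8) follows directly from this choice; as a quick numerical check (a minimal sketch, not part of the original implementation), \(c_{0}=8/3\) for \(\omega(d)=d\) and \(c_{0}=2\) for the quadratic choice:

```python
from scipy.integrate import quad

# c0 = 4 * integral_0^1 sqrt(omega(s)) ds, as in Eq. (8)
for name, sqrt_omega in [("omega(d) = d   (used here)", lambda s: s**0.5),
                         ("omega(d) = d^2 (earlier hydraulic-fracture models)", lambda s: s)]:
    integral, _ = quad(sqrt_omega, 0.0, 1.0)
    print(name, "-> c0 =", 4.0 * integral)
# prints c0 = 8/3 for omega(d) = d and c0 = 2 for omega(d) = d^2
```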
Finally, the total work done by external forces reads
\[\Psi^{s}=\int_{\Omega}\rho\mathbf{g}\cdot\mathbf{u}\ \mathrm{d}V+\int_{\partial_{t} \Omega\cup\Gamma}\mathbf{t}\cdot\mathbf{u}\ \mathrm{d}A, \tag{9}\]
where \(\rho\) is the mass density and \(\mathbf{g}\) is the gravitational vector. Here, the first integral represents the work done by the body force, and the second one is the work of the traction \(\mathbf{t}\) on the external boundary and on the fracture surface. The traction on the fracture surface is exerted by the fluid pressure, _i.e.,_
\[\int_{\Gamma}\mathbf{t}\cdot\mathbf{u}\ \mathrm{d}A=-\int_{\Gamma}p\mathbf{n}\cdot\mathbf{u} \ \mathrm{d}A, \tag{10}\]
where \(\mathbf{n}\) is the unit normal vector pointing outward. By applying the divergence theorem and assuming a
homogeneous displacement boundary condition, Eq. (10) can be transformed into
\[\int_{\Gamma}pn\cdot\mathbf{u}\;\mathrm{d}A=\int_{\Omega\setminus\Gamma}\nabla\cdot( p\mathbf{u})\;\mathrm{d}V-\int_{\partial_{t}\Omega}pn\cdot\mathbf{u}\;\mathrm{d}A. \tag{11}\]
Then, introducing the phase-field regularization, the work of external forces is approximated by
\[\Psi^{s}(\mathbf{u},p,\Gamma)\approx\bar{\Psi}^{s}(\mathbf{u},p,d)=\int_{\Omega}\rho \mathbf{g}\cdot\mathbf{u}\;\mathrm{d}V-\int_{\Omega}m(d)\mathbf{\nabla\cdot(p\mathbf{u})}\; \mathrm{d}V+\int_{\partial_{t}\Omega}pn\cdot\mathbf{u}\;\mathrm{d}A+\int_{\partial _{t}\Omega}\mathbf{t}\cdot\mathbf{u}\;\mathrm{d}A. \tag{12}\]
### Governing equations
Given the phase-field regularization introduced in the previous subsection, the momentum balance and the evolution equation for the damage field can be obtained from the minimization of the total potential energy, _i.e.,_
\[(\mathbf{u},d)=\operatorname*{argmin}_{\mathbf{u},d}\left[\bar{\Psi}(\mathbf{ u},p,d)\right]\quad\text{in }\Omega\times\mathbb{T} \tag{13}\] \[\text{with}\quad d\in[0,1]\text{ and }\dot{d}\geq 0.\]
These equations are further coupled with the mass conservation equation describing single-phase flow in a porous medium. Thus, the strong form of the initial boundary value problem is to find the displacement (\(\mathbf{u}\)), the damage (\(d\)), and the fluid pressure (\(p\)) that satisfy
\[\nabla\cdot\left[\mathbf{\sigma}^{\prime}(\mathbf{\varepsilon},d)-m(d)(b-1)p\mathbf{1} \right]-m(d)\mathbf{\nabla}p+\rho\mathbf{g}=\mathbf{0}\quad\text{in }\Omega\times\mathbb{T}, \tag{14}\]
\[\left\{\begin{array}{ll}2\left(d-1\right)W^{e}(\mathbf{\varepsilon})+m^{\prime} (d)\left[(1-b)p\nabla\cdot\mathbf{u}+\mathbf{\nabla}p\cdot\mathbf{u}\right]+\frac{3\mathcal{ G}_{c}}{8L}\left(1-2L^{2}\nabla^{2}d\right)=0\quad\text{if }\dot{d}>0,\\ 2\left(d-1\right)W^{e}(\mathbf{\varepsilon})+m^{\prime}(d)\left[(1-b)p\nabla\cdot \mathbf{u}+\mathbf{\nabla}p\cdot\mathbf{u}\right]+\frac{3\mathcal{G}_{c}}{8L}\left(1-2L^{ 2}\nabla^{2}d\right)\leq 0\quad\text{if }\dot{d}=0,\end{array}\right. \tag{15}\]
\[\frac{\partial}{\partial t}\left(\phi\rho_{f}\right)+\nabla\cdot\left(\rho_{f }\mathbf{v}\right)-s=0\quad\text{in }\Omega\times\mathbb{T}, \tag{16}\]
subject to the boundary conditions
\[\mathbf{u}=\mathbf{0}\quad\text{on }\partial_{u}\Omega\times\mathbb{T}, \tag{17}\]
\[\mathbf{\sigma}\cdot\mathbf{n}=\hat{\mathbf{t}}(\mathbf{x},t)\quad\text{on }\partial_{t}\Omega\times\mathbb{T}, \tag{18}\]
\[\mathbf{\nabla}d\cdot\mathbf{n}=0\quad\text{on }\partial\Omega\times\mathbb{T}, \tag{19}\]
\[p=\hat{p}(\mathbf{x},t)\quad\text{on }\partial_{p}\Omega\times\mathbb{T}, \tag{20}\]
\[-\mathbf{v}\cdot\mathbf{n}=\hat{q}(\mathbf{x},t)\quad\text{on }\partial_{q}\Omega\times\mathbb{T}, \tag{21}\]
and initial conditions
\[\mathbf{u}(\mathbf{x},0)=\bar{\mathbf{u}}(\mathbf{x})\quad\text{in }\Omega, \tag{22}\] \[d(\mathbf{x},0)=\bar{d}(\mathbf{x})\quad\text{in }\Omega, \tag{23}\]
\[p(\mathbf{x},0)=\bar{p}(\mathbf{x})\quad\text{in }\Omega. \tag{24}\]
Here, \(\mathbf{\sigma}^{\prime}(\mathbf{\varepsilon},d):=g(d)\,\mathbb{C}^{e}:\mathbf{\varepsilon}\) is the degraded effective stress tensor, \(\phi\) is the rock porosity, \(\rho_{f}\) is the fluid density, \(\mathbf{v}\) is the fluid velocity, and \(s\) is the source/sink term (_e.g.,_ wells). Additionally, \(\mathbf{\hat{t}}\), \(\hat{p}\), and \(\hat{q}\) are the prescribed values of traction, pressure, and fluid flux on the boundary, respectively. Finally, \(\bar{\mathbf{u}}\), \(\bar{d}\), and \(\bar{p}\) denote the initial values of the displacement, damage, and pressure, respectively.
In the equations presented above, the following constitutive relationships are considered.
_Porosity._ The porosity is a weighted average of the porosity at the intact rock and that of the fracture, _i.e.,_
\[\phi=(1-d)\phi_{b}+d\phi_{f}, \tag{25}\]
where \(\phi_{b}\) and \(\phi_{f}\) are the porosity of the intact rock and that of the fracture, respectively. According to the Biot's poroelasticity theory, \(\phi_{b}\) is a function of the volumetric strain (\(\varepsilon_{\text{vol}}:=\mathbf{\varepsilon}:\mathbf{1}\)) and the fluid pressure \(p\), _i.e.,_
\[\phi_{b}=\phi_{0}+b(\varepsilon_{\text{vol}}-\varepsilon_{\text{vol,0}})+ \frac{b-\phi_{0}}{\kappa_{s}}(p-p_{0}). \tag{26}\]
Here, \(\phi_{0}\), \(p_{0}\), and \(\varepsilon_{\text{vol,0}}\) are the reference values for porosity, pressure, and volumetric strain. From now on, we will assume the porosity of the fracture to be \(\phi_{f}=1\).
_Fluid velocity._ The fluid velocity \(\mathbf{v}\) is computed based on Darcy's law,
\[\mathbf{v}=\frac{k}{\mu_{d}}\left(\mathbf{\nabla}p-\rho_{f}\mathbf{g}\right), \tag{27}\]
where \(k\) is the rock permeability and \(\mu_{d}\) is the dynamic viscosity of the fluid.
_Fluid properties._ The fluid density is computed as
\[\rho_{f}=\rho_{0}e^{\frac{(p-p_{0})}{K_{f}}}, \tag{28}\]
where \(\rho_{0}\) is the density at the reference pressure \(p_{0}\), and \(K_{f}\) is the fluid bulk modulus. The fluid viscosity is considered constant.
_Permeability._ The rock permeability \(k\) is computed as
\[k(d)=k_{0}e^{\alpha_{k}d}. \tag{29}\]
Here, \(k_{0}\) is the permeability of the intact material and \(\alpha_{k}\) is an empirical coefficient. This relation is introduced based on an experimental measurement of the permeability in diffusely damaged concrete [41], and has been previously applied in phase-field simulations [42, 43, 44]. A more accurate approach to model the fluid flow in the fracture is to adopt the Reynolds lubrication equation, which relies on the computation
of the crack aperture (opening). Unfortunately, existing methods for calculating the fracture aperture have various drawbacks (see [45]). Therefore, we adopt Eq. (29) here for simplicity.
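For reference, the constitutive updates of Eqs. (25)-(29) can be collected into a few helpers. The sketch below is a direct transcription of the relations as printed above (including the sign convention of Eq. (27)); it is not the original implementation, and all parameters are passed in explicitly in consistent units.

```python
import numpy as np

def porosity(d, eps_vol, p, *, phi0, b, kappa_s, eps_vol0=0.0, p0=0.0, phi_f=1.0):
    """Eqs. (25)-(26): damage-weighted average of matrix and fracture porosity."""
    phi_b = phi0 + b * (eps_vol - eps_vol0) + (b - phi0) / kappa_s * (p - p0)
    return (1.0 - d) * phi_b + d * phi_f

def darcy_velocity(grad_p, d, *, k0, alpha_k, mu_d, rho_f, g):
    """Eq. (27) with the damage-enhanced permeability of Eq. (29)."""
    k = k0 * np.exp(alpha_k * d)
    return (k / mu_d) * (grad_p - rho_f * g)

def fluid_density(p, *, rho0, p0, K_f):
    """Eq. (28): weakly compressible single-phase fluid."""
    return rho0 * np.exp((p - p0) / K_f)
```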
We note that the governing equations presented in this section are derived solely based on the minimization of the total potential energy and do not include any fracture nucleation criterion related to the material strength. As a consequence, these equations are unsuited to model hydraulic fracture nucleation in the bulk material that has no pre-existing fractures. In the next section, we will present how the formulation is extended to include a strength-based fracture nucleation criterion.
## 3 A phase-field model for hydraulic fracture nucleation and propagation
In this section, we extend the phase-field formulation for hydraulic fractures derived in Section 2 to include a stress-based fracture nucleation criterion that takes into account the material strength surface. In this way, the phase-field method can accurately model not only propagation of preexisting fractures but also hydraulic fracture nucleation in the intact material.
Following the idea introduced by Kumar _et al._[34], we incorporate the strength surface of the rock material into the model by adding an external driving force term \(\hat{c}_{e}(\mathbf{\sigma}^{\prime},L)\) into the damage evolution equation (15), which gives
\[2(d-1)W^{e}(\mathbf{\varepsilon})+m^{\prime}(d)\left[(1-b)p\mathbf{\nabla}\cdot\mathbf{u}+\mathbf{\nabla}p\cdot\mathbf{u}\right]+\hat{c}_{e}(\mathbf{\sigma}^{\prime},L)+\frac{3\mathcal{G}_{c}}{8L}\left[1-2L^{2}\nabla^{2}d\right]=0. \tag{30}\]
As thoroughly described by Kumar _et al._[34], the external driving force needs to be designed to reproduce the specific strength surface of the material of interest. Here, for sake of simplicity, we consider the form proposed in [34] for the Drucker-Prager yield surface [46], _i.e.,_
\[\hat{c}_{e}(\mathbf{\sigma}^{\prime},L)\equiv\hat{c}_{e}(I_{1},J_{2},L)=\frac{1}{1 +\beta_{3}I_{1}^{2}}\left(\beta_{2}\sqrt{J_{2}}+\beta_{1}I_{1}+\beta_{0}\right), \tag{31}\]
with
\[\beta_{0} =\frac{3\mathcal{G}_{c}}{8L}\delta^{L}, \tag{32}\] \[\beta_{1} =-\frac{3(\sigma_{\rm cs}-\sigma_{\rm ts})(1+\delta^{L})\mathcal{ G}_{c}}{16\sigma_{\rm ts}\sigma_{\rm cs}L}-\frac{8\mu+24\kappa-27\sigma_{\rm ts}}{144 \mu\kappa}(\sigma_{\rm cs}-\sigma_{\rm ts})-\frac{\mu+3\kappa}{18\mu^{2}\kappa ^{2}\mathcal{G}_{c}}\sigma_{\rm ts}(\sigma_{\rm cs}^{3}-\sigma_{\rm ts}^{3})L,\] (33) \[\beta_{2} =-\frac{9(\sigma_{\rm cs}+\sigma_{\rm ts})(1+\delta^{L})\mathcal{ G}_{c}}{16\sqrt{3}\sigma_{\rm ts}\sigma_{\rm cs}L}+\frac{8\mu+24\kappa-27\sigma_{ \rm ts}}{48\sqrt{3}\mu\kappa}(\sigma_{\rm ts}+\sigma_{\rm cs})+\frac{\mu+3 \kappa}{6\sqrt{3}\mu^{2}\kappa^{2}\mathcal{G}_{c}}\sigma_{\rm ts}(\sigma_{\rm ts }^{3}+\sigma_{\rm cs}^{3})L,\] (34) \[\beta_{3} =\frac{L\sigma_{\rm ts}}{\mu\kappa\mathcal{G}_{c}}. \tag{35}\]
Here, \(\sigma_{\rm ts}\) and \(\sigma_{\rm cs}\) are the tensile and compressive strengths of a material under uniaxial loading, and \(I_{1}\) and \(J_{2}\) are the stress invariants. The parameter \(\delta^{L}\) in Eq. (31) is a calibration constant that varies with the material properties, the regularization length, and the mesh spacing. It can be calibrated by simulating crack propagation from existing cracks and ensuring that the results exhibit Griffith-like behavior [34].
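For illustration, Eqs. (31)-(35) can be transcribed directly into a small helper. The sketch below is not the original implementation; the default strengths and moduli are those of Table 1 expressed in SI units (an assumption), while \(\delta^{L}\), \(L\), and the stress invariants \(I_{1}\) and \(J_{2}\) are supplied by the caller.

```python
import numpy as np

def external_driving_force(I1, J2, L, delta_L, *,
                           G_c=4.0, sigma_ts=5.5e6, sigma_cs=40.0e6,
                           mu=12.5e9, kappa=16.7e9):
    """Drucker-Prager external driving force, Eqs. (31)-(35) as printed above."""
    s3 = np.sqrt(3.0)
    beta0 = 3.0 * G_c / (8.0 * L) * delta_L
    beta1 = (-3.0 * (sigma_cs - sigma_ts) * (1.0 + delta_L) * G_c
             / (16.0 * sigma_ts * sigma_cs * L)
             - (8.0 * mu + 24.0 * kappa - 27.0 * sigma_ts)
             / (144.0 * mu * kappa) * (sigma_cs - sigma_ts)
             - (mu + 3.0 * kappa) / (18.0 * mu**2 * kappa**2 * G_c)
             * sigma_ts * (sigma_cs**3 - sigma_ts**3) * L)
    beta2 = (-9.0 * (sigma_cs + sigma_ts) * (1.0 + delta_L) * G_c
             / (16.0 * s3 * sigma_ts * sigma_cs * L)
             + (8.0 * mu + 24.0 * kappa - 27.0 * sigma_ts)
             / (48.0 * s3 * mu * kappa) * (sigma_ts + sigma_cs)
             + (mu + 3.0 * kappa) / (6.0 * s3 * mu**2 * kappa**2 * G_c)
             * sigma_ts * (sigma_ts**3 + sigma_cs**3) * L)
    beta3 = L * sigma_ts / (mu * kappa * G_c)
    # Eq. (31)
    return (beta2 * np.sqrt(J2) + beta1 * I1 + beta0) / (1.0 + beta3 * I1**2)
```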
### Design of the m(d) function
For the phase-field model to accurately reproduce the material strength surface, the damage evolution equation for the intact material (\(d=0\)) should be asymptotically equivalent to the yield surface for \(L\to 0\). This is the case for the damage equation below at \(d=0\) for modeling non-pressurized fracture in a nonporous material, as mathematically proved in Kumar _et al._[34],
\[2W^{e}(\mathbf{\varepsilon})-\hat{c}_{e}(\mathbf{\sigma}^{\prime},L)-\frac{3\mathcal{G }_{c}}{8L}=0. \tag{36}\]
In our case for modeling hydraulic fractures, however, the damage equation (30) at \(d=0\) as shown below includes pressure terms due to the presence of the fluid,
\[2W^{e}(\mathbf{\varepsilon})-m^{\prime}(0)\left[(1-b)p\mathbf{\nabla\cdot u}+\mathbf{ \nabla}p\cdot\mathbf{u}\right]-\hat{c}_{e}(\mathbf{\sigma}^{\prime},L)-\frac{3\mathcal{ G}_{c}}{8L}=0. \tag{37}\]
Thus, to ensure that the damage evolution equation \(d=0\) can still asymptotically represent the material strength surface, the damage function \(m(d)\) must be chosen such that it satisfies not only Eq. (7) but also \(m^{\prime}(0)=0\), so that Eq.(37) becomes identical to Eq.(36). Note that this additional condition (\(m^{\prime}(0)=0\)) is also considered for a similar purpose in Jiang _et al._[47] to model pressurized cracks in nonporous solids. To fulfill all these requirements, we choose the following smooth form for \(m(d)\) in this work,
\[m(d)=\frac{1}{2}\left[1+\cos(\pi d)\right]. \tag{38}\]
This choice of \(m(d)\) ensures that the strength surface predicted by the phase field model is equivalent to the prescribed one as long as the external driving force is correctly chosen. Figure 3 shows, for example, a comparison of the strength surface of the Drucker-Prager model and the one predicted by the phase-field model with different regularization lengths, when using the external driving force in Eq. (31). Here, the phase-field predictions are obtained by analytically solving Eq.(37) with \(m(d)\) defined in Eq. (38).
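A quick symbolic check (a minimal sketch using SymPy) confirms that Eq. (38) satisfies the constraints in Eq. (7) together with the additional requirement \(m^{\prime}(0)=0\), and that it is monotonically decreasing on \((0,1)\):

```python
import sympy as sp

d = sp.symbols("d")
m = (1 + sp.cos(sp.pi * d)) / 2            # Eq. (38)
dm = sp.diff(m, d)                         # m'(d) = -pi*sin(pi*d)/2

print(m.subs(d, 0), m.subs(d, 1))          # 1, 0  -> m(0) = 1, m(1) = 0
print(dm.subs(d, 0), dm.subs(d, 1))        # 0, 0  -> m'(0) = 0, m'(1) = 0
print(sp.simplify(dm))                     # -pi*sin(pi*d)/2 <= 0 on (0, 1)
```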
### Enforcement of damage constraints
For the damage evolution equation (30) to provide physically meaningful results, it must be subject to two constraints: (1) the boundedness of the damage; (2) irreversibility. In this subsection, we describe how these constraints are enforced in this work.
Boundedness of the damage fieldWhile the damage is guaranteed to be strictly lower than 1.0 [39], it is necessary to ensure that the constraint \(d\geq 0\). Note that, given an intact material (\(d=0\)), Eq. (37) is only satisfied whenever the stress state reaches the material strength. As a consequence, whenever the stress state of the material is within the strength surface, only a negative damage value satisfies Eq. (37). To avoid such negative damage, we propose to replace the strain energy term in the damage equation by an effective crack driving force term, \(\mathcal{D}\), defined as
\[\mathcal{D}=\max\left\{\frac{1}{2}\hat{c}_{e}(\mathbf{\sigma}^{\prime},L)+\frac{3\mathcal{G}_{c}}{16L},\;W^{e}(\mathbf{\varepsilon})\right\}. \tag{39}\]
It can easily be seen that this guarantees that an always nonnegative damage solution is obtained even for an intact material.
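In code form, the floor in Eq. (39) is a one-liner (a sketch, with \(\hat{c}_{e}\) and \(W^{e}\) evaluated elsewhere); with this definition an intact point whose stress state lies inside the strength surface satisfies Eq. (41) exactly at \(d=0\), so no negative damage is produced:

```python
def crack_driving_force(W_e, c_e, G_c, L):
    """Eq. (39): floor the elastic driving force by the strength-surface term.

    With D defined this way, substituting d = 0 into Eq. (41) gives
    -2*D + c_e + 3*G_c/(8*L) = 0 whenever the floor is active, i.e. the
    intact state is an exact solution inside the strength surface.
    """
    return max(0.5 * c_e + 3.0 * G_c / (16.0 * L), W_e)
```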
Crack irreversibilityMost phase-field models, to avoid crack healing, either employ a history field of the crack driving force [15; 16; 36] or an augmented Lagrangian method [26; 30] to ensure that the damage is a monotonically increasing function of time, _i.e._, \(\dot{d}\geq 0\) condition in Eq. (13). However, as pointed out by Kumar _et al._[34], the introduction of the external driving force results in a larger diffuse area around the tip of propagating fractures due to the difference between the length-scales of the crack tip and the crack body. Thus, in order to obtain the optimal fracture profile once the damage has fully localized, we only enforce monotonicity once the damage reaches a threshold value, _e.g._, \(d=0.95\)[34]. More details of how this constraint is imposed are given in the solution algorithm in Section 4.
### Updated governing equations
Given the above modifications, the strong form of the initial boundary value problem is to find the the displacement \(\mathbf{u}(\mathbf{x},t)\), the damage \(d(\mathbf{x},t)\), and the fluid pressure \(p(\mathbf{x},t)\) fields that satisfy
\[\nabla\cdot\left(\mathbf{\sigma}^{\prime}(\mathbf{\varepsilon},d)-m(d)(b -1)p\mathbf{1}\right)-m(d)\mathbf{\nabla}p+\rho\mathbf{g} =\mathbf{0}\ \ \text{in}\ \ \Omega\times\mathbb{T}, \tag{40}\] \[2(d-1)\mathcal{D}+m^{\prime}(d)\mathcal{D}_{p}+\hat{c}_{e}(\bm {\sigma}^{\prime},L)+\frac{3\mathcal{G}_{c}}{8L}\left[1-2L^{2}\nabla^{2}d \right] =0\ \ \text{in}\ \ \Omega\times\mathbb{T},\] (41) \[\frac{\partial}{\partial t}\left(\phi\rho_{f}\right)+\nabla\cdot \left(\rho_{f}\mathbf{v}\right) =s\ \ \text{in}\ \ \Omega\times\mathbb{T}, \tag{42}\]
with \(\mathcal{D}_{p}=(1-b)p\mathbf{\nabla\cdot u}+\mathbf{\nabla}p\cdot\mathbf{u}\). These equations are subject to the boundary conditions in Eqs. (17) - (21), the initial conditions in Eqs (22) - (24), and \(\dot{d}\geq 0\) when \(d\,\in\,[0.95,1]\).
Figure 3: Comparison between the Drucker–Prager model and the strength surfaces predicted by the proposed phase-field method with different regularization lengths.
## 4 Discretization and solution strategy
In this section, we describe the numerical discretization of the governing equations and the solution algorithm.
### Discretization
Let us define a mesh \(\mathcal{T}\) formed by nonoverlapping cells \(K_{i}\) such that \(\Omega\approx\bigcup_{i}K_{i}\) and let \(\mathcal{F}\) be the set of all faces and \(\mathcal{F}_{\alpha}\) the subset of faces located on \(\partial\Omega_{\alpha}\) with \(\alpha=u,t,p,q\). Given this mesh, the main unknowns are approximated by their discrete counterparts, _i.e.,_\(\mathbf{u}^{h}\), \(p^{h}\) and \(d^{h}\). Additionally, the system of governing equations, Eqs. (40) - (42), is time-dependent and discretized using discrete time steps \(t_{i}\in\{t_{0},t_{1},...,t_{\max}\}\) and \(\Delta t\) indicates the current time interval at \(t_{n+1}\), _i.e.,_\(\Delta t=t_{n+1}-t_{n}\). In general, the subscripts \((.)_{n+1}\) and \((.)_{n}\) indicate a quantity evaluated at the current time step and the last time step, respectively. From now on, for simplicity, we drop the subscript \((.)_{n+1}\) for the quantity at the current time step. For the spatial discretization, a low order finite element method is employed to discretize the momentum balance (40) and the damage evolution equation (41), while the flow equation (42) is discretized by a hybrid mimetic finite difference method [48].
Without loss of generality, let us assume homogeneous boundary conditions and define the following three discrete function spaces for the displacement, the damage, and the pressure field,
\[\mathcal{V}_{u}^{h} :=\left\{\mathbf{\eta}^{h}\mid\mathbf{\eta}^{h}\in[C^{0}(\overline{\Omega })]^{\text{dim}},\ \mathbf{\eta}^{h}=\mathbf{0}\text{ on }\partial_{u}\Omega,\ \mathbf{\eta}^{h}_{|K}\in[\mathbb{Q}_{1}(K)]^{ \text{dim}}\ \forall K\in\mathcal{T}\right\}, \tag{43}\] \[\mathcal{V}_{d}^{h} :=\left\{\psi^{h}\mid\psi^{h}\in C^{0}(\overline{\Omega}),\ \psi^{h}_{|K}\in\mathbb{Q}_{1}(K)\ \forall K\in\mathcal{T}\right\},\] (44) \[\mathcal{V}_{p}^{h} :=\left\{\chi^{h}\mid\chi^{h}\in\mathcal{L}^{2}(\Omega),\ \chi^{h}_{|K}\in\mathbb{P}_{0}(K)\ \forall K\in\mathcal{T}\right\}. \tag{45}\]
Here, \(C^{0}(\overline{\Omega})\) is the space of continuous functions on the closed domain \(\overline{\Omega}:=\Omega\cup\partial\Omega\), and \(\mathbb{Q}_{1}(K)\) is the space of multivariate polynomials on \(K\). Additionally, \(\mathcal{L}^{2}(\Omega)\) is the space of square Lebesgue-integrable functions on \(\Omega\) and \(\mathbb{P}_{0}(K)\) the space of piece-wise constant functions on \(K\). To discretize the mass balance equation (42), we also require discretization of the fluxes between neighboring elements. So we define another discrete space \(\underline{\mathcal{L}}^{h}\) containing the discrete approximation of the face pressure average, \(\pi^{h}=(\pi_{f})_{f\in\mathcal{F}}\in\underline{\mathcal{L}}^{h}\), where
\[\pi_{f}\approx\frac{1}{|f|}\int_{f}p, \tag{46}\]
for all \(f\in\mathcal{F}\). Note that \(\pi_{f}\) is a face-centered degree of freedom approximating the average pressure on a face. Thus, we can compute the one-sided face flux, \(F_{K,f}\), on the face \(f\in\mathcal{F}_{K}\), where \(\mathcal{F}_{K}\) is the set of faces in the element \(K\), as
\[F_{K,f}=\frac{\rho_{f}^{\text{upw}}}{\mu_{d}^{\text{upw}}}\sum_{f^{\prime}\in \mathcal{F}_{K}}\Upsilon_{ff^{\prime}}\left[p_{K}-\pi_{f}-\rho_{f,K}\mathbf{g} \cdot(\mathbf{x}_{K}-\mathbf{x}_{f})\right]. \tag{47}\]
Here, \(\rho_{f}^{\text{upw}}\) and \(\mu_{d}^{\text{upw}}\) are the upwinded fluid density and viscosity. Additionally, \(\mathbf{x}_{K}\) and \(\mathbf{x}_{f}\) are the locations
of the element and face centers, respectively, and \(\Upsilon\) is the local transmissibility matrix, evaluated using the quasi two-point flux approximation (TPFA) (see Chapter 6 of [49] for more details).
Thus, the discrete weak form of the problem is: find \(\{\mathbf{u}^{h},d^{h},p^{h},\pi^{h}\}_{n+1}\in\mathcal{V}_{u}^{h}\times\mathcal{V} _{d}^{h}\times\mathcal{V}_{p}^{h}\times\mathcal{L}^{h}\) such that
\[\mathcal{R}_{u}^{h}=\int_{\Omega^{h}}\mathbf{\nabla}^{s}\mathbf{\eta}^{h}:\mathbf{\sigma}\;\mathrm{d}V+\int_{\Omega^{h}}m(d)\mathbf{\eta}^{h}\cdot\mathbf{\nabla}p\;\mathrm{d}V-\int_{\Omega^{h}}\rho\mathbf{\eta}^{h}\cdot\mathbf{g}\;\mathrm{d}V+\int_{\partial_{t}\Omega^{h}}p\mathbf{\eta}^{h}\cdot\mathbf{n}\;\mathrm{d}A=0, \tag{48}\]
\[\mathcal{R}_{d}^{h}=\int_{\Omega^{h}}\psi^{h}\left[2(d-1)\mathcal{D}+m^{\prime}(d)\mathcal{D}_{p}+\hat{c}_{e}(\mathbf{\sigma}^{\prime},L)\right]\;\mathrm{d}V+\int_{\Omega^{h}}\frac{3\mathcal{G}_{c}}{8L}\left(\psi^{h}+2L^{2}\mathbf{\nabla}\psi^{h}\cdot\mathbf{\nabla}d^{h}\right)\;\mathrm{d}V=0, \tag{49}\]
\[\mathcal{R}_{p}^{h}:=\int_{\Omega^{h}}\chi^{h}\frac{\phi\rho_{f}-\phi_{n}\rho_{f,n}}{\Delta t}\;\mathrm{d}V+\sum_{K\in\mathcal{T}}\chi_{K}^{h}\left(\sum_{f\in\mathcal{F}_{K}}|f|F_{K,f}\right)-\int_{\Omega^{h}}\chi^{h}s\;\mathrm{d}V=0, \tag{50}\]
\[\mathcal{R}_{c}^{h}:=-\sum_{K\in\mathcal{T}}\sum_{f\in\mathcal{F}_{K}}|f|F_{K,f}\lambda_{f}+\sum_{f\in\mathcal{F}_{q}}|f|\hat{q}\rho\lambda_{f}=0, \tag{51}\]
for all \(\{\mathbf{\eta}^{h},\psi^{h},\chi^{h},\lambda^{h}\}\in\mathcal{V}_{u}^{h}\times \mathcal{V}_{d}^{h}\times\mathcal{V}_{p}^{h}\times\mathcal{L}^{h}\). Note that the discretized continuity equation (51) is introduced to ensure continuity of fluxes across faces. The reader may refer to Borio _et al._[48] for more details on the MFD discretization. The choice of the MFD method over a more common finite-volume (FV) scheme, is dictated by the need of evaluating the pressure gradient in both the momentum balance (40) and the damage evolution equation (41), which is challenging in the standard FV discretization. Here, we take advantage of the additional face-centered pressures to locally approximate the pressure field within an element by using least squares fitting.
### Solution strategy
Equations (48) - (51) form a coupled system of nonlinear equations. To solve these equations, we employ a sequentially-coupled approach, originally proposed by Miehe _et al._[36] and widely applied in other phase-field literature [16; 30; 37]. The solution algorithm is summarized in Algorithm 1. Given the displacement, pressure, and damage fields at the previous time step \(t_{n}\), \(\{\mathbf{u}^{h}_{n},p^{h}_{n},\pi^{h}_{n},d^{h}_{n}\}\), we enter staggered iterations, in which the poromechanics system (Eqs. (48), (50), and (51)) and the phase-field equation (49) are solved sequentially. Specifically, at each staggered iteration, Eqs. (48), (50) and (51) are solved first by freezing the damage so as to find \(\{\mathbf{u}^{h},p^{h},\pi^{h}\}\) using a fully-coupled approach. Subsequently, the pressure- and displacement-dependent terms are updated, followed by solving Eq. (49) to find the damage field \(d^{h}\). The updated damage field is then used in the poromechanics solve for displacement and pressure fields at a new iteration step until the nonlinear solvers of both poromechanics and phase-field converge in just one step. Note that the nonlinear equations of poromechanics and phase-field are solved using a Newton-Raphson method. Finally, the crack irreversibility constraint is enforced by imposing \(d=1\) if damage reaches or exceeds 0.95.
**Algorithm 1** Sequential (staggered) solution of the coupled poromechanics and phase-field system: initialize \(\{\mathbf{u}^{h},p^{h},\pi^{h},d^{h}\}\) from the initial conditions and then, at each time step, alternate the poromechanics solve (Eqs. (48), (50), (51)) with frozen damage and the phase-field solve (Eq. (49)) until both Newton solvers converge in a single iteration.
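A schematic sketch of one time step of this staggered scheme is given below; `poromech` and `phasefield` are placeholder solver objects standing in for the Newton solves of Eqs. (48)/(50)/(51) and Eq. (49), and the damage field is assumed to be a NumPy array. This is an illustration of the algorithm described above, not the actual implementation.

```python
def staggered_step(poromech, phasefield, u, p, pi, d,
                   d_threshold=0.95, max_iters=50):
    """One time step of the sequential scheme of Section 4.2 (sketch)."""
    for _ in range(max_iters):
        # 1) poromechanics solve (Eqs. (48), (50), (51)) with frozen damage
        (u, p, pi), newton_its_pm = poromech.solve(u, p, pi, d_frozen=d)
        # 2) phase-field solve (Eq. (49)) with updated pressure/displacement terms
        d, newton_its_pf = phasefield.solve(d, u=u, p=p, pi=pi)
        # converged when both nonlinear solves need only a single Newton step
        if newton_its_pm == 1 and newton_its_pf == 1:
            break
    # crack irreversibility: lock fully localized damage
    d[d >= d_threshold] = 1.0
    return u, p, pi, d
```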
## 5 Numerical examples

In this section, we present a series of 2D and 3D numerical examples covering relevant physical scenarios. The first two 2D examples are designed to demonstrate that the proposed method is able to (i) correctly model fracture propagation, given a properly calibrated \(\delta^{L}\), and (ii) accurately predict hydraulic fracture nucleation in the bulk due to stress-induced failure. In the last example, we apply the proposed phase-field method to model hydraulic fracturing in more realistic 3D settings.
The material parameters used in all numerical examples are presented in Table 1. These are consistent with those presented in the literature [50; 51] for a rock-analog concrete sample. Note that the gravitational force is neglected in all simulations.
### Example 1: pressurized crack propagation in a nonporous solid
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Parameter** & **Symbol** & **Value** & **Unit** \\ \hline Bulk modulus & \(\kappa\) & 16.7 & GPa \\ Shear modulus & \(\mu\) & 12.5 & GPa \\ Critical fracture energy & \(\mathcal{G}_{c}\) & 4 & J/m\({}^{2}\) \\ Tensile strength & \(\sigma_{\mathrm{ts}}\) & 5.5 & MPa \\ Compressive strength & \(\sigma_{\mathrm{cs}}\) & 40 & MPa \\ \hline \hline \end{tabular}
\end{table}
Table 1: Material properties employed in all numerical examples.

As a first numerical example, we consider the test case proposed by Sneddon and Lowengrub [52] and extend it to the propagation of a uniformly pressurized crack in a nonporous elastic solid. This test case has been previously presented in the literature [18; 26; 27; 38; 47]. The purpose of this numerical example is to demonstrate how to calibrate the coefficient \(\delta^{L}\) to accurately predict fracture propagation consistently with Griffith's theory [28] and to investigate how the calibrated value varies with the mesh resolution. The geometry and the boundary conditions are presented in Figure 4. We consider a \(50\,\mathrm{mm}\,\times\,100\,\mathrm{mm}\) rectangular nonporous domain (fluid flow is not considered in the bulk). A horizontal crack with length \(a=5\,\mathrm{mm}\) is present at the center of the left boundary. Note that the ratio between the initial crack length and the domain size is sufficiently small to approximate an infinite medium. A uniformly distributed pressure,
\(\hat{p}\), linearly increasing as a function of time (_i.e.,_\(\hat{p}(t)=0.1568\,[\text{MPa/s}]\cdot t[\text{s}]\)) is applied to the fracture until propagation begins. The time step size is set to be \(\Delta t=0.1\) s.
The critical pressure that triggers fracture propagation can be computed analytically based on the benchmark solution in Sneddon and Lowengrub [52] and linear-elastic fracture mechanics (LEFM) theory [53; 28], _i.e.,_
\[p_{\text{crit}}^{\text{ref}}=\sqrt{\frac{E\mathcal{G}_{c}}{(1-\nu^{2})\pi a}}= 2.8218\text{ MPa}, \tag{52}\]
where \(E\) is Young's modulus and \(\nu\) is Poisson's ratio. We employ this analytical solution as a reference to calibrate \(\delta^{L}\). The calibrated values of \(\delta^{L}\) are listed in Table 2 for two regularization lengths and various levels of mesh refinement \(L/h\), where \(h\) denotes the element size. Remark that with the properly calibrated \(\delta^{L}\) values, the proposed phase-field method retains the capability to correctly model the energy-based fracture propagation process.
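As a quick sanity check (a minimal sketch, not part of the calibration procedure itself), Eq. (52) can be evaluated from the moduli in Table 1 and recovers the 2.8218 MPa reference value:

```python
import numpy as np

kappa, mu = 16.7e9, 12.5e9                          # Table 1 (Pa)
E = 9.0 * kappa * mu / (3.0 * kappa + mu)           # ~30.0 GPa
nu = (3.0 * kappa - 2.0 * mu) / (2.0 * (3.0 * kappa + mu))  # ~0.20

G_c, a = 4.0, 5.0e-3                                # J/m^2, initial crack length (m)
p_crit = np.sqrt(E * G_c / ((1.0 - nu**2) * np.pi * a))     # Eq. (52)
print(p_crit / 1e6)                                 # ~2.82 MPa
```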
In Kumar _et al._[34], it was observed that \(\delta^{L}\) grows monotonically as a function of the phase-field regularization length. The same trend is observed in Table 2 for a pressurized crack. Additionally, given a fixed regularization length, the calibrated value of \(\delta^{L}\) increases as the mesh is refined and it converges to a unique value for highly refined meshes.
Figure 4: Pressurized crack propagation in a nonporous solid: geometry and boundary conditions.

Since the calibrated value of \(\delta^{L}\) is a function of the mesh resolution, undesirable errors may be introduced if a constant \(\delta^{L}\) is employed on meshes with nonuniform spacing. Figure 5 shows the percentage error, computed as \(\text{Error}=\frac{p_{\text{crit}}-p_{\text{crit}}^{\text{ref}}}{p_{\text{crit}}^{\text{ref}}}\), obtained for each calibrated value of \(\delta^{L}\) as a function of the mesh resolution. Note that larger errors are obtained as the prescribed \(\delta^{L}\) deviates from the calibrated value for a given mesh level. These observations will support the choice of the appropriate value of \(\delta^{L}\) in the next two numerical examples which employ nonuniform meshes.
### Example 2: 2D near-wellbore nucleation and propagation of hydraulic fractures
As a second example, we consider a 2D \(200\,\mathrm{mm}\times 200\,\mathrm{mm}\) porous domain that contains a circular wellbore, \(8\,\mathrm{mm}\) in diameter, at its center. The domain is fully saturated and we simulate the nucleation and propagation of hydraulic fractures in the near-wellbore region due to fluid injection. The dimensions and boundary conditions of the problem are illustrated in Figure 6. Roller boundary conditions are considered at all outer boundaries of the domain along with a Dirichlet pressure condition, \(p=0\). An in-situ stress field aligned with the \(x\) and \(y\) axes is considered. The maximum, \(\sigma_{H}\), and minimum, \(\sigma_{h}\), horizontal stresses are aligned with the \(x\) and \(y\) axes, respectively.
\begin{table}
\begin{tabular}{l|c c c c|c c c c} \hline \hline Regularization length \(L\) (mm) & \multicolumn{4}{c|}{0.5} & \multicolumn{4}{c}{1.0} \\ \hline Mesh level \(L/h\) & 4 & 5 & 8 & 10 & 4 & 5 & 8 & 10 \\ \hline Calibrated \(\delta^{L}\) & 3.31 & 4.15 & 6.62 & 6.85 & 3.28 & 3.65 & 4.16 & 4.48 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Pressurized crack propagation in a nonporous solid: calibrated values of \(\delta^{L}\) for each regularization length \(L\) and mesh level \(L/h\) employed in the simulation.

Figure 5: Pressurized crack propagation in a nonporous solid: errors of the predicted critical pressure when a fixed \(\delta^{L}\) is used at each mesh level for the case with (a) \(L=0.5\,\mathrm{mm}\), and (b) \(L=1.0\,\mathrm{mm}\).

The fluid injection in the wellbore is modeled by a prescribed pressure boundary condition, \(\hat{p}_{\mathrm{inj}}(t)\), on the wellbore surface with \(\hat{p}_{\mathrm{inj}}(t)=1\) [MPa/s] \(\cdot t\)[s]. According to Eq. (19), we also impose a normal traction with the same magnitude as the injection pressure \(\hat{p}_{\mathrm{inj}}(t)\) on the inner surface of the wellbore to account for
the compression applied on the wellbore surface due to fluid injection. The poroelastic and fluid properties considered are provided in Table 3. The initial values of material deformation and pore pressure are assumed to be zero, _i.e.,_\(\varepsilon_{\text{vol},0}=0\) and \(p_{0}=0\) MPa. Here, we only run one staggered iteration for each loading step to save computational cost. To ensure accuracy and numerical stability, the initial time step size is \(\Delta t=0.5\) s and it is then reduced to \(\Delta t=0.02\) s when fracture nucleation starts (\(d\) becomes larger than a threshold, _e.g.,_\(0.95\)). We employ the phase-field regularization length \(L\) of \(0.5\) mm and discretize the domain by around \(2\) million hexahedral elements as shown in Figure 6. Specifically, we ensure \(L/h>4\) in the region where the fracture will propagate to, and the discretization level increases radially to \(L/h=10\) around the wellbore. For this problem with varying element sizes, the selection of \(\delta^{L}\) is challenging as its calibrated value is mesh dependent according to Table 2. Here, for simplicity, we employ a constant \(\delta^{L}=3.31\). Note that, as shown in Figure 5, this choice results in relatively small errors for all mesh resolutions considered. Additionally, it results in a wider diffuse area of the crack body [34] which ensures sufficient mesh resolution for the entire crack.
Figure 6: 2D near-wellbore nucleation and propagation of hydraulic fractures: geometry, boundary conditions, and near-wellbore mesh.

As a base case, we consider \(\sigma_{h}=8\) MPa and \(\sigma_{H}=12\) MPa. Figure 7 shows the damage field at different simulation stages. It is noted that the phase-field method presented in this paper can well capture the nucleation of hydraulic fractures from the smooth wellbore boundary with no preexisting crack/flaws. Fracture nucleation occurs due to failure in the bulk material according to the material tensile strength. As the injection pressure increases, the nucleated fractures keep propagating, as expected, in the direction normal to the minimum horizontal stress. Also, as shown in Figure 8, the fluid pressure distribution is generally aligned with the fracture propagation direction. This is consistent with the fact that the hydraulic fractures have a higher permeability than the bulk material. As a result, the fluid flow into the fractures is faster than pressure diffusion in the rock matrix. Thus, the fluid pressure builds up inside the fracture, further driving the fracture propagation.
We then solve the same test case using a standard phase-field formulation, which does not include the external driving force term. Figure 9 shows a comparison of the damage field obtained with the two phase-field formulations. Remark that the original phase-field method does not provide the correct direction of damage growth. This can be attributed to its purely energetic formulation, which predicts damage growth whenever the strain energy reaches a certain threshold, while not distinguishing between compressive and tensile strengths. Note that a strain energy decomposition (_e.g.,_ spectral decomposition [36]), which considers only the non-compressive component of the strain, can be adopted in the damage equation (15). This allows the correct fracture propagation pattern to be predicted accurately. However, even with this strain energy decomposition, the phase-field method requires the calibration of the regularization length \(L\) to match the material strength. This calibrated regularization length can be significantly smaller than the problem size, forcing the use of mesh resolution that can easily lead to intractable problem sizes. The phase-field formulation proposed in this paper does not suffer from this limitation due to the \(L\)-convergence of the strength surface shown in Figure 3. As such, it is an essential extension of the phase-field method for modeling near-wellbore hydraulic fracturing.
Next, we compare the fracture initiation pressure predicted by the phase-field method with that calculated using an analytical model. Here, the fracture initiation pressure in the phase-field simulation is measured as the injection pressure at which the damage reaches the threshold value, 0.95. The simulation is performed under five different confining stresses as given in Table 4. The reference analytical solution is computed as follows. First, we compute the maximum tangential stress on the wellbore surface based on the analytical model by Haimson and Fairhurst [54], _i.e.,_
\[\sigma_{\theta\theta,\text{max}}=\left[2-\frac{b(1-2\nu)}{1-\nu}\right]\hat{p }_{\text{inj}}(t)-3\sigma_{h}+\sigma_{H}+\frac{b(1-2\nu)}{1-\nu}p_{0}. \tag{53}\]
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Parameter** & **Symbol** & **Value** & **Unit** \\ \hline Bulk modulus of solid grains & \(\kappa_{s}\) & 0.8 & - \\ Fluid density & \(\rho_{f}\) & \(10^{3}\) & kg/m\({}^{3}\) \\ Fluid viscosity & \(\mu_{d}\) & \(10^{-9}\) & MPa\(\cdot\) s \\ Initial porosity & \(\phi_{0}\) & 0.1 & - \\ Matrix permeability & \(k_{0}\) & \(10^{-9}\) & mm\({}^{2}\) \\ Damage coefficient for permeability & \(\alpha_{k}\) & 7 & - \\ \hline \hline \end{tabular}
\end{table}
Table 3: 2D near-wellbore nucleation and propagation of hydraulic fractures: parameters for modeling poroelastic effects and fluid flow.
Figure 7: 2D near-wellbore nucleation and propagation of hydraulic fractures: phase-field damage evolution.
Figure 8: 2D near-wellbore nucleation and propagation of hydraulic fractures: pressure evolution.
Figure 9: 2D near-wellbore nucleation and propagation of hydraulic fractures: comparison of the result produced by the previous phase-field method that has no external driving force (left) and that by the proposed method with the external driving force (right).
Then, given the Drucker-Prager yield function [46] and the maximum tangential stress \(\sigma_{\theta\theta,\max}\), we find the radial stress \(\hat{\sigma}_{rr}\) that equates the yield function to zero, _i.e.,_ the material strength is reached. Since the radial stress \(\sigma_{rr}=-\hat{p}_{\text{inj}}\) at all times, we can consider that the fracture initiation pressure is \(p_{\text{frac}}=-\hat{\sigma}_{rr}\). As shown in Figure 10, the phase-field results for the fracture initiation pressure compare very favorably with the analytical solutions for all confining stress combinations, which proves that the proposed method can accurately capture the material strength. Note that the fracture initiation pressure predicted by the phase-field model is not significantly influenced by the choice of \(\delta^{L}\).
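For illustration only, this procedure amounts to a one-dimensional root find in the injection pressure. The exact invariant definitions and the out-of-plane stress treatment behind Figure 10 are not reported here, so the sketch below assumes a plane-stress state at the wellbore wall, a Drucker-Prager surface fitted to the uniaxial strengths of Table 1, and nominal values \(b=0.8\) and \(\nu=0.2\); it sketches the idea rather than reproducing the reference solution.

```python
import numpy as np
from scipy.optimize import brentq

b, nu, p0 = 0.8, 0.2, 0.0                  # assumed Biot coefficient, Poisson ratio
sigma_ts, sigma_cs = 5.5e6, 40.0e6         # Table 1 (Pa)
sigma_h, sigma_H = 8.0e6, 12.0e6           # base-case confining stresses (Pa)

def sigma_theta_max(p_inj):
    """Eq. (53): maximum tangential stress on the wellbore wall."""
    c = b * (1.0 - 2.0 * nu) / (1.0 - nu)
    return (2.0 - c) * p_inj - 3.0 * sigma_h + sigma_H + c * p0

def dp_yield(p_inj):
    """Assumed plane-stress Drucker-Prager fit to (sigma_ts, sigma_cs)."""
    s_tt, s_rr = sigma_theta_max(p_inj), -p_inj
    I1 = s_tt + s_rr
    J2 = (s_tt**2 - s_tt * s_rr + s_rr**2) / 3.0
    g1 = (sigma_cs - sigma_ts) / (np.sqrt(3.0) * (sigma_cs + sigma_ts))
    g0 = 2.0 * sigma_cs * sigma_ts / (np.sqrt(3.0) * (sigma_cs + sigma_ts))
    return np.sqrt(J2) + g1 * I1 - g0

p_frac = brentq(dp_yield, 1.0e5, 1.0e8)    # injection pressure at first yield
print(p_frac / 1e6, "MPa")                 # illustrative base-case value
```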
### Example 3: 3D near-wellbore nucleation and propagation of hydraulic fractures
In the third example, we simulate a fully 3D near-wellbore hydraulic fracturing problem. We consider a \(200\,\text{mm}\times 200\,\text{mm}\times 200\,\text{mm}\) porous domain subject to an in-situ stress field in which principal stresses are aligned with the \(x\), \(y\), and \(z\) directions, as shown in Figure 11. A vertical (aligned with the \(z\)-axis) cylindrical wellbore with a diameter of \(26\,\text{mm}\) is located in the middle of the domain. Roller boundary conditions are considered on all boundary faces along with a Dirichlet pressure boundary condition, \(p=0\). Additionally, fluid injection is represented by a pressure boundary condition \(\hat{p}_{\text{inj}}(t)\) in the middle section of the wellbore surface (_i.e.,_ shaded area in Figure 11) with \(\hat{p}_{\text{inj}}(t)=1\,[\text{MPa/s}]\cdot t[\text{s}]\). The material properties and the time
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \hline \(\sigma_{h}\) (MPa) & 8.0 & 9.0 & 10.0 & 9.0 & 10.0 \\ \hline \(\sigma_{H}\) (MPa) & 12.0 & 12.0 & 12.0 & 15.0 & 15.0 \\ \hline \hline \end{tabular}
\end{table}
Table 4: 2D near-wellbore nucleation and propagation of hydraulic fractures: confining stresses adopted in the simulation.
Figure 10: 2D near-wellbore nucleation and propagation of hydraulic fractures: comparison between the simulation results and analytical solutions of the fracture initiation pressure under different confining stresses.
stepping scheme are the same as those of the previous 2D wellbore example. A single staggered iteration per time step is still adopted for the sake of computational time. No initial deformation or fluid pressure \(p_0\) is considered. The phase-field regularization length is chosen to be \(L=1\) mm with an element size satisfying \(L/h>4\) near the wellbore, which results in approximately 20 million elements for this problem. Here, we employ a constant \(\delta^{L}\) of 3.28 for the same reasons presented for the previous 2D example.
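The control flow of this single-pass staggered scheme can be sketched as follows. The two solver calls are no-op placeholders standing in for the actual GEOS poromechanics and damage solvers, so only the loop structure and the injection ramp are meaningful here.

```python
# Schematic of a single-pass staggered update: per time step, solve the poromechanical
# fields once with the damage frozen, then update the damage once, with no inner iteration.
from dataclasses import dataclass, field

@dataclass
class State:
    t: float = 0.0
    p_inj: float = 0.0                           # wellbore injection pressure [MPa]
    fields: dict = field(default_factory=dict)   # displacement, pressure, damage arrays

def solve_poromechanics(state: State) -> None:
    """Placeholder: solve displacement and pore pressure with the damage field frozen."""

def solve_damage(state: State) -> None:
    """Placeholder: update the phase field, including the external driving force term."""

def run(n_steps: int, dt: float) -> State:
    state = State()
    for _ in range(n_steps):
        state.t += dt
        state.p_inj = 1.0 * state.t              # injection ramp of 1 MPa/s, as in the boundary condition
        solve_poromechanics(state)
        solve_damage(state)                      # single staggered pass per step (no inner loop)
    return state

print(run(n_steps=5, dt=0.1).p_inj)              # injection pressure after 0.5 s -> 0.5 MPa
```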
We consider two scenarios with different in-situ stress conditions: (i) a uniform minimum horizontal stress \(\sigma_{h}=10\) MPa, and (ii) a minimum horizontal stress \(\sigma_{h}\) that alternates between layers stacked along the vertical direction. This kind of stress distribution has been observed in sedimentary formations due to varying amounts of viscoplastic stress relaxation [55; 56; 57; 58]. Previous studies have shown that the variation of \(\sigma_{h}\) with depth significantly influences the hydraulic fracture propagation pattern [56; 59; 58]. Therefore, we employ the proposed phase-field model to numerically investigate the effects of this layered stress condition on the hydraulic fracture pattern. In both scenarios, the vertical stress is \(\sigma_{v}=17.5\) MPa and the maximum horizontal stress (aligned with the \(x\)-axis) is \(\sigma_{H}=15\) MPa, corresponding to a normal fault stress regime (\(\sigma_{v}>\sigma_{H}>\sigma_{h}\)).
Figures 12 and 13 present a 2D view (at the plane \(z=100\) mm) and a 3D view, respectively, of the phase-field damage at different simulation stages for the uniform minimum horizontal stress scenario. Remark that the nucleation and propagation of bi-wing hydraulic fractures are well captured by the proposed phase-field method. As expected, the fractures grow along the \(x\)-axis, the direction normal to the minimum horizontal stress. Finally, since injection only occurs in the middle section of the wellbore,
Figure 11: 3D near-wellbore nucleation and propagation of hydraulic fractures: geometry and boundary conditions.
the hydraulic fractures have an elliptical shape, with the maximum propagation distance occurring at the middle plane.
Figure 14(a) shows the layered stress distribution considered in the second simulation scenario, inspired by the test case presented in Fu _et al._ [59]. The domain is divided into eight 25 mm thick layers stacked along the vertical direction, alternating between high-stress (\(\sigma_{h}=12\) MPa) and low-stress (\(\sigma_{h}=8\) MPa) layers. Figures 14(b) and 15 show a 2D view (at the plane \(y=100\) mm) and a 3D view of the damage field at different stages of the simulation. Hydraulic fractures first nucleate and propagate in the low-stress layer closest to the injection location. They then gradually penetrate the two neighboring layers with a higher stress magnitude.
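As a small illustration of how such a layered field can be prescribed, the helper below returns \(\sigma_h\) as a function of height for eight 25 mm thick layers alternating between 12 MPa and 8 MPa. Which stress level the bottom layer takes is an assumption here, since the text does not state it explicitly.

```python
# Hypothetical helper for the alternating minimum-horizontal-stress profile described above.
def sigma_h_profile(z_mm, layer_thickness=25.0, high=12.0, low=8.0):
    """Return sigma_h (MPa) at height z (mm); the bottom layer is assumed high-stress."""
    layer_index = int(z_mm // layer_thickness)   # 0..7 over the 200 mm tall domain
    return high if layer_index % 2 == 0 else low

# Sample the profile at the center of each layer.
for i in range(8):
    z = (i + 0.5) * 25.0
    print(f"layer {i}: z = {z:6.1f} mm, sigma_h = {sigma_h_profile(z):.1f} MPa")
```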
Figure 12: 3D near-wellbore nucleation and propagation of hydraulic fractures: 2D view of the phase-field damage evolution at \(z=100\) mm for the case under a uniform \(\sigma_{h}=10\) MPa.
Subsequently, the hydraulic fractures enter the lowest layer within the injection region (the third layer from the bottom). As injection continues, the hydraulic fractures keep propagating at a faster rate in the low-stress layers than in the high-stress ones. These propagation patterns are consistent with those reported in other numerical investigations [59].
## 6 Conclusions
We have proposed a new phase-field approach to hydraulic fracturing in saturated porous media (_e.g._, rocks), with a particular focus on predicting the stress-based characteristics of fracture nucleation. Extending the previous variational formulation, the proposed phase-field method relies on two key components to incorporate the rock strength at fracture nucleation in the presence of fluid pressure: (i) an external driving force term in the damage evolution equation that accounts for the material strength, and (ii) a special damage function applied to the portion of the crack driving force contributed by fluid pressure. As verified in the numerical examples, the proposed method can accurately capture both the energy-based propagation process and the stress-based nucleation behavior of hydraulic fractures. Additionally, we have demonstrated that the phase-field approach can simulate 3D hydraulic fracture nucleation and propagation involving complex geometries, without requiring sophisticated remeshing algorithms or element enrichment. We thus believe that the proposed method is an accurate and efficient tool for analyzing and predicting hydraulic fracturing processes in subsurface energy systems.
Future research will focus on incorporating more complex features into the proposed phase-field model. Examples include the consideration of natural fractures and the coupling with heat transfer under non-isothermal conditions. Such model advancements will further support the numerical investigation of real enhanced geothermal systems.
Figure 13: 3D near-wellbore nucleation and propagation of hydraulic fractures: 3D view of the phase-field damage evolution for the case under a uniform \(\sigma_{h}=10\) MPa.
Figure 14: 3D near-wellbore nucleation and propagation of hydraulic fractures: (a) stress profile for the case with alternating high and low \(\sigma_{h}\) layers; (b) 2D view of the phase-field damage evolution at \(y=100\) mm for the case under alternating \(\sigma_{h}\).
## Acknowledgements
This work relied on the GEOS simulation framework, and the authors wish to thank the GEOS development team for their contributions. Funding was provided by the DOE EERE Geothermal Technologies Office to Utah FORGE and the University of Utah under Project DE-EE0007080, Enhanced Geothermal System Concept Testing and Development at the Milford City, Utah Frontier Observatory for Research in Geothermal Energy (Utah FORGE) site. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. The partial support of J. E. Dolbow by NSF grant CMMI-1933367 is also gratefully acknowledged.
## Contribution statement
**Fan Fei**: Conceptualization, Methodology, Software, Validation, Formal Analysis, Writing (Original Draft), Visualization. **Andre Costa**: Methodology, Software, Formal Analysis, Writing (Review & Editing). **John E. Dolbow**: Conceptualization, Methodology, Formal Analysis, Writing (Review & Editing), Funding Acquisition. **Randolph R. Settgast**: Conceptualization, Software, Funding Acquisition. **Matteo Cusini**: Conceptualization, Methodology, Software, Validation, Formal Analysis, Writing (Original Draft), Writing (Review & Editing), Supervision, Project Administration, Funding Acquisition.